tensorflow / probability

Probabilistic reasoning and statistical analysis in TensorFlow

Home Page: https://www.tensorflow.org/probability/

License: Apache License 2.0

Python 24.43% Shell 0.04% Jupyter Notebook 74.57% Starlark 0.96%
tensorflow bayesian-methods deep-learning machine-learning data-science neural-networks statistics probabilistic-programming

probability's Introduction

TensorFlow Probability

TensorFlow Probability is a library for probabilistic reasoning and statistical analysis in TensorFlow. As part of the TensorFlow ecosystem, TensorFlow Probability provides integration of probabilistic methods with deep networks, gradient-based inference via automatic differentiation, and scalability to large datasets and models via hardware acceleration (e.g., GPUs) and distributed computation.

TFP also works as "Tensor-friendly Probability" in pure JAX: from tensorflow_probability.substrates import jax as tfp. Learn more here; a short sketch follows.
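
A minimal sketch of the JAX substrate in use (assuming a TFP build with JAX support installed):

import jax
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)
samples = dist.sample(5, seed=jax.random.PRNGKey(0))  # JAX-style explicit seeds
log_probs = dist.log_prob(samples)                    # computed with JAX, not TF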

Our probabilistic machine learning tools are structured as follows.

Layer 0: TensorFlow. Numerical operations. In particular, the LinearOperator class enables matrix-free implementations that can exploit special structure (diagonal, low-rank, etc.) for efficient computation. It is built and maintained by the TensorFlow Probability team and is now part of tf.linalg in core TF.

Layer 1: Statistical Building Blocks

  • Distributions (tfp.distributions): A large collection of probability distributions and related statistics with batch and broadcasting semantics.
  • Bijectors (tfp.bijectors): Reversible and composable transformations of random variables.
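
A minimal sketch of these building blocks together (assuming TF 2.x and a recent TFP):

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

# A batch of two normals, pushed through exp to give log-normals.
base = tfd.Normal(loc=[0., 1.], scale=[1., 0.5])
log_normal = tfd.TransformedDistribution(distribution=base, bijector=tfb.Exp())
x = log_normal.sample(3)      # shape [3, 2]
lp = log_normal.log_prob(x)   # shape [3, 2]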

Layer 2: Model Building

  • Joint Distributions (e.g., tfp.distributions.JointDistributionSequential): Joint distributions over one or more possibly-interdependent distributions. For an introduction to modeling with TFP's JointDistributions, check out this colab (and see the sketch after this list).
  • Probabilistic Layers (tfp.layers): Neural network layers with uncertainty over the functions they represent, extending TensorFlow Layers.
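
A minimal JointDistributionSequential sketch (a toy Normal-Normal model; names and shapes are illustrative):

import tensorflow_probability as tfp
tfd = tfp.distributions

model = tfd.JointDistributionSequential([
    tfd.Normal(loc=0., scale=1.),              # z: latent
    lambda z: tfd.Normal(loc=z, scale=0.5),    # x | z: observation
])
z, x = model.sample()
joint_lp = model.log_prob([z, x])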

Layer 3: Probabilistic Inference

  • Markov chain Monte Carlo (tfp.mcmc): Algorithms for approximating integrals via sampling. Includes Hamiltonian Monte Carlo, random-walk Metropolis-Hastings, and the ability to build custom transition kernels (a minimal example follows this list).
  • Variational Inference (tfp.vi): Algorithms for approximating integrals via optimization.
  • Optimizers (tfp.optimizer): Stochastic optimization methods, extending TensorFlow Optimizers. Includes Stochastic Gradient Langevin Dynamics.
  • Monte Carlo (tfp.monte_carlo): Tools for computing Monte Carlo expectations.
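
A minimal sketch of Layer 3 in action, sampling a standard normal with HMC (TF 2.x-era API; argument names vary slightly across versions):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

target = tfd.Normal(loc=0., scale=1.)

samples = tfp.mcmc.sample_chain(
    num_results=500,
    num_burnin_steps=200,
    current_state=tf.constant(1.),
    kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target.log_prob,
        step_size=0.5,
        num_leapfrog_steps=3),
    trace_fn=None)  # trace_fn=None returns just the chain states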

TensorFlow Probability is under active development. Interfaces may change at any time.

Examples

See tensorflow_probability/examples/ for end-to-end examples. It includes tutorial notebooks as well as example scripts, such as representation learning with a latent code and variational inference.

Installation

For additional details on installing TensorFlow, guidance installing prerequisites, and (optionally) setting up virtual environments, see the TensorFlow installation guide.

Stable Builds

To install the latest stable version, run the following:

# Notes:
# - The `--upgrade` flag ensures you'll get the latest version.
# - The `--user` flag ensures the packages are installed to your user directory
#   rather than the system directory.
# - TensorFlow 2 packages require pip >= 19.0.
python -m pip install --upgrade --user pip
python -m pip install --upgrade --user tensorflow tensorflow_probability

For CPU-only usage (and a smaller install), install with tensorflow-cpu.

To use a pre-2.0 version of TensorFlow, run:

python -m pip install --upgrade --user "tensorflow<2" "tensorflow_probability<0.9"

Note: Since TensorFlow is not included as a dependency of the TensorFlow Probability package (in setup.py), you must explicitly install the TensorFlow package (tensorflow or tensorflow-cpu). This allows us to maintain one package instead of separate packages for CPU and GPU-enabled TensorFlow. See the TFP release notes for more details about dependencies between TensorFlow and TensorFlow Probability.

Nightly Builds

There are also nightly builds of TensorFlow Probability under the pip package tfp-nightly, which depends on one of tf-nightly or tf-nightly-cpu. Nightly builds include newer features, but may be less stable than the versioned releases. Both stable and nightly docs are available here.

python -m pip install --upgrade --user tf-nightly tfp-nightly

Installing from Source

You can also install from source. This requires the Bazel build system. It is highly recommended that you install the nightly build of TensorFlow (tf-nightly) before trying to build TensorFlow Probability from source. The most recent version of Bazel that TFP currently supports is 6.4.0; support for 7.0.0+ is WIP.

# sudo apt-get install bazel git python-pip  # Ubuntu; others, see above links.
python -m pip install --upgrade --user tf-nightly
git clone https://github.com/tensorflow/probability.git
cd probability
bazel build --copt=-O3 --copt=-march=native :pip_pkg
PKGDIR=$(mktemp -d)
./bazel-bin/pip_pkg $PKGDIR
python -m pip install --upgrade --user $PKGDIR/*.whl

Community

As part of TensorFlow, we're committed to fostering an open and welcoming environment.

See the TensorFlow Community page for more details. Check out our latest publicity here.

Contributing

We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

References

If you use TensorFlow Probability in a paper, please cite:

  • TensorFlow Distributions. Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, Rif A. Saurous. arXiv preprint arXiv:1711.10604, 2017.

(We're aware there's a lot more to TensorFlow Probability than Distributions, but the Distributions paper lays out our vision and is a fine thing to cite for now.)

probability's People

Contributors

axch, bchetioui, brianwa84, colcarroll, csuter, davmre, derifatives, dustinvtran, emilyfertig, fehiepsi, froystig, gisilvs, gnecula, hawkinsp, jakevdp, jburnim, jeffpollock9, jekbradbury, junpenglao, jvdillon, langmore, leandrolcampos, mattjj, rupei, sharadmv, shoyer, siegelordex, skye, srvasude, tensorflower-gardener


probability's Issues

Installation on Colab.research.google.com?

I am trying to install tensorflow-probability with the following script on a Colab notebook:

!pip3 install -q --user --upgrade tfp-nightly
import tensorflow_probability as tfp

However, it doesn't work:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-791d55ac0b39> in <module>()
      1 get_ipython().system('pip3 install -q --user --upgrade tfp-nightly')
----> 2 import tensorflow_probability as tfp

ModuleNotFoundError: No module named 'tensorflow_probability'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------

Am I doing it correctly? Is it possible to install it on Colab?
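
(For reference, a workaround that commonly fixed this on Colab at the time was to drop the --user flag, since --user installs land in a directory Colab's Python did not have on sys.path; this is an observation, not an official recommendation:)

!pip3 install -q --upgrade tfp-nightly
import tensorflow_probability as tfp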

`ImportError: cannot import name 'abs'` when importing TFP in Python 3 (and in Python 2)

So I used the following code to import my dependencies:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import seaborn as sns
import tensorflow as tf                            # importing Tensorflow
import tensorflow_probability as tfp               # and Tensorflow probability
from tensorflow_probability import edward2 as ed   # Edwardlib extension

tfd = tfp.distributions             # Basic probability distribution toolkit
tfb = tfp.distributions.bijectors   # and their modifiers

# Eager Execution
# tfe = tf.contrib.eager
# tfe.enable_eager_execution()

%matplotlib inline
plt.style.use("fivethirtyeight")        # Styling plots like FiveThirtyEight

import warnings
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format="retina" # improves resolution of plots

But I get this error when trying to import tensorflow_probability

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-47fdbecb20a4> in <module>()
      7 from matplotlib.patches import Ellipse
      8 import seaborn as sns
----> 9 import tensorflow as tf                            # importing Tensorflow
     10 import tensorflow_probability as tfp               # and Tensorflow probability
     11 from tensorflow_probability import edward2 as ed   # Edwardlib extension

/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py in <module>()
     22 
     23 # pylint: disable=g-bad-import-order
---> 24 from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
     25 # pylint: disable=wildcard-import
     26 from tensorflow.tools.api.generator.api import *  # pylint: disable=redefined-builtin

/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py in <module>()
     79 # Bring in subpackages.
     80 from tensorflow.python import data
---> 81 from tensorflow.python import keras
     82 from tensorflow.python.estimator import estimator_lib as estimator
     83 from tensorflow.python.feature_column import feature_column_lib as feature_column

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/__init__.py in <module>()
     22 from __future__ import print_function
     23 
---> 24 from tensorflow.python.keras import activations
     25 from tensorflow.python.keras import applications
     26 from tensorflow.python.keras import backend

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/activations/__init__.py in <module>()
     20 
     21 # Activation functions.
---> 22 from tensorflow.python.keras._impl.keras.activations import elu
     23 from tensorflow.python.keras._impl.keras.activations import hard_sigmoid
     24 from tensorflow.python.keras._impl.keras.activations import linear

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/__init__.py in <module>()
     19 from __future__ import print_function
     20 
---> 21 from tensorflow.python.keras._impl.keras import activations
     22 from tensorflow.python.keras._impl.keras import applications
     23 from tensorflow.python.keras._impl.keras import backend

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/activations.py in <module>()
     21 import six
     22 
---> 23 from tensorflow.python.keras._impl.keras import backend as K
     24 from tensorflow.python.keras._impl.keras.utils.generic_utils import deserialize_keras_object
     25 from tensorflow.python.layers.base import Layer

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/_impl/keras/backend.py in <module>()
     35 from tensorflow.python.framework import ops
     36 from tensorflow.python.framework import sparse_tensor
---> 37 from tensorflow.python.layers import base as tf_base_layers
     38 from tensorflow.python.ops import array_ops
     39 from tensorflow.python.ops import clip_ops

/usr/local/lib/python3.6/dist-packages/tensorflow/python/layers/base.py in <module>()
     23 from tensorflow.python.framework import dtypes
     24 from tensorflow.python.framework import ops
---> 25 from tensorflow.python.keras.engine import base_layer
     26 from tensorflow.python.ops import variable_scope as vs
     27 from tensorflow.python.ops import variables as tf_variables

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/__init__.py in <module>()
     19 from __future__ import print_function
     20 
---> 21 from tensorflow.python.keras.engine.base_layer import InputSpec
     22 from tensorflow.python.keras.engine.base_layer import Layer
     23 from tensorflow.python.keras.engine.input_layer import Input

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in <module>()
     30 from tensorflow.python.framework import tensor_shape
     31 from tensorflow.python.framework import tensor_util
---> 32 from tensorflow.python.keras import backend
     33 from tensorflow.python.keras import constraints
     34 from tensorflow.python.keras import initializers

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend/__init__.py in <module>()
     20 
     21 # pylint: disable=redefined-builtin
---> 22 from tensorflow.python.keras._impl.keras.backend import abs
     23 from tensorflow.python.keras._impl.keras.backend import all
     24 from tensorflow.python.keras._impl.keras.backend import any

ImportError: cannot import name 'abs'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------

This only just started happening after I entered pip install --upgrade tfp-nightly into the terminal, which leads me to conclude that something within the combination of tb-nightly-1.9.0a20180519, tf-nightly-1.9.0.dev20180519, and tfp-nightly-0.0.1.dev20180519 is not working.

It was working previously but is not any longer. This error applies to both Python 3 and Python 2, so it's not as simple as just fixing the runtime type.

Tutorial request: simple metropolis sampling

I was using tensorflow.contrib.bayesflow to implement Metropolis-Hastings sampling from a given distribution. After noticing TFP, I began trying to write a new version using it, but it is not clear how to do so; the documentation is not so clear. Could anyone write about how to do this? Do we have to write a kernel?
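
(For what it's worth, a minimal sketch that avoids a custom kernel, using the stock random-walk Metropolis kernel from the 2018-era tfp.mcmc API; graph-mode session boilerplate omitted:)

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

target = tfd.Normal(loc=0., scale=1.)
samples, _ = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=500,
    current_state=tf.constant(1.),
    kernel=tfp.mcmc.RandomWalkMetropolis(target.log_prob))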

Multidimensional Uniform Distributions

I have been trying to create a 2-dimensional Uniform distribution, something that might take the following form:
tf.distributions.Uniform("halo_position_4/", batch_shape=(), event_shape=(2,), dtype=float32).
However, my attempts have nearly always resulted in a form that resembles this: tf.distributions.Uniform("halo_position_4/", batch_shape=(2,), event_shape=(), dtype=float32).

Largely I've been trying to make use of multiple uniform distributions

two_d_uniform = tfd.Uniform(name="two_d_uniform", low=[0.0, 0.0], high=[100.0, 100.0])

as well as trying to make use of an independent distribution

two_d_uniform = tfd.Independent(tfd.Uniform(name="two_d_uniform", 
                                    low=0.0, 
                                    high=100.0), reinterpreted_batch_ndims=1) 

but the output for that last one resembles the following:

ValueErrorTraceback (most recent call last)
<ipython-input-14-9c97b2535035> in <module>()
     31     two_d_uniform = tfd.Independent(tfd.Uniform(name="two_d_uniform", 
     32                                     low=0.0,
---> 33                                     high=100.0), reinterpreted_batch_ndims=1)    
     34     print(halo_position)
     35 

/content/.local/lib/python2.7/site-packages/tensorflow/contrib/distributions/python/ops/independent.pyc in __init__(self, distribution, reinterpreted_batch_ndims, validate_args, name)
    144           name=name)
    145       self._runtime_assertions = self._make_runtime_assertions(
--> 146           distribution, reinterpreted_batch_ndims, validate_args)
    147 
    148   @property

/content/.local/lib/python2.7/site-packages/tensorflow/contrib/distributions/python/ops/independent.pyc in _make_runtime_assertions(self, distribution, reinterpreted_batch_ndims, validate_args)
    227         raise ValueError("reinterpreted_batch_ndims({}) cannot exceed "
    228                          "distribution.batch_ndims({})".format(
--> 229                              static_reinterpreted_batch_ndims, batch_ndims))
    230     elif validate_args:
    231       batch_shape = distribution.batch_shape_tensor()

ValueError: reinterpreted_batch_ndims(1) cannot exceed distribution.batch_ndims(0)

Is there a simple way to make multi-dimensional uniform distributions, the way the litany of methods for the multivariate normal can? If not, could this be added as a feature?

Mainly what I've been trying to do is find a substitute for the following:

two_d_uniform = pymc.Uniform("two_d_uniform", 0, 100, size=(1, 2))

and modifying the `event_shape` (as opposed to the `batch_shape`) appears to be the way to go.
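
(For reference, a sketch of the Independent-based construction that yields event_shape=(2,): give Uniform a batch of two components, then reinterpret that batch dimension as part of the event:)

import tensorflow_probability as tfp
tfd = tfp.distributions

two_d_uniform = tfd.Independent(
    tfd.Uniform(low=[0., 0.], high=[100., 100.]),
    reinterpreted_batch_ndims=1)
# -> batch_shape=(), event_shape=(2,)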

PROBABILITY ESTIMATION FROM TEXT EVIDENCE & OCCURRENCE

Hi all,
Thanks for making such a brilliant probability estimation library. I have some doubts about a project; can you tell me how to achieve the following using this library or any of your methods?

PROBLEM STATEMENT - I am trying to create a probability tool that assigns importance scores (e.g., 9/10, 8/10, or 6/10) to topics in a book text corpus, estimating the probability of a given topic appearing in the exam based on previous years' question papers. I have around 10 years of question papers, comprising 300 questions in total, from the same syllabus, but the books are different. Since the questions are not straightforward, I am unable to do this.

WHAT I HAVE TRIED - I tried topic modelling with LDA, but it only gives important topics or words from a text corpus. I was able to generate important topics from text using the GENSIM library, but it couldn't solve my problem of matching and comparing them with the questions from previous years' papers, as the questions were sometimes indirect or twisted.

WHAT I AM HOPING - I am hoping there is a way this probability estimation library can help with understanding indirect or twisted questions, generating topics that can be matched against the topics in the book corpus from the same syllabus, and thus a probability of each topic occurring in the exam based on previous years' question papers.
Looking forward to your help; any help in solving this particular problem statement would be greatly appreciated.

Thanks

Error reporting for "ValueError: Encountered `None` gradient."

So I have been trying to run a Monte Carlo Markov chain, with a format similar to the following,

states, kernel_results = tfp.mcmc.sample_chain(
    num_results=2000,
    num_burnin_steps=1400,
    current_state=[                # Based on my understanding, the initial states are defined here
        tf.constant(100, dtype=tf.float32, name='init_weight'),
        tf.constant([2100., 2100.], dtype=tf.float32, name='init_position'),
        tf.random_normal(shape=[2, num_inputs], mean=0.0, stddev=0.3,  # random initialization
                         dtype=tf.float32, name='init_effect_avg')
    ],
    kernel=tfp.mcmc.HamiltonianMonteCarlo( 
        target_log_prob_fn=approximate_posterior_log_prob,
        step_size=7, 
        num_leapfrog_steps=10))

And lately, when I've been running it, the following error arises:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-56-803fa2bc7bec> in <module>()
     12         target_log_prob_fn=approximate_posterior_log_prob,
     13         step_size=7,
---> 14         num_leapfrog_steps=10))
     15 
     16 halo_position, ellpty_avg, test_mass = states

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/sample.py in sample_chain(num_results, current_state, previous_kernel_results, kernel, num_burnin_steps, num_steps_between_results, parallel_iterations, name)
    238 
    239     if previous_kernel_results is None:
--> 240       previous_kernel_results = kernel.bootstrap_results(current_state)
    241     return tf.scan(
    242         fn=_scan_body,

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/hmc.py in bootstrap_results(self, init_state)
    367   def bootstrap_results(self, init_state):
    368     """Creates initial `previous_kernel_results` using a supplied `state`."""
--> 369     return self._impl.bootstrap_results(init_state)
    370 
    371 

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/metropolis_hastings.py in bootstrap_results(self, init_state)
    264         name=mcmc_util.make_name(self.name, 'mh', 'bootstrap_results'),
    265         values=[init_state]):
--> 266       pkr = self.inner_kernel.bootstrap_results(init_state)
    267       if not has_target_log_prob(pkr):
    268         raise ValueError(

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/hmc.py in bootstrap_results(self, init_state)
    507           init_target_log_prob,
    508           init_grads_target_log_prob,
--> 509       ] = mcmc_util.maybe_call_fn_and_grads(self.target_log_prob_fn, init_state)
    510       return UncalibratedHamiltonianMonteCarloKernelResults(
    511           log_acceptance_correction=tf.zeros_like(init_target_log_prob),

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/util.py in maybe_call_fn_and_grads(fn, fn_arg_list, result, grads, check_non_none_grads, name)
    235                        'with grads.')
    236     if check_non_none_grads and any(g is None for g in grads):
--> 237       raise ValueError('Encountered `None` gradient.')
    238     return result, grads

ValueError: Encountered `None` gradient.

The problem is that this does not give a lot of information. I don't know which of the initial states is erroneous (I have been trying out many different states with different shapes and initialization strategies, all of which lead to this same error). I also don't know whether this is just a zero gradient, a gradient missing entirely, data of dtype=None somehow being fed in, or a gradient that eventually vanishes (which makes it harder to know whether the usual approaches to vanishing gradients would do any good).

If there was something like reporting on the index of the initial state, that would be extremely helpful.

Is it possible that I'm just misusing TFP in this case? Should I try to find a way to manually input the gradients?
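
(One way to localize the problem outside the sampler, sketched against the snippet above; initial_states stands for the same list passed as current_state:)

lp = approximate_posterior_log_prob(*initial_states)
grads = tf.gradients(lp, initial_states)
for state, grad in zip(initial_states, grads):
    print(state.name, 'gradient is None' if grad is None else 'gradient ok')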

Failed building wheel (Conda Install)

I'm having trouble installing tfp using pip from Anaconda - perhaps this isn't supported? I ran the command pip install --user --upgrade tfp-nightly-gpu and got the following errors:

Building wheels for collected packages: tfp-nightly, tfp-nightly
  Running setup.py bdist_wheel for tfp-nightly ... done
  Stored in directory: ~/.cache/pip/wheels/71/3e/99/77aebe0e3796cf1322deb90448b4a5c2e30a35d4813eb831f5
  Running setup.py bdist_wheel for tfp-nightly ... error
  Complete output from command ~/anaconda/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-os72hn3m/tfp-nightly/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-uar327sb --python-tag cp36:
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "~/anaconda/lib/python3.6/tokenize.py", line 452, in open
      buffer = _builtin_open(filename, 'rb')
  FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-install-os72hn3m/tfp-nightly/setup.py'
  ----------------------------------------
  Failed building wheel for tfp-nightly

Cleaning the build then also fails, and finally I get

Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '~/anaconda/lib/python3.6/site-packages/tfp_nightly-0.0.1.dev20180430.dist-info/METADATA'

(In all of the commands I've replaced my home directory with ~)

A little more information:

$ conda --version
conda 4.4.10
$ pip --version
pip 10.0.1 from ~/anaconda/lib/python3.6/site-packages/pip (python 3.6)

This machine is running Ubuntu 16.04.

Issues with the VAE example

Morning team,

I've been working through your VAE example, https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/vae.py, this morning; great work.

I had to drop the csiszar_divergence prefix on line 195 in vae.py in order for it to run.

  elbo_loss = tf.reduce_sum(
      tfp.vi.csiszar_divergence.monte_carlo_csiszar_f_divergence(
          f=tfp.vi.csiszar_divergence.kl_reverse,
          p_log_prob=joint_log_prob,
          q=encoder,
          num_draws=1))

became

  elbo_loss = tf.reduce_sum(
      tfp.vi.monte_carlo_csiszar_f_divergence(
          f=tfp.vi.kl_reverse,
          p_log_prob=joint_log_prob,
          q=encoder,
          num_draws=1))
  tf.summary.scalar("elbo", elbo_loss)

Hope this helps.

Thanks,
George

Feature Request: Maximum A Posteriori (MAP) estimation

For mixed models, it would be great if MCMC could be initialized with a more accurate estimate. This was a feature in PyMC and Edward, but does not appear to be available in either tensorflow_probability or edward2.
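
(In the meantime, a hedged sketch of hand-rolled MAP with a plain TF1 optimizer on the negative unnormalized log-posterior; target_log_prob is a placeholder for the model's joint log-prob function:)

import tensorflow as tf

w = tf.Variable(0., name='w')     # parameter to estimate
loss = -target_log_prob(w)        # negative unnormalized log-posterior
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_op)
    w_map = sess.run(w)           # candidate MCMC initial state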

dynamic programming layers

From https://arxiv.org/abs/1802.03676
https://twitter.com/arthurmensch/status/994976710071373824

Differentiable Dynamic Programming for Structured Prediction and Attention
Arthur Mensch, Mathieu Blondel
(Submitted on 11 Feb 2018 (v1), last revised 20 Feb 2018 (this version, v2))

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This allows to relax both the optimal value and solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.

A useful start could be the linear-chain CRF layer. @fchollet also mentioned this would be useful to have ported into TF(P), as implementations already exist.

Small typo in line 218, sgld.py

Hi team,

I think the injected noise in line 218, sgld.py should be:

tf.random_normal(tf.shape(grad), 0., 1., dtype=grad.dtype) * stddev * tf.sqrt(preconditioner)

, rather than

tf.random_normal(tf.shape(grad), 1., dtype=grad.dtype) * stddev * tf.sqrt(preconditioner) 

right?
Or have I missed something?

User-defined random variables in Edward2

Currently Edward2 provides predefined random variables for each TF Distribution; e.g., tfd.Normal is wrapped as ed.Normal. In general, users should be able to extend Edward2 by defining their own custom random variables, just as they can define custom Distributions.

@dustinvtran and I discussed this briefly; there are at least three solutions we might consider:

  1. Publicly expose the make_random_variable(distribution_cls) utility function. A user would implement their custom CrazyDistribution(tf.Distribution), then write a probabilistic program as
CrazyRV = ed.make_random_variable(CrazyDistribution)
def ed_model():
  a = ed.Normal(0., 1.)
  b = CrazyRV(scale = a)
  return some_fn(b)
  2. Implement and expose ed.CustomRV(distribution_instance). This is an RV whose only parameter is a Distribution instance. A user would write
def ed_model():
  a = ed.Normal(0., 1.)
  b = ed.CustomRV(CrazyDistribution(scale = a))
  return some_fn(b)

or if desired, could use the sugar CrazyRV = lambda *args, **kwargs : ed.CustomRV(CrazyDistribution(*args, **kwargs)) to recover the syntax above.

  3. Publicly expose the @interceptable decorator. This is the lowest-level approach; the user could directly implement their own analogue of make_random_variable.
@interceptable
def CrazyRV(*args, sample_shape=(), value=None, **kwargs):
  return ed.RandomVariable(
              CrazyDistribution(*args, **kwargs), 
              sample_shape=sample_shape, value=value)

def ed_model():
  a = ed.Normal(0., 1.)
  b = CrazyRV(scale = a)
  return some_fn(b)

These solutions are not mutually exclusive; we could support any combination of them.

My impulse is to support a combination of (1) and (2), because:

  • allowing make_random_variable externally matches the pattern we use for defining RVs inside of TFP, and hides the implementation details of @interceptable.
  • the CustomRV object is independently useful in allowing edward2 to serve as a backend for APIs that deal in Distribution objects on the frontend. E.g., the user specifies a prior distribution with a Distribution instance, and the API internally builds an Edward model by wrapping this distribution as a CustomRV.

But this is only a weak feeling. Are there other approaches? Or advantages/disadvantages of the ones I've listed?

Cannot import tensorflow_probability

Hello,

Unfortunately, I cannot import tensorflow_probability, and I get the error below, which I thought to report as I cannot get around it. Any suggestions much appreciated.

I am using a Windows PC, and have created a Python 3.5 environment, installed the latest version of tensorflow (1.7), and then installed the latest version of tensorflow probability following the installation instructions here.

\Users\myname\AppData\Roaming\Python\Python35\site-packages\tensorflow\python\util\tf_inspect.py:45:
 DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if d.decorator_argspec is not None), _inspect.getargspec(target))

Best,
M

Feature request: intuitive replacement for pymc.Normal(tau=foo, observed=True, value=bar)

I've been searching both the Tensorflow probability documentation and the pymc/pymc3 documentation for a good way to convert between something like this from pymc:

obs = pymc.Normal(name="obs", mu=mean, tau=prec, value=Y, observed=True)

and something like this?

import tensorflow_probability.distributions as tfd

obs = tfd.Normal(name="obs", loc=mean, scale=std, data=Y, observed=True)

Does something like this already exist, or am I missing something here?
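
(The usual TFP idiom, sketched under the assumption that Y is the observed data from the pymc snippet, is to fold the observations into a target log-prob function rather than marking a random variable as observed:)

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def target_log_prob(mean, std):
    lp = tfd.Normal(loc=0., scale=10.).log_prob(mean)                 # prior on the mean
    lp += tf.reduce_sum(tfd.Normal(loc=mean, scale=std).log_prob(Y))  # Y: observed data
    return lp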

Ordered bijector

Constrain a vector/tensor to be ordered along a specific dimension.
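
(One standard construction, used by e.g. Stan, keeps the first coordinate free and accumulates exponentials for the rest; a sketch of the forward transform only:)

import tensorflow as tf

def ordered_forward(x):
    # y[..., 0] = x[..., 0]; y[..., k] = y[..., k-1] + exp(x[..., k])
    first = x[..., :1]
    rest = tf.exp(x[..., 1:])
    return tf.cumsum(tf.concat([first, rest], axis=-1), axis=-1)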

Feature Request: No U-Turn Sampler (NUTS)

The NUTS sampler (http://www.jmlr.org/papers/volume15/hoffman14a/hoffman14a.pdf) would be a great addition to the HMC code already in tf.contrib.bayesflow.

A reference implementation is at: https://github.com/stan-dev/stan/blob/develop/src/stan/mcmc/hmc/nuts/base_nuts.hpp, and this has already been mentioned by @jvdillon and @dustinvtran in #4965. The HMC code has been brought over, but I don't see NUTS in there.

If someone is already working on this, is there any timeline for this being available?

Replica Exchange Monte Carlo

Hi.

I implemented the Replica Exchange Monte Carlo method for Edward recently
(blei-lab/edward#865). I would also like to implement this method for this repository.

Can I contribute it to this project if I implement it?

Thanks.

Request: Updates on which features from the Medium announcement have changed

I've been using the TFP whitepaper and the Medium.com announcement as references for the TFP API so far. Recently I tried to use a method from the variational autoencoder example in the article:

import tensorflow as tf
import tensorflow_probability as tfp
# Assumes user supplies `likelihood`, `prior`, `surrogate_posterior`
# functions and that each returns a 
# tf.distribution.Distribution-like object.
elbo_loss = tfp.vi.monte_carlo_csiszar_f_divergence(
    f=tfp.vi.kl_reverse,  # Equivalent to "Evidence Lower BOund"
    p_log_prob=lambda z: likelihood(z).log_prob(x) + prior().log_prob(z),
    q=surrogate_posterior(x),
    num_draws=1)
train = tf.train.AdamOptimizer(
    learning_rate=0.01).minimize(elbo_loss)

More specifically, I was trying to use the elbo_loss formula. However, when I ran my version of the elbo_loss, the following error came up:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-22415713f296> in <module>()
     10     kernel=tfp.mcmc.HamiltonianMonteCarlo(
     11             target_log_prob_fn=lambda z: elbo_loss(z),
---> 12             step_size=0.05, num_leapfrog_steps=3))
     13 
     14 halo_sample = states

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/sample.py in sample_chain(num_results, current_state, previous_kernel_results, kernel, num_burnin_steps, num_steps_between_results, parallel_iterations, name)
    238 
    239     if previous_kernel_results is None:
--> 240       previous_kernel_results = kernel.bootstrap_results(current_state)
    241     return tf.scan(
    242         fn=_scan_body,

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/hmc.py in bootstrap_results(self, init_state)
    367   def bootstrap_results(self, init_state):
    368     """Creates initial `previous_kernel_results` using a supplied `state`."""
--> 369     return self._impl.bootstrap_results(init_state)
    370 
    371 

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/metropolis_hastings.py in bootstrap_results(self, init_state)
    264         name=mcmc_util.make_name(self.name, 'mh', 'bootstrap_results'),
    265         values=[init_state]):
--> 266       pkr = self.inner_kernel.bootstrap_results(init_state)
    267       if not has_target_log_prob(pkr):
    268         raise ValueError(

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/hmc.py in bootstrap_results(self, init_state)
    507           init_target_log_prob,
    508           init_grads_target_log_prob,
--> 509       ] = mcmc_util.maybe_call_fn_and_grads(self.target_log_prob_fn, init_state)
    510       return UncalibratedHamiltonianMonteCarloKernelResults(
    511           log_acceptance_correction=tf.zeros_like(init_target_log_prob),

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/util.py in maybe_call_fn_and_grads(fn, fn_arg_list, result, grads, check_non_none_grads, name)
    226     fn_arg_list = (list(fn_arg_list) if is_list_like(fn_arg_list)
    227                    else [fn_arg_list])
--> 228     result, grads = _value_and_gradients(fn, fn_arg_list, result, grads)
    229     if not all(r.dtype.is_floating
    230                for r in (result if is_list_like(result) else [result])):  # pylint: disable=superfluous-parens

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/mcmc/util.py in _value_and_gradients(fn, fn_arg_list, result, grads, name)
    181 
    182     if result is None:
--> 183       result = fn(*fn_arg_list)
    184     result = _convert_to_tensor(result, 'fn_result')
    185 

<ipython-input-13-22415713f296> in <lambda>(z)
      9     ],
     10     kernel=tfp.mcmc.HamiltonianMonteCarlo(
---> 11             target_log_prob_fn=lambda z: elbo_loss(z),
     12             step_size=0.05, num_leapfrog_steps=3))
     13 

<ipython-input-12-e0221d61616e> in elbo_loss(z)
      4     """
      5     x = data[:,2:]
----> 6     return tfp.vi.monte_carlo_csiszar_f_divergence(
      7         f=tfp.vi.kl_reverse,  # Equivalent to "Evidence Lower BOund"
      8         # ellpty_approx_posterior(z) behaves similarly to the likelihood

AttributeError: module 'tensorflow_probability.python.vi' has no attribute 'monte_carlo_csiszar_f_divergence'

Is there any kind of notice-board of which parts of the Medium announcement are no longer applicable?
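
(For anyone hitting this later: more recent TFP releases renamed the Csiszár f-divergence entry point; a hedged sketch against the newer tfp.vi.monte_carlo_variational_loss API, with likelihood, prior, and surrogate_posterior the same placeholders as in the snippet above; check your installed version's docs:)

elbo_loss = tfp.vi.monte_carlo_variational_loss(
    target_log_prob_fn=lambda z: likelihood(z).log_prob(x) + prior().log_prob(z),
    surrogate_posterior=surrogate_posterior(x),
    discrepancy_fn=tfp.vi.kl_reverse,
    sample_size=1)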

sample mh sampling not working well (at least for py3.5/3.6)

I tried the following code, provided in the unit tests. My environment is Python 3.6 with the latest nightly version of both tf and tfp. According to @junpenglao, it also does not work for py35.

import tensorflow as tf
import numpy as np
import tensorflow_probability as tfp
tfd = tf.contrib.distributions
dtype = np.float32
target = tfd.Normal(loc=dtype(0.0), scale=dtype(1.0))
samples, _ = tfp.mcmc.sample_chain(
    num_results=1000,
    current_state=dtype(1.0),
    kernel=tfp.mcmc.RandomWalkMetropolis(
        target.log_prob,
        new_state_fn=tfp.mcmc.random_walk_uniform_fn(scale=2.),seed=42),
    num_burnin_steps=500,
    parallel_iterations=1
)  # For determinism.

sample_mean = tf.reduce_mean(samples, axis=0)
sample_std = tf.sqrt(tf.reduce_mean(tf.squared_difference(samples, sample_mean), axis=0))

with tf.Session() as sess:  # session creation was missing from the snippet as posted
    [sample_mean_, sample_std_] = sess.run([sample_mean, sample_std])

However what I get is

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-29-faad2d5672e2> in <module>()
      8         new_state_fn=tfp.mcmc.random_walk_uniform_fn(scale=2.),seed=42),
      9     num_burnin_steps=500,
---> 10     parallel_iterations=1
     11 )  # For determinism.
     12 

~/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/sample.py in sample_chain(num_results, current_state, previous_kernel_results, kernel, num_burnin_steps, num_steps_between_results, parallel_iterations, name)
    247                          dtype=tf.int32),  # num_steps
    248         initializer=[current_state, previous_kernel_results],
--> 249         parallel_iterations=parallel_iterations)

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py in scan(fn, elems, initializer, parallel_iterations, back_prop, swap_memory, infer_shape, name)
    618         parallel_iterations=parallel_iterations,
    619         back_prop=back_prop, swap_memory=swap_memory,
--> 620         maximum_iterations=n)
    621 
    622     results_flat = [r.stack() for r in r_a]

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name, maximum_iterations)
   3206     if loop_context.outer_context is None:
   3207       ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, loop_context)
-> 3208     result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
   3209     if maximum_iterations is not None:
   3210       return result[1]

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants)
   2944       with ops.get_default_graph()._lock:  # pylint: disable=protected-access
   2945         original_body_result, exit_vars = self._BuildLoop(
-> 2946             pred, body, original_loop_vars, loop_vars, shape_invariants)
   2947     finally:
   2948       self.Exit()

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
   2881         flat_sequence=vars_for_body_with_tensor_arrays)
   2882     pre_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION)  # pylint: disable=protected-access
-> 2883     body_result = body(*packed_vars_for_body)
   2884     post_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION)  # pylint: disable=protected-access
   2885     if not nest.is_sequence(body_result):

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in <lambda>(i, lv)
   3182         cond = lambda i, lv: (  # pylint: disable=g-long-lambda
   3183             math_ops.logical_and(i < maximum_iterations, orig_cond(*lv)))
-> 3184         body = lambda i, lv: (i + 1, orig_body(*lv))
   3185 
   3186     if context.executing_eagerly():

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py in compute(i, a_flat, tas)
    607       packed_elems = input_pack([elem_ta.read(i) for elem_ta in elems_ta])
    608       packed_a = output_pack(a_flat)
--> 609       a_out = fn(packed_a, packed_elems)
    610       nest.assert_same_structure(
    611           elems if initializer is None else initializer, a_out)

~/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/sample.py in _scan_body(args_list, num_steps)
    235               previous_kernel_results,
    236           ],
--> 237           parallel_iterations=parallel_iterations)[1:]  # Lop off `it_`.
    238 
    239     if previous_kernel_results is None:

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name, maximum_iterations)
   3206     if loop_context.outer_context is None:
   3207       ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, loop_context)
-> 3208     result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
   3209     if maximum_iterations is not None:
   3210       return result[1]

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants)
   2944       with ops.get_default_graph()._lock:  # pylint: disable=protected-access
   2945         original_body_result, exit_vars = self._BuildLoop(
-> 2946             pred, body, original_loop_vars, loop_vars, shape_invariants)
   2947     finally:
   2948       self.Exit()

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
   2903     # during this comparison, because inputs are typically lists and
   2904     # outputs of the body are typically tuples.
-> 2905     nest.assert_same_structure(list(packed_vars_for_body), list(body_result))
   2906 
   2907     # Store body_result to keep track of TensorArrays returned by body

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types)
    181       their substructures. Only possible if `check_types` is `True`.
    182   """
--> 183   _pywrap_tensorflow.AssertSameStructure(nest1, nest2, check_types)
    184 
    185 

TypeError: The two structures don't have the same nested structure.

First structure: type=list str=[<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity:0' shape=() dtype=int32>, <tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_1:0' shape=() dtype=float32>, MetropolisHastingsKernelResults(accepted_results=UncalibratedRandomWalkResults(log_acceptance_correction=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_2:0' shape=() dtype=float32>, target_log_prob=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_3:0' shape=() dtype=float32>), is_accepted=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_4:0' shape=() dtype=bool>, log_accept_ratio=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_5:0' shape=() dtype=float32>, proposed_state=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_6:0' shape=() dtype=float32>, proposed_results=UncalibratedRandomWalkResults(log_acceptance_correction=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_7:0' shape=() dtype=float32>, target_log_prob=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_8:0' shape=() dtype=float32>))]

Second structure: type=list str=[<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/add:0' shape=() dtype=int32>, <tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/choose_next_state/Select:0' shape=() dtype=float32>, MetropolisHastingsKernelResults(accepted_results=[<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/choose_inner_results/Select:0' shape=() dtype=float32>, <tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/choose_inner_results/Select_1:0' shape=() dtype=float32>], is_accepted=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/Less:0' shape=() dtype=bool>, log_accept_ratio=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/compute_log_accept_ratio/Sum:0' shape=() dtype=float32>, proposed_state=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/rwm_one_step/random_walk_uniform_fn/random_uniform:0' shape=() dtype=float32>, proposed_results=UncalibratedRandomWalkResults(log_acceptance_correction=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/rwm_one_step/zeros:0' shape=() dtype=float32>, target_log_prob=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/rwm_one_step/Normal/log_prob/sub:0' shape=() dtype=float32>))]

More specifically: The two namedtuples don't have the same sequence type. First structure type=UncalibratedRandomWalkResults str=UncalibratedRandomWalkResults(log_acceptance_correction=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_2:0' shape=() dtype=float32>, target_log_prob=<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/Identity_3:0' shape=() dtype=float32>) has type UncalibratedRandomWalkResults, while second structure type=list str=[<tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/choose_inner_results/Select:0' shape=() dtype=float32>, <tf.Tensor 'mcmc_sample_chain_11/scan/while/while/mh_one_step/choose_inner_results/Select_1:0' shape=() dtype=float32>] has type list

Any ideas why this happens and how to fix it?

`ImportError: cannot import name 'base'` when importing TFP in Python 3 (or Python 2)

So I used the following code to import my first dependencies (including tensorflow 1.8.0, with the correct protobuf version: protobuf-3.5.2.post1), and everything works fine.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import seaborn as sns
import tensorflow as tf                            # importing Tensorflow

%matplotlib inline
plt.style.use("fivethirtyeight")        # Styling plots like FiveThirtyEight

import warnings
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format="retina" # improves resolution of plots

And everything imports fine.

Then I install tfp-nightly and run the following:

import tensorflow_probability as tfp               # and Tensorflow probability
from tensorflow_probability import edward2 as ed   # Edwardlib extension

tfd = tfp.distributions             # Basic probability distribution toolkit
tfb = tfp.distributions.bijectors   # and their modifiers

# Eager Execution
# tfe = tf.contrib.eager
# tfe.enable_eager_execution()

But I get this error when trying to import tensorflow_probability

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-11-ebc2c1a2a0fa> in <module>()
----> 1 import tensorflow_probability as tfp               # Tensorflow probability
      2 from tensorflow_probability import edward2 as ed   # Edwardlib extension
      3 
      4 tfd = tfp.distributions             # Basic probability distribution toolkit
      5 tfb = tfp.distributions.bijectors   # and their modifiers

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/__init__.py in <module>()
     19 
     20 # from tensorflow_probability.google import staging  # DisableOnExport
---> 21 from tensorflow_probability.python import *  # pylint: disable=wildcard-import

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/__init__.py in <module>()
     19 from __future__ import print_function
     20 
---> 21 from tensorflow_probability.python import edward2
     22 from tensorflow_probability.python import glm
     23 from tensorflow_probability.python import layers

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/edward2/__init__.py in <module>()
     20 
     21 # pylint: disable=wildcard-import
---> 22 from tensorflow_probability.python.edward2.generated_random_variables import *
     23 from tensorflow_probability.python.edward2.generated_random_variables import __all__ as rv_all
     24 from tensorflow_probability.python.edward2.interceptor import get_interceptor

/usr/local/lib/python3.6/dist-packages/tensorflow_probability/python/edward2/generated_random_variables.py in <module>()
     27 from tensorflow_probability.python.util import docstring as docstring_util
     28 
---> 29 tfd = tf.contrib.distributions
     30 
     31 __all__ = [

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py in __getattr__(self, item)
     51 
     52   def __getattr__(self, item):
---> 53     module = self._load()
     54     return getattr(module, item)
     55 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py in _load(self)
     40   def _load(self):
     41     # Import the target module and insert it into the parent's namespace
---> 42     module = importlib.import_module(self.__name__)
     43     self._parent_module_globals[self._local_name] = module
     44 

/usr/lib/python3.6/importlib/__init__.py in import_module(name, package)
    124                 break
    125             level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127 
    128 

/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/__init__.py in <module>()
     25 from tensorflow.contrib import batching
     26 from tensorflow.contrib import bayesflow
---> 27 from tensorflow.contrib import checkpoint
     28 from tensorflow.contrib import cloud
     29 from tensorflow.contrib import cluster_resolver

/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/__init__.py in <module>()
     31 from __future__ import print_function
     32 
---> 33 from tensorflow.contrib.checkpoint.python.containers import UniqueNameTracker
     34 from tensorflow.contrib.checkpoint.python.split_dependency import split_dependency
     35 from tensorflow.contrib.checkpoint.python.visualize import dot_graph_from_checkpoint

/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/containers.py in <module>()
     18 from __future__ import print_function
     19 
---> 20 from tensorflow.python.training.checkpointable import base as checkpointable_lib
     21 
     22 

ImportError: cannot import name 'base'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------

This only just started happening after I entered pip install --upgrade tfp-nightly into the terminal, which leads me to conclude that something within the combination of tb-nightly-1.9.0a20180521, tf-nightly-1.9.0.dev20180521, and tfp-nightly-0.0.1.dev20180521 is not working. I had previously corrected name errors with tensorflow itself that were easily fixable by uninstalling tensorflow and protobuf and then reinstalling the most recent version of tensorflow (without an installed tf-nightly to interfere during the process). This error has completely resisted those efforts, including uninstalling tf-nightly, tfp-nightly, and tb-nightly both before and after uninstalling tensorflow and protobuf.

This error currently applies to Python 3, and similarly to Python 2. When I run the scripts above in a Python 2 environment, with tensorflow 1.8 and the appropriate protobuf version, the following stack trace emerges:

ImportErrorTraceback (most recent call last)
<ipython-input-12-ebc2c1a2a0fa> in <module>()
----> 1 import tensorflow_probability as tfp               # Tensorflow probability
      2 from tensorflow_probability import edward2 as ed   # Edwardlib extension
      3 
      4 tfd = tfp.distributions             # Basic probability distribution toolkit
      5 tfb = tfp.distributions.bijectors   # and their modifiers

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/__init__.py in <module>()
     19 
     20 # from tensorflow_probability.google import staging  # DisableOnExport
---> 21 from tensorflow_probability.python import *  # pylint: disable=wildcard-import

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/__init__.py in <module>()
     19 from __future__ import print_function
     20 
---> 21 from tensorflow_probability.python import edward2
     22 from tensorflow_probability.python import glm
     23 from tensorflow_probability.python import layers

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/edward2/__init__.py in <module>()
     20 
     21 # pylint: disable=wildcard-import
---> 22 from tensorflow_probability.python.edward2.generated_random_variables import *
     23 from tensorflow_probability.python.edward2.generated_random_variables import __all__ as rv_all
     24 from tensorflow_probability.python.edward2.interceptor import get_interceptor

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/edward2/generated_random_variables.py in <module>()
     27 from tensorflow_probability.python.util import docstring as docstring_util
     28 
---> 29 tfd = tf.contrib.distributions
     30 
     31 __all__ = [

/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/lazy_loader.pyc in __getattr__(self, item)
     51 
     52   def __getattr__(self, item):
---> 53     module = self._load()
     54     return getattr(module, item)
     55 

/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/lazy_loader.pyc in _load(self)
     40   def _load(self):
     41     # Import the target module and insert it into the parent's namespace
---> 42     module = importlib.import_module(self.__name__)
     43     self._parent_module_globals[self._local_name] = module
     44 

/usr/lib/python2.7/importlib/__init__.pyc in import_module(name, package)
     35             level += 1
     36         name = _resolve_name(name[level:], package, level)
---> 37     __import__(name)
     38     return sys.modules[name]

/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py in <module>()
     23 
     24 # Add projects here, they will show up under tf.contrib.
---> 25 from tensorflow.contrib import batching
     26 from tensorflow.contrib import bayesflow
     27 from tensorflow.contrib import checkpoint

ImportError: cannot import name batching


hmc with adaptive step size in `mcmc.sample`

I adopted the adaptive step-size code provided in the example in the `mcmc.hmc` documentation.
The HMC step size is adapted during warmup, and the final warmup step size is used to sample from the posteriors.

I was wondering if it's possible to somehow do adaptive step sizes with `mcmc.sample` to run (multiple) HMC chains.
Or should I resort to implementing the adaptive step size at a low level, as done in the `mcmc.hmc` documentation, and copy the final step_size and current_state to a new HMC kernel to use with `mcmc.sample`?
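For reference, a minimal sketch of how this can look in more recent TFP releases, assuming the `SimpleStepSizeAdaptation` kernel wrapper is available (the standard-normal target and all constants below are purely illustrative):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative target: a standard normal posterior.
target = tfd.Normal(loc=0., scale=1.)

num_burnin_steps = 500

hmc = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target.log_prob,
    step_size=0.1,
    num_leapfrog_steps=3)

# Adapt the step size during (most of) burn-in only; it is held fixed
# afterwards, matching the warmup-then-sample pattern described above.
adaptive_hmc = tfp.mcmc.SimpleStepSizeAdaptation(
    inner_kernel=hmc,
    num_adaptation_steps=int(0.8 * num_burnin_steps))

# A batched current_state runs multiple independent chains at once.
samples, step_sizes = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=num_burnin_steps,
    current_state=tf.zeros([4]),  # four chains of a scalar parameter
    kernel=adaptive_hmc,
    trace_fn=lambda _, pkr: pkr.inner_results.accepted_results.step_size)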

seed_stream.py ImportError & Cannot upgrade to anything later than tf-nightly 1.8.x

I've been running into a bit of trouble with installing tensorflow_probability.

Installation works fine on Ubuntu 16.04, but fails when I try to install it on Windows or Google Colab (for Python 3, at least).

Whenever I try to import tensorflow_probability, it runs into an error like the following:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-31-d5c6f05abbe6> in <module>()
     10 
     11 import tensorflow as tf # importing Tensorflow
---> 12 import tensorflow_probability as tfp # and Tensorflow probability
     13 from tensorflow_probability import edward2 as ed # Edwardlib extension
     14 import warnings

/content/.local/lib/python3.6/site-packages/tensorflow_probability/__init__.py in <module>()
     19 
     20 # from tensorflow_probability.google import staging  # DisableOnExport
---> 21 from tensorflow_probability.python import *  # pylint: disable=wildcard-import

/content/.local/lib/python3.6/site-packages/tensorflow_probability/python/__init__.py in <module>()
     22 from tensorflow_probability.python import glm
     23 from tensorflow_probability.python import layers
---> 24 from tensorflow_probability.python import mcmc
     25 from tensorflow_probability.python import monte_carlo
     26 from tensorflow_probability.python import optimizer

/content/.local/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/__init__.py in <module>()
     24 from tensorflow_probability.python.mcmc.hmc import UncalibratedHamiltonianMonteCarlo
     25 from tensorflow_probability.python.mcmc.kernel import TransitionKernel
---> 26 from tensorflow_probability.python.mcmc.langevin import MetropolisAdjustedLangevinAlgorithm
     27 from tensorflow_probability.python.mcmc.langevin import UncalibratedLangevin
     28 from tensorflow_probability.python.mcmc.metropolis_hastings import MetropolisHastings

/content/.local/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/langevin.py in <module>()
     28 from tensorflow_probability.python.mcmc import util as mcmc_util
     29 
---> 30 from tensorflow.contrib.distributions.python.ops import seed_stream
     31 from tensorflow.python.ops.distributions import util as distributions_util
     32 

ImportError: cannot import name 'seed_stream'


It appears that on Ubuntu, tf-nightly==1.9.0.dev20180509, tfp-nightly==0.0.1.dev20180510, and tb-nightly==1.9.0a20180426 install correctly. On any other machine, however, no version beyond 1.8.x is available from pip. This appears to be the source of the dependency errors, but so far the package has defied installation via Anaconda and on Google Colab.

Any ideas what might be causing this? (The reason I've been trying to install it in other environments is that my specific Ubuntu machine doesn't have decent CPU capacity, not to mention its nonexistent GPU capacity.)

MNIST VAE Documentation

Hi team,

You've done some great work creating Jupyter notebooks for the Eight Schools problem etc.

I'm working through your example scripts, in particular: https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/vae.py

However, it's quite tough to grasp what's going on inside the VAE from the code comments alone, and I'm struggling to decipher the large amount of output being produced. The questions I'm looking to answer are:

  • what's happening inside the VAE at each time step
  • what the output images at each time step signify
  • what a good value for my loss would be
  • what the expected result is

Do you have any existing documentation that may cover this? Even a small amount of documentation would greatly help me here.

If you don't, that's fine, perhaps I can help to produce some?

Thanks,
George

Conditional Random Fields

I'm just wondering whether this tool can support discriminative models like conditional random fields, and whether I can build flexible, user-defined graph structures with it.

feature request: gradients of expected values

Algorithmic construction of surrogates to estimate gradients of expected values has always seemed like a natural feature for tensorflow. I think we tried it a few years back, but it never got off the ground. Maybe the time is now, possibly even using modern surrogates such as DiCE, which accommodate higher-order derivatives. There is also some rumbling about this in the edward community (cf. this issue), but I thought I would mention it here to see what the tensorflow probability community thought.

If you're not familiar with the so-called "stochastic computational graph" (SCG) scene, the bottom line is this:

Say we want to estimate the gradient of the expected value of a random variable with respect to some parameters. If we can use the reparametrization trick then it turns out to be really easy -- but in many cases that trick doesn't apply. In particular, consider the following case:

  • Say loss is a random tensor, whose distribution is somehow determined by another tensor T. For example, maybe loss is a sample from a negative binomial distribution, and T gives the alpha parameter. Or maybe loss is some complicated function of a sample from a negative binomial distribution where T gives the alpha parameter.
  • So if I call sess.run(loss) that will give me a sample from loss, which can be understood as an unbiased estimator for the expected value of loss.
  • If I call sess.run(tf.gradients(loss,T)) that will generally not be an unbiased estimator for the derivative of the expected value of loss with respect to T.

However, since at least 2016 we have known how to write a general function surrogate(loss) that crawls the graph and automatically produces a tensor loss_surrogate such that

  • If I call sess.run(tf.gradients(surrogate(loss),T)) then I will get an unbiased estimator for the derivative of the expected value of loss with respect to T.

To work, the algorithm basically just needs to be able to compute the pmf or pdf for any op that is stochastic in a way that depends on its input. In most cases we can write any complicated random computation as a composition of simple distributions whose likelihoods we know, so this is no problem. The algorithm can then define a loss_surrogate tensor that lets you get estimators of the gradient of expected values. Note that you don't have to know ahead of time what you might want to take the gradient with respect to.
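To make this concrete, here is a minimal hand-rolled sketch of the score-function ("REINFORCE") surrogate for a single stochastic node, in the same TF1 graph style as above. It uses a Gamma distribution for concreteness instead of the negative binomial from the example, and a real surrogate(loss) would discover stochastic ops by crawling the graph rather than being built by hand:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

T = tf.Variable(2.0)  # the parameter controlling the sampling distribution
dist = tfd.Gamma(concentration=T, rate=1.)
x = tf.stop_gradient(dist.sample(1000))  # block any pathwise gradient
loss = tf.square(x - 3.)                 # downstream per-sample loss

# Score-function surrogate: since grad E[loss] = E[loss * grad log p(x; T)],
# differentiating loss * log_prob(x) gives an unbiased gradient estimator.
loss_surrogate = tf.reduce_mean(tf.stop_gradient(loss) * dist.log_prob(x))

grad_estimate = tf.gradients(loss_surrogate, T)
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(grad_estimate))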

It would be super nice to implement this surrogate function for tf. I think it would actually be fairly straightforward to implement, but we would definitely need community support to keep it maintained. We would need to handle corner cases for random ops whose density can't be written down. Moreover, any time someone invents a new way of drawing randomness, we would need to think about how to make sure it plays nicely with whatever surrogate(loss) function we might cook up.

Support batching in tfp.edward2.make_log_joint_fn

Currently, the log-probability function returned by Edward2's make_log_joint_fn produces a scalar Tensor, summing over any sample and batch dimensions. This prevents sample/batch-wise manipulations of the log-probability.

A naive solution is to simply remove the tf.reduce_sum. This assumes that any computations broadcast when the log-probabilities are summed together. That sounds okay, but it produces wrong results. For example,

from tensorflow_probability import edward2 as ed

def model():
  loc = ed.Normal(loc=0., scale=1., name="loc")
  x = ed.Normal(loc=loc, scale=0.5, sample_shape=5, name="x")
  return x

will have a log-joint of

import tensorflow as tf
tfd = tf.contrib.distributions

def log_joint(loc, x):
  log_prob = tfd.Normal(loc=0., scale=1.).log_prob(loc)
  log_prob += tfd.Normal(loc=loc, scale=0.5).log_prob(x)
  return log_prob

This adds the global variable's log-probability element-wise to the vector of 5 data-point log-probabilities, so a final tf.reduce_sum will count the global variable's log-probability 5 times instead of once.

Another solution is to do no reducing at all and have the log-joint return a dict mapping random-variable names to their sample/batch-wise log-probabilities. Yet another is to keep the reduce-sum over sample dimensions but assume broadcasting over batch dimensions.
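As a rough illustration of the dict-returning option (a sketch only; log_joint_parts is a hypothetical name, and tfd is the alias defined in the snippet above):

def log_joint_parts(loc, x):
  # Each entry keeps its own sample/batch shape; nothing is reduced yet.
  return {
      "loc": tfd.Normal(loc=0., scale=1.).log_prob(loc),  # scalar
      "x": tfd.Normal(loc=loc, scale=0.5).log_prob(x),    # shape [5]
  }

# The caller then reduces each part independently before adding, so the
# global variable's log-probability is counted exactly once:
# total = sum(tf.reduce_sum(lp) for lp in log_joint_parts(loc, x).values())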

`ImportError: cannot import name batching` when importing TFP in Python 2

So I used the following code to import my dependencies:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import seaborn as sns
import tensorflow as tf                            # importing Tensorflow
import tensorflow_probability as tfp               # and Tensorflow probability
from tensorflow_probability import edward2 as ed   # Edwardlib extension

tfd = tfp.distributions             # Basic probability distribution toolkit
tfb = tfp.distributions.bijectors   # and their modifiers

# Eager Execution
# tfe = tf.contrib.eager
# tfe.enable_eager_execution()

%matplotlib inline
plt.style.use("fivethirtyeight")        # Styling plots like FiveThirtyEight

import warnings
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format="retina" # improves resolution of plots

But I get this error when trying to import tensorflow_probability:

ImportErrorTraceback (most recent call last)
<ipython-input-11-72abfe18676d> in <module>()
      8 import seaborn as sns
      9 import tensorflow as tf                            # importing Tensorflow
---> 10 import tensorflow_probability as tfp               # and Tensorflow probability
     11 from tensorflow_probability import edward2 as ed   # Edwardlib extension
     12 

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/__init__.py in <module>()
     19 
     20 # from tensorflow_probability.google import staging  # DisableOnExport
---> 21 from tensorflow_probability.python import *  # pylint: disable=wildcard-import

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/__init__.py in <module>()
     19 from __future__ import print_function
     20 
---> 21 from tensorflow_probability.python import edward2
     22 from tensorflow_probability.python import glm
     23 from tensorflow_probability.python import layers

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/edward2/__init__.py in <module>()
     20 
     21 # pylint: disable=wildcard-import
---> 22 from tensorflow_probability.python.edward2.generated_random_variables import *
     23 from tensorflow_probability.python.edward2.generated_random_variables import __all__ as rv_all
     24 from tensorflow_probability.python.edward2.interceptor import get_interceptor

/usr/local/lib/python2.7/dist-packages/tensorflow_probability/python/edward2/generated_random_variables.py in <module>()
     27 from tensorflow_probability.python.util import docstring as docstring_util
     28 
---> 29 tfd = tf.contrib.distributions
     30 
     31 __all__ = [

/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/lazy_loader.pyc in __getattr__(self, item)
     51 
     52   def __getattr__(self, item):
---> 53     module = self._load()
     54     return getattr(module, item)
     55 

/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/lazy_loader.pyc in _load(self)
     40   def _load(self):
     41     # Import the target module and insert it into the parent's namespace
---> 42     module = importlib.import_module(self.__name__)
     43     self._parent_module_globals[self._local_name] = module
     44 

/usr/lib/python2.7/importlib/__init__.pyc in import_module(name, package)
     35             level += 1
     36         name = _resolve_name(name[level:], package, level)
---> 37     __import__(name)
     38     return sys.modules[name]

/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/__init__.py in <module>()
     23 
     24 # Add projects here, they will show up under tf.contrib.
---> 25 from tensorflow.contrib import batching
     26 from tensorflow.contrib import bayesflow
     27 from tensorflow.contrib import checkpoint

ImportError: cannot import name batching


This only just started happening after I entered pip install --upgrade tfp-nightly into the terminal, which leads me to conclude that something within the combination of tb-nightly-1.9.0a20180516, tf-nightly-1.9.0.dev20180516, and tfp-nightly-0.0.1.dev20180516 is not working.

It was working previously but no longer does. It's also worth mentioning that I was running this in a Google Colab notebook under Python 2.

Activation function as string

There seem to be issues either with using a string name for an activation function or with using an InputLayer. In particular, when I run

import tensorflow as tf
import tensorflow_probability as tfp

input_layer = tf.keras.layers.InputLayer((10, 10))
layer = tfp.layers.Convolution1DFlipout(32, kernel_size=3, activation='relu')
model = tf.keras.Sequential()
model.add(input_layer)
model.add(layer)

I get

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-19-e22be31ff736> in <module>()
      4 m = tf.keras.Sequential()
      5 m.add(input_layer)
----> 6 m.add(layer)

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/sequential.py in add(self, layer)
    183         self.inputs = network.get_source_inputs(self.outputs[0])
    184     elif self.outputs:
--> 185       output_tensor = layer(self.outputs[0])
    186       if isinstance(output_tensor, list):
    187         raise TypeError('All layers in a Sequential model '

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    312     """
    313     # Actually call the layer (optionally building it).
--> 314     output = super(Layer, self).__call__(inputs, *args, **kwargs)
    315 
    316     if args and getattr(self, '_uses_inputs_arg', True):

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs)
    714 
    715         if not in_deferred_mode:
--> 716           outputs = self.call(inputs, *args, **kwargs)
    717           if outputs is None:
    718             raise ValueError('A layer\'s `call` method should return a Tensor '

~/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/layers/conv_variational.py in call(self, inputs)
    248     outputs = self._apply_variational_bias(outputs)
    249     if self.activation is not None:
--> 250       outputs = self.activation(outputs)
    251     if not self._built_kernel_divergence:
    252       self._apply_divergence(self.kernel_divergence_fn,

TypeError: 'str' object is not callable

However, if I comment out the line where I add the input layer to the model, there's no error. I'm unsure whether I shouldn't be using InputLayer with tfp, or whether there's a bug where activation-function names are only supported for certain layer configurations.
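One possible workaround while this is sorted out (a sketch, not a confirmed fix): pass the activation as a callable rather than a string, so the layer never has to resolve the name itself:

import tensorflow as tf
import tensorflow_probability as tfp

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer((10, 10)),
    # Passing the callable tf.nn.relu sidesteps string-name resolution.
    tfp.layers.Convolution1DFlipout(32, kernel_size=3, activation=tf.nn.relu),
])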

tfp-nightly Import error

Hi TFP Team,

I recently ran a Colab that functioned last week. It seems that the latest TFP nightly install has trouble importing the following function:

---> 22 from tensorflow.python.keras._impl.keras.backend import abs
     23 from tensorflow.python.keras._impl.keras.backend import all
     24 from tensorflow.python.keras._impl.keras.backend import any

ImportError: cannot import name 'abs'

So, I reverted the tfp-nightly install to an earlier developer version based on the syntax available to me with pip show:

!pip install tfp-nightly==0.0.1.dev20180512

Unfortunately, I then ran into the following error:

---> 49 from tensorflow.python import pywrap_tensorflow
     50
     51 # Protocol buffers

ImportError: cannot import name pywrap_tensorflow

This supposedly has to do with running TensorFlow from inside its install directory, so I tried to cd out of the install directory, but Colab limits changing directories. I will attempt to run this program in a Jupyter notebook to see whether that resolves the issue, and will update you then.

Tutorial request for SGLD

I get the following error when I call apply_gradients() or use minimize(). Can you please guide me through the issue? By the way, a tutorial would be useful.

<ipython-input-9-a2eece1e2073> in <module>()
     14 lossSummary = tf.summary.scalar('Loss', loss)
     15 sgd_optimizer = tfp.optimizer.StochasticGradientLangevinDynamics(3.0,preconditioner_decay_rate=0.95,num_pseudo_batches=10)
---> 16 sgd_op = sgd_optimizer.minimize(loss)

c:\users\sanjay\anaconda3\lib\site-packages\tensorflow\python\training\optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    398 
    399     return self.apply_gradients(grads_and_vars, global_step=global_step,
--> 400                                 name=name)
    401 
    402   def compute_gradients(self, loss, var_list=None,

c:\users\sanjay\anaconda3\lib\site-packages\tensorflow\python\training\optimizer.py in apply_gradients(self, grads_and_vars, global_step, name)
    558           scope_name = var.op.name
    559         with ops.name_scope("update_" + scope_name), ops.colocate_with(var):
--> 560           update_ops.append(processor.update_op(self, grad))
    561       if global_step is None:
    562         apply_updates = self._finish(update_ops, name)

c:\users\sanjay\anaconda3\lib\site-packages\tensorflow\python\training\optimizer.py in update_op(self, optimizer, g)
    146       return optimizer._resource_apply_sparse_duplicate_indices(
    147           g.values, self._v, g.indices)
--> 148     update_op = optimizer._resource_apply_dense(g, self._v)
    149     if self._v.constraint is not None:
    150       with ops.control_dependencies([update_op]):

c:\users\sanjay\anaconda3\lib\site-packages\tensorflow\python\training\optimizer.py in _resource_apply_dense(self, grad, handle)
    777       An `Operation` which updates the value of the variable.
    778     """
--> 779     raise NotImplementedError()
    780 
    781   def _resource_apply_sparse_duplicate_indices(self, grad, handle, indices):

NotImplementedError: 
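For what it's worth, here is a minimal sketch of the same call pattern in TF1 graph style, using the constructor arguments from the snippet above. The use_resource=False workaround is an untested guess based on the unimplemented _resource_apply_dense in the traceback:

import tensorflow as tf
import tensorflow_probability as tfp

# use_resource=False creates a legacy (non-resource) variable, which should
# take the optimizer's _apply_dense path rather than the unimplemented
# _resource_apply_dense seen above.
w = tf.get_variable('w', initializer=1.0, use_resource=False)
loss = tf.square(w - 3.)

opt = tfp.optimizer.StochasticGradientLangevinDynamics(
    3.0, preconditioner_decay_rate=0.95, num_pseudo_batches=10)
train_op = opt.minimize(loss)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for _ in range(100):
    sess.run(train_op)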

Tutorial request: For saving model as SavedModel

I have been able to save the model as a SavedModel, but when I use the saved model, it gives me random output. Can someone please provide a working example of how to save a model trained with TensorFlow Probability?

release timeline?

This is absolutely amazing work! I'm really excited to see where this goes and contribute if I can.

I'm switching to using probability as the sampler backend for greta. The technical implementation is all done on the dev branch of greta, and works like a charm. But I can't release this version of greta until there's a stable release of probability that I can point users to install.

Do you have a rough timeline for when the first version might be released, so I can plan ahead?
