brendanhasz / probflow

A Python package for building Bayesian models with TensorFlow or PyTorch

Home Page: http://probflow.readthedocs.io

License: MIT License

Python 99.82% Makefile 0.18%
machine-learning data-science python tensorflow statistics bayesian-inference bayesian-statistics bayesian-methods bayesian-neural-networks pytorch bayesian

probflow's Introduction

ProbFlow


ProbFlow is a Python package for building probabilistic Bayesian models with TensorFlow 2.0 or PyTorch, performing stochastic variational inference with those models, and evaluating the models' inferences. It provides both high-level modules for building Bayesian neural networks and low-level parameters and distributions for constructing custom Bayesian models.

It's very much still a work in progress.

Getting Started

ProbFlow allows you to quickly and painlessly build, fit, and evaluate custom Bayesian models (or ready-made ones!) which run on top of either TensorFlow 2.0 + TensorFlow Probability, or PyTorch.

With ProbFlow, the core building blocks of a Bayesian model are parameters and probability distributions (and, of course, the input data). Parameters define how the independent variables (the features) predict the probability distribution of the dependent variables (the target).

For example, a simple Bayesian linear regression

y ~ Normal(w*x + b, σ)

can be built by creating a ProbFlow Model. This is just a class which inherits from pf.Model (or pf.ContinuousModel or pf.CategoricalModel, depending on the target type). The __init__ method sets up the parameters, and the __call__ method performs a forward pass of the model, returning the predicted probability distribution of the target:

import probflow as pf
import tensorflow as tf

class LinearRegression(pf.ContinuousModel):

    def __init__(self):
        self.weight = pf.Parameter(name='weight')
        self.bias = pf.Parameter(name='bias')
        self.std = pf.ScaleParameter(name='sigma')

    def __call__(self, x):
        return pf.Normal(x*self.weight()+self.bias(), self.std())

model = LinearRegression()

Then, the model can be fit using stochastic variational inference, in one line:

# x and y are Numpy arrays or pandas DataFrame/Series
model.fit(x, y)

You can generate predictions for new data:

# x_test is a Numpy array or pandas DataFrame
>>> model.predict(x_test)
[0.983]

Compute probabilistic predictions for new data, with 95% confidence intervals:

model.pred_dist_plot(x_test, ci=0.95)

[Plot: predictive distributions for x_test with 95% confidence intervals]

Evaluate your model's performance using metrics:

>>> model.metric('mse', x_test, y_test)
0.217

Inspect the posterior distributions of your fit model's parameters, with 95% confidence intervals:

model.posterior_plot(ci=0.95)

[Plot: posterior distributions of the weight, bias, and sigma parameters with 95% confidence intervals]

Investigate how well your model is capturing uncertainty by examining how accurate its predictive intervals are:

>>> model.pred_dist_coverage(ci=0.95)
0.903

and diagnose where your model is having problems capturing uncertainty:

model.coverage_by(ci=0.95)

[Plot: predictive coverage as a function of the independent variable]

ProbFlow also provides more complex modules, such as those required for building Bayesian neural networks, and you can freely mix ProbFlow with TensorFlow (or PyTorch!) code. For example, even a somewhat complex multi-layer Bayesian neural network like this:

[Diagram: a multi-layer density network with separate mean and standard-deviation heads]

can be built and fit with ProbFlow in only a few lines:

class DensityNetwork(pf.ContinuousModel):

    def __init__(self, units, head_units):
        self.core = pf.DenseNetwork(units)
        self.mean = pf.DenseNetwork(head_units)
        self.std  = pf.DenseNetwork(head_units)

    def __call__(self, x):
        z = tf.nn.relu(self.core(x))
        return pf.Normal(self.mean(z), tf.exp(self.std(z)))

# Create the model
model = DensityNetwork([x.shape[1], 256, 128], [128, 64, 32, 1])

# Fit it!
model.fit(x, y)

For convenience, ProbFlow also includes several pre-built models for standard tasks (such as linear regressions, logistic regressions, and multi-layer dense neural networks). For example, the above linear regression example could have been done with much less work by using ProbFlow's ready-made LinearRegression model:

model = pf.LinearRegression(x.shape[1])
model.fit(x, y)

And a multi-layer Bayesian neural net can be made easily using ProbFlow's ready-made DenseRegression model:

model = pf.DenseRegression([x.shape[1], 128, 64, 1])
model.fit(x, y)

Using parameters and distributions as simple building blocks, ProbFlow allows for the painless creation of more complicated Bayesian models like generalized linear models, deep time-to-event models, neural matrix factorization models, and Gaussian mixture models. You can even mix probabilistic and non-probabilistic models! Take a look at the examples and the user guide for more!
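For instance, here is a minimal sketch of one such generalized linear model, a Poisson regression, using ProbFlow's DiscreteModel base class and Poisson distribution:

import tensorflow as tf
import probflow as pf

class PoissonRegression(pf.DiscreteModel):
    """A GLM with an exponential link and a Poisson observation distribution"""

    def __init__(self, d):
        self.w = pf.Parameter([d, 1], name='weights')
        self.b = pf.Parameter(name='bias')

    def __call__(self, x):
        return pf.Poisson(tf.exp(x @ self.w() + self.b()))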

Installation

If you already have your desired backend installed (i.e., TensorFlow + TensorFlow Probability, or PyTorch), then you can just do:

pip install probflow

Or, to install both ProbFlow and the CPU version of TensorFlow + TensorFlow Probability,

pip install probflow[tensorflow]

Or, to install ProbFlow and the GPU version of TensorFlow + TensorFlow Probability,

pip install probflow[tensorflow_gpu]

Or, to install ProbFlow and PyTorch,

pip install probflow[pytorch]

Support

Post bug reports, feature requests, and tutorial requests in GitHub issues.

Contributing

Pull requests are totally welcome! Any contribution would be appreciated, from things as minor as pointing out typos to things as major as writing new applications and distributions.

Why the name, ProbFlow?

Because it's a package for probabilistic modeling, and it was built on TensorFlow. ¯\_(ツ)_/¯

probflow's People

Contributors

brendanhasz, srwi


probflow's Issues

Gaussian Mixture Model example bug

Related to the example:
https://probflow.readthedocs.io/en/latest/example_gmm.html

The model is defined as follows:

import probflow as pf
import tensorflow_probability as tfp
tfd = tfp.distributions

class GaussianMixtureModel(pf.Model):
    def __init__(self, k, d, *args):
        super().__init__(*args)
        self.mu = pf.Parameter([k, d])
        self.sigma = pf.ScaleParameter([k, d])
        self.theta = pf.DirichletParameter(k)

    def __call__(self):
        dists = tfd.MultivariateNormalDiag(self.mu(), self.sigma())
        return pf.Mixture(dists, probs=self.theta())

And then evaluated as:

import numpy as np

Np = 100  # number of grid points
xx = np.linspace(-6, 6, Np)
Xp, Yp = np.meshgrid(xx, xx)
Pp = np.column_stack([Xp.ravel(), Yp.ravel()])
probs = model.prob(Pp.astype('float32'))

This produces the following error, because there is no X -> Y mapping in this example; it's a purely generative model:

TypeError: __call__() takes 1 positional argument but 2 were given

model.prob calls the above implementation of __call__, which doesn't accept any x.

It can be easily fixed by passing the values as y:

probs = model.prob(x=None, y=Pp.astype('float32'))

I'm not sure whether it's a documentation error or an implementation error. It would seem more natural to just provide the X value if there is no Y in the model.

Inverted Logistic Regression coefficients

Hi,

I'm comparing the Logistic Regression application with TFP.glm and PyMC. I'm consistently getting inverted signs on the coefficients for the regressors from probflow vs TFP.glm and PyMC. Below is the summary from the pymc model, with the coefficients from tfp and probflow appended:

coeff         pymc_mean    pf_mean      tfp_glm_mean
coeffs[0]         1.073    -0.884923        0.245316
coeffs[1]        -1.394     0.593529       -0.760361
coeffs[2]         0.775    -0.524494        0.000000
coeffs[3]         0.654    -0.413936       -0.112381
coeffs[4]         1.052    -0.811504        0.179820
coeffs[5]        -0.973     0.203751       -0.368867
coeffs[6]        -0.101    -0.760126        0.487476
coeffs[7]        -0.389     0.094798        0.000000
coeffs[8]        -1.542     1.092922       -1.059176
coeffs[9]        -2.604     1.211063       -1.140007
coeffs[10]       -1.814     0.985445       -0.953259
coeffs[11]       -1.752     1.183555       -1.161227
coeffs[12]       -0.954     0.547342       -0.444315
coeffs[13]       -1.981     0.841193       -0.525483
coeffs[14]      -26.866     0.654583        0.000000
coeffs[15]      -26.745     0.328704        0.000000
coeffs[16]       -1.228     0.516039       -0.276005
coeffs[17]       -1.282     0.318989        0.000000
coeffs[18]       26.991    -0.357874        0.000000
coeffs[19]       -1.081     0.551722       -0.392427

You'll see that although the tfp.glm coefficients differ in magnitude from pymc's, the signs are generally the same, while the probflow coefficients generally have the opposite sign. This will create issues when trying to interpret probflow models. Any ideas what might be the cause of this?

Doc extension

Hi @brendanhasz

I found your project just yesterday while searching for probabilistic programming solutions, because I wanted to try PyTorch-based frameworks. I think it would be good to extend the documentation with some discussion of the differences from other existing frameworks like Pyro, and of where probflow stands out.

Add batch_norm kwarg to DenseNetwork

Add batch_norm kwarg to DenseNetwork (a bool).

Also add a batch_norm_loc kwarg, which is either 'after' (adds batch norm after the activation function, the default) or 'before' (adds batch norm before the activation function, as originally suggested).
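A hypothetical usage sketch once both kwargs exist (neither batch_norm nor batch_norm_loc is current API):

import probflow as pf

# Proposed kwargs, not yet implemented
net = pf.DenseNetwork([7, 128, 64, 1], batch_norm=True, batch_norm_loc='after')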

Dimension Error when running GAN tutorial with TF 2.0 CPU

Hi,

Great package! Really enjoy the simple API that you've designed. I started playing around with the GAN model and am receiving the following error when calling fit.
InvalidArgumentError: Matrix size-incompatible: In[0]: [128,3], In[1]: [7,128] [Op:MatMul] name: matmul/

Full traceback:


---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-19-e11e146e9aeb> in <module>
      1 # Fit both models by fitting the discriminator w/ the callback
----> 2 D.fit(x_meta.values[:,:3], epochs=10, callbacks=[train_g])

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in fit(self, x, y, batch_size, epochs, shuffle, optimizer, optimizer_kwargs, learning_rate, flipout, callbacks)
    256             # Update gradients for each batch
    257             for x_data, y_data in self._data:
--> 258                 self.train_step(x_data, y_data)
    259 
    260             # Run callbacks at end of epoch

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in train_step(self, x_data, y_data)
    161     def train_step(self, x_data, y_data):
    162         """Perform one training step"""
--> 163         self._train_fn(x_data, y_data)
    164 
    165 

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in train_step(x_data, y_data)
    142             with Sampling(n=1, flipout=flipout):
    143                 with tf.GradientTape() as tape:
--> 144                     log_loss = self.log_likelihood(x_data, y_data)/nb
    145                     kl_loss = self.kl_loss()/n + self.kl_loss_batch()/nb
    146                     elbo_loss = kl_loss - log_loss

<ipython-input-7-d9d7a30c87d5> in log_likelihood(self, _, x)
     29     def log_likelihood(self, _, x):
     30         labels = tf.ones([x.shape[0], 1])
---> 31         true_ll = self(x).log_prob(labels)
     32         fake_ll = self(self.G(x)).log_prob(0*labels)
     33         return tf.reduce_sum(true_ll + fake_ll)

<ipython-input-7-d9d7a30c87d5> in __call__(self, x)
     25 
     26     def __call__(self, x):
---> 27         return pf.Bernoulli(self.D(x))
     28 
     29     def log_likelihood(self, _, x):

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/applications.py in __call__(self, x)
    189     def __call__(self, x):
    190         for i in range(len(self.layers)):
--> 191             x = self.layers[i](x)
    192             x = self.activations[i](x)
    193         return x

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in __call__(self, x)
    174                 norm_samples = tf.random.normal([self.d_in, self.d_out])
    175                 w_samples = self.weights.variables['scale'] * norm_samples
--> 176                 w_noise = r*((x*s) @ w_samples)
    177                 w_outputs = x @ self.weights.variables['loc'] + w_noise
    178 

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py in binary_op_wrapper(x, y)
    897     with ops.name_scope(None, op_name, [x, y]) as name:
    898       if isinstance(x, ops.Tensor) and isinstance(y, ops.Tensor):
--> 899         return func(x, y, name=name)
    900       elif not isinstance(y, sparse_tensor.SparseTensor):
    901         try:

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py in wrapper(*args, **kwargs)
    178     """Call target, and fall back on dispatchers if there is a TypeError."""
    179     try:
--> 180       return target(*args, **kwargs)
    181     except (TypeError, ValueError):
    182       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
   2763     else:
   2764       return gen_math_ops.mat_mul(
-> 2765           a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
   2766 
   2767 

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py in mat_mul(a, b, transpose_a, transpose_b, name)
   6124       else:
   6125         message = e.message
-> 6126       _six.raise_from(_core._status_to_exception(e.code, message), None)
   6127   # Add nodes to the TensorFlow graph.
   6128   if transpose_a is None:

~/virtualenvs/tf2/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Matrix size-incompatible: In[0]: [128,3], In[1]: [7,128] [Op:MatMul] name: matmul/

The dataset I'm testing on has shape (8500,100), but I've played around a bit and nothing seems to help. I also tried changing get_flipout() to False, thinking that perhaps something in the flipout section was causing the error, but even so, the same error results.

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in __call__(self, x)
    187         # Without Flipout
    188         else:
--> 189             return x @ self.weights() + self.bias()
    190 
    191 

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py in r_binary_op_wrapper(y, x)
    923     with ops.name_scope(None, op_name, [x, y]) as name:
    924       x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
--> 925       return func(x, y, name=name)
    926 
    927   # Propagate func.__doc__ to the wrappers

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py in wrapper(*args, **kwargs)
    178     """Call target, and fall back on dispatchers if there is a TypeError."""
    179     try:
--> 180       return target(*args, **kwargs)
    181     except (TypeError, ValueError):
    182       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
   2763     else:
   2764       return gen_math_ops.mat_mul(
-> 2765           a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
   2766 
   2767 

~/virtualenvs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py in mat_mul(a, b, transpose_a, transpose_b, name)
   6124       else:
   6125         message = e.message
-> 6126       _six.raise_from(_core._status_to_exception(e.code, message), None)
   6127   # Add nodes to the TensorFlow graph.
   6128   if transpose_a is None:

~/virtualenvs/tf2/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Matrix size-incompatible: In[0]: [128,3], In[1]: [7,128] [Op:MatMul] name: matmul/

Any suggestions?

Figure out mac/windows multiprocessing issues

Getting errors w/ the multiprocessing in DataGenerator for windows builds on Python <3.8, and for Mac + Python >= 3.8.

AttributeError: Can't pickle local object 'DataGenerator.__iter__.<locals>.get_data'

Think it might be due to the switch of the default process start method from "fork" to "spawn" (see here). But that doesn't explain the Windows issues...

Example windows test failure

Example mac test failure

Tried doing multiprocessing.set_start_method("fork") for mac + python 3.8, but then just got RuntimeError: context has already been set (example)...

Contribution Guide

Hello!

First of all, I really like this package you've created! It's a really nice stepping stone into the world of Bayesian NN without the need for a lot of boilerplate code.

But I was wondering: do you have a priority list of potential contributions? Also, perhaps a schematic of what types of things should be added, or what things are out of scope for this package?

Thanks,
Emmanuel

Refactor tests

Tests are a bit of a mess... Refactor to match the structure of src/probflow, with unit tests for each module in their own file.

Allow MC KL divergence estimates

To allow for parameters to have variational posteriors for which the KL divergence from their prior can't be computed analytically (e.g. posterior = mixture of gaussians - though there are more efficient ways to estimate that than this...).

Add an n_mc_kl kwarg to Parameter (default None) which, when set to a positive int, uses that many samples from the posterior to estimate the KL divergence via:

D_KL(q ‖ p) ≈ (1/N) Σᵢ [log q(xᵢ) − log p(xᵢ)]

where q is the variational posterior distribution, p is the prior distribution, and the xᵢ are N samples from q.
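A minimal sketch of that estimator with plain TFP distributions (mc_kl_divergence is an illustrative helper, not ProbFlow API):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def mc_kl_divergence(q, p, n_samples=10):
    # Estimate KL(q || p) by averaging log q(x) - log p(x) over samples x ~ q
    x = q.sample(n_samples)
    return tf.reduce_mean(q.log_prob(x) - p.log_prob(x))

q = tfd.Normal(0.1, 0.9)  # variational posterior
p = tfd.Normal(0.0, 1.0)  # prior
print(mc_kl_divergence(q, p, n_samples=1000))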

Probabilistic option for Dense, DenseNetwork, Embedding, and BatchNormalization

Add a probabilistic kwarg (True or False) to Dense, DenseNetwork, Embedding, and BatchNormalization modules.

That way you can pretty easily do, say, a non-probabilistic net with a probabilistic linear layer on top (see Snoek et al., 2015 and Riquelme et al. 2018):

import tensorflow as tf
import probflow as pf

class NeuralLinear(pf.ContinuousModel):

    def __init__(self, units):
        self.net = pf.DenseNetwork(units, probabilistic=False)
        self.linear = pf.Dense(units[-1], 2, probabilistic=True)

    def __call__(self, x):
        a = self.linear(tf.math.relu(self.net(x)))
        return pf.Normal(a[..., 0], tf.exp(a[..., 1]))

And, set whether you want your embeddings to be probabilistic or not (for Embedding).

  • probabilistic kwarg for Dense
  • tests
  • probabilistic kwarg for DenseNetwork
  • tests
  • probabilistic kwarg for Embedding
  • tests
  • Add a “Neural Linear Model” section to the examples with the NeuralLinear example above.

Memory leak when training models

Especially noticeable when training on large datasets. Not sure if it's on ProbFlow's end or TensorFlow's (but almost surely ProbFlow's, haha).

Training on 1M rows from the taxi trip dataset w/ TF GPU caused a leak of around 50MB/epoch 😬 Was even worse w/ TF CPU

It's likely a problem with DataGenerator. The 50MB in that test was almost exactly the size of the x+y dataset, so DataGenerator is probably making copies when shuffling (instead of creating views) and somehow holding on to them ad infinitum.

Also was using MonitorMetric callback, that might be the problem instead (or the corresponding Model.metric / metric function).

Also some preliminary poking around w/ pympler found that a bunch of large pandas Series were being created, but wasn't training using pandas arrays, so don't know where those were coming from.

A few things to test:

  • Does training w/o shuffling solve the problem?
  • Does training w/o MonitorMetric callback solve the problem?

Simplified code causing the problem:

import numpy as np
import probflow as pf

# Data
x_train = np.random.randn(1000000, 8).astype('float32')
y_train = np.random.randn(1000000, 1).astype('float32')
x_val =   np.random.randn( 100000, 8).astype('float32')
y_val =   np.random.randn( 100000, 1).astype('float32')

# Callbacks
monitor_elbo = pf.MonitorELBO()
monitor_mae = pf.MonitorMetric('mae', x_val, y_val)
lr_scheduler = pf.LearningRateScheduler(lambda e: 2e-4-2e-6*e)
callbacks = [monitor_elbo, monitor_mae, lr_scheduler]

# Model
model = pf.DenseRegression([x_train.shape[1], 256, 128, 64, 32, 1])

# Train
model.fit(x_train, y_train,
          epochs=100,
          batch_size=1024,
          callbacks=callbacks)

Multiple MC samples per batch

Currently ProbFlow uses just a single MC sample from variational posteriors per batch. Fitting will be much more stable if we can use more. In fact I'm pretty sure it's impossible to use mixture distributions as variational posteriors with just 1 MC sample...?

This'll require some expand_dims-ing of the input tensors/numpy arrays/pandas d... Hmm, won't work with pandas DataFrames 🤔.

Also, user slicing code in __call__ methods could cause problems... Maybe just have the default be 1 and tell users to handle it if they want >1 (but handle it in applications and in modules like Dense / DenseNetwork).
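The broadcasting trick in miniature (an illustration of the idea, not ProbFlow internals): add a leading sample dimension to x so it broadcasts against [n_samples, ...] posterior sample tensors.

import tensorflow as tf

n_mc = 4
x = tf.random.normal([128, 8])      # [batch, features]
w = tf.random.normal([n_mc, 8, 1])  # n_mc posterior samples of a weight matrix
y = tf.expand_dims(x, 0) @ w        # broadcasts to [n_mc, 128, 1]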

  • add ability to use >1 MC sample per batch (n_mc_samples kwarg to Model.fit?)
  • tests
  • update applications and modules to be compatible w/ >1 MC sample/batch
  • tests
  • add section to user guide about it

Backend Defaults to tensorflow

Hello,
Recently I wanted to try out this package, as it seems insanely easy to prototype with. I encountered an issue where the backend defaulted to tensorflow even though I installed the pytorch version. I edited the settings file and changed the default 'tensorflow' param value in the init method to 'pytorch', and it's been working fine.
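A workaround that avoids editing the settings file, assuming the installed version exposes ProbFlow's set_backend function:

import probflow as pf

# Switch the backend before building or fitting any models
pf.set_backend('pytorch')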

Hope to see this repository continue to be worked on! I think it really could be awesome for non-native ML guys like myself.

Maximum Recursion Depth error in modules.py

Hi Brendan,

In further experimentation I'm encountering the following error. The code I'm using is a bit more involved, but I can provide it if it would help. Any ideas where to start? What is the purpose of the modules listing where the error occurs?

---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
<ipython-input-12-271ef7ecd1c9> in <module>
----> 1 model.G.fit(dataset.X.values)

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in fit(self, x, y, batch_size, epochs, shuffle, optimizer, optimizer_kwargs, learning_rate, flipout, callbacks)
    256             # Update gradients for each batch
    257             for x_data, y_data in self._data:
--> 258                 self.train_step(x_data, y_data)
    259 
    260             # Run callbacks at end of epoch

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in train_step(self, x_data, y_data)
    161     def train_step(self, x_data, y_data):
    162         """Perform one training step"""
--> 163         self._train_fn(x_data, y_data)
    164 
    165 

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/models.py in train_step(x_data, y_data)
    139         def train_step(x_data, y_data):
    140             nb = y_data.shape[0] #number of samples in this batch
--> 141             self.reset_kl_loss()
    142             with Sampling(n=1, flipout=flipout):
    143                 with tf.GradientTape() as tape:

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in reset_kl_loss(self)
    109     def reset_kl_loss(self):
    110         """Reset additional loss due to KL divergences"""
--> 111         for m in self.modules:
    112             m._kl_losses = []
    113 

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in modules(self)
     81     def modules(self):
     82         """A list of sub-Modules in this |Module|, including itself."""
---> 83         return [m for a in vars(self).values()
     84                 if isinstance(a, BaseModule)
     85                 for m in a.modules] + [self]

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in <listcomp>(.0)
     83         return [m for a in vars(self).values()
     84                 if isinstance(a, BaseModule)
---> 85                 for m in a.modules] + [self]
     86 
     87 

... last 2 frames repeated, from the frame below ...

~/virtualenvs/tf2/lib/python3.7/site-packages/probflow/modules.py in modules(self)
     81     def modules(self):
     82         """A list of sub-Modules in this |Module|, including itself."""
---> 83         return [m for a in vars(self).values()
     84                 if isinstance(a, BaseModule)
     85                 for m in a.modules] + [self]

RecursionError: maximum recursion depth exceeded in comparison

Centered parameter using QR reparameterization

Add a CenteredParameter, which should use a QR decomposition reparameterization for a length-N vector of parameters centered at zero using N-1 underlying variables.

Have a center_by kwarg (one of 'all', 'column', or 'row') which determines how they're centered. 'all' (the default) means the sum of all elements, regardless of shape, is 0; 'column' means the sum of each column is 0; and 'row' means the sum of each row is 0. For 'all', get a prod(shape)-length vector via the QR decomposition, then reshape into the correct shape. For 'column' and 'row', only allow 2d shapes, and matrix-multiply the A from the QR decomposition by a matrix (then transpose for 'row'). Can do that in the transform function, and use appropriate priors such that the resulting parameters have prior ~ Normal(0, 1). Make sure to mention in the docs that the prior is fixed.
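A minimal sketch of the QR-based zero-sum reparameterization described above (names are illustrative, not ProbFlow API):

import numpy as np

def centering_matrix(n):
    # Columns of the returned matrix form an orthonormal basis of the
    # subspace of R^n orthogonal to the ones vector, so A @ v sums to ~0
    q, _ = np.linalg.qr(np.ones((n, 1)), mode='complete')
    return q[:, 1:]  # drop the first column, which is parallel to ones

v = np.random.randn(4)       # N-1 underlying variables
x = centering_matrix(5) @ v  # length-N vector of centered parameters
print(x.sum())               # ~0, up to floating point error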

  • Add CenteredParameter
  • tests
  • docs
  • add section to parameters page of user guide

Heteroskedastic pf.LinearRegression has no bias!

For pf.applications.LinearRegression with heteroskedastic=True, the bias has shape [1, 1]. It should have shape [1, 2] and use the 2nd for the std dev bias - currently the std dev is missing a bias term!

Predictive interval fails for Dense model example

Hi, while trying to run the dense regression example for a fully-connected NN, I run into an issue when I try to sample from the posterior of the model. I use exactly the commands shown in the documentation.

My backend is PyTorch.

Thank you so much!

# Compute 95% confidence intervals
lb, ub = model.predictive_interval(x_test, ci=0.95, batch_size=1)

# Plot em!
plt.fill_between(x_test[:, 0], lb[:, 0], ub[:, 0],
                 alpha=0.2, label='95% ci')
plt.plot(x, y, '.', label='Data')


I get :

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-61-eb2905895d26> in <module>
      3 
      4 # Plot em!
----> 5 plt.fill_between(x_test[:, 0], lb[:, 0], ub[:, 0],
      6                  alpha=0.2, label='95% ci')
      7 plt.plot(x, y, '.', label='Data')

~/anaconda3/envs/causalode/lib/python3.8/site-packages/matplotlib/pyplot.py in fill_between(x, y1, y2, where, interpolate, step, data, **kwargs)
   2636         x, y1, y2=0, where=None, interpolate=False, step=None, *,
   2637         data=None, **kwargs):
-> 2638     return gca().fill_between(
   2639         x, y1, y2=y2, where=where, interpolate=interpolate, step=step,
   2640         **({"data": data} if data is not None else {}), **kwargs)

~/anaconda3/envs/causalode/lib/python3.8/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
   1445     def inner(ax, *args, data=None, **kwargs):
   1446         if data is None:
-> 1447             return func(ax, *map(sanitize_sequence, args), **kwargs)
   1448 
   1449         bound = new_sig.bind(ax, *args, **kwargs)

~/anaconda3/envs/causalode/lib/python3.8/site-packages/matplotlib/axes/_axes.py in fill_between(self, x, y1, y2, where, interpolate, step, **kwargs)
   5299     def fill_between(self, x, y1, y2=0, where=None, interpolate=False,
   5300                      step=None, **kwargs):
-> 5301         return self._fill_between_x_or_y(
   5302             "x", x, y1, y2,
   5303             where=where, interpolate=interpolate, step=step, **kwargs)

~/anaconda3/envs/causalode/lib/python3.8/site-packages/matplotlib/axes/_axes.py in _fill_between_x_or_y(self, ind_dir, ind, dep1, dep2, where, interpolate, step, **kwargs)
   5221                     f"must have the same size as {ind} in {func_name}(). This "
   5222                     "will become an error %(removal)s.")
-> 5223         where = where & ~functools.reduce(
   5224             np.logical_or, map(np.ma.getmask, [ind, dep1, dep2]))
   5225 

ValueError: operands could not be broadcast together with shapes (101,) (101000,) 

Collect backend variables/modules

Make Module.trainable_variables also return tf.Variables or (or for pytorch, tensors with requires_grad = True) which are properties of modules + sub-modules as well (and are not necessarily in parameters).

Also allow embedding of tf.Modules (or for Pytorch, nn.Module) and recursively search them for backend variables.

This will mean you can mix probflow parameters + modules with backend variables + modules. For example:

import tensorflow as tf
import probflow as pf

class DenseNetwork(tf.keras.Model):
    """A totally tensorflow-only module"""

    def __init__(self, units):
        super().__init__()
        # Note: can't use the name "layers", which is a reserved keras property
        self.layers_ = [
            tf.keras.layers.Dense(units[i+1], input_shape=(units[i],))
            for i in range(len(units)-1)
        ]

    def call(self, x):
        for layer in self.layers_:
            x = tf.nn.relu(layer(x))
        return x

class NeuralLinear(pf.ContinuousModel):

    def __init__(self, units):
        self.net = DenseNetwork(units)  # tensorflow model!
        self.w = pf.Parameter([units[-1], 1])  # probflow parameters
        self.b = pf.Parameter([1, 1])
        self.s = tf.Variable(tf.random.normal([1, 1]))  # tensorflow variable!

    def __call__(self, x):
        loc = self.net(x) @ self.w() + self.b()
        scale = tf.exp(self.s)
        return pf.Normal(loc, scale)

And then with recursive variable/model, ProbFlow will also optimize those variables along with the ones in ProbFlow modules/parameters.

Bayesian decision making example

Add an example using Bayesian decision making. Start with a simple one (binary action space, a few continuous features, continuous value) and then a more complicated one for, like, Ebay resale bidding using two models: a cost regression model and a win classification, then bid at the price corresponding to the optimal expected value (regression prediction * predicted win probability).

NotImplementedError:... issue when try to run SimpleLinearRegression example

Hi,
"NotImplementedError: Cannot convert a symbolic Tensor (gradients/stateless_random_gamma/StatelessRandomGammaV2_grad/sub:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported"

This issue arises when trying to fit the SimpleLinearRegression model (the same example as in the docs; a simple 'copy and paste').

PyTorch distribution.mean() is weird for InverseGamma, Bernoulli, Categorical, and OneHotCategorical

Some of PyTorch's distributions have weird mean behavior:

  • InverseGamma - PyTorch doesn't have an InverseGamma dist, so I created one using a TransformedDistribution, but then mean isn't implemented. This means if you call pf.Model.predict() on a model w/ an InverseGamma parameter, it crashes.
  • Bernoulli - mean returns the actual mean (continuous between 0 and 1) but we'd want it to return the mode when using that dist as the observation distribution (can't really use it as the variational posterior b/c can't prop gradients short of something like this). TFP's Bernoulli throws an error w/ mean, so ProbFlow is able to catch it and return the mode.
  • Categorical - mean returns NaN, and doesn't throw an error... But again we'd want the mode when calling mean on it, when it's used as the observation distribution during predict().

Probably the only way to fix all these is to re-implement those distributions (in probflow.utils.torch_distributions), implementing or overriding the mean/mode properties so as to give the desired behavior.
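As a sketch of that re-implementation idea for the Bernoulli case (a hypothetical subclass, not ProbFlow code), override mean to return the mode:

import torch
from torch.distributions import Bernoulli

class BernoulliWithModeMean(Bernoulli):
    @property
    def mean(self):
        # Return the mode (0 or 1) rather than the continuous mean
        return (self.probs >= 0.5).to(self.probs.dtype)

d = BernoulliWithModeMean(probs=torch.tensor([0.2, 0.8]))
print(d.mean)  # tensor([0., 1.])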

predictive_interval gives an error

I am following the tutorial on DenseRegression (using the same versions of packages) but using my own data, when I try to do model.predictive_interval(x_test, ci=0.95), it outputs the following error:


InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model.predictive_interval(xx_test, ci=0.95)

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/models.py in predictive_interval(self, x, ci, side, n)
956 for samples in x. Doesn't return this if side='lower'.
957 """
--> 958 return self._intervals(self.predictive_sample, x, side, ci=ci, n=n)
959
960

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/models.py in _intervals(self, fn, x, side, ci, n)
903 def _intervals(self, fn, x, side, ci=0.95, n=1000):
904 """Compute intervals on some type of sample"""
--> 905 samples = fn(x, n=n)
906 if side == 'lower':
907 return np.percentile(samples, 100*(1.0-ci), axis=0)

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/models.py in predictive_sample(self, x, n)
346 """
347 with Sampling(n=n, flipout=False):
--> 348 return self._sample(x, lambda x: x.sample(), ed=0)
349
350

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/models.py in _sample(self, x, func, ed, axis)
320 samples += [func(self())]
321 else:
--> 322 samples += [func(self(O.expand_dims(x_data, ed)))]
323 return np.concatenate(to_numpy(samples), axis=axis)
324

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/applications.py in call(self, x)
248 return Normal(m_preds, s_preds)
249 else:
--> 250 return Normal(self.network(x), self.std())
251
252

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/applications.py in call(self, x)
    194 x = to_tensor(x)
    195 for i in range(len(self.layers)):
--> 196 x = self.layers[i](x)
    197 x = self.activations[i](x)
    198 return x

~/proc_optim/opt_env/lib/python3.6/site-packages/probflow/modules.py in call(self, x)
199 # Without Flipout
200 else:
--> 201 return x @ self.weights() + self.bias()
202
203

~/proc_optim/opt_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)

~/proc_optim/opt_env/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)

~/proc_optim/opt_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)

~/proc_optim/opt_env/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py in batch_mat_mul_v2(x, y, adj_x, adj_y, name)

~/proc_optim/opt_env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)

~/proc_optim/opt_env/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: cannot compute BatchMatMulV2 as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:BatchMatMulV2]

DenseNet example

Add an example of building a net using DenseNet blocks? Something like

import numpy as np
import tensorflow as tf
import probflow as pf

class DenseNetBlock(pf.Module):

    def __init__(self, dims):
        self.num_layers = len(dims) - 2
        cdims = np.cumsum(dims[:-1])  # input size grows as outputs are concatenated
        self.layers = [pf.Dense(cdims[i], dims[i+1]) for i in range(self.num_layers)]
        self.batch_norms = [pf.BatchNormalization(d) for d in dims[1:-1]]
        self.linear = pf.Dense(cdims[-1], dims[-1])  # last layer reduces

    def __call__(self, x):
        outputs = [x]
        for i in range(self.num_layers):
            x = tf.concat(outputs, -1)
            x = self.layers[i](x)
            x = tf.keras.activations.swish(x)
            x = self.batch_norms[i](x)
            outputs.append(x)
        return self.linear(tf.concat(outputs, -1))

        

class DenseNetNetwork(pf.Module):

    def __init__(self, dims):
        self.num_blocks = len(dims)
        self.blocks = [DenseNetBlock(d) for d in dims]
        self.batch_norms = [pf.BatchNormalization(d[-1]) for d in dims[:-1]]

    def __call__(self, x):
        for i in range(self.num_blocks):
            x = self.blocks[i](x)
            if i < self.num_blocks - 1:
                x = tf.keras.activations.swish(x)
                x = self.batch_norms[i](x)
        return x



class DenseNetRegression(pf.Model):
    
    def __init__(self, dims):
        assert dims[-1][-1] == 2
        self.net = DenseNetNetwork(dims)

    def __call__(self, x):
        x = self.net(x)
        return pf.Normal(x[..., 0], tf.exp(x[..., 1]))



model = DenseNetRegression([[256, 128, 64, 32, 16], [16, 128, 64, 2]])

Maybe even add DenseNetBlock and DenseNetNetwork to modules, and DenseNetRegression to applications (though versions with more options, e.g. choice of activation function, whether or not you want batch normalization, etc.).

Add flipout support for PyTorch

Just need to implement two new ops: randn and rademacher, then use them to implement flipout in a backend-independent way in Dense. Can implement rademacher for pytorch via 2*torch.randint(0, 2, size)-1 (note that randint's upper bound is exclusive).

Also, in the TF implementation, update tfp.python.math.random_rademacher (which is deprecated) to tfp.random.rademacher.
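A minimal sketch of the two PyTorch ops (illustrative, not ProbFlow code):

import torch

def randn(shape):
    return torch.randn(shape)

def rademacher(shape):
    # Uniform over {-1, +1}; randint's upper bound is exclusive, hence 2
    return (2 * torch.randint(0, 2, shape) - 1).float()

print(rademacher([2, 3]))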

Allow learning rate to be changed w/ PyTorch

Currently you can't change the learning rate mid-training using ProbFlow w/ the PyTorch backend. But could implement that pretty easily using torch.optim.lr_scheduler.LambdaLR.

The only trick would be that LambdaLR takes a function which should return a multiplicative factor, not the actual LR value. So would need to pass it the desired LR divided by the original LR.
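A minimal sketch of that trick (the optimizer and schedule here are illustrative):

import torch

base_lr = 1e-3
params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model variables
optimizer = torch.optim.Adam(params, lr=base_lr)

# LambdaLR expects a multiplicative factor, so divide the desired LR by base_lr
desired_lr = lambda epoch: 2e-4 - 2e-6 * epoch
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: desired_lr(epoch) / base_lr)

for epoch in range(10):
    # ... run one epoch of training here ...
    scheduler.step()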

Docs for using GPU w/ PyTorch

Currently PyTorch models are just running on the CPU. To make them run on the GPU, I think all you'll need to do is:

  1. Set the default datatype to torch.cuda.FloatTensor (via e.g. pf.set_datatype(torch.cuda.FloatTensor))
  2. Cast your input data to torch.cuda.FloatTensor (either before calling model.fit() or within your model's __call__ method itself via torch.from_numpy(your_numpy_array).cuda())

Should first validate that that gets models running on the GPU (by running nvidia-smi).

And add a section to the user guide "Using the CPU or GPU" which describes how to do this (and also have a section for tensorflow which says that you don't need to worry about it if you're using tensorflow_gpu).

PyTorch mixture distribution

Currently PyTorch doesn't have a mixture distribution, so trying to use pf.distributions.Mixture will give an error when using PyTorch as the backend.

Though, they're working on it! See this issue and this pull request. So, could just wait till that's implemented and then use it to implement pf.distributions.Mixture w/ PyTorch backend.

Or, in the meantime, could add a manual implementation like this to pf.utils.torch_distributions (basically the same thing as get_TorchDeterministic, but returning the mixture distribution class).

Update: looks like they've got a MixtureSameFamily implementation now!
So use that. https://pytorch.org/docs/stable/distributions.html#mixturesamefamily
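A minimal usage sketch of PyTorch's MixtureSameFamily, which could back pf.distributions.Mixture:

import torch
from torch.distributions import Categorical, MixtureSameFamily, Normal

mix = Categorical(probs=torch.tensor([0.3, 0.7]))
comp = Normal(loc=torch.tensor([-1.0, 1.0]), scale=torch.tensor([0.5, 0.5]))
gmm = MixtureSameFamily(mix, comp)  # a two-component Gaussian mixture

print(gmm.sample((5,)))
print(gmm.log_prob(torch.tensor(0.0)))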

Bayesian Network Implementation?

Hi Brendan, thanks for the library - really nice.

I'm wondering if there's a way to construct a simple Bayes Net with mixed data types using probflow. The model I'm thinking of would look something like:

  • A -> B
  • A -> C
  • B -> C

where:

  • A is a Categorical Distribution (~5 categories)
  • B is a Continuous Distribution (probably Exponential but can be Normal)
  • C is a Bernoulli which takes A and B and classifies as a 1 or a 0

I realize this looks a lot like a standard Logistic Regression model, but the difference is:

  • the inputs aren't independent (A -> B)
  • I'd like to use a Categorical Distribution for A instead of creating dummy variables

Do you think this is possible by inheriting from pf.CategoricalModel? If so, any pointers on building it?

Cheers,

Vahndi

model.coverage_by(ci=0.95) is not working

Hi, I am exploring this package and trying to get familiar with the example problem given in Getting Started. It throws an error saying that it expects x_by and x parameters to be passed.

Below is the data; I use the LinearRegression model and model.coverage_by(ci=0.95), with PyTorch as the backend.

import numpy as np

randn = lambda *x: np.random.randn(*x).astype('float32')

# Generate some data
x = randn(100)
y = 2*x - 1 + randn(100)

As the documentation is not clear, I'm not sure what information has to be passed. I'd appreciate input on completing the Getting Started example.

regards
Selva

Monitor callbacks check only once every n epochs

Would be nice to add a kwarg to the MonitorMetric callback which makes it only compute the metric on validation data every n epochs, instead of every single epoch.

Of course, then you (i.e. users) will have to be careful when using it in combination with EarlyStopping (since if you have EarlyStopping with a patience of, say, 5, and have it watch a MonitorMetric which only checks the metric every 5 epochs, it'll stop after the first 5 epochs because the metric "hasn't changed"!)

DenseClassifier errors on Colab

Hello! I was super excited to discover this package because I'm a TFP newbie. However, when I try to run it on Colab with one of my datasets, I run into these error messages:

https://pastebin.com/raw/f0dWt7U4

Here's an example Colab notebook with what I'm trying to do:
https://colab.research.google.com/drive/14B2vsG-WPID3YWo6gpXeCVy3K7Hk105Z
And here's the example data needed to upload to the notebook:
https://drive.google.com/file/d/1xJ-dOFrI8BJnoTXCT5c7RiL2HQk85yKj/view?usp=sharing

Any help would be more than appreciated. Thank you in advance.

Ordered Parameter

Add an OrderedParameter class where samples from the vector are always ordered (i.e. p = OrderedParameter(3) gives p[0] < p[1] < p[2]).

This is trickier w/ SVI than w/ MCMC because you can't just do the exp/increment transform with independent variances: the variances could cause some samples from adjacent parameters to land on the "wrong side" of each other.

Maybe do something like this, where centered_vars makes centered variables from raw ones using the same QR transform as in #19:

def ordered_transform(vars):
    return vars[0] + tf.exp(vars[1]) * centered_vars(vars[2:])

Downside of that is each parameter's variance is correlated, but also variance depends on distance from mean?

Add jax support!

TFP has a JAX-backed implementation that's fairly functional at this point (note: some distributions are still coming together).
If you wanted to add support for running on JAX as well, it might just be a matter of tfp = tfp.experimental.substrates.jax.
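A quick sketch of that substitution (in recent TFP versions the substrate lives at tensorflow_probability.substrates.jax):

import jax
from tensorflow_probability.substrates import jax as tfp

tfd = tfp.distributions
dist = tfd.Normal(0.0, 1.0)
print(dist.sample(seed=jax.random.PRNGKey(0)))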

PyTorch tracing during training

Use torch.jit.trace during pf.models.Model's train_step to speed up fitting with the PyTorch backend.

  • Implement using tracing during training
  • Check if it works w/ pandas DataFrames (presumably it won't 😢 )
  • In the docs, mention that you'll have to use eager=True w/ DataFrames if using pytorch if that's the case

Requires tensorflow installed even if you're not using it as the backend

doing a:

import probflow as pf

raises:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-7-aa87442b1141> in <module>
      1 import torch
----> 2 import probflow as pf

~/github/probflow/src/probflow/__init__.py in <module>
----> 1 from probflow.applications import *
      2 from probflow.callbacks import *
      3 from probflow.data import *
      4 from probflow.distributions import *
      5 from probflow.models import *

~/github/probflow/src/probflow/applications/__init__.py in <module>
     22 
     23 
---> 24 from .dense_classifier import DenseClassifier
     25 from .dense_regression import DenseRegression
     26 from .linear_regression import LinearRegression

~/github/probflow/src/probflow/applications/dense_classifier.py in <module>
      3 import probflow.utils.ops as O
      4 from probflow.distributions import Categorical
----> 5 from probflow.models import CategoricalModel
      6 from probflow.modules import DenseNetwork
      7 from probflow.utils.casting import to_tensor

~/github/probflow/src/probflow/models/__init__.py in <module>
     23 
     24 
---> 25 from .categorical_model import CategoricalModel
     26 from .continuous_model import ContinuousModel
     27 from .discrete_model import DiscreteModel

~/github/probflow/src/probflow/models/categorical_model.py in <module>
      4 from probflow.utils.plotting import plot_categorical_dist
      5 
----> 6 from .model import Model
      7 
      8 

~/github/probflow/src/probflow/models/model.py in <module>
      7 import probflow.utils.ops as O
      8 from probflow.data import make_generator
----> 9 from probflow.modules import Module
     10 from probflow.utils.base import BaseCallback
     11 from probflow.utils.casting import to_numpy

~/github/probflow/src/probflow/modules/__init__.py in <module>
     28 
     29 
---> 30 from .batch_normalization import BatchNormalization
     31 from .dense import Dense
     32 from .dense_network import DenseNetwork

~/github/probflow/src/probflow/modules/batch_normalization.py in <module>
      3 import probflow.utils.ops as O
      4 from probflow.distributions import Deterministic, Normal
----> 5 from probflow.parameters import Parameter
      6 from probflow.utils.base import BaseDistribution
      7 from probflow.utils.initializers import xavier

~/github/probflow/src/probflow/parameters/__init__.py in <module>
     51 
     52 
---> 53 from .bounded_parameter import BoundedParameter
     54 from .categorical_parameter import CategoricalParameter
     55 from .deterministic_parameter import DeterministicParameter

~/github/probflow/src/probflow/parameters/bounded_parameter.py in <module>
      4 from probflow.utils.initializers import scale_xavier, xavier
      5 
----> 6 from .parameter import Parameter
      7 
      8 

~/github/probflow/src/probflow/parameters/parameter.py in <module>
     13 
     14 
---> 15 class Parameter(BaseParameter):
     16     r"""Probabilistic parameter(s).
     17 

~/github/probflow/src/probflow/parameters/parameter.py in Parameter()
     87         shape: Union[int, List[int]] = 1,
     88         posterior: Type[BaseDistribution] = Normal,
---> 89         prior: BaseDistribution = Normal(0, 1),
     90         transform: Callable = None,
     91         initializer: Dict[str, Callable] = {

~/github/probflow/src/probflow/distributions/normal.py in __init__(self, loc, scale)
     48 
     49         # Check input
---> 50         ensure_tensor_like(loc, "loc")
     51         ensure_tensor_like(scale, "scale")
     52 

~/github/probflow/src/probflow/utils/validation.py in ensure_tensor_like(obj, name)
     23         tensor_types = (torch.Tensor, BaseParameter)
     24     else:
---> 25         import tensorflow as tf
     26 
     27         tensor_types = (tf.Tensor, tf.Variable, BaseParameter)

ModuleNotFoundError: No module named 'tensorflow'

both from the pip version and a clone of this repo.

images

This image looks beautiful and elegant! What tools/scripts did you use to make it?

Thanks!

Model predictive sampling methods don't work with Pandas DataFrames

Because you can't tf.expand_dims on a Pandas DataFrame (and there's no equivalent operation)... Normally when sampling, ProbFlow adds a singleton dimension as the first dimension of the input data x, then samples from the variational posteriors (which returns tensors of shape [Nsamples, Shape0, ..., ShapeN]), so that x can broadcast against the parameter sample tensors.

Can't cast to tensor and then expand dims, b/c that would break the user's code in pf.Model.__call__, which (might) assume it's a Pandas DataFrame object.

Don't really see a good fix here short of just not supporting DataFrames at all...

Convolutional modules

Add Bayesian Conv1D and Conv2D modules.

Looks like maybe you can just get a sample from the filter's variational posterior, then pass the data and the sampled tensor through tf.nn.convolution.
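A rough sketch of that forward pass, where a random tensor w stands in for a sample from the filter's variational posterior:

import tensorflow as tf

x = tf.random.normal([32, 64, 8])   # [batch, length, in_channels]
w = tf.random.normal([3, 8, 16])    # [filter_width, in_channels, out_channels]

y = tf.nn.convolution(x, w, padding='SAME')  # a Bayesian Conv1D forward pass
print(y.shape)  # (32, 64, 16)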

Gaussian Process support

Currently I can't think of a good way to fit nonparametric models (like Gaussian processes) into the ProbFlow framework. For example, the probflow.model.predict interface would have to change, along with all the methods which depend on it, since you'd need both the x and y training data as well as the x test data.

That said, you can still use ProbFlow to fit GP kernel and/or latent parameters. For example, fitting the kernel parameters would look something like:

import tensorflow_probability as tfp
import probflow as pf

class GP(pf.Model):

    def __init__(self):
        self.amplitude = pf.ScaleParameter()
        self.length_scale = pf.ScaleParameter()
        self.var = pf.ScaleParameter()

    def __call__(self, x):
        kernel = tfp.positive_semidefinite_kernels.ExponentiatedQuadratic(
            amplitude=self.amplitude(),
            length_scale=self.length_scale())
        return tfp.distributions.GaussianProcess(kernel, x, 
            observation_noise_variance=self.var())

Which can be fit just fine, but then runs into issues if you were to call predict on the model, since the model doesn't store the training data.

For now I've put further support for GPs on the "out-of-scope" list, but if anyone has ideas for how to make GPs work with the rest of the framework (say, with probflow.model.predict), and want to see that be part of the package, I'm definitely open to discussion!

Bayesian update method

Add a set_priors_to_posteriors (or perhaps something more elegant... bayesian_update? Just update?) method to pf.models.Model which sets the prior distributions to the current values of the posterior distributions, to allow Bayesian updating / streaming inference / incremental updates.

Should be relatively straightforward, I think. 🤔 Can't just do for p in self.parameters: p.prior = p.posterior, since that would set the prior by reference; instead, make a copy of each parameter's posterior Distribution object and fix the underlying variables at their current values (get constants from the variables), then set the prior to that. Maybe add a bayesian_update() method to Parameter which does that; then Module's bayesian_update() can just be for p in self.parameters: p.bayesian_update()
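The copy-and-freeze step in miniature, with plain TFP objects (an illustration of the idea, not ProbFlow internals):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

loc = tf.Variable(0.3)
scale = tf.Variable(0.8)
posterior = tfd.Normal(loc, scale)

# Copy by value, not by reference: read the current variable values into
# constants, then build a new distribution to serve as the prior
new_prior = tfd.Normal(tf.constant(loc.numpy()), tf.constant(scale.numpy()))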
