google / jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

Home Page: http://jax.readthedocs.io/

License: Apache License 2.0

Python 91.39% Shell 0.15% C++ 6.24% Jupyter Notebook 0.71% Starlark 1.37% C 0.12% MAXScript 0.01%

jax's People

Contributors: apaszke, axch, bchetioui, bythew3i, chr1sj0nes, dougalm, fehiepsi, froystig, gnecula, hawkinsp, j-towns, jakevdp, jblespiau, jekbradbury, jyingl3, lenamartens, levskaya, marcvanzee, mattjj, nouiz, pschuh, sharadmv, shoyer, skye, superbobry, tlongeri, tlu7, tomhennigan, yashk2810, zhangqiaorjc

jax's Issues

grad of cond and while

First and higher order derivatives of functions that use lax.cond and lax.while should be possible.
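
A minimal sketch of the desired behavior (note: the lax.cond calling convention shown here is the current one, which may differ from the signature at the time of this issue):

import jax
from jax import lax

def f(x):
    # Branch on a runtime predicate; both branches are differentiable.
    return lax.cond(x > 0., lambda x: x ** 2, lambda x: -x, x)

print(jax.grad(f)(3.0))            # 6.0, via the x**2 branch
print(jax.grad(jax.grad(f))(3.0))  # 2.0, a second-order derivative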

Invalid proto descriptor for file "tensorflow/compiler/xla/xla_data.proto"

import jax.numpy as np
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/jax/init.py", line 17, in
from jax.api import *
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/jax/api.py", line 30, in
from .abstract_arrays import ShapedArray
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/jax/abstract_arrays.py", line 25, in
from .lib import xla_bridge
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/jax/lib/xla_bridge.py", line 32, in
from jaxlib import xla_data_pb2
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/jaxlib/xla_data_pb2.py", line 23, in
serialized_pb=_b('\n&tensorflow/compiler/xla/xla_data.proto\x12\x03xla"\xb7\x01\n\rPaddingConfig ... \x62\x06proto3')
File "/home/nesa320/anaconda2/envs/py3/lib/python3.6/site-packages/google/protobuf/descriptor.py", line 878, in new
return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "tensorflow/compiler/xla/xla_data.proto":
tensorflow/compiler/xla/xla_data.proto: A file with this name is already in the pool.

Conda installations

Conda is a great package distribution system, especially for packages with binary dependencies. It'd be good to have conda packages ourselves.

Vmap cookbook

A notebook highlighting creative uses of vmap.
Ideas (a sketch of the pdist one follows the list):

  • vanilla matrix-vector to matrix-matrix
  • more complicated, but still vanilla, function
  • pdist
  • cdist
  • vmapping random seeds for hessian trace estimation
  • bootstrap
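
For instance, a sketch of the pdist idea (shapes made up), using nothing beyond the public vmap API:

import jax.numpy as np
from jax import vmap

def dist(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

# Nest vmap to map over all pairs of rows: the inner vmap maps y over
# rows of the second argument, the outer maps x over rows of the first.
pdist = vmap(vmap(dist, in_axes=(None, 0)), in_axes=(0, None))

xs = np.arange(12.).reshape(4, 3)
print(pdist(xs, xs).shape)  # (4, 4) matrix of pairwise distances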

Frequently asked questions doc

We're beginning to acquire some questions, some of which have been asked frequently. Collecting in this issue for eventual inclusion in a more polished markdown file.

  • How is JAX different from PyTorch?
  • How is JAX different from CuPy?
  • Is automatic differentiation the same as finite differences?
  • Is automatic differentiation the same as symbolic differentiation?
  • What if I feed grad something non-differentiable?
    • A function whose output doesn't depend on the input
    • A function that is constant with respect to the input
    • A function with a sharp discontinuity
    • max(0, x) (see the sketch after this list)
  • I'm trying to use JAX but I'm getting an XYZ not implemented for ABC error (grad, vmap, jit)
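
On the max(0, x) item, a small sketch of what feeding grad a non-differentiable point actually does (the value returned exactly at the kink is a tie-breaking convention, not something to rely on):

import jax
import jax.numpy as np

relu = lambda x: np.maximum(x, 0.)

print(jax.grad(relu)(2.0))   # 1.0
print(jax.grad(relu)(-2.0))  # 0.0
# At the kink, grad returns a value rather than raising an error; the
# exact convention at x == 0 is an implementation detail.
print(jax.grad(relu)(0.0))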

Feature request: export TF ops

Thanks for this project! Looking forward to using it more.

This is a feature request, feel free to close if this is not a good place to track:

I'd love to be able to export TensorFlow ops (.so) from functions defined via JAX. The main use case is embedding these functions in a serving context. For training this is less necessary because the two systems can interact at the Python level, though I'm not clear on how to eliminate memory copies in that scenario.

Ideally the API would be something like passing a tf.placeholder to the function, or otherwise using the annotations being introduced in TF 2.0. It would be fine if this were a separate package, to avoid a direct dependency on TF in JAX.

Thanks!

Batching rules for pad and concatenate primitives not implemented

I'm interested in using JAX to compute Jacobians for functions which involve iteratively applying operations to generate a sequence. When using Autograd for this, in order to avoid indexed assignment I would create a list which is iteratively populated with the sequence values and then create an array from the list using np.array or np.stack. Attempting the same in JAX (built from source with fc4afb4) raises a NotImplementedError when trying to compute the Jacobian of such a function with either jacrev or jacfwd as batching rules are not implemented for the pad and concatenate primitives respectively.

As a minimal example:

import jax.numpy as np
from jax import jacrev, jacfwd

def func(xs): 
    return np.array([x for x in xs])

jacrev_func = jacrev(func)
jacfwd_func = jacfwd(func)

xs = np.ones((5, 1))

jacrev_func(xs)
# raises NotImplementedError: Batching rule for 'pad' not implemented

jacfwd_func(xs)
# raises NotImplementedError: Batching rule for 'concatenate' not implemented

The same errors are raised when replacing np.array in the definition of func with np.stack, np.hstack, np.vstack or np.concatenate.

np.linalg.inv support

I've been trying to implement Gaussian process regression, which requires computing a matrix inverse. With regular numpy I would use np.linalg.inv, but I can't find this function in jax.

Everything else is working as expected, and I can fall back to regular NumPy's np.linalg.inv for basic calculations.
Unfortunately, relying on it keeps me from using grad to calculate gradients, which would be the most exciting part of the whole implementation!

I would love to contribute a PR if someone can tell me where to start.
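
For reference, a sketch of the desired usage; np.linalg.inv was later added to jax.numpy with derivative support:

import jax
import jax.numpy as np

def loss(K):
    # e.g. a piece of a GP objective: any scalar function of the inverse
    return np.sum(np.linalg.inv(K))

K = np.eye(3) + 0.1  # a small, well-conditioned symmetric matrix
print(jax.grad(loss)(K).shape)  # (3, 3)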

Error on NaN?

Is it possible to raise an error on NaN, a la np.seterr?

import numpy as np

np.seterr(all='raise')
np.divide(0, 0)  # FloatingPointError: invalid value encountered in divide
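
For what it's worth, the mechanism JAX eventually grew for this is a debug-NaN mode rather than a seterr-style API; a sketch against the current config flag:

import jax
import jax.numpy as np

# With this flag set, JAX raises FloatingPointError as soon as a
# computation produces a NaN.
jax.config.update("jax_debug_nans", True)
np.divide(0., 0.)  # FloatingPointError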

Cloud TPU Support

Hello all,

Super stoked about this project and so glad it's out in the open now! I just wanted to make an issue people can follow to track the support of Cloud TPUs in JAX.

As soon as this is ready to be tested, for.ai and I would be super eager to give it a try and help out!

All the best,
Aidan

Unimplemented NumPy core functions

Remaining functions to be implemented:

  • np.argpartition (tricky because XLA does not provide any partial sort functionality)
  • np.partition (tricky because XLA does not provide any partial sort functionality)

The list above was made by inspecting jnp._NOT_IMPLEMENTED and excluding deprecated functions (such as np.alen, np.ipmt, etc.), functions not relevant to JAX (such as np.setbufsize, np.ascontiguousarray, etc.), and functions that modify buffers in-place (np.put, np.place, etc.).

Bugs for high-level categories:

Numpy-style indexed update support.

Hi, I use numpy to read data. In plain numpy, x_train[(i - 1) * 10000 : i * 10000, :, :, :] = data works fine. However, in jax.numpy it raises a ValueError: assignment destination is read-only. Meanwhile, np.set_printoptions() is not implemented.
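
JAX arrays are immutable, so indexed assignment has to be expressed functionally. A sketch using the .at syntax JAX later settled on (shapes made up for illustration):

import jax.numpy as np

x_train = np.zeros((30000, 4, 4, 1))
data = np.ones((10000, 4, 4, 1))
i = 1
# Returns a new array with the slice replaced, instead of writing in place.
x_train = x_train.at[(i - 1) * 10000 : i * 10000, :, :, :].set(data)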

Scenarios to prefer over cupy

CuPy is a quite stable and efficient "NumPy for GPU" (with none of the restrictions mentioned in the README), and Chainer on top of CuPy provides the necessary auto-diff and primitives for deep learning. There are also other alternatives.

It would be nice to have showcases when jax is expected to be beneficial compared to already existing tools.

Thanks!

np.einsum support

Support for generic tensor contractions would cover a large class of computations and also provide a foundation for higher-order operations. Perhaps jax could then also be added as an opt_einsum backend?
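
A sketch of the desired surface area, matching the NumPy einsum signature (np.einsum has since been implemented in jax.numpy):

import jax.numpy as np
from jax import jit, grad

A = np.ones((3, 4))
B = np.ones((4, 5))
C = np.einsum('ij,jk->ik', A, B)   # matrix product as a contraction
tr = np.einsum('ii->', np.eye(3))  # trace

# Contractions compose with the transformations:
f = jit(lambda A: np.einsum('ij,jk->ik', A, B).sum())
g = grad(f)(A)  # gradient w.r.t. A, shape (3, 4)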

Improving jax.scipy.stats

Hi,

Are there any plans for improving the stats modules in jax.scipy? I look forward to contributing to this project by adding new distributions and distribution properties (e.g. mean and variance). Please let me know your thoughts about this.

Thank you.

jacrev and jacfwd usage example

Is there a simple example of how to use jacrev and jacfwd? There's currently no useful docstring. Some usage details would be helpful.

For example:
(1) when calling grad(fun), does it differentiate w.r.t. the first input argument?
(2) how to use jacrev(func) when func takes multiple inputs (e.g. to differentiate w.r.t. the 3rd input variable)?

Thanks.
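
A sketch answering both questions as the API behaves today (argnums is the relevant parameter, shared with grad):

import jax.numpy as np
from jax import jacrev, jacfwd

def f(x, y):
    return np.sin(x) * y

x = np.arange(3.)
y = np.ones(3)

J0 = jacrev(f)(x, y)             # (1) differentiates w.r.t. the first argument
J1 = jacfwd(f, argnums=1)(x, y)  # (2) argnums selects another argument (here y)
print(J0.shape, J1.shape)        # (3, 3) (3, 3)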

Python 3 compatibility issues

flake8 testing of https://github.com/google/jax on Python 3.7.1

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./jax/lax.py:481:25: F821 undefined name '_ndim'
  start_indices = [0] * _ndim(operand)
                        ^
./jax/lax.py:487:6: F821 undefined name '_ndim'
  if _ndim(update) != _ndim(operand):
     ^
./jax/lax.py:487:23: F821 undefined name '_ndim'
  if _ndim(update) != _ndim(operand):
                      ^
./jax/lax.py:488:12: F821 undefined name '_ndim'
    assert _ndim(update) + 1 == _ndim(operand)
           ^
./jax/lax.py:488:33: F821 undefined name '_ndim'
    assert _ndim(update) + 1 == _ndim(operand)
                                ^
./jax/lax.py:489:17: F821 undefined name '_ndim'
    ax = axis % _ndim(operand)
                ^
./jax/lax.py:1955:35: F821 undefined name 'c'
  select = _reduction_computation(c, select_jaxpr, select_consts, init_value)
                                  ^
./jax/lax.py:1956:36: F821 undefined name 'c'
  scatter = _reduction_computation(c, scatter_jaxpr, scatter_consts, init_value)
                                   ^
./jax/lax.py:1957:10: F821 undefined name 'c'
  return c.SelectAndScatter(operand, select, window_dimensions, window_strides,
         ^
./jax/lax.py:2156:32: F821 undefined name 'name'
    raise TypeError(msg.format(name, len(lhs_shape), len(rhs_shape)))
                               ^
./jax/abstract_arrays.py:59:46: F821 undefined name 'long'
    _long    = concretization_function_error(long)
                                             ^
./jax/core.py:163:7: F821 undefined name 'print_trace_stack'
      print_trace_stack()
      ^
./jax/core.py:371:7: F821 undefined name 'print_trace_stack'
      print_trace_stack()
./jax/interpreters/xla.py:252:48: F821 undefined name 'long'
    __long__ = partialmethod(forward_to_value, long)
                                               ^
./jax/interpreters/ad.py:189:20: F821 undefined name 'JaxTuple'
        return xt, JaxTuple(map(zeros_like_jaxval, xt))
                   ^
./jax/interpreters/ad.py:196:16: F821 undefined name 'JaxTuple'
        return JaxTuple(map(zeros_like_jaxval, yt)), yt
               ^
./jax/numpy/lax_numpy.py:379:41: F821 undefined name 'isfortran'
    dims = onp.arange(ndim(a))[::-1] if isfortran(a) else onp.arange(ndim(a))
                                        ^
./examples/mnist_vae.py:113:21: E999 SyntaxError: invalid syntax
    def body_fun(i, (rng, opt_state, images)):
                    ^
./examples/resnet50.py:124:12: F821 undefined name 'xrange'
  for i in xrange(num_steps):
           ^
./tests/minmax_test.py:46:16: F821 undefined name 'fax'
    infeeder = fax.make_infeed_from_sequence(
               ^
1     E999 SyntaxError: invalid syntax
19    F821 undefined name 'xrange'
20

curry `vmap`

We want to change the API to look more like vmap(fun) and vmap(fun)(x, y) instead of the current vmap(fun, x, y). That makes it more consistent with the other main transformations (jit and grad, plus all the other autodiff ones) and seems to be more convenient given the experiences of @shoyer and @alexbw.
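
Under the proposed currying, a transformed function is built once and then applied like an ordinary function; a sketch:

import jax.numpy as np
from jax import vmap

batched_dot = vmap(np.dot)        # vmap(fun): build the transform once
xs = np.ones((10, 3))
ys = np.ones((10, 3))
print(batched_dot(xs, ys).shape)  # vmap(fun)(x, y): apply it later -> (10,)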

Complex number support

There is some support for complex numbers already, but many things are missing. Even things as simple as:

np.array([[1,-2j],[2j,5]])

don't work. We should fix this!

Travis CI automated testing

We shouldn't need to worry about whether a commit breaks a test -- it should be run automatically. Once the pip install situation stabilizes (i.e., once we no longer build xlapy from scratch at every commit or PR), this should not be too difficult to do.

Hard crash when no compatible cuda devices found

Hi,

I've been playing around with this, trying to get it to work with my GPU, and I've been having some issues. First things first though: do you think it would make sense to add a note to the README about which versions of Python are recommended/supported? My understanding is that tensorflow doesn't yet support 3.7 -- is that true here? What version do you use for development? 3.6? 2.7? Also, any notes on supported versions of the CUDA libraries (cuda/cudnn)? (The build times are a bit long, so knowing that I'm starting from a known working environment would be really helpful!)

Thanks!

Building on OSX with CUDA

I have a Mac laptop with a CUDA-compatible card in it (last generation they sold!) and have CUDA 9.2 and cuDNN 7.2.1 installed, and both seem to be working fine. I'm getting a build failure for JAX.

ERROR: /private/tmp/jax-build/jax-bazel-output-user-root/output-base/external/org_tensorflow/tensorflow/compiler/xla/service/gpu/BUILD:784:1: C++ compilation of rule '@org_tensorflow//tensorflow/compiler/xla/service/gpu:gpu_layout_assignment' failed (Exit 1)
external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_layout_assignment.cc:53:18: error: constexpr variable 'kAllNCHW' must be initialized by a constant expression
  constexpr auto kAllNCHW =
                 ^
external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_layout_assignment.cc:54:7: note: non-constexpr function 'make_tuple<stream_executor::dnn::DataLayout, stream_executor::dnn::FilterLayout, stream_executor::dnn::DataLayout>' cannot be used in a constant expression
      std::make_tuple(DataLayout::kBatchDepthYX, FilterLayout::kOutputInputYX,
      ^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/tuple:1094:1: note: declared here
make_tuple(_Tp&&... __t)
^
external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_layout_assignment.cc:56:18: error: constexpr variable 'kAllNHWC' must be initialized by a constant expression
  constexpr auto kAllNHWC =
                 ^
external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_layout_assignment.cc:57:7: note: non-constexpr function 'make_tuple<stream_executor::dnn::DataLayout, stream_executor::dnn::FilterLayout, stream_executor::dnn::DataLayout>' cannot be used in a constant expression
      std::make_tuple(DataLayout::kBatchYXDepth, FilterLayout::kOutputYXInput,
      ^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/tuple:1094:1: note: declared here
make_tuple(_Tp&&... __t)
^
2 errors generated.
Target //jax:build_jax failed to build

Any ideas?

Batching broken for non-monoidal reducers

repro:

>>> import jax.numpy as np
>>> from jax import vmap
>>> vmap(np.any)(np.array([[True, False], [False, False]]))
jax/lib/xla_bridge.py:138: UserWarning: No GPU found, falling back to CPU.
  warnings.warn('No GPU found, falling back to CPU.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "jax/api.py", line 149, in batched_fun
    out_flat = batching.batch(flat_fun, in_flat, in_axes_, out_axes)
  File "jax/interpreters/batching.py", line 43, in batch
    out_val, out_dim = batch_transform(fun).call_wrapped(in_vals, in_dims)
  File "jax/linear_util.py", line 85, in call_wrapped
    ans = self.f(*args, **self.kwargs)
  File "jax/numpy/lax_numpy.py", line 607, in reduction
    result = lax.reduce(a, _reduction_init_val(a, init_val), op, dims)
  File "jax/lax.py", line 260, in reduce
    dimensions=tuple(dimensions))
  File "jax/core.py", line 74, in bind
    out_tracer = top_trace.process_primitive(self, tracers, kwargs)
  File "jax/interpreters/batching.py", line 119, in process_primitive
    val_out, dim_out = batched_primitive(vals_in, dims_in, **params)
TypeError: reducer_batcher() takes exactly 4 arguments (3 given)

v0.2 tasks

Collecting issues for a hypothetical v0.2.

  • easy API for custom primitives (w/ docs) (#116)
  • Experimental Cloud TPU support (#27)
  • Frequently asked questions doc (#58)
  • Autodiff cookbook (#59)
  • Vmap cookbook (#60)
  • api.py documentation (#61)
  • Upload some materials to docs folder (e.g. PRNG design doc, rev-from-fwd) (#62)
  • JAX development philosophy doc (#64)
  • Rename jaxlib to xlapy (#65)
  • Conda installations (#66)
  • Good NumPy core function coverage (#70)
  • Full np.fft coverage
  • Full np.linalg coverage
  • Good random coverage
  • grad of cond and while (#83)
  • mature stax (#137)
  • JAX + TF interop (demonstrates composability with internal and external tools) (#45)
  • In-place update syntax (#122)
  • Keras NumPy backend demo (#63)
  • Open source tests (#67)
  • ONNX -> jaxpr loading (#91)
  • Travis CI (#68)

Open source tests

We'd like contributors to be able to make sure their code isn't breaking anything, and that they add tests that cover their own contributions.

Broadcasting of size-0 dimensions not implemented

Numpy supports broadcasts with size-0 dimensions against size-1 dimensions:

onp.ones([0,1]) + onp.ones([1,128])

produces:

array([], shape=(0, 128), dtype=float64)

However

import jax
import jax.numpy as np

to_device = jax.jit(lambda x: x)
to_device(np.ones([0, 1])) + to_device(np.ones([1, 128]))
ValueError: Incompatible shapes for broadcasting: ((0, 1), (1, 128))

The broadcasting rule computes the output shape as

result_shape = onp.max(shapes, axis=0)

but it probably needs to be something like this:

min_shape = onp.min(shapes, axis=0)
max_shape = onp.max(shapes, axis=0)
result_shape = onp.where(min_shape == 0, 0, max_shape)
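
A self-contained sketch of the proposed rule (the helper name is hypothetical, and like the snippet above it assumes the shapes were already padded to a common rank):

import numpy as onp

def broadcast_shape(*shapes):
    shapes = onp.array(shapes)
    min_shape = onp.min(shapes, axis=0)
    max_shape = onp.max(shapes, axis=0)
    # A size-0 dimension broadcast against a size-1 dimension stays size 0.
    return tuple(onp.where(min_shape == 0, 0, max_shape))

print(broadcast_shape((0, 1), (1, 128)))  # (0, 128)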

Add support for `np.trace`

The goal is to use @jit on a function containing a call to np.trace(...). My rudimentary attempt to implement it via indexing also fails:

from jax import numpy as np
from jax.api import jit
import numpy as onp

@jit
def trace(A):
  return np.trace(A)  # Exception: Numpy function <function trace at 0x7f89bee1eb90> not yet implemented

@jit
def trace(A):
  idx = onp.diag_indices(len(A))
  diag = A[idx]  # TypeError: No abstraction handler for type: <type 'tuple'>
  return np.sum(diag)

Easy API for custom primitives and vjps

JAX supports custom primitives and vjps, just like Autograd did. Improvements:

  1. add this to documentation
  2. add a minimal example of this in the examples section
  3. add a wrapper function if appropriate?
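
As a sketch for points 1 and 2: the decorator below, jax.custom_vjp, is the wrapper interface JAX eventually shipped for this, shown on the classic numerically-stable log1pexp example:

import jax
import jax.numpy as np

@jax.custom_vjp
def log1pexp(x):
    return np.log1p(np.exp(x))

def log1pexp_fwd(x):
    return log1pexp(x), x  # save x as the residual for the backward pass

def log1pexp_bwd(x, g):
    # d/dx log(1 + e^x) = sigmoid(x); avoids the overflow-prone e^x / (1 + e^x)
    return (g * jax.nn.sigmoid(x),)

log1pexp.defvjp(log1pexp_fwd, log1pexp_bwd)
print(jax.grad(log1pexp)(3.0))  # ~0.9526, i.e. sigmoid(3.0)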
