buriburisuri / sugartensor

License: MIT License


sugartensor's Introduction

Sugar Tensor - A slim tensorflow wrapper that provides syntactic sugar for tensor variables

Sugar Tensor aims to help deep learning researchers/practitioners. It adds some syntactic sugar functions to tensorflow to avoid tedious repetitive tasks. Sugar Tensor was developed under the following principles:

Current version: 1.0.0.2

Principles

  1. Don't mess up tensorflow. We provide no wrapper classes. Instead, we work on the tensor itself, so developers can keep programming with tensorflow exactly as before.
  2. Don't mess up the Python style. We believe Python source code should look clean and simple. Practical deep learning code is very different from complex GUI programs. Do we really need inheritance and/or encapsulation in deep learning code? Instead, we seek simplicity and readability. To that end, we use pure Python functions only and avoid class-style conventions.

Installation

  1. Requirements

    1. tensorflow == 1.0.0
  2. Dependencies ( Will be installed automatically )

    1. tqdm >= 4.8.4
  3. Installation

Python 2

pip install --upgrade sugartensor

Python 3

pip3 install sugartensor

Docker installation: see the docker README.md.

API Document

See SugarTensor's official API documentation.

Quick start

Imports

import sugartensor as tf   # no need to 'import tensorflow'

Features

Sugar functions

All tensors--variables, operations, and constants--automatically get sugar functions, which start with 'sg_' to avoid namespace chaos. :-)

Chainable object syntax

Inspired by the prettytensor library, we support chainable object syntax for all sugar functions. This should improve productivity and readability. Take a look at the following snippet.


logit = (tf.placeholder(tf.float32, shape=(BATCH_SIZE, DATA_SIZE))
         .sg_dense(dim=400, act='relu', bn=True)
         .sg_dense(dim=200, act='relu', bn=True)
         .sg_dense(dim=10))

All returned objects are tensors.

In the above snippet, all values returned by sugar functions are pure tensorflow tensors (variables/constants), so the following example is completely legal.


ph = tf.placeholder(tf.float32, shape=(BATCH_SIZE, DATA_SIZE))   # <-- this is a tensor
ph = ph.sg_dense(dim=400, act='relu', bn=True)   # <-- this is a tensor
ph = ph * 100 + 10  # <-- this is ok.
ph = tf.reshape(ph, (-1, 20, 20, 1)).sg_conv(dim=30)   # <-- any tensorflow function can be applied and chained.

Practical DRY (Don't repeat yourself) functions for deep learning researchers

We provide pre-defined, powerful training and report functions for practical developers. The following code is a full MNIST training module with saver, report, and early stopping support.


# -*- coding: utf-8 -*-
import sugartensor as tf

# MNIST input tensor ( with QueueRunner )
data = tf.sg_data.Mnist()

# inputs
x = data.train.image
y = data.train.label

# create training graph
logit = (x.sg_flatten()
         .sg_dense(dim=400, act='relu', bn=True)
         .sg_dense(dim=200, act='relu', bn=True)
         .sg_dense(dim=10))

# cross entropy loss with logit ( for training set )
loss = logit.sg_ce(target=y)

# accuracy evaluation ( for validation set )
acc = (logit.sg_reuse(input=data.valid.image).sg_softmax()
       .sg_accuracy(target=data.valid.label, name='val'))

# train
tf.sg_train(loss=loss, eval_metric=[acc], ep_size=data.train.num_batch)

You can check all statistics through tensorboard's web interface.

If you want to write a more complex training module without repeating the saver, report, and other boilerplate, you can do so as follows.


import numpy as np

# define an alternative training function
@tf.sg_train_func   # <-- sugar annotator for training function wrapping
def alt_train(sess, opt):
    # loss_disc, train_disc, loss_gen, train_gen come from your GAN graph
    l_disc = sess.run([loss_disc, train_disc])[0]  # train the discriminator
    l_gen = sess.run([loss_gen, train_gen])[0]  # train the generator
    return np.mean(l_disc) + np.mean(l_gen)

# do training
alt_train(log_interval=10, ep_size=data.train.num_batch, early_stop=False, save_dir='asset/train/gan')

Please see the example code in the 'sugartensor/example/' directory.

Custom layers

You can add your own custom layer functions like the following code snippet.

# residual block
@tf.sg_sugar_func
def sg_res_block(tensor, opt):
    # default rate
    opt += tf.sg_opt(size=3, rate=1, causal=False)

    # input dimension
    in_dim = tensor.get_shape().as_list()[-1]

    # reduce dimension
    input_ = (tensor
              .sg_bypass(act='relu', bn=(not opt.causal), ln=opt.causal)
              .sg_conv1d(size=1, dim=in_dim // 2, act='relu', bn=(not opt.causal), ln=opt.causal))

    # 1xk conv dilated
    out = input_.sg_aconv1d(size=opt.size, rate=opt.rate, causal=opt.causal, act='relu', bn=(not opt.causal), ln=opt.causal)

    # dimension recover and residual connection
    out = out.sg_conv1d(size=1, dim=in_dim) + tensor

    return out

# inject residual block
tf.sg_inject_func(sg_res_block)

For more information, see ByteNet example code or WaveNet example code.

Multi-GPU support

You can train your model with multiple GPUs using the sg_parallel decorator as follows:

# batch size
batch_size = 128


# MNIST input tensor ( batch size should be adjusted for multiple GPUs )
data = tf.sg_data.Mnist(batch_size=batch_size * tf.sg_gpus())

# split inputs for each GPU tower
inputs = tf.split(data.train.image, tf.sg_gpus(), axis=0)
labels = tf.split(data.train.label, tf.sg_gpus(), axis=0)


# simple wrapping function with decorator for parallel training
@tf.sg_parallel
def get_loss(opt):

    # conv layers
    with tf.sg_context(name='convs', act='relu', bn=True):
        conv = (opt.input[opt.gpu_index]
                .sg_conv(dim=16, name='conv1')
                .sg_pool()
                .sg_conv(dim=32, name='conv2')
                .sg_pool()
                .sg_conv(dim=32, name='conv3')
                .sg_pool())

    # fc layers
    with tf.sg_context(name='fcs', act='relu', bn=True):
        logit = (conv
                 .sg_flatten()
                 .sg_dense(dim=256, name='fc1')
                 .sg_dense(dim=10, act='linear', bn=False, name='fc2'))

        # cross entropy loss with logit
        return logit.sg_ce(target=opt.target[opt.gpu_index])

# parallel training ( same as single GPU training )
tf.sg_train(loss=get_loss(input=inputs, target=labels), ep_size=data.train.num_batch)

Author

Namju Kim ([email protected]) at KakaoBrain Corp.

sugartensor's People

Contributors

andreasmadsen, buriburisuri, mdeboc


sugartensor's Issues

TypeError occurs when executing generate.py of ac-gan with tensorflow 1.0.0-rc0

Dear,

When I run generate.py of the ac-gan project (https://github.com/buriburisuri/ac-gan), a TypeError occurs from the sugartensor stack as follows:

webofthink@titan4x:~/ac-gan$ python generate.py
/home/adsl/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Traceback (most recent call last):
  File "generate.py", line 55, in <module>
    z = z.sg_concat(target=[target_cval_1.sg_expand_dims(), target_cval_2.sg_expand_dims()])
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/sugartensor/sg_main.py", line 84, in wrapper
    out = func(tensor, tf.sg_opt(kwargs))
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/sugartensor/sg_transform.py", line 75, in sg_concat
    return tf.concat(opt.dim, [tensor] + target, name=opt.name)
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1047, in concat
    dtype=dtypes.int32).get_shape(
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
    as_ref=False)
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/home/adsl/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

According to the prerequisites you mentioned (tensorflow >= 0.12.0), tensorflow 1.0.0-rc0 should be supported properly.
My tensorflow version is shown below:

>>> import tensorflow as tf
>>> print tf.__version__
1.0.0-rc0
>>>

Thank you for sharing this nice work.
I hope this helps make it better.
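For anyone hitting this: TensorFlow 1.0 swapped the argument order of tf.concat from (concat_dim, values) to (values, axis), and the error above is the classic symptom of calling it in the old order. A sketch of the likely fix for the sg_concat call shown in the traceback (not the actual patched sugartensor source):

# pre-1.0 call in sg_transform.py (from the traceback above)
out = tf.concat(opt.dim, [tensor] + target, name=opt.name)

# TensorFlow 1.0+ order: values first, axis second
out = tf.concat([tensor] + target, axis=opt.dim, name=opt.name)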

[log of emails] How to use the placeholder and feed_dict way to train

This is a copy of a conversation via email. I guess it can benefit others a little more to put it here as a log.

Hello Namju,

I'm a programmer from GitHub and am developing some DL models based on your sugartensor. It has brought me much convenience, and thank you a lot for it.

But there seems to be only one way to input data, that is, putting it in queues. I know this can be faster compared to the traditional placeholder and feed_dict approach. However, it becomes awkward when dealing with really large data, because TensorFlow limits a single tensor to 2GB. I want to process a large pile of images now, but all of them cannot be loaded at the same time.

So may I ask if it is possible to use the traditional placeholder and feed_dict way to input data in sugartensor? Or what should I modify to create such functions? Thank you.

Looking forward to your reply.

Exon
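A minimal sketch of feed_dict-based training, reusing the sg_train_func pattern from the README; next_batch(), batch_size, data_size, and num_batches are hypothetical, and the train op is built in plain tensorflow. Note the related issue further down about sugartensor's internal sess.run calls not receiving the feed.

import numpy as np
import sugartensor as tf

# placeholders instead of queue-based input
x = tf.placeholder(tf.float32, shape=(batch_size, data_size))
y = tf.placeholder(tf.int32, shape=(batch_size,))

logit = (x.sg_dense(dim=400, act='relu', bn=True)
         .sg_dense(dim=10))
loss = tf.reduce_mean(logit.sg_ce(target=y))

# plain tensorflow train op tied to sugartensor's global step
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step=tf.sg_global_step())

@tf.sg_train_func
def feed_train(sess, opt):
    x_batch, y_batch = next_batch()  # hypothetical data generator
    return sess.run([loss, train_op], {x: x_batch, y: y_batch})[0]

feed_train(log_interval=10, ep_size=num_batches, save_dir='asset/train/feed')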

How to stack LSTM layer after conv1d layer

I am experimenting on speech-to-text-wavenet.
I tried to add a LSTM layer after conv1d layer, for example:
(skip
 .sg_conv1d(size=1, act='tanh', bn=True, name='conv_1')
 .sg_lstm(last_only=True, name='rnn_1')
 .sg_dense(dim=emotion_size))
The output shape of sg_conv1d is (16, ?, 128)

When running, I got the following error:
sg_layer.py", line 499, in sg_rnn
for i in range(tensor.get_shape().as_list()[1]):
TypeError: range() integer end argument expected, got NoneType.

Is there any advice?
Thanks!
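The error comes from sg_rnn unrolling over the static time dimension, which is None in the shape (16, ?, 128) above. One workaround sketch, assuming a fixed sequence length known at graph-construction time (seq_len is a hypothetical value):

seq_len = 100  # hypothetical fixed sequence length

out = skip.sg_conv1d(size=1, act='tanh', bn=True, name='conv_1')
out.set_shape([16, seq_len, 128])  # pin the unknown time dimension statically
out = (out.sg_lstm(last_only=True, name='rnn_1')
          .sg_dense(dim=emotion_size))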

how to load pre-trained parameters

I have a question here. In sugartensor, a convolution can be done with "tensor.sg_conv(size=3, dim=1)", and a 3x3 conv kernel is randomly initialized. But if I want the conv kernel to be initialized with a specific kernel, such as [1,2,3;3,2,1;2,1,3], how should I do it?
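Since the sugar layer functions pick their initializer internally (see the initializer issue further down), one workaround sketch is to create the kernel variable yourself and apply the raw tensorflow op; x is assumed to be a 4-D input tensor with one channel:

import numpy as np
import tensorflow as tf

# fixed 3x3 kernel, reshaped to conv2d's [height, width, in_dim, out_dim] layout
kernel = np.array([[1., 2., 3.],
                   [3., 2., 1.],
                   [2., 1., 3.]], dtype=np.float32).reshape(3, 3, 1, 1)

w = tf.get_variable('w', initializer=tf.constant(kernel))  # starts from the given values
out = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')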

confused about function sg_aconv1d() and sg_conv1d()...

I have some problems working with the functions sg_conv1d() and sg_aconv1d().
As the example in the README shows, we can call them like this: "out = input_.sg_aconv1d(size=opt.size, rate=opt.rate, causal=opt.causal, act='relu', bn=(not opt.causal), ln=opt.causal)".
But here is a problem: I can't find the parameters 'act' and 'bn' in the function definition code:

@tf.sg_layer_func
def sg_aconv(tensor, opt):
r"""Applies a 2-D atrous (or dilated) convolution.

Args:
  tensor: A 4-D `Tensor` (automatically passed by decorator).
  opt:
    size: A tuple/list of positive integers of length 2 representing `[kernel height, kernel width]`.
      Can be an integer if both values are the same.
      If not specified, (3, 3) is set automatically.
    rate: A positive integer. The stride with which we sample input values across
      the `height` and `width` dimensions. Default is 2.
    in_dim: A positive `integer`. The size of input dimension.
    dim: A positive `integer`. The size of output dimension.
    pad: Either `SAME` (Default) or `VALID`.
    bias: Boolean. If True, biases are added.
    regularizer:  A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable
      will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization
    summary: If True, summaries are added. The default is True.

So I think maybe these two parameters are invalid, and I tried passing some arbitrary parameters to do some tests, and it turns out to work normally without raising any exceptions.

So I'm wondering: how do these two functions work with these two parameters ('act' and 'bn')?

Looking forward to any reply. Thanks a lot!
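What seems to be going on (a simplified illustration, not the actual sugartensor source): the sg_layer_func decorator consumes the common options such as 'act', 'bn', and 'ln' itself and applies them after calling the wrapped layer body, which is why they never appear in an individual layer's docstring. And since sg_opt is a permissive option container (unknown keys simply read back as None), arbitrary extra parameters are silently ignored rather than raising. A sketch; batch_norm is a hypothetical helper:

def layer_func_sketch(func):
    def wrapper(tensor, **kwargs):
        opt = tf.sg_opt(kwargs)
        opt += tf.sg_opt(act='linear', bn=False)   # defaults for common options
        out = func(tensor, opt)                    # the layer body (e.g. the conv op)
        if opt.bn:
            out = batch_norm(out)                  # hypothetical helper
        if opt.act != 'linear':
            out = getattr(tf.nn, opt.act)(out)     # e.g. tf.nn.relu
        return out
    return wrapper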

Python 3 compatibility

Sugartensor is not compatible with Python 3 (in my case, Python 3.5.2).
Is there a way to use sugartensor with Python 3?

python3 support

Still issues with python3:

tf.scalar_summary needs to change to tf.summary.scalar, and the same for tf.histogram_summary, which becomes tf.summary.histogram.

Also having problems in tf.nn.ctc_loss, as a SparseTensor is expected.
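For code written against the old summary API, a minimal compatibility shim sketch under TensorFlow 1.x (the positional arguments line up, so old call sites keep working):

import tensorflow as tf

if not hasattr(tf, 'scalar_summary'):
    tf.scalar_summary = tf.summary.scalar
    tf.histogram_summary = tf.summary.histogram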

AttributeError: 'list' object has no attribute 'startswith'

Exception in thread QueueRunnerThread-fifo_queue-fifo_queue_enqueue:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "load_audio_template.py", line 125, in _run
    self.func(sess, enqueue_op)  # call enqueue function
  File "load_audio_template.py", line 72, in enqueue_func
    data = func(sess.run(inputs))
  File "load_audio_template.py", line 183, in get_audio_spectrograms
    _spectrogram, _magnitude, _length = utils.get_spectrograms(_sound_file)
  File "utils.py", line 40, in get_spectrograms
    y, sr = librosa.load(sound_file, sr=hp.sr)  # or set sr to hp.sr.
  File "/home/elu/.local/lib/python2.7/site-packages/librosa/core/audio.py", line 107, in load
    with audioread.audio_open(os.path.realpath(path)) as input_file:
  File "/usr/lib/python2.7/posixpath.py", line 375, in realpath
    path, ok = _joinrealpath('', filename, {})
  File "/usr/lib/python2.7/posixpath.py", line 381, in _joinrealpath
    if isabs(rest):
  File "/usr/lib/python2.7/posixpath.py", line 54, in isabs
    return s.startswith('/')
AttributeError: 'list' object has no attribute 'startswith'

I got this error when I ran the load_data module.
Does anyone know how to solve this?

Allow passing of parameter for variable initialization

Allow passing of an opt parameter for variable initialization (scale) in the conv1d, aconv1d, embed, etc. methods (can be found here: https://github.com/buriburisuri/sugartensor/blob/master/sugartensor/sg_layer.py).

Currently those methods are automatically using he_uniform, with assumed scale of 1. This causes problems on large shaped objects, e.g. at some input / outputs I get scale of 0.005 for the uniform method, which causes the network to misbehave and dead neurons to appear (gradients close/equal to 0).

There's no other trivial way to change the initialization methodology except editing the library code.
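Until such a parameter exists, one workaround sketch is to create the weight with your own initializer and apply the raw tensorflow op, bypassing the sugar layer's he_uniform default; in_dim, out_dim, x, and the scale value are assumed from context:

import tensorflow as tf

scale = 0.05  # hypothetical hand-picked initialization scale
w = tf.get_variable('w', shape=[3, in_dim, out_dim],
                    initializer=tf.random_uniform_initializer(-scale, scale))
out = tf.nn.conv1d(x, w, stride=1, padding='SAME')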

Setting bias to False

I could be misunderstanding this one, but does line 117 of sg_main mean that bias is always determined by either opt.bn or opt.ln? Is there a way to determine it explicitly?

How to realize the sg_conv() in pytorch?

v2=v1.sg_conv(dim=64, size=(1,1), name='gen1',pad="SAME",bn=True)

import torch.nn as nn

conv2 = nn.Conv2d(16, 64, kernel_size=(1, 1), bias=True, padding=0)
bn2 = nn.BatchNorm2d(64)
out = bn2(conv2(v1))

Are these two equivalent?
Thank you.

Add tables initializer in the local_init_op of the Supervisor class

One of my implementations required usage of tf.contrib.lookup.index_table_from_file, which creates a hash table that needs to be initialized before training via tf.tables_initializer(). The place to do this is within the sg_train.py file, where the Supervisor class is created. E.g.:

        # console logging function
        def console_log(sess_):
            if epoch >= 0:
                tf.sg_info('\tEpoch[%03d:gs=%d] - loss = %s' %
                           (epoch, sess_.run(tf.sg_global_step()),
                            ('NA' if loss is None else '%8.6f' % loss)))

        local_init_op = tf.group(tf.sg_phase().assign(True), tf.tables_initializer())

        # create supervisor
        sv = tf.train.Supervisor(logdir=opt.save_dir,
                                 saver=saver,
                                 save_model_secs=opt.save_interval,
                                 summary_writer=summary_writer,
                                 save_summaries_secs=opt.log_interval,
                                 global_step=tf.sg_global_step(),
                                 local_init_op=local_init_op)

Adding Custom layers

Hi, this is a really amazing repo!
My question is again about custom layers.
What is the best way to add them?

Easier way to add custom layers

There's a lot of great functionality that comes with making a layer, like the activation, bias, and logging. I think it would be great to have an identity layer function, for the case where you want to apply activation and bias but don't want any other transformation.

For example, I'm trying to add a scaling process to convolutions that has to happen before the bias and activation. What I want to do is:

    conv = tensor.sg_conv(dim=64, act='linear', bn=False)  # don't apply bias or act yet...
    scaled_conv = tf.multiply(conv, scaling_matrix)
    bias_and_act_applied = scaled_conv.sg_ident_layer()  # act and bias from 'ops' applied here.
    return bias_and_act_applied

I see there's tf.identity, but it doesn't come with the nice layer add-ons.

Same as before, I'd consider making a pull request if you think it's worthwhile, but for this one I would need a little bit of direction.

Thanks again
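A sketch of what such a layer might look like, following the custom-layer pattern from the README, and assuming sg_inject_func accepts sg_layer_func-decorated functions the same way it accepts sg_sugar_func ones; the decorator would then supply the bias, activation, and logging add-ons while the body itself is a no-op (sg_ident_layer is the hypothetical name from the request above):

@tf.sg_layer_func
def sg_ident_layer(tensor, opt):
    return tf.identity(tensor)

tf.sg_inject_func(sg_ident_layer)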

AttributeError: 'FuncQueueRunner' object has no attribute '_runs'

Got errors when training:

Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sugartensor/sg_queue.py", line 108, in _run
    self._runs -= 1
AttributeError: 'FuncQueueRunner' object has no attribute '_runs'

ValueError: Unable to convert message to str

import os
from hyperparams import Hp
import sugartensor as tf

def main():
    printLineFileFunc()
    g = Graph()
    print("Graph Loaded")
    tf.sg_train(optim="Adam", lr=0.0001, lr_reset=True, loss=g.reduced_loss, ep_size=g.num_batch,
                save_dir='asset/train', max_ep=10, early_stop=False)

I got error:

XX.shape = (8173196, 20)
YY.shape = (8173196, 20)
Graph Loaded
Traceback (most recent call last):
  File "train.py", line 365, in <module>
    main()
  File "train.py", line 353, in main
    save_dir='asset/train', max_ep=10, early_stop=False)
  File "/home/yike.yk/.local/lib/python3.6/site-packages/sugartensor/sg_train.py", line 69, in sg_train
    train_func(**opt)
  File "/home/yike.yk/.local/lib/python3.6/site-packages/sugartensor/sg_train.py", line 313, in wrapper
    with sv.managed_session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
  File "/usr/local/lib/python3.6/contextlib.py", line 82, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 960, in managed_session
    self.stop(close_summary_writer=close_summary_writer)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 788, in stop
    stop_grace_period_secs=self._stop_grace_secs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 386, in join
    six.reraise(*self._exc_info_to_raise)
  File "/usr/local/lib/python3.6/site-packages/six.py", line 686, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 949, in managed_session
    start_standard_services=start_standard_services)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 707, in prepare_or_wait_for_session
    self._write_graph()
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 610, in _write_graph
    self._logdir, "graph.pbtxt")
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/graph_io.py", line 67, in write_graph
    file_io.atomic_write_string_to_file(path, str(graph_def))
ValueError: Unable to convert message to str

Running GAN example with placeholder for data.train, data.train.label instead of queue runner

@tf.sg_train_func
def alt_train(sess, opt):
    x_batch, x_label_batch = train_generator.next()
    l_disc = sess.run([loss_disc, train_disc], {x: x_batch, x_label: x_label_batch})[0]  # training discriminator
    l_gen = sess.run([loss_gen, train_gen], {x: x_batch, x_label: x_label_batch})[0]  # training generator
    return np.mean(l_disc) + np.mean(l_gen)

This causes some problems in the function: sugartensor.sg_train.sg_train_func(func)

Specifically, because the placeholders x and x_label must be fed on every sess.run() call, the internal calls to sess.run() like this one: sugartensor/sg_train.py#L331 fail.

The error thrown: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

My temporary workaround is just to comment out sugartensor/sg_train.py#L331

inconsistency between sg_ce and sg_train

sg_ce says it returns:

"A 1-D Tensor with the same shape as tensor".

(actually it returns an n-D Tensor that has shape tensor.get_shape()[:-1].)

sg_train says it takes:

A 0-D Tensor containing the value to minimize.

but somehow this supports an n-D tensor. See for example your ByteNet implementation:
https://github.com/buriburisuri/ByteNet/blob/master/train.py#L103L106
sg_train also calls np.mean internally: https://github.com/buriburisuri/sugartensor/blob/master/sugartensor/sg_train.py#L339


I'm not really sure what the intended behaviour is, but I would definitely like it if sg_train continued to be able to take a scalar value.
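One way to stay on the documented contract in the meantime is to reduce the per-element cross entropy to a 0-D tensor explicitly before handing it to sg_train (a sketch using the names from the README's MNIST example):

loss = tf.reduce_mean(logit.sg_ce(target=y))  # 0-D scalar, as sg_train documents
tf.sg_train(loss=loss, ep_size=data.train.num_batch)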

leaky relu slope, parameter?

Hi, would it be possible to turn the slope of the negative piece of leaky relu into a parameter instead of a hard-coded value?
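Until that lands, a parameterized version can be added with the custom-function pattern from the README; sg_leaky_relu_p and its default slope are hypothetical names, not existing sugartensor API:

import sugartensor as tf

@tf.sg_sugar_func
def sg_leaky_relu_p(tensor, opt):
    opt += tf.sg_opt(alpha=0.2)  # hypothetical default negative slope
    return tf.maximum(tensor, opt.alpha * tensor)

tf.sg_inject_func(sg_leaky_relu_p)

# usage
h = x.sg_leaky_relu_p(alpha=0.1)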

Path to Tensorboard

The path to tensorboard has changed, so sg_train.py needs to be updated.
For some reason, removing contrib works fine in python3, but not python2 (for me, at least).

Adding default support for other databases

First off, I really love this library. It makes tensorflow development so fast.

I wanted to make a plug for default support of CIFAR, like you have for MNIST. It's a real pain setting up the queue for CIFAR manually, and some image techniques only show their benefits on things more complex than MNIST.

I'd consider making a pull request for this, if you'd accept it

Does it not work while using the newest tensorflow?

It seems that it doesn't work when using the newest tensorflow.

As to the MNIST sample:

Traceback (most recent call last):
  File "test.py", line 18, in <module>
    loss = logit.sg_ce(target=y)
  File "/usr/local/lib/python2.7/dist-packages/sugartensor/sg_main.py", line 151, in wrapper
    out = func(tensor, tf.sg_opt(kwargs))
  File "/usr/local/lib/python2.7/dist-packages/sugartensor/sg_loss.py", line 44, in sg_ce
    out = tf.identity(tf.nn.sparse_softmax_cross_entropy_with_logits(tensor, opt.target), 'ce')
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1684, in sparse_softmax_cross_entropy_with_logits
    labels, logits)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1533, in _ensure_xent_args
    "named arguments (labels=..., logits=..., ...)" % name)
ValueError: Only call sparse_softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)
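TensorFlow 1.0 made these arguments keyword-only, so a likely fix for the call in sg_loss.py shown in the traceback is (a sketch, mapping the old positional order logits-then-labels onto keywords):

out = tf.identity(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=opt.target, logits=tensor), 'ce')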

tensorflow import error

This command has an error; how can I solve it?
from tensorflow import *
AttributeError: module 'tensorflow' has no attribute 'absolute_import'

module 'sugartensor' has no attribute 'GraphKeys'. Why does this error appear?

I just started to run import sugartensor as tf and that error happened. Why?

_phase = tf.Variable(False, name='phase', trainable=False, collections=[tf.GraphKeys.LOCAL_VARIABLES])
AttributeError: partially initialized module 'sugartensor' has no attribute 'GraphKeys' (most likely due to a circular import)
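A likely cause, given the Requirements section above: tf.GraphKeys was removed in TensorFlow 2.x, so importing sugartensor (which targets tensorflow == 1.0.0) under a 2.x install fails while it re-exports tensorflow's symbols. Pinning a 1.x TensorFlow, e.g.

pip install tensorflow==1.0.0

should avoid the error.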
