While research in deep learning continues to improve the world, we rely on a number of practical tricks when implementing algorithms with TensorLayer day to day.
Here is a summary of tricks for using TensorLayer; you can also find more tips in the FAQ.
If you find a trick that is particularly useful in practice, please open a Pull Request to add it to this document. If we find it reasonable and verified, we will merge it in.
- 🇨🇳 The Chinese book 《深度学习:一起玩转TensorLayer》 (*Deep Learning: Play with TensorLayer Together*) is now available.
- To pin your TL version and edit the source code easily, you can download the whole repository by executing `git clone https://github.com/zsdonghao/tensorlayer.git` in your terminal, then copy the `tensorlayer` folder into your project.
- As TL is growing very fast, if you want to use `pip install`, we suggest you install the master version.
- For NLP applications, you will need to install NLTK and the NLTK data.
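  One way to fetch the NLTK data from Python (a minimal sketch; the `punkt` tokenizer models are one common requirement, assumed here):

```python
import nltk
nltk.download('punkt')  # tokenizer models; download other corpora the same way
```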
- TF to TL: use `InputLayer` to wrap a TensorFlow tensor into a TensorLayer network.
- TL to TF: use `network.outputs` to get the TensorFlow tensor back out of a TensorLayer network.
- Other methods: issues7; multiple inputs: issues31
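A minimal round-trip sketch (the placeholder shape is an assumption):

```python
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = tl.layers.InputLayer(x, name='input')      # TF tensor -> TL network
net = tl.layers.DenseLayer(net, n_units=100, act=tf.nn.relu, name='dense')
h = net.outputs                                  # TL network -> TF tensor
h = tf.nn.l2_normalize(h, 1)                     # continue with plain TF ops
```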
- Use `network.all_drop` to control the training/testing phase (for `DropoutLayer` only), see tutorial_mnist.py and Understand Basic layer.
- Alternatively, set `is_fix` to `True` in `DropoutLayer`, and build separate graphs for training/testing by reusing the parameters. You can also set a different `batch_size` and noise probability for each graph. This method is best when you use `GaussianNoiseLayer`, `BatchNormLayer` and the like. Here is an example:
```python
def mlp(x, is_train=True, reuse=False):
    with tf.variable_scope("MLP", reuse=reuse):
        tl.layers.set_name_reuse(reuse)
        net = InputLayer(x, name='in')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop1')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop2')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense2')
        net = DropoutLayer(net, 0.8, True, is_train, name='drop3')
        net = DenseLayer(net, n_units=10, act=tf.identity, name='out')
        logits = net.outputs
        net.outputs = tf.nn.sigmoid(net.outputs)
    return net, logits

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
net_train, logits = mlp(x, is_train=True, reuse=False)
net_test, _ = mlp(x, is_train=False, reuse=True)
cost = tl.cost.cross_entropy(logits, y_, name='cost')
```
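Since the test graph reuses the trained weights, evaluation only needs `net_test`. A hedged usage sketch (the accuracy ops below are an illustration, not part of the original example):

```python
# Hypothetical evaluation ops on the reused (is_train=False) graph
correct = tf.equal(tf.argmax(net_test.outputs, 1), y_)
acc = tf.reduce_mean(tf.cast(correct, tf.float32))
# sess.run(acc, feed_dict={x: X_val, y_: y_val})
```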
- Use `tl.layers.get_variables_with_name` instead of `net.all_params` to get the variables you want to train.

```python
train_vars = tl.layers.get_variables_with_name('MLP', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)
```
- This method can also be used to freeze some layers during training: simply exclude their variables from the list.
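  For example, a minimal sketch that trains only the output layer of the MLP above (the scope name `MLP/out` follows the layer names in that example; adapt it to your own graph):

```python
# Only variables under 'MLP/out' are updated; the other layers stay frozen
train_vars = tl.layers.get_variables_with_name('MLP/out', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)
```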
- Use `tl.layers.get_layers_with_name` to get a list of activation outputs from a network.

```python
layers = tl.layers.get_layers_with_name(network, "MLP", True)
```

- This method is usually used for activation regularization.
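  A minimal sketch of what that can look like (the L1 penalty and the 0.001 weight are illustrative assumptions, not recommended values):

```python
# Penalize large activations to encourage sparsity
layers = tl.layers.get_layers_with_name(net_train, "MLP", True)
for out in layers:
    cost += 0.001 * tf.reduce_mean(tf.abs(out))
```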
- Pre-trained CNN
  - Many applications need a pre-trained CNN model.
  - The TL examples provide pre-trained VGG16, VGG19, Inception and more: TL/example
  - `tl.layers.SlimNetsLayer` allows you to use all TF-Slim pre-trained models.
- ResNet
  - Implemented with a `for` loop: issues85
  - Other methods by @ritchieng
- Use TFRecord; see the cifar10 and tfrecord examples. A good wrapper: imageflow.
- Use Python threading with `tl.prepro.threading_data` and the image augmentation functions, see tutorial_image_preprocess.py (a combined sketch follows this list).
- If your data set is small enough to fit into your machine's memory:
  - Use `tl.iterate.minibatches` to shuffle and return examples and labels with the given batch size.
  - For time-series data, use `tl.iterate.seq_minibatches`, `tl.iterate.seq_minibatches2`, `tl.iterate.ptb_iterator` and the like.
- If your data set is very large:
  - Use `tl.prepro.threading_data` to read a batch of data at the beginning of every training step.
  - Use TFRecord again; see the cifar10 and tfrecord examples.
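A minimal training-loop sketch combining `tl.iterate.minibatches` and `tl.prepro.threading_data` (assumes `X_train` holds image-shaped NumPy arrays and that `sess`, `train_op`, `x`, `y_` already exist; the rotation augmentation is an arbitrary illustration):

```python
for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size=128, shuffle=True):
    # augment the whole batch in parallel threads
    X_batch = tl.prepro.threading_data(X_batch, fn=tl.prepro.rotation, rg=20, is_random=True)
    sess.run(train_op, feed_dict={x: X_batch, y_: y_batch})
```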
- Use `LambdaLayer`; it can also accept functions that create new variables. With this layer you can connect any third-party TF library and your customized functions to TL. Here is an example of using Keras and TL together.
```python
import tensorflow as tf
import tensorlayer as tl
from keras.layers import *
from tensorlayer.layers import *

def my_fn(x):
    x = Dropout(0.8)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    logits = Dense(10, activation='linear')(x)
    return logits

network = InputLayer(x, name='input')
network = LambdaLayer(network, my_fn, name='keras')
...
```
- Use `tl.nlp.process_sentence` to tokenize the sentences; NLTK and the NLTK data are required.
- Then use `tl.nlp.create_vocab` to create a vocabulary and save it as a txt file (it returns a `tl.nlp.SimpleVocabulary` object for word-to-id mapping only).
- Finally use `tl.nlp.Vocabulary` to create a vocabulary object from the txt vocabulary file created by `tl.nlp.create_vocab` (a sketch of the whole pipeline follows this list).
- More pre-processing functions for sentences can be found in tl.prepro and tl.nlp.
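A minimal sketch of that pipeline (the file name and special tokens are assumptions):

```python
# tokenize, build the vocabulary file, then load it as a Vocabulary object
processed = [tl.nlp.process_sentence(s, start_word="<S>", end_word="</S>") for s in sentences]
tl.nlp.create_vocab(processed, word_counts_output_file='vocab.txt', min_word_count=1)
vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
```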
- Use `tl.layers.retrieve_seq_length_op2` to automatically compute the sequence length from a placeholder, and feed it to the `sequence_length` argument of `DynamicRNNLayer` (see the sketch after this list).
- Apply zero padding to a batch of tokenized sentences as follows:

```python
b_sentence_ids = tl.prepro.pad_sequences(b_sentence_ids, padding='post')
```

- Other methods: issues18
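A minimal sketch of feeding the computed lengths into `DynamicRNNLayer` (vocabulary size, embedding size and cell choice are assumptions):

```python
input_seqs = tf.placeholder(dtype=tf.int64, shape=[None, None], name="input_seqs")
net = tl.layers.EmbeddingInputlayer(
        inputs=input_seqs, vocabulary_size=10000, embedding_size=200, name='embed')
net = tl.layers.DynamicRNNLayer(net,
        cell_fn=tf.contrib.rnn.BasicLSTMCell,  # tf.nn.rnn_cell.BasicLSTMCell for TF < 1.0
        n_hidden=200,
        sequence_length=tl.layers.retrieve_seq_length_op2(input_seqs),
        name='dynamic_rnn')
```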
- Disable console logging: if you are building a very deep network and don't want to view it in the terminal, disable `print` with `tl.ops.suppress_stdout()`:

```python
print("You can see me")
with tl.ops.suppress_stdout():
    print("You can't see me")  # build your graphs here
print("You can see me")
```
TL can interact with other TF wrappers, which means that if you find some code or a model implemented with another wrapper, you can just use it!
- Keras to TL: KerasLayer (if you find some code implemented in Keras, just use it; example here)
- TF-Slim to TL: SlimNetsLayer (you can use all of Google's pre-trained convolutional models with this layer!)
- We expect more libraries to become compatible with TL.
- RNN `cell_fn`: use `tf.contrib.rnn.{cell_fn}` for TF 1.0 and later, or `tf.nn.rnn_cell.{cell_fn}` for earlier versions.
- `cross_entropy`: a unique name must be given for TF 1.0 and later.
- TL official sites: Docs, 中文文档 (Chinese docs), Github
- Learning Deep Learning with TF and TL
- Follow zsdonghao for further examples
- Zhang Rui
- You