Style Transfer in TouchDesigner

This is a TouchDesigner implementation of Style Transfer using Neural Networks. The project is based on

You can read about the underlying math of the algorithm here.

Here are some results next to the original photos:

Setup

  1. Install TouchDesigner
  2. Install TensorFlow for Windows. It's highly recommended to use the GPU version (so you'll also need to install CUDA and, optionally, cuDNN). You can install TensorFlow directly into your Python directory or with Anaconda.
  3. In the TouchDesigner menu Edit - Preferences - Python 32/64 bit Module Path, add the path to the folder where TensorFlow is installed (e.g. C:/Anaconda3/envs/TFinTD/Lib/site-packages). Details here. To check your installation, run this in the Textport (Alt+t):
import tensorflow as tf
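# TensorFlow 1.x API: build a small graph and evaluate it in a Session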
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

If the system outputs Hello, TensorFlow!, then TensorFlow in TouchDesigner works correctly.
  4. Run the command line or PowerShell, activate the conda environment (if TensorFlow was installed in conda) and install:

  • numpy
  • scipy
  • opencv (the OpenCV preinstalled in TouchDesigner 099 works fine, but for 088 you should install it manually in Python (or conda))
The installed numpy should override TouchDesigner's built-in numpy module. To check which one is picked up, enter in the TouchDesigner Textport:
import numpy
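# evaluating the bare module name echoes its repr, which shows where it was loaded from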
numpy

You should see the path to numpy in your Python directory or conda environment (e.g. C:/Anaconda3/envs/TFinTD/Lib/site-packages/numpy/__init__.py).
  5. Download the VGG-19 model weights (see the "VGG-VD models from the Very Deep Convolutional Networks for Large-Scale Visual Recognition project" section). After downloading, copy the weights file imagenet-vgg-verydeep-19.mat to the project directory, or set the path to it using the Style Transfer user interface in TouchDesigner (the Path to VGG parameter in the last row of the StyleTransfer.toe UI).
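
As a quick sanity check of the downloaded weights (a minimal sketch; it assumes scipy from step 4 and that the file sits in the project directory - the path below is illustrative), you can confirm the .mat file loads from the Textport:

import scipy.io

# A clean load confirms the download is intact; the convolutional weights
# are typically stored under the 'layers' key of the returned dict.
vgg = scipy.io.loadmat('imagenet-vgg-verydeep-19.mat')  # adjust the path if needed
print(list(vgg.keys()))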

Usage

Basic Usage

  1. It's recommended to copy all the images you need into the project folder directories /input and /styles (or create your own directories). Long absolute paths sometimes don't work (especially in the Windows %USER% folder).
  2. Choose the content image in the input TOP
  3. Choose the style image in the style1 TOP
  4. Press Run Style Transfer in the UI (steps 2-4 can also be scripted from the Textport; see the sketch after this list)
  5. Wait. TouchDesigner won't respond for some seconds or minutes (depending on your GPU and the resolution of the images).
  6. The result will be in the result TOP, linked to a file in the /output folder. A log with some info is in the log DAT - save it somewhere if needed.
  7. Experiment with the settings
  8. Experiment with the code in /StyleTransfer/local/modules/main DAT
  9. If something isn't working, first check for errors in the Textport
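
If you want to batch several runs, steps 2-4 can be scripted from the Textport. A minimal sketch, assuming input and style1 are Movie File In TOPs (adjust the operator paths and parameter names to match your network; the image paths are illustrative):

# Point the content and style TOPs at project-relative files,
# then press Run Style Transfer in the UI as usual.
op('input').par.file = 'input/portrait.jpg'        # content image
op('style1').par.file = 'styles/starry_night.jpg'  # style image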

Settings

  • You can always load the default parameters when an experiment goes too far.
  • Num of iterations - Maximum number of iterations for the optimizer: a larger number increases the effect of stylization.
  • Maximum resolution - Max width or height of the input/style images. Higher resolutions increase processing time and GPU memory usage. Good news: you don't need the Commercial version of TouchDesigner to produce images larger than 1280×1280.
  • You can perform style transfer on the GPU or the CPU. GPU mode is many times faster and highly recommended, but requires NVIDIA CUDA (see the Setup section).
  • You can transfer more than one style to the input image. Set the number of styles, a weight for each of them, and choose files in the style TOPs. If you want to go beyond 5 styles, make changes in /StyleTransfer/UI/n_styles.
  • Use style masks if you want to apply style transfer to specific areas of the image. Choose masks in the stylemask TOPs. The style is applied to white regions.
  • Keep original colors - Use this if you want the style transferred but not the colors (a sketch of the underlying luminance-matching idea follows this list).
  • Color space conversion - Color space (YUV, YCrCb, CIE L*u*v*, CIE L*a*b*) used for the luminance-matching conversion back to the original colors.
  • Content_weight - Weight for the content loss function (see the sketch after this list for how the loss weights combine). You can use numbers in scientific E notation.
  • Style_weight - Weight for the style loss function.
  • Temporal_weight - Weight for the temporal loss function.
  • Total variation weight - Weight for the total variation loss function.
  • Type of initialization image - You can initialize the network with the content image, a random (noise) image, or the style image.
  • Noise_ratio - Interpolation value between the content image and the noise image when the network is initialized with random.
  • Optimizer - Loss minimization optimizer. L-BFGS gives better results; Adam uses less memory.
  • Learning_rate - Learning-rate parameter for the Adam optimizer.

  • VGG19 layers for content/style image - VGG-19 layers and weights used for the content/style image.
  • Constant (K) for the loss function - Different constants K in the content loss function.

  • Type of pooling in CNN - Maximum or average type of pooling in the convolutional neural network.
  • Path to VGG file - Path to imagenet-vgg-verydeep-19.mat. Download it here.
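
These settings interact in a fairly standard way for this family of algorithms. The two sketches below are illustrative only: the variable names, default values and helper function are not taken from the project's code. The first shows how the initialization image and the loss weights typically combine:

import numpy as np

# Type of initialization image / Noise_ratio: with "random" initialization the
# optimizer starts from a blend of noise and the content image.
content = np.zeros((1, 256, 256, 3), dtype=np.float32)  # stand-in for the preprocessed content image
noise_ratio = 0.6
noise = np.random.uniform(-20.0, 20.0, content.shape).astype(np.float32)  # noise range is illustrative
init_image = noise_ratio * noise + (1.0 - noise_ratio) * content

# Loss weights: the optimizer minimizes a weighted sum of the individual loss
# terms, so the weights set the relative importance of content fidelity,
# stylization and smoothness (total variation). Scientific E notation is fine.
content_weight, style_weight, tv_weight = 5e0, 1e4, 1e-3  # illustrative values
content_loss, style_loss, tv_loss = 1.0, 1.0, 1.0  # placeholders for the real loss tensors
total_loss = content_weight * content_loss + style_weight * style_loss + tv_weight * tv_loss

The second illustrates the luminance-matching idea behind Keep original colors, sketched here with OpenCV and YCrCb: the luminance of the stylized result is recombined with the chrominance of the original content image.

import cv2
import numpy as np

def keep_original_colors(content_bgr, stylized_bgr):
    # Take luminance (Y) from the stylized result and chrominance (Cr, Cb)
    # from the original content image, then convert back to BGR.
    content_ycc = cv2.cvtColor(content_bgr, cv2.COLOR_BGR2YCrCb)
    stylized_ycc = cv2.cvtColor(stylized_bgr, cv2.COLOR_BGR2YCrCb)
    merged = np.dstack((stylized_ycc[:, :, 0],
                        content_ycc[:, :, 1],
                        content_ycc[:, :, 2]))
    return cv2.cvtColor(merged, cv2.COLOR_YCrCb2BGR)

Both images must have the same size and dtype (e.g. uint8 BGR as loaded by cv2.imread).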

Memory

By default, Style transfer uses the NVIDIA cuDNN GPU backend for convolutions and L-BFGS for optimization. These produce better and faster results, but can consume a lot of memory. You can reduce memory usage with the following:

  • Use Adam: Set Optimizer to Adam instead of L-BFGS. This should significantly reduce memory usage, but will require tuning of other parameters for good results; in particular you should experiment with different values of Learning_rate, Content_weight and Style_weight.
  • Reduce image size: You can reduce the size of the generated image with the Maximum resolution setting.

This code was developed and tested on the following system:

  • CPU: Intel Core i7-4790K @ 4.0GHz × 8
  • GPU: NVIDIA GeForce GTX 1070, 8 GB
  • CUDA: 8.0
  • cuDNN: v5.1
  • OS: Windows 10 64-bit
  • TouchDesigner: 099 64-bit, build 2017.10000
  • Anaconda: 4.3.14
  • tensorflow-gpu: 1.2.0
  • opencv: 3.2.0-dev (used the version built into TouchDesigner)
  • numpy: 1.13.0 (used the version installed in the conda environment)
  • scipy: 0.19.1 (used the version installed in the conda environment)

The implementation is based on the project:

Contacts

Contact me via [email protected] or on Twitter.
