tonio73 / dnnviewer

Deep Neural Network viewer

Home Page: https://tonio73.github.io/dnnviewer/

License: MIT License

CSS 0.07% Python 18.83% Jupyter Notebook 81.03% Makefile 0.06%
convolutional-neural-networks deep-learning deep-neural-networks image-classification

dnnviewer's People

Contributors

tonio73

Forkers

ckaelig lufeng22

dnnviewer's Issues

Support for PyTorch

Study and initial development to support the PyTorch equivalent of the TensorFlow Keras Sequential model and common layers.

Handle the case in which classification is not on the original dataset class

Currently:

  • the test dataset loader provides class captions corresponding to the dataset classes
  • the first sample of the dataset is pre-selected, and the unit corresponding to its class is pre-selected on the last layer

This creates issues if the task is not classification, or if the classification is not over the original dataset classes. Example: a GAN discriminator is not looking for the object class but performs a binary fake/genuine classification.

To Do:

  • detect when the number of layer units is not equal to the number of classes
  • in this case disable the pre-selection and do not associate the class captions to the last layer
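The two steps above could be sketched as a small guard, comparing the last layer's unit count with the captions provided by the dataset loader (all names hypothetical):

```python
def should_preselect_class(last_layer_num_units, class_captions):
    """Return True only when the last layer looks like a classifier over
    the dataset classes, i.e. one unit per class caption.
    When False, skip the pre-selection and do not attach the captions."""
    return class_captions is not None and len(class_captions) == last_layer_num_units
```

For example, a 10-class MNIST classifier with 10 output units passes the check, while a GAN discriminator with a single output unit does not.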

Display model loss and metrics

New widget at the top of the window to display the model loss and metrics history.

  • Layout to be defined first
  • Depends on the recording of the history
  • Single loss since sequential model is currently a requirement
  • Multiple metrics

Performance issue on main view

When the number of layers increases and the number of units per layer is large, it takes 10 to 20 s to update the view when selecting a new number of displayed connections

Handling of test dataset input format

Currently the tensor shape of the test data is checked in several places: before computing the gradients, while the model is being loaded, and when computing an activation map.

Refactor this to:

  • add a step in between dataset loading and model loading to reformat the test data
  • more general handling of padding (currently tied to Conv2D)
  • remove the late data modifications listed above

Emphasize selected unit

In the current implementation, the selected unit is described in the bottom panel, but there is no clear indication in the central view. One can only guess that it is the unit of the layer with a single connected unit.

Proposal for a better visual cue for the selected layer and unit:

  • a square outline around the selected layer (issue: how to set the width with respect to zoom)
  • text and graphical annotations attached to the selected unit

Handle case in which model loading fails

Two cases:

  • Loading model based on command line args
  • Selection of model from the UI

For the second case, ideally insert a step in the model selection that checks both model loading AND compatibility with the test data

Restructure application in 3 panes

Three-pane structure for the application: top, center, bottom

Separate modules containing, for each pane:

  • local data structures
  • layout
  • local callbacks

Introduce the task

Introduce the selection of the task at hand, leading to the specific view.

Currently, the task is image classification.

Verify that generic classification works fine.
Also verify that running with no test dataset selected works fine.

Graphical design: icon

Create an icon for the DNN Viewer.

To be displayed in:

  • browser tab / application icon (to be saved as assets/favicon.ico so it is picked up by Dash)
  • title panel

Gradient visualization

As an alternative to displaying weights, display gradients:

  • as links between neuron units
  • as minimax on the layer details
  • as histogram on the unit details
  • as map on the convolutional unit details

Graphical design:

  • choose distinct colormap (Parula)
  • layer minimax, unit histo & maps within a new tab of corresponding subpanel
  • links on the main view, plus selector on the right side, see #24

More layer and unit details

Missing layer information in the viewer:

  • loss (training)
  • activation at output (output)
  • regularization (L1/L2, kernel/bias/activity) (training)
  • dropout at input (training)
  • batch normalization at input (input)
  • Max/Average pooling at output (output)

Missing unit information in the viewer:

  • bias/intercept (.use_bias, .bias) (training)
  • input shape (.input.get_shape()) (structure)
  • output shape (.output_shape[1:]) (structure), taking pooling and flatten into account
  • padding (Convo) (.padding) (structure)

Later:

  • Layer metrics

Implementation:

  • for each layer and unit, create a decorator dictionary whose attributes contain lists of strings
  • attributes: input, structure, output, training
  • fill this structure in bridge.tensorflow.keras_extract_sequential_network(), taking into account that the layer may not exist (preceding layers like dropout, batchnorm) or may already exist (output layers like pooling, activation)
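A possible shape for the per-layer decorator dictionary described above (the attribute names come from the list; the helper and sample entries are hypothetical):

```python
def new_layer_decorator():
    """One dictionary per layer/unit: four attributes, each a list of
    strings to display in the corresponding section of the viewer."""
    return {'input': [], 'structure': [], 'output': [], 'training': []}

deco = new_layer_decorator()
deco['training'].append('dropout: 0.5')    # e.g. a Dropout layer folded into this layer's input
deco['output'].append('activation: relu')  # e.g. a standalone Activation layer folded into the output
```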

Input * Weight contribution graph

Given a neuron unit of a Dense layer and an input sample (image), display a bar chart of the products of the weights times the inputs as a new tab in the unit quadrant
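The computation behind the bar chart could be sketched as follows, assuming the Dense layer exposes a kernel of shape (n_inputs, n_units) as in Keras (function name hypothetical):

```python
import numpy as np

def unit_contributions(inputs, kernel, unit_index):
    """Element-wise products inputs[i] * kernel[i, unit] for one Dense unit.
    The bar chart would plot these values against the input feature index."""
    return inputs * kernel[:, unit_index]

x = np.array([1.0, 2.0, -1.0])
w = np.array([[0.5, 1.0],
              [0.25, -1.0],
              [2.0, 0.0]])
unit_contributions(x, w, 0)  # products for unit 0: [0.5, 0.5, -2.0]
```

Summing the contributions (plus the bias) recovers the unit's pre-activation, which makes the chart easy to sanity-check.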

Windows compatibility

Windows file and directory path handling is quite different from Unix and macOS.

  • Review the issues in class KerasModelSequence
  • Add information to the README if needed

Less state on the server

Dash recommends keeping the server stateless... we definitely are not, since the server keeps:

  • the DNN model (Keras)
  • the graphical representation (Grapher instance)
  • command line arguments or their derivatives

To do:

  • inspect all these stateful dependencies
  • provide recommendations on how to remove some of them while keeping good performance of the application

Logger

Install a central application logger to handle all current print outputs (model loading...)

Saliency map

Provide basic and quick saliency map based on gradient ascent with Ridge regularization

Parameters (to wire to the UI):

  • Number of iterations (1..10)
  • Learning rate (real positive, default TBD)
  • L2 regularization parameter (real positive, default TBD)
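The gradient ascent with Ridge (L2) regularization can be sketched on a toy differentiable score; the real implementation would use the framework's autodiff on the unit activation, and all names here are hypothetical:

```python
import numpy as np

def gradient_ascent(x0, grad_score, iterations=10, lr=0.1, l2=0.01):
    """Maximize score(x) - l2 * ||x||^2 by gradient ascent.
    grad_score(x) returns the gradient of the score w.r.t. x;
    the -2 * l2 * x term is the gradient of the Ridge penalty."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = x + lr * (grad_score(x) - 2.0 * l2 * x)
    return x
```

With a quadratic score such as -(x - 3)^2 (gradient -2(x - 3)), the iterate converges to 3 without regularization and to 3 / (1 + l2) with it, which illustrates how the L2 parameter shrinks the saliency input toward zero.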

Support for Activation layer in Keras models

Activation is either a parameter on a layer or a dedicated layer of type "Activation".

Add support for the latter: set the activation structural property on the previous layer.

Note: the case in which the previous layer already has a non-linear activation is not handled
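The folding rule could be sketched as below, using a plain dictionary in place of the real layer object (all names hypothetical):

```python
def fold_activation(prev_layer, activation_name):
    """Fold a standalone Activation layer into the previous layer.
    Returns False for the unhandled case: the previous layer already
    carries a non-linear activation of its own."""
    if prev_layer.get('activation', 'linear') != 'linear':
        return False
    prev_layer['activation'] = activation_name
    return True
```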

Support for GAN

The currently assumed DNN task is image classification (image in, class probabilities out).

Generative Adversarial Networks could also be supported for simple architectures: image in, image out; OR alternatively random input from the latent space to generate an image or a map of images.

Conda package

Create and submit a Conda package (automate if possible)

Click to select in layer detail "minimax" graph

Within the layer detail, the "minimax" graph displays the min and max weight amplitude of each unit.

Enhance this plot to allow unit selection by clicking on a unit's min or max bar.

Shall update the selection in:

  • main network view (displayed connections)
  • layer unit detail
  • activation map
  • ... any other widget using the unit selection...

Main challenge: the layer object is required by most callbacks handling the selection. We may first need to store this layer within the layer detail widget

Fix computation of top-n weights for Conv2D: stride and pooling

Stride and pooling parameters are not handled in the top-n weights computation when the output is flattened: when propagating the weight computation backward along the layers, from a Dense layer to a Conv2D through a Flatten, the indices are wrapped modulo the number of units; they should take the stride/pooling parameters into account
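The first part of the fix can be sketched with numpy: a Dense-side weight index maps back through the Flatten to a (row, col, channel) position on the Conv2D output grid, rather than a plain modulo over the unit count. Stride and pooling would then further rescale (row, col) to the Conv2D input grid (not shown here; function name hypothetical):

```python
import numpy as np

def unflatten_index(flat_index, conv_output_shape):
    """Map a Dense-input index back through a Flatten layer.
    conv_output_shape = (height, width, channels), row-major flatten
    as produced by Keras' Flatten on a channels-last tensor."""
    return np.unravel_index(flat_index, conv_output_shape)

unflatten_index(13, (2, 3, 4))  # -> (1, 0, 1): row 1, col 0, channel 1
```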

Online hosting

Provided that the application is stateless on the server side (see #7), solve the remaining code issues in order to deploy online on a free or quasi-free host

Colorbar for the main view

There is currently no colorbar for the network view; a colorbar appears only within the Convo unit details (filter heatmaps).

Add a colorbar on the left of the view

Adapt figures for a laptop screen height

Currently the Plotly figures use the default Plotly height. There is not enough height on a laptop screen (15") to see the main and detail figures at the same time.

Make the figure heights adapt to the screen height, with a minimum TBD

Support for Time series

As an alternative to the current image classification task, provide support for time series as input, with classification or regression at the output.

  • Display of the input time series example
  • Output display
  • No saliency (?) (#1)

Enable own user test dataset

Give the possibility to load one's own test data

Issues:

  • Data format within tabular files? CSV? TSV?...
  • Image formats and other directory/file-based datasets?
  • Adapters for both Keras and (future) PyTorch?

Multipage application with a page to select model

  • Create a layout function as in Dash documentation referenced below
  • Initial screen is model and test data selection in an HTML form
  • Models are listed from directories, new command line option --model-directories value is a comma separated list of directory paths

Ensure backward compatibility with the existing options to select models (single or sequence)

https://dash.plotly.com/urls
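The proposed command line option could be parsed as sketched below (the option name comes from the list above; the parsing choice of splitting on commas is an assumption):

```python
import argparse

parser = argparse.ArgumentParser(description='dnnviewer model selection (sketch)')
# Comma-separated list of directory paths, as described in the issue
parser.add_argument('--model-directories',
                    type=lambda s: s.split(','), default=[],
                    help='comma separated list of directories to list models from')

args = parser.parse_args(['--model-directories', 'models/mnist,models/cifar'])
# args.model_directories == ['models/mnist', 'models/cifar']
```

The initial model-selection screen would then list the models found in each of these directories.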

Pip package

Create and submit a Pypi package (automate if possible)

Activation maps : Handling of B&W image and padding

-1- B&W images, as in MNIST, are often described by 2D arrays (the number of channels is 1), but Keras requires 3D tensors
=> Detect and expand dimensions

-2- Some networks require pre-padding of the input images to cope with convolution margins.
=> Detect the dimension lag and pad the image
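Both fixes can be sketched with numpy (function name and the choice of symmetric zero padding are assumptions):

```python
import numpy as np

def prepare_input(img, target_hw=None):
    """Expand a 2D grayscale image to (H, W, 1), then zero-pad
    symmetrically up to target_hw = (height, width) if needed."""
    if img.ndim == 2:                       # (H, W) -> (H, W, 1)
        img = np.expand_dims(img, axis=-1)
    if target_hw is not None:
        lag_h = max(0, target_hw[0] - img.shape[0])
        lag_w = max(0, target_hw[1] - img.shape[1])
        if lag_h or lag_w:                  # split the lag on both sides
            img = np.pad(img, ((lag_h // 2, lag_h - lag_h // 2),
                               (lag_w // 2, lag_w - lag_w // 2),
                               (0, 0)))
    return img
```

For example, a 28x28 MNIST digit fed to a network expecting 32x32 input becomes a (32, 32, 1) tensor with 2 pixels of zero padding on each side.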

Support for Upsampling2D

Upsampling layers are not displayed as a layer in the network representation but as a feature of the output of the previous layer.

Also, the sampling factor is set on the previous layer through the append_sampling_factor() method.

Model information

Create a panel on the right of the main network view, size md3

In this panel:

  • model information
    -- loss (losses) description (type) (.loss, .loss_function, .loss_weights)
    -- metrics description
    -- optimizer description (.optimizer)

  • configuration of the view
    -- show topn connections parameter

Implementation: categorize as in #19

User guide for 0.1

  • Explain the panels and figures
  • Update README on motivation (observations, intuitions)

Select target unit on application load

Currently, the initial selection of a neuron unit is "mocked" by drawing the top n weight connections of the output unit corresponding to the label of the selected test sample (#0).

This is improper as:

  • the bottom pane is not updated accordingly
  • it creates a dependency between the center pane and the middle pane (label of the selected test sample)

To do: find a means to set the selected data of the main view and automatically trigger the linked callbacks
