
lucent's Introduction

Lucent

PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for any issues and bugs found here.

Usage

Lucent is still in pre-alpha phase and can be installed locally with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

lucent's People

Contributors

alik-git, animadversio, dozed, greentfrapp, mehdidc, michalgregor, ndey96, progamergov, slawekslex, smarginatura, tal-golan, unnir

lucent's Issues

A More Elegant Dead ReLU Fix

To quote from redirected_relu_grad.py in the original Lucid library:

When we visualize ReLU networks, the initial random input we give the model may not cause the neuron we're visualizing to fire at all. For a ReLU neuron, this means that no gradient flow backwards and the visualization never takes off. One solution would be to find the pre-ReLU tensor, but that can be tedious. These functions provide a more convenient solution: temporarily override the gradient of ReLUs to allow gradient to flow back through the ReLU -- even if it didn't activate and had a derivative of zero -- allowing the visualization process to get started. These functions override the gradient for at most 16 steps. Thus, you need to initialize global_step before using these functions.

Lucid uses tensorflow, which allows for gradient overrides with gradient_override_map (although Lucid overrides that with their own implementation). It is also possible to keep track of the global step in tensorflow, and this is used in Lucid to make the "gradient fix" temporary (see above).

In comparison, Lucent implements a hacky workaround that is much less sophisticated.

We simply replace the ReLU function with our own RedirectedReLU, which has a modified backward method. When the gradient should be 0 (because of negative output), we simply scale the gradient by 0.1 and let it through. See here for the exact implementation.
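
For reference, a minimal sketch of that idea (an approximation of the behaviour described above, not necessarily the exact code in Lucent):

import torch

class RedirectedReLU(torch.autograd.Function):
    # Sketch: standard ReLU forward pass, but the backward pass lets a damped
    # gradient through where the true ReLU gradient would be zero.

    @staticmethod
    def forward(ctx, input_tensor):
        ctx.save_for_backward(input_tensor)
        return input_tensor.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (input_tensor,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        # Where the pre-activation was negative, scale the gradient by 0.1
        # instead of zeroing it, so the visualization can get started.
        grad_input[input_tensor < 0] = grad_input[input_tensor < 0] * 0.1
        return grad_input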

We do this at the model initialization stage only for the InceptionV1 model and we never switch off the redirected gradient. I suspect that not switching it off is not as bad as we might imagine, because we are updating the input values instead of the model weights. In any case, this seems to work fine so far, but I would really prefer a more principled approach.

To be frank, I haven't spent too much time thinking about this with torch. But here are the main elements of a better fix, primarily following Lucid's implementation:

  • A general wrapper that can be used on all models, rather than just InceptionV1 (although I haven't actually encountered this problem with other models)
  • A temporary change that is reverted after a fixed number of steps (how do we access the number of steps from inside the backward function?); one possible step-counting approach is sketched below
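
As a starting point for discussion, here is a rough sketch of what that might look like (an assumption building on the RedirectedReLU sketch above, not a tested implementation): count forward passes inside a replacement module and fall back to the plain ReLU gradient after a fixed number of steps, replacing every nn.ReLU so it works on arbitrary models.

import torch
import torch.nn as nn

class TemporarilyRedirectedReLU(nn.Module):
    # Redirect the gradient only for the first `max_steps` forward passes;
    # render_vis calls the model once per optimization step, so counting
    # forward passes approximates counting steps.
    def __init__(self, max_steps=16):
        super().__init__()
        self.max_steps = max_steps
        self.step = 0

    def forward(self, x):
        self.step += 1
        if self.step <= self.max_steps:
            return RedirectedReLU.apply(x)  # from the sketch above
        return torch.relu(x)

def redirect_relus(module, max_steps=16):
    # Recursively swap every nn.ReLU for the step-limited variant.
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, TemporarilyRedirectedReLU(max_steps))
        else:
            redirect_relus(child, max_steps)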

Questions and discussions welcomed!

Code Breaks as GPU Index > 0

When using GPU, this codebase only works for torch.device('cuda:0') -- the GPU index has to be 0.

For example, if you choose torch.device('cuda:1') and then run the demo code

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

# Let's use cuda:1
device = torch.device("cuda:1")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")

you will see an error like

..........
File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
    204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
    205     out = features[layer].features
--> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
    207 return out

AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
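
One possible workaround until the device handling is configurable (an assumption, not a fix in Lucent itself): expose only the desired GPU to the process, so that it appears as cuda:0 and matches the device Lucent defaults to.

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # set before CUDA is initialized; safest before importing torch

import torch
from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0")  # physical GPU 1 is now visible as index 0
model = inceptionv1(pretrained=True)
model.to(device).eval()
render.render_vis(model, "mixed4a:476")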

Change torch device

Hi,

Thank you for this amazing library.
Do you know a simple way to change the GPU device of
param_f = lambda: param.image(self.image_shape[0], self.image_shape[1], batch=1, channels=channels)

In the file lucent/optvis/param/spatial.py the device is set by
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
But I'd like to set the device to "cuda:1", for instance.

Thanks!

ValueError in Render

Hi there,

I am trying to run the tutorial and am running into the following error:

>>> import torch
>>> from lucent.optvis import render, param, transform, objectives
>>> from lucent.modelzoo import inceptionv1
>>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
>>> model = inceptionv1(pretrained=True)
>>> _ = model.to(device).eval()
>>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
  0%|                                                                                       | 0/512 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
    optimizer.step(closure)
  File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
    loss = closure()
  File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
    model(transform_f(image_f()))
  File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
    x = transform(x)
  File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
    M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
  File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
    raise ValueError("Input scale must be a B tensor. Got {}"
ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])

I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

Any help regarding this error would be much appreciated!
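
Until the kornia version mismatch is resolved, one possible workaround (an assumption that Lucent's transform module mirrors Lucid's pad/jitter/random_scale helpers) is to pass a transform list that skips the kornia-based rotation which triggers the error, at some cost to visualization quality:

from lucent.optvis import render, transform

# model: the inceptionv1 instance defined in the snippet above
transforms = [
    transform.pad(16),
    transform.jitter(8),
    transform.random_scale([n / 100.0 for n in range(80, 120)]),
    transform.jitter(2),
]
_ = render.render_vis(model, "mixed4a:476", transforms=transforms, show_inline=True)

Pinning kornia to the exact version listed in Lucent's requirements may also avoid the mismatch.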

Installing lucent causes pytorch downgrade

I installed pytorch 1.7.0 on a fresh Python 3.8 conda environment. Then I installed lucent via pip, and it uninstalled pytorch 1.7 and installed 1.6. That's because kornia 0.4.0 specifically wants pytorch < 1.7. Can you upgrade to kornia 0.4.1 or 0.4.2? It's less picky about Pytorch versions.

(lucent) pmin@patrick-deep:/mnt/d/Documents/brain-scorer$ pip install torch-lucent
Collecting torch-lucent
  Using cached torch_lucent-0.1.4-py3-none-any.whl (45 kB)
Collecting pytest
  Using cached pytest-6.2.2-py3-none-any.whl (280 kB)
Collecting future
  Using cached future-0.18.2-py3-none-any.whl
Requirement already satisfied: ipython in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (7.20.0)
Requirement already satisfied: torchvision in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (0.8.2)
Collecting kornia==0.4.0
  Using cached kornia-0.4.0-py2.py3-none-any.whl (195 kB)
Requirement already satisfied: pillow in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (8.1.0)
Collecting pytest-mock
  Using cached pytest_mock-3.5.1-py3-none-any.whl (12 kB)
Requirement already satisfied: tqdm in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (4.57.0)
Collecting coverage
  Downloading coverage-5.4-cp38-cp38-manylinux2010_x86_64.whl (245 kB)
     |████████████████████████████████| 245 kB 672 kB/s 
Requirement already satisfied: decorator in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (4.4.2)
Collecting coveralls
  Using cached coveralls-3.0.0-py2.py3-none-any.whl (13 kB)
Requirement already satisfied: scikit-learn in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (0.24.1)
Requirement already satisfied: numpy in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (1.20.1)
Requirement already satisfied: torch>=1.5.0 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from torch-lucent) (1.7.1)
Collecting torch>=1.5.0
  Using cached torch-1.6.0-cp38-cp38-manylinux1_x86_64.whl (748.8 MB)
Collecting docopt>=0.6.1
  Using cached docopt-0.6.2-py2.py3-none-any.whl
Requirement already satisfied: requests>=1.0.0 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from coveralls->torch-lucent) (2.25.1)
Requirement already satisfied: certifi>=2017.4.17 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from requests>=1.0.0->coveralls->torch-lucent) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from requests>=1.0.0->coveralls->torch-lucent) (4.0.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from requests>=1.0.0->coveralls->torch-lucent) (1.26.3)
Requirement already satisfied: idna<3,>=2.5 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from requests>=1.0.0->coveralls->torch-lucent) (2.10)
Requirement already satisfied: traitlets>=4.2 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (5.0.5)
Requirement already satisfied: pygments in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (2.8.0)
Requirement already satisfied: jedi>=0.16 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (0.18.0)
Requirement already satisfied: pexpect>4.3 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (4.8.0)
Requirement already satisfied: setuptools>=18.5 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (52.0.0.post20210125)
Requirement already satisfied: backcall in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (0.2.0)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (3.0.16)
Requirement already satisfied: pickleshare in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from ipython->torch-lucent) (0.7.5)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from jedi>=0.16->ipython->torch-lucent) (0.8.1)
Requirement already satisfied: ptyprocess>=0.5 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from pexpect>4.3->ipython->torch-lucent) (0.7.0)
Requirement already satisfied: wcwidth in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython->torch-lucent) (0.2.5)
Requirement already satisfied: ipython-genutils in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from traitlets>=4.2->ipython->torch-lucent) (0.2.0)
Collecting attrs>=19.2.0
  Using cached attrs-20.3.0-py2.py3-none-any.whl (49 kB)
Collecting py>=1.8.2
  Using cached py-1.10.0-py2.py3-none-any.whl (97 kB)
Collecting pluggy<1.0.0a1,>=0.12
  Using cached pluggy-0.13.1-py2.py3-none-any.whl (18 kB)
Collecting iniconfig
  Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
Collecting toml
  Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting packaging
  Downloading packaging-20.9-py2.py3-none-any.whl (40 kB)
     |████████████████████████████████| 40 kB 6.2 MB/s 
Requirement already satisfied: pyparsing>=2.0.2 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from packaging->pytest->torch-lucent) (2.4.7)
Requirement already satisfied: scipy>=0.19.1 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from scikit-learn->torch-lucent) (1.6.1)
Requirement already satisfied: joblib>=0.11 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from scikit-learn->torch-lucent) (1.0.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/pmin/miniconda3/envs/lucent/lib/python3.8/site-packages (from scikit-learn->torch-lucent) (2.1.0)
Installing collected packages: toml, py, pluggy, packaging, iniconfig, future, attrs, torch, pytest, docopt, coverage, pytest-mock, kornia, coveralls, torch-lucent
  Attempting uninstall: torch
    Found existing installation: torch 1.7.1
    Uninstalling torch-1.7.1:
      Successfully uninstalled torch-1.7.1
Successfully installed attrs-20.3.0 coverage-5.4 coveralls-3.0.0 docopt-0.6.2 future-0.18.2 iniconfig-1.1.1 kornia-0.4.0 packaging-20.9 pluggy-0.13.1 py-1.10.0 pytest-6.2.2 pytest-mock-3.5.1 toml-0.10.2 torch-1.6.0 torch-lucent-0.1.4
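
A possible stopgap until the requirement is relaxed (not an official recommendation): install Lucent without dependency resolution so the existing torch stays in place, then install a newer kornia and any missing dependencies (e.g. future, tqdm, as seen in the log above) by hand.

pip install --no-deps torch-lucent
pip install "kornia>=0.4.1" future tqdm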

Support for torchvision.models.video models

Video models require 5D inputs (batch, channels, frames, height, width). But most of the parameterization, transforms and rendering functions in Lucent assume 4D inputs.

A simple workaround is to initialize a batch of images where batchsize = batch * frames. Then, inside the render_vis function, just before we pass the input to the model, we transpose and unsqueeze the input to a 5D shape.

Specifically, in render.py, replace

model(transform_f(image_f()))
with

image_t = transform_f(image_f())
image_t = torch.transpose(image_t, 0, 1).unsqueeze(0)
model(image_t)

But I'm wondering if there is a better solution.
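
One possibly cleaner alternative (a sketch under the assumption that the frame count is known up front, not a tested implementation) is to do the reshaping inside a thin wrapper module, so render.py stays untouched. Note that wrapping may change the layer names that objectives refer to.

import torch
import torch.nn as nn

class VideoWrapper(nn.Module):
    # Reshape the 4D batch produced by Lucent's parameterization into the
    # 5D (batch, channels, frames, height, width) input a video model expects.
    def __init__(self, video_model, frames):
        super().__init__()
        self.video_model = video_model
        self.frames = frames

    def forward(self, x):
        # x: (batch * frames, C, H, W) -> (batch, C, frames, H, W)
        bf, c, h, w = x.shape
        x = x.view(-1, self.frames, c, h, w).permute(0, 2, 1, 3, 4)
        return self.video_model(x)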

Also, ideally we want the frames to be continuous, which suggests an objective to maximize alignment between the frames. Maybe objectives.alignment("input") will be sufficient for this.

pytorch version rollover breaks some version checking logic

  1. Thanks for this awesome library!
  2. Since torch doesn't zero-pad its version string, lexicographic comparison diverges from numeric comparison at the 1.10 digit rollover. The particular instance I've just hit is line 58 of lucent/optvis/param/spatial.py; a more robust comparison is sketched below.
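
For reference, a minimal sketch of a numeric comparison (the exact threshold used in spatial.py is only an assumption here):

import torch
from packaging import version

TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
# "1.10.0" correctly compares as newer than "1.9.0" here, unlike string comparison
NEW_FFT_API = TORCH_VERSION >= version.parse("1.8.0")  # hypothetical threshold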

Input with more than 3 channels

Thanks so much for this repo. Does lucent support an input with more than 3 channels? For example, an input with (4 channels = 3 channels RGB + 1 channel semantic channel). If it is possible, is it also possible to optimize 3 RGB channels, with the semantic channel fixed?
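
Not an official answer, but one way this could be approached (a sketch; the wrapper and the fixed semantic map are assumptions): parameterize only the 3 RGB channels with Lucent and concatenate the fixed semantic channel inside a wrapper before the forward pass.

import torch
import torch.nn as nn

class FixedChannelWrapper(nn.Module):
    # Optimize a 3-channel image with Lucent while holding a fourth
    # (semantic) channel fixed.
    def __init__(self, base_model, fixed_semantic):
        super().__init__()
        self.base_model = base_model
        self.register_buffer("fixed_semantic", fixed_semantic)  # shape (1, 1, H, W)

    def forward(self, x):
        # x: the optimized (B, 3, H, W) image from Lucent's parameterization
        sem = self.fixed_semantic.expand(x.shape[0], -1, -1, -1)
        return self.base_model(torch.cat([x, sem], dim=1))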

Reproduce key elements of Curve-Detector-Paper.ipynb

The new Curve Detectors paper in the Circuits thread just got published last week!

I think it would be nice to support key tf-dependent elements of the notebook in Lucent, for anyone who decides to extend the work with PyTorch.

Possibly:

Not super sure what this would look like, possibly the start of a separate section dedicated to reproducing papers in the Circuits thread? I'm open to discussion!

Q: Do you use the same architecture and weights as Clarity does?

Hi,

I am looking for a trainable InceptionV1 model which shares the same weights as the ones the Clarity team uses.
Reading your code, I've found these lines:

model_urls = {
    # InceptionV1 model used in Lucid examples, converted by ProGamerGov
    'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
}

Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

Thanks!

activation grid for hierarchical custom model

Hi,
is there a way to visualize the activation grid for a custom model with nested modules that are not explicitly named as attributes of the model?
E.g. when I call get_model_layers(), I see the following output for this custom model:
image

I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q).
For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?).
I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, one of the assertions in lines 203-206 of render.py fails, depending on how I choose the layer string.
Can you help me with this problem?
Many thanks!

Propagating KeyboardInterrupt in render_vis

Currently, when one executes
for i in range(num_channels):
    _ = render.render_vis(model, objectives.channel(layer_name, i))

and then interrupts it, e.g. by hitting the stop button in Colab, it will simply stop the current optimization and then continue looping. This is annoying for interactive programming. I suggest re-raising the KeyboardInterrupt after catching it in the except block; this way the code still stops when interrupted from the keyboard.
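
A minimal sketch of the suggested structure (not the current render_vis code):

try:
    for step in range(512):
        pass  # placeholder for one optimization step
except KeyboardInterrupt:
    print("Interrupted, stopping this render early")
    raise  # re-raise so a surrounding loop over channels also stops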

get raw_activations

Hi, thanks for this great library!
I'm trying to reproduce the Activation Atlas notebook using lucent, creating the grid cells at the end.

In the notebook, the raw activations are available as a numpy.ndarray via model.layers[7].activations, to be used in the subsequent dimensionality-reduction section. How can I get these raw activations using lucent?

I did create visualised images using lucent's render_vis first, and then flattened them for UMAP's fit method, but I'm not sure this is correct.
Any suggestion would be appreciated.
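
Not an official Lucent API, but raw activations can be captured with a standard PyTorch forward hook (a sketch; the layer name and input batch here are assumptions):

import torch

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().cpu()
    return hook

layer_name = "mixed4a"  # hypothetical: pick any layer reported by get_model_layers(model)
layer = dict(model.named_modules())[layer_name]
handle = layer.register_forward_hook(save_activation(layer_name))

images = torch.randn(8, 3, 224, 224, device=next(model.parameters()).device)  # stand-in for your image batch
with torch.no_grad():
    model(images)
handle.remove()

acts = activations[layer_name]  # raw (B, C, H, W) activations for the dimensionality-reduction step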

Grayscale Optimal stimuli

Hello,
I am trying to understand the working of Lucent a bit better with one of my models. My model is trained on grayscale images (grayscale version of natural images) but uses a VGG16 backbone for feature extraction. Therefore, it accepts 3-channel images just like other torchvision zoo models. When I run Lucent on certain units, it returns RGB optimal stimuli. I believe this is because VGG16 (pretrained) has its own implicit color processing filtering operations that the Lucent optimization framework leverages to return RGB optimal stimuli. However, given that I trained (finetuned) my model on grayscale images, I want to optimize for optimal stimuli in the same space. Is it possible to do this in Lucent?
I tried a naive solution, i.e. adding the following transform:
rgb2gray_tfo = lambda x: torch.tensordot(x[...,:3],torch.Tensor([0.2989, 0.5870, 0.1140]).cuda(),dims=1).unsqueeze(-1).expand_as(x)
which should convert an RGB image to grayscale and repeat it across 3 channels to make the input suitable for the network. However, the optimal stimuli generated are just blank (gray) images. So, I am wondering if there's a solution to my problem.
Thanks in advance. I must add that using Lucent for my project has been amazing so far.😄
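
One alternative worth trying (a sketch under the assumption that param.image accepts a channels argument, as used elsewhere on this page): parameterize a single-channel image and repeat it across RGB, so the optimization itself stays in grayscale space.

from lucent.optvis import param, render

def gray_param_f():
    # Optimize a single channel; color decorrelation assumes 3 channels, so it is disabled here.
    params, image_f = param.image(224, channels=1, decorrelate=False)
    return params, lambda: image_f().repeat(1, 3, 1, 1)  # repeat to (B, 3, H, W)

_ = render.render_vis(model, "layer_name:0", gray_param_f, show_inline=True)  # hypothetical objective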

Using Lucent with smaller images (CIFAR-100)

I am currently trying to use Lucent with a VGG model that I have trained on CIFAR-100 (32x32x3 images). I modified the network by removing the global average pool and replacing the linear layers with a single 512→100 linear layer before training from scratch. I was previously using ONNX to transfer my trained models to TensorFlow and visualizing with Lucid. Unfortunately, newer versions of PyTorch are no longer compatible with TensorFlow 1.X via that route, and Lucid is not built for TensorFlow 2.X. I found your library recently, went through the example code, and got great results. But when I try to use Lucent on my trained models while setting fixed_image_size=32, the visualizations are blurrier, less colorful, and less semantic.

Here is an example of a network visualization on Lucid (all 512 filters of the last layer of vgg11_bn)
image

and here is an example of a network visualization (same network) on Lucent
image

both images use the parameters:
param_f = lambda: param.image(32, fft=True, decorrelate=True, batch=1)
where in the Lucent library, I also use fixed_image_size=32

I already looked into transform.py in both Lucid and Lucent and they both seem to have identical standard_transforms. I also did some tests in jupyter notebook where I toggled decorrelate, fft, and transforms separately, and none of them seem to affect the visualization quality.

No transforms, no FFT, no decorrelate:
image

standard transforms, no FFT, no decorrelate:
image

standard transforms, FFT, no decorrelate:
image

standard transforms, no FFT, decorrelate:
image

no transforms, FFT, decorrelate:
image

Low GPU utilization

I am trying to use Lucent to visualize deep neurons, but whatever I do the GPU seems under-utilized:
Examining utilization via nvidia-smi I see low utilization (~10%) with occasional peaks at ~50%, but never above that.
This happens both for cppn prior as well as fourier image representation.

Any suggestions?

Lucent handles greyscale images in function view incorrectly

When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws following error:
TypeError: Cannot handle this data type: (1, 1, 1), |u1
The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the range 0-255 to PIL, but for greyscale images PIL can only handle two-dimensional integer arrays.
This Stackoverflow answer provides more information.

Solution: Lucent should check whether the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a parameter, e.g. greyscale=True, in the view function.
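
A minimal sketch of the proposed check (not the current view code):

import numpy as np
from PIL import Image

def to_pil(img):
    arr = np.asarray(img, dtype=np.uint8)   # values expected in the 0-255 range
    if arr.ndim == 3 and arr.shape[-1] == 1:
        arr = arr[..., 0]                   # (H, W, 1) -> (H, W) so PIL treats it as greyscale
    return Image.fromarray(arr)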

render_vis becomes slow when used multiple times

Hi,
I noticed this strange behaviour using the render_vis function when optimizing multiple images (like when you use it with videos). The time spent optimizing the same image with the same number of iterations increases, both on CPU and GPU! What do you think could be the reason?

Neuron has grey optimal input

For some neurons the optimized image is simply grey. This does not happen when using objectives.channel.

Minimal example for colab:

!pip install torch-lucent
import torch
import torchvision as tv
from lucent.optvis import render, objectives, transform, param

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = tv.models.googlenet(pretrained=True).to(device)
model.eval()

param_f = lambda: param.image(64, decorrelate=False, fft=True)
objs = [objectives.neuron('inception3b_branch2_1', i) for i in range(3)]

for i, obj in enumerate(objs):
    _ = render.render_vis(model, obj, param_f, thresholds=(256,), show_inline=True)
    # for i = 1 the image will be grey, if fft=False, then i=2 will also be grey.

image

Generating a batch of optimal stimuli, one for each unit in a layer

Hi,
I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel. So I figured I would leverage batch processing. As illustrated in the neuron-interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function.
Here is a toy example of what I want and my approach:
Units to be visualized = [10,20,30]
Layer = 'readout_fc'
tot_objective = objectives.channel("readout_fc", 10, batch=0) + objectives.channel("readout_fc", 20, batch=1) + objectives.channel("readout_fc", 30, batch=2)
param_f = lambda: param.image(135, batch=3)
imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_image_size=135)

The parameter settings work beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach multiple units in parallel (this gives me separate images for each unit). Also, when the number of units is large, I was hoping to avoid writing the sum out individually or running an explicit for loop to compute the objective. I tried using reduce as below:
neurons = [10,20,30]
tot_objective = reduce(lambda x,y: x+objectives.channel("readout_fc",y[0],batch=y[1]),list(zip(neurons,np.arange(len(neurons)))),0)
Doing so gives me the same image 3 times. So, I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli from multiple units in parallel.
Thanks in advance.
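
For reference, the same summed objective can also be built with a plain loop (a sketch reusing the layer, units, and settings above), which avoids the reduce incantation entirely:

from lucent.optvis import objectives, param, render

units = [10, 20, 30]
tot_objective = objectives.channel("readout_fc", units[0], batch=0)
for b, unit in enumerate(units[1:], start=1):
    tot_objective += objectives.channel("readout_fc", unit, batch=b)

param_f = lambda: param.image(135, batch=len(units))
imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_image_size=135)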

use custom model?

Hi,
I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.
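
Any torch.nn.Module should work in principle; a minimal sketch with a stand-in network (TinyNet here is just an illustration, not part of Lucent):

import torch
import torch.nn as nn
from lucent.optvis import render
from lucent.modelzoo.util import get_model_layers

class TinyNet(nn.Module):  # stand-in for your own trained model
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)

    def forward(self, x):
        return self.conv2(self.relu(self.conv1(x)))

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = TinyNet().to(device).eval()
# load your own weights here, e.g. model.load_state_dict(torch.load("weights.pth"))

print(get_model_layers(model))            # pick a layer name from this list
_ = render.render_vis(model, "conv2:0", show_inline=True)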

Activation Grid Notebook

Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

The only new function required seems to be ChannelReducer, which doesn't rely on Tensorflow so it should be relatively simple to port over.
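
For anyone picking this up: ChannelReducer in Lucid appears to be a thin wrapper around scikit-learn decompositions (NMF by default), and scikit-learn is already a Lucent dependency, so a rough sketch of the core step (with stand-in activations) might look like:

import numpy as np
from sklearn.decomposition import NMF

acts = np.random.rand(14, 14, 512).astype(np.float32)  # stand-in (H, W, C) activations
flat = acts.reshape(-1, acts.shape[-1])                 # (H*W, C)
nmf = NMF(n_components=6, init="random", random_state=0, max_iter=500)
reduced = nmf.fit_transform(flat).reshape(acts.shape[0], acts.shape[1], -1)  # (H, W, 6)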

Help wanted for this!

show() and export() give different images

I am working on visualising a custom model with Lucent, and need to save the images for my research. When setting save_image and show_inline to true in the render_vis function, I get different results:

  • save_image saves a Gaussian-noise image
    image
  • show_inline displays a nice feature visualisation
    image

Has anyone encountered this problem too, and how did you fix it?

Edit: for now, what I have done is take the code from Lucid (saving.py and associated files). If you then use the save function like the following, it saves the image, albeit a bit differently than the one being displayed by lucent:

save(rendered_image[-1], "image.png")

The export function from lucent still does not save the images properly though

Suggestion for `lucent.optvis.render.hook_model`

First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

  1. I initially dun goofed and didn't eval the model (even though the very example notebook I'm using from lucent does lol). Maybe the hook_model function could check for nonetypes and tell the user to eval, if no saved feature maps are found?
  2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and the user'll figure it out quickly enough

Suggested replacement for this function:

def hook(layer):
    if layer == "input":
        out = image_f()
    elif layer == "labels":
        out = list(features.values())[-1].features
    else:
        assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"  # suggestion 2 ish
        out = features[layer].features
    assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."  # suggestion 1, tell user to eval
    return out

*I ran it on resnet18. Gorgeous and worked out of the box btw.

[attached: four rendered visualizations, download-4 through download-7]

Visuals losing contrast over time

Hi,

When I was visualizing more neurons, I noticed that after a while I lost the contrast in some images. I did some digging and found this to be the case for 3/3 neurons I tried.

Steps to reproduce

img = render.render_vis(imagenet, "mixed4a:476", show_image=False)[0][0]
print(img.mean(), img.std())
plt.imshow(img)

This works fine. If you run it a couple of times, the std value remains stable (around 0.18). Now provide a custom preprocess & transform list, but make it the same as the default.

transforms = transform.standard_transforms
transforms.append(lambda x: x * 255 - 117)
img = render.render_vis(imagenet, "mixed4a:476", preprocess=False, transforms=transforms, show_image=False)[0][0]
print(img.mean(), img.std())
plt.imshow(img)

This should also work okay. But after running this, run either of these blocks and you will experience a significant drop in contrast (0.16), which you cannot undo.

Do you know what the problem might be? If not, could you please have a look? I've tried to find the cause of this behaviour in the code, but every variable seems to be local, so I don't know what the problem is.
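
One thing worth checking (an assumption about the cause, not a confirmed diagnosis): transform.standard_transforms is a module-level list, so appending to it mutates the defaults used by every later render_vis call, which would explain a change you cannot undo without restarting. Copying the list keeps the change local:

transforms = list(transform.standard_transforms)  # copy, so the module-level defaults stay untouched
transforms.append(lambda x: x * 255 - 117)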

PyTorch 1.7 support

Kornia 0.4.1. (available on PyPI) supports PyTorch 1.7. Is there a reason not to update the requirements so lucent supports PyTorch 1.7 as well? Is there some functionality expected to break?
