brain-score / candidate_models

Candidate Models to evaluate on Brain-Score benchmarks

Home Page: http://brain-score.org

Languages: Python 100.00%

Topics: neuroscience, neural-networks, deep-learning, vision

candidate_models's Introduction


Candidate Models for Brain-Score: Which Artificial Neural Network is most Brain-Like?

Candidate models to evaluate on brain measurements, i.e. neural and behavioral recordings. Brain recordings are packaged in Brain-Score.

Quick setup

pip install "candidate_models @ git+https://github.com/brain-score/candidate_models"

The above command will not install ML frameworks such as PyTorch, TensorFlow, or Keras.

You can install them yourself or use the following commands (in a conda environment); a quick import check follows the list:

  • PyTorch: conda install pytorch torchvision -c pytorch
  • Keras: conda install keras
  • TensorFlow: conda install tensorflow. To use the predefined TensorFlow models, you will also have to install the TF-slim library; see "Installing the TF-slim image models library" below for quick instructions.
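
To verify the setup, the following minimal sketch (not part of the original instructions) only checks that the packages are importable; it downloads nothing:

# Quick import check: confirms candidate_models and the ML frameworks resolve.
import importlib.util

import candidate_models  # noqa: F401  # fails here if the install is broken

for framework in ("torch", "tensorflow", "keras"):
    status = "found" if importlib.util.find_spec(framework) else "not installed"
    print(f"{framework}: {status}")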

Usage

PYTHONPATH=. python candidate_models --model alexnet

During first-time use, the ImageNet validation images (9.8 GB) will be downloaded, so the first run can take a while.

See the examples directory for more elaborate use cases.
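
Models can also be scored from Python. The snippet below is a sketch only; the keyword names of score_model are assumptions, so consult the examples for the exact signature:

# Sketch of programmatic scoring (keyword names are assumptions).
from candidate_models import score_model

score = score_model(model_identifier='alexnet',
                    benchmark_identifier='dicarlo.MajajHong2015public.IT-pls')
print(score)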

Environment variables

Environment variables are prefixed with CM_ for this framework. Environment variables from brain-score and model-tools might also be useful.

Variable               Description
CM_HOME                path to framework root
CM_TSLIM_WEIGHTS_DIR   path to stored weights for TensorFlow/research/slim models
MT_IMAGENET_PATH       path to ImageNet file containing the validation image set
RESULTCACHING_HOME     directory to cache results (benchmark ceilings) in, ~/.result_caching by default (see https://github.com/mschrimpf/result_caching)
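
These can be set in the shell or, as sketched below, from Python before importing the framework (all paths are hypothetical placeholders):

# Hypothetical paths -- adjust to your machine; set before importing candidate_models.
import os

os.environ['CM_HOME'] = '/data/candidate_models'                     # framework root
os.environ['CM_TSLIM_WEIGHTS_DIR'] = '/data/tf-slim-weights'         # TF-slim weights
os.environ['MT_IMAGENET_PATH'] = '/data/imagenet/imagenet_val.hdf5'  # ImageNet validation file
os.environ['RESULTCACHING_HOME'] = '/data/.result_caching'           # result cache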

Installing the TF-slim image models library

Unfortunately, TensorFlow-slim does not provide an actual pip-installable library; instead, we have to download the code and make it available on the PYTHONPATH:

git clone https://github.com/qbilius/models/ tf-models
export PYTHONPATH="$PYTHONPATH:$(pwd)/tf-models/research/slim"
# verify
python -c "from nets import cifarnet; mynet = cifarnet.cifarnet"

Alternatively, you can also move/symlink these packages to your site-packages.

Troubleshooting

Could not find a version that satisfies the requirement brain-score

pip has trouble when dependency links are private repositories (as is currently the case for brain-score). To circumvent this, install brain-score by hand before installing candidate_models: pip install --process-dependency-links git+https://github.com/dicarlolab/brain-score.

Could not find a version that satisfies the requirement tensorflow

TensorFlow doesn't always catch up with newer Python versions. For instance, if you have Python 3.7 (check with python -V), TensorFlow might only work up to Python 3.6. If you're using conda, it usually installs the very newest version of Python. To fix, downgrade python: conda install python=3.6.

Failed to build pytorch

Some versions of PyTorch cannot be installed via pip (e.g. the CPU-only builds). Instead, you need to install PyTorch from one of their provided wheels. Check the website for installation instructions; right now they are (e.g. for Linux, Python 3.6, no CUDA): pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl && pip install torchvision. Or just use conda, e.g., for CPU: conda install pytorch-cpu torchvision-cpu -c pytorch

No module named nets / preprocessing

You probably haven't installed TensorFlow/research/slim. Follow the instructions at https://github.com/tensorflow/models/tree/master/research/slim#Install.

ImportError: cannot import name '_obtain_input_shape'

keras_squeezenet unfortunately does not run with keras > 2.2.0. To fix, pip install keras==2.2.0.

tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable

If this happened when running a keras model, your tensorflow and keras versions are probably incompatible. See the setup.py for which versions are supported.

Restoring from checkpoint failed. (...) Assign requires shapes of both tensors to match.

Most likely the image_size you passed does not match the image size the model expects (e.g. inception_v{3,4} expect 299 instead of 224). Either let the framework infer what image_size the model needs (run without --image_size) or set the correct image_size yourself.
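
For example, mirroring the invocation in the Usage section (the correct size depends on the model; 299 is what inception_v3 expects):

PYTHONPATH=. python candidate_models --model inception_v3 --image_size 299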

MobileNet weight loading failed.

Error message e.g. Assign requires shapes of both tensors to match. lhs shape= [1,1,240,960] rhs shape= [1,1,240,1280].

There is an error in the MobileNet implementation which causes the multiplier to not be applied properly: the number of channels sometimes goes beyond what it ought to be (e.g. for the last layer). The line in question needs to be prefixed with a conditional:

if i != len(conv_defs['spec']) - 1 or multiplier >= 1:
    opdef.multiplier_func(params, multiplier)

This is already done in @qbilius' fork of tensorflow/models.

Installation error due to version mismatch after re-submission.

Error message e.g.

ERROR: Cannot install brain-score and candidate-models==0.1.0 because these package versions have conflicting dependencies.
The conflict is caused by:
    candidate-models 0.1.0 depends on pandas==0.25.3
    brainio-base 0.1.0 depends on pandas>=1.2.0

This can happen when re-submitting a model because the underlying submission.zip might point to versions that were okay at the time, but are in conflict after updates to the brain-score framework. For instance, old versions of candidate-models pinned pandas==0.25.3; this pin was removed in newer versions and is incompatible with the newer pandas requirement in BrainIO.

The best solution is to re-submit a zip file without those version conflicts. Ideally submissions should avoid specifying any versions themselves as much as possible to prevent this error. We have also been updating the zip files internally on the server, but this is not a long-term solution.

candidate_models's People

Contributors

anayebi, dapello, franzigeiger, jjpr-mit, mike-ferguson, mschrimpf, qbilius, tiagogmarques


candidate_models's Issues

ModuleNotFoundError: No module named 'unsup_vvs.network_training.models.simclr'

I'm trying to use candidate_models with Brain Score, but I received the following error:

  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 77, in __call__
    cfg_kwargs=self.CFG_KWARGS.get(identifier, {}))
  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 133, in __get_tf_model
    model_type=model_type, cfg_kwargs=cfg_kwargs)
  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 157, in _build_model_ending_points
    **cfg_kwargs)
  File "python3.7/site-packages/unsup_vvs/neural_fit/cleaned_network_builder.py", line 189, in get_network_outputs
    ending_points = get_simclr_ending_points(inputs)
  File "python3.7/site-packages/unsup_vvs/neural_fit/cleaned_network_builder.py", line 22, in get_simclr_ending_points
    from unsup_vvs.network_training.models.simclr import resnet
ModuleNotFoundError: No module named 'unsup_vvs.network_training.models.simclr'

store pca projection

Instead of re-computing the PCA projection for all stimuli, store it and re-use it.
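
A sketch of the idea using scikit-learn and joblib (the function and cache file are hypothetical, not the framework's API):

# Fit the PCA once, persist it, and reload it on later runs instead of re-fitting.
from pathlib import Path

import joblib
import numpy as np
from sklearn.decomposition import PCA

def get_pca(activations: np.ndarray, cache_path: str = 'pca_projection.joblib',
            n_components: int = 1000) -> PCA:
    cache = Path(cache_path)
    if cache.exists():
        return joblib.load(cache)                # re-use the stored projection
    pca = PCA(n_components=n_components).fit(activations)
    joblib.dump(pca, cache)                      # store for subsequent runs
    return pca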

cannot import name 'LayerModel'

from candidate_models import score_model raises an error: cannot import name 'LayerModel'.

candidate_models/base_models/cornet.py attempts to import from model_tools.brain_transformation import LayerModel, but LayerModel is not defined.
Removing LayerModel resolved the error.

Issue with code crashing due to high RAM usage

I am currently evaluating a 3D ResNet that has been pre-trained on short video segments. However, I encountered a problem while running the "score_model" function with an instance of the "ModelCommitment" class. It appears that the code is utilizing all available memory resources, causing it to crash.

To reproduce this issue, I modified the "preprocess_images" method located in the "activations/pytorch.py" Python file. In my modification, I pass concatenated images as a 4D input to the model. The activations model is then initialized based on this preprocessing step.

During execution, I monitored both GPU and CPU activity. Although the GPU usage is temporary, the CPU usage gradually increases until it eventually freezes the procedure. I would like to note that when using small resolutions for input images that are concatenated to create a static video frame, everything works fine. However, increasing the input image size beyond 32x32 leads to the problem mentioned above.

Please investigate this issue and provide guidance on resolving it. Thank you.

TF-slim models label_offset is not handled

All models that have a non-zero label offset (i.e. TensorFlow-slim models with a background class at index zero) output the wrong ImageNet logits; e.g. inception_v1 thereby produces a score of 0 on the ImageNet benchmark.
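
For reference, a minimal sketch of what handling the offset would look like (the function is hypothetical, not the framework's fix):

# Drop the leading background class so logits align with the 1000 ImageNet labels.
import numpy as np

def strip_background_class(logits: np.ndarray, label_offset: int = 1) -> np.ndarray:
    return logits[:, label_offset:]

logits = np.random.rand(8, 1001)                         # e.g. inception_v1: 1001 classes
assert strip_background_class(logits).shape == (8, 1000)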

ONNX i/o

From Ko:

I was wondering whether you have considered the ONNX format (https://onnx.ai) for dealing with neural network models built on various different platforms. This is the open neural network exchange format especially built for interchangeable AI models built on almost any platform (PyTorch, TensorFlow, Caffe, etc.). This has been developed by Microsoft and Facebook. It seems fairly simple to convert any existing model built on any of these platforms to ONNX format and share it with others (who might not be using the same platform for their own model development endeavors).

I was thinking, maybe for simplicity at our end, we could make it mandatory to convert every model to the ONNX format before submission.

Having ONNX I/O (import/export) is probably a good idea once we want to release mapped brain-models.
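
For illustration, a minimal sketch of what an export on the PyTorch side could look like (assumes torch and torchvision are installed; the output file name is arbitrary):

# Export a pretrained torchvision model to ONNX.
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)        # one ImageNet-sized input
torch.onnx.export(model, dummy_input, 'alexnet.onnx', opset_version=11)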

quick question regarding batch norm for vgg models

Hi @mschrimpf,
I hope all is going well.
Quick question: for models "vgg-16" and "vgg-19" on brain-score.org, do these correspond to PyTorch models torchvision.models.vgg16_bn or to torchvision.models.vgg16 (with / without batch norm)?
Cheers, Robert

Layers corresponding to each region

Hi, I see here that you list the layer that you used to compare to V1, but I can't find the layers you used for V2, V4, IT, or Behavior. I was wondering where I can find this information for all the candidate models you benchmarked.

containerify models

Brain-Score now allows the submission of zip files and runs them in their own conda environment. We are starting to run into version mismatches between the many different models in candidate_models (e.g. 8885cd4), so we should start to tease these apart.

Especially together with brain-score/vision#231, each model family can be its own submission that Brain-Score then provides for download without version conflicts.

visual-degrees branch tries to access PixelsToDegrees from model_tools which was moved to brainscore

In candidate_models/model_commitments/ml_pool.py, an activation hook is added that no longer exists:

from model_tools.brain_transformation import ModelCommitment, PixelsToDegrees

class Hooks:
    HOOK_SEPARATOR = "--"

    def __init__(self):
        pca_components = 1000
        self.activation_hooks = {
            f"pca_{pca_components}": lambda activations_model: LayerPCA.hook(
                activations_model, n_components=pca_components),
            "degrees": lambda activations_model: PixelsToDegrees.hook(
                activations_model, target_pixels=activations_model.image_size)}

Cannot install candidate models because of xarray version conflict

I tried to install candidate_models to test my trained CORnet-S via pip install "candidate_models @ git+https://github.com/brain-score/candidate_models", but I get the error message below.

ERROR: Cannot install brain-score and candidate-models==0.1.0 because these package versions have conflicting dependencies.

The conflict is caused by:
candidate-models 0.1.0 depends on xarray<=0.12
brainio-base 0.1.0 depends on xarray==0.16.1

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

I tried to install it on different servers with both Windows and Linux, and cannot solve the problem.
What should I do?

ERROR: Cannot find key: --model

Hi, I am trying to run the minimal example from the documentation, but I am facing this error when invoking candidate_models:

2020-02-24 15:57:15,230 INFO:main:Running candidate_models --model alexnet
ERROR: Cannot find key: --model

Not sure if I am running it correctly. Thanks.

cross validation is slow

Hi

I am working with the public benchmarks for IT and V4. My goal is to get scores for a modified 2-block ResNet architecture for each layer on the public benchmarks, in order to select layers to run on the private benchmarks.
It is taking around 7 hours for the encoder.conv1 layer, of which cross-validation takes ~5-6 hours, and similarly for other layers.
Attaching screenshot for reference.

R50 example

Hi,

I would like to test some R50 weights that I have, but am struggling to translate the AlexNet sample to work for ResNet50.
If possible could you please point me to some implementation / code that allows me to just specify the weights and otherwise submits a ResNet50?

Best
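
A rough sketch of one way to do this, following the model-tools pattern from the AlexNet example (the weights path, identifier, and layer list are assumptions; adjust as needed):

# Wrap a torchvision ResNet-50 with custom weights for scoring.
import functools

import torch
import torchvision
from model_tools.activations.pytorch import PytorchWrapper, load_preprocess_images
from model_tools.brain_transformation import ModelCommitment

model = torchvision.models.resnet50()
model.load_state_dict(torch.load('my_resnet50_weights.pth'))   # hypothetical weights file
model.eval()

preprocessing = functools.partial(load_preprocess_images, image_size=224)
activations_model = PytorchWrapper(identifier='my-resnet50', model=model,
                                   preprocessing=preprocessing)
brain_model = ModelCommitment(identifier='my-resnet50',
                              activations_model=activations_model,
                              layers=['layer1', 'layer2', 'layer3', 'layer4'])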

Difference between public and private benchmarks?

I'm trying to replicate the brain-scoring of CORNet-S. I noticed that the brain-score leaderboard uses the MajajHong2015.IT-pls (V3) benchmark while the PyTorch candidate_models example uses dicarlo.MajajHong2015public.IT-pls. Are these the same?

package ImageNet validation in brainscore instead of manually loading

Right now, the ImageNet validation images are downloaded through candidate_models to initialize the PCA.
The suggestion here is to package ImageNet-val in brainscore as a StimulusSet so that we can just retrieve it from there.
However, this introduces an additional dependency between the two frameworks, such that candidate_models with PCA can no longer be used without brainscore.
