princetonuniversity / psyneulink

A block modeling system for cognitive neuroscience

Home Page: https://psyneulink.org

License: Apache License 2.0

Python 50.45% MATLAB 0.16% Jupyter Notebook 0.68% Shell 0.01% HTML 48.70%
cognitive-science neuroscience modeling-tools

psyneulink's Introduction

https://github.com/PrincetonUniversity/PsyNeuLink/workflows/PsyNeuLink%20CI/badge.svg?branch=master

Welcome to PsyNeuLink

(pronounced /sīnyoolingk/: "sigh-new-link")

Documentation is available at https://princetonuniversity.github.io/PsyNeuLink/

Purpose

PsyNeuLink is an open-source software environment written in Python and designed for the needs of neuroscientists, psychologists, computational psychiatrists, and others interested in learning about and building models of the relationship between brain function, mental processes, and behavior.

PsyNeuLink can be used as a "block modeling environment" in which to construct, simulate, document, and exchange computational models of neural mechanisms and/or psychological processes at the subsystem and system levels. A block modeling environment allows components that implement various, possibly disparate functions to be constructed and then linked together into a system, in order to examine how they interact. In PsyNeuLink, components are used to implement the functions of brain subsystems and/or psychological processes, whose interactions can then be simulated at the system level.

The purpose of PsyNeuLink is to make it as easy as possible to create new models and/or import existing ones, and to integrate them to simulate system-level interactions. It provides a suite of core components for implementing models of various forms of processing, learning, and control, and its Library includes examples that combine these components to implement published models. As an open-source project, its suite of components is meant to be enhanced and extended, and its Library is meant to provide an expanding repository of models, written in a concise, executable, and easy-to-interpret form, that can be shared, compared, and extended by the scientific community.
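
For orientation, here is a minimal sketch of what building and running a model can look like (a hypothetical example assuming the current Composition API; the mechanism names and values are illustrative, not taken from the documentation):

import psyneulink as pnl

# Two simple processing components
input_layer = pnl.TransferMechanism(name='input', default_variable=[0, 0])
output_layer = pnl.TransferMechanism(name='output', default_variable=[0, 0],
                                     function=pnl.Logistic)

# Link them into a Composition (the system-level container) and run it
comp = pnl.Composition(name='simple-model')
comp.add_linear_processing_pathway([input_layer, output_layer])
print(comp.run(inputs={input_layer: [[1.0, 2.0]]}))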

What PsyNeuLink IS

It is:

  • open source, freeing users of the costs or restrictions associated with proprietary software.
  • computationally general -- it can be used to implement, seamlessly integrate, and simulate interactions among disparate components that vary in their granularity of representation and function (from individual neurons or neural populations to functional subsystems and abstract cognitive functions) and at any time scale of execution.
  • integrative -- it provides a standard and accessible environment for model comparison, sharing, and documentation;
  • extensible -- it has an interface (API) that allows it to be used with other powerful tools for implementing individual components, such as:

    • Neuron and Nengo (biophysically realistic models of neuronal function);
    • Emergent (broad class of neurally-plausible connectionist models);
    • Pytorch and TensorFlow (ODEs, deep learning);
    • ACT-R (symbolic, production system models).

Note

PsyNeuLink is alpha software that is still under active development. Although it is usable, and most of the documented functionality is available, some features may not yet be fully implemented and/or may be subject to modification. Please report any bugs and/or suggestions for development to [email protected].

What PsyNeuLink is NOT

The long-term goal of PsyNeuLink is to provide an environment that integrates computational modeling of brain function and behavior at all levels of analysis. While it is designed to be fully general, and can in principle be used to implement models at any level, it is still under development, and current efficiency considerations make it more suitable for some forms of modeling than others. In its present form, it is well suited to the creation of simple to moderately complex models and to the integration of disparate models into a single environment, but it is presently less well suited to efforts involving massive computations, such as:

  • extensive model fitting
  • large scale simulations
  • highly detailed biophysical models of neurons or neuronal populations

Other packages that are currently better suited to such applications are: Emergent (for biologically-inspired neural network models); Pytorch and TensorFlow (for deep learning models); HDDM (for Drift Diffusion Models); ACT-R (for production system models); and Genesis, Neuron, and Nengo (for biophysically-realistic models of neuronal function).

Each of these packages is well suited to elaborate and detailed models of a particular form. In contrast, the focus in designing PsyNeuLink has been to make it as flexible and easy to use as possible, with the ability to integrate components constructed in other packages (including some of those listed above) into a single environment. These are characteristics that are often (at least in the initial stages of development) in tension with efficiency (think: interpreted vs. compiled).

That said, priorities for ongoing development of PsyNeuLink are:
  1. acceleration, using just-in-time compilation methods and parallelization (see Compilation and Vesely et al., 2022);
  2. enhancement of the API to facilitate wrapping modules from other packages for integration into the PsyNeuLink environment (examples currently exist for Pytorch) and translation into a standard Model Description Format (MDF);
  3. integration of tools for parameter estimation, model comparison and data fitting (see ParameterEstimationComposition); and
  4. a graphic interface for the construction of models and realtime display of their execution (see PsyNeuLinkView).

Environment Overview

PsyNeuLink is written in Python, and conforms to the syntax, coding standards, and modular organization shared by most Python packages. BasicsAndPrimer provides an orientation to PsyNeuLink's Components, some examples of what PsyNeuLink models look like, and some of its capabilities. QuickReference provides an overview of how PsyNeuLink is organized and some of its basic principles of operation. The Tutorial provides an interactive guide to the construction of models using PsyNeuLink. Core contains the fundamental objects used to build PsyNeuLink models, and Library contains extensions, including specialty components, implemented compositions, and published models.

Installation

PsyNeuLink is compatible with Python versions >= 3.5, and is available through PyPI:

pip install psyneulink

All prerequisite packages will be automatically added to your environment.

If you downloaded the source code, navigate to the cloned directory in a terminal, switch to your preferred python3 environment, then run

pip install .

Dependencies that are automatically installed (except those noted as optional) include:

  • numpy
  • matplotlib
  • toposort
  • typecheck-decorator (version 1.2)
  • pillow
  • llvmlite
  • mpi4py (optional)
  • autograd (optional)

Lists of required packages for PsyNeuLink, developing PsyNeuLink, and running the PsyNeuLink tutorial are also stored in pip-style requirements.txt, dev_requirements.txt, and tutorial_requirements.txt in the source code.

PsyNeuLink is an open-source project maintained on GitHub; the repository can be cloned from https://github.com/PrincetonUniversity/PsyNeuLink.

If you have trouble installing the package, or run into other problems, please contact [email protected].

Tutorial

PsyNeuLink includes a tutorial (/tutorial/PsyNeuLink Tutorial.ipynb) that provides examples of how to create basic Components in PsyNeuLink and combine them into Processes and a System. The examples include construction of a simple decision-making process using a Drift Diffusion Model, a neural network model of the Stroop effect, and a backpropagation network for learning the XOR problem.

The tutorial can also be run in a browser, via the badge/link in the repository README.

To run the tutorial locally, you must be running Python 3.5 or later and install additional packages:

pip install psyneulink[tutorial]

or if you downloaded the source:

pip install .[tutorial]

To access the tutorial, make sure you fulfill the requirements mentioned above, download the tutorial notebook (/tutorial/PsyNeuLink Tutorial.ipynb), then run the terminal command

jupyter notebook

Once the notebook opens in your browser, navigate to the location where you saved the tutorial notebook, and click on "PsyNeuLink Tutorial.ipynb".

Help and Issue Reporting

Help is available at [email protected].

Issues can be reported at https://github.com/PrincetonUniversity/PsyNeuLink/issues.

Contributors

(in alphabetical order)

  • Allie Burton, Princeton Neuroscience Institute, Princeton University (formerly)
  • Laura Bustamante, Princeton Neuroscience Institute, Princeton University
  • Jonathan D. Cohen, Princeton Neuroscience Institute, Princeton University
  • Samyak Gupta, Department of Computer Science, Princeton University
  • Abigail Hoskin, Department of Psychology, Princeton University
  • Peter Johnson, Princeton Neuroscience Institute, Princeton University (formerly)
  • Justin Junge, Department of Psychology, Princeton University
  • Qihong Lu, Department of Psychology, Princeton University
  • Kristen Manning, Princeton Neuroscience Institute, Princeton University (formerly)
  • Katherine Mantel, Princeton Neuroscience Institute, Princeton University
  • Lena Rosendahl, Department of Mechanical and Aerospace Engineering, Princeton University
  • Dillon Smith, Princeton Neuroscience Institute, Princeton University (formerly)
  • Markus Spitzer, Princeton Neuroscience Institute, Princeton University (formerly)
  • David Turner, Princeton Neuroscience Institute, Princeton University
  • Jan Vesely, Department of Computer Science, Rutgers University (formerly)
  • Changyan Wang, Princeton Neuroscience Institute, Princeton University (formerly)
  • Nate Wilson, Princeton Neuroscience Institute, Princeton University (formerly)

With substantial and greatly appreciated assistance from:

  • Abhishek Bhattacharjee, Department of Computer Science, Yale University
  • Mihai Capota, Intel Labs, Intel Corporation
  • Bryn Keller, Intel Labs, Intel Corporation
  • Susan Liu, Princeton Neuroscience Institute, Princeton University (formerly)
  • Garrett McGrath, Princeton Neuroscience Institute, Princeton University
  • Sebastian Musslick, Princeton Neuroscience Institute, Princeton University (formerly)
  • Amitai Shenhav, Cognitive, Linguistic, & Psychological Sciences, Brown University
  • Michael Shvartsman, Princeton Neuroscience Institute, Princeton University (formerly)
  • Ben Singer, Princeton Neuroscience Institute, Princeton University
  • Ted Willke, Brain Inspired Computing Lab, Intel Corporation

Support

The development of PsyNeuLink has benefited from the generous support of several agencies.

License

Princeton University licenses this file to You under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.  You may obtain a copy of the License at:
     http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed
on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.

psyneulink's Issues

Allow for dynamic Conditions

The predefined PNL convenience Conditions are limited to using numeric values as arguments. We need to modify the API such that Conditions can be dynamic, including the ability to use the state of other PsyNeuLink Parameters as their arguments.
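
A hypothetical workaround sketch (assuming the generic Condition class accepts an arbitrary callable; mech_a and mech_b are illustrative names), in which the callable reads another component's Parameter each time the Condition is checked:

import psyneulink as pnl

mech_a = pnl.TransferMechanism(name='A')
mech_b = pnl.TransferMechanism(name='B')
comp = pnl.Composition(name='dynamic-condition-sketch')
comp.add_linear_processing_pathway([mech_a, mech_b])

def a_exceeds_threshold(**kwargs):
    # Read the current value of another component's Parameter at run time
    val = mech_a.parameters.value.get(comp)
    return val is not None and float(val[0][0]) > 0.5

# B executes only once A's value exceeds the threshold
comp.scheduler.add_condition(mech_b, pnl.Condition(a_exceeds_threshold))
comp.run(inputs={mech_a: [[1.0]]})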

Convert OrnsteinUhlenbeckIntegrator to use private PRNG

See other functions in the file for how to use a private RandomState. Offending line:

        value = previous_value + (decay * previous_value - rate * variable) * time_step_size + np.sqrt(
            time_step_size * noise) * np.random.normal()
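
A hedged sketch of the general pattern being requested (not the actual PsyNeuLink implementation): draw noise from an instance-owned RandomState rather than the global numpy PRNG, so each component's stream is reproducible and independent of other callers:

import numpy as np

class OrnsteinUhlenbeckIntegratorSketch:
    def __init__(self, seed=None):
        self.random_state = np.random.RandomState(seed)  # private PRNG

    def step(self, previous_value, variable, rate, decay, noise, time_step_size):
        # Same update as above, but drawing from the private RandomState
        return (previous_value
                + (decay * previous_value - rate * variable) * time_step_size
                + np.sqrt(time_step_size * noise) * self.random_state.normal())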

Value shape mismatch in LinearMatrix function

adding:

        print("OWNER: ", self.owner)
        print("DEFAULT: ", self.defaults.value)
        print("VALUE: ", self.value)

prints:

OWNER:  (MappingProjection MappingProjection from a-1[RESULTS] to b-1[InputState-0])
DEFAULT:  0.0
VALUE:  [0.]
OWNER:  (MappingProjection MappingProjection from a-2[RESULTS] to b-2[InputState-0])
DEFAULT:  0.0
VALUE:  [0.]
OWNER:  (MappingProjection MappingProjection from a-2[RESULTS] to b-2[InputState-0])
DEFAULT:  0.0
VALUE:  [0.]

in bi_stable_percept tests.

Shape mismatch in Botvinick model

I ran into another instance of a shape mismatch. Adding:

    for p in comp.output_CIM.afferents:
        print("OUTPUT_CIM AFFERENT2: ", p.name)
        print("DEF VAR: ", p.defaults.variable)
        print("VAR: ", p.variable)
        print("sender def val:", p.sender.defaults.value)
        print("sender val:", p.sender.value)

gives

OUTPUT_CIM AFFERENT2:  (DECISION_ENERGY) to (OUTPUT_CIM_RESPONSE_DECISION_ENERGY)
DEF VAR:  [1.]
VAR:  [1.]
sender def val: 1.0
sender val: 1.0

This mismatch hits when compiling incoming projections to the output_CIM mechanism. It happens only in this one projection and only in the Botvinick model.

Some logging methods broken for components that execute more than once per timestep

Some logging methods, including log.nparray_dictionary() and log.csv() do not function properly when called for a component that has executed more than once in a time step (e.g. RecurrentTransferMechanism, etc.)

This seems to be due to an assumption made in some of the lower level methods (nparray(), _assemble_entry_data(), and elsewhere) that components will never execute more than once in a time step.

Expected behavior is for these methods to show values for all executions, or at the very least for them to show the final value in a time step. Actual behavior is that they show only the value after the first execution.

Notably, the full log does contain entries for all executions.
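
For reference, a minimal sketch of how the logging methods in question are typically exercised (a hypothetical example; whether the component actually executes more than once per time step depends on its integration/termination settings):

import psyneulink as pnl

mech = pnl.RecurrentTransferMechanism(name='rtm', default_variable=[0, 0])
mech.set_log_conditions('value')            # log 'value' on every execution

comp = pnl.Composition(name='log-sketch')
comp.add_node(mech)
comp.run(inputs={mech: [[1.0, 0.5]]})

# The full log contains an entry per execution; per this issue, these two
# methods only report the first execution within a time step.
print(mech.log.nparray_dictionary(['value']))
print(mech.log.csv(['value']))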

Mixed NN & DDM data discrepancy

Independent of the version of "Scripts/TEST SCRIPTS/Mixed NN & DDM script.py", the input to Hidden Layer 1 is:

Commit     Input to Hidden Layer 1
5c6a8d2    [2.66468183, 4.52238317, 4.58793488, 7.00409625, 5.45141135]
0537a15    [3.50305, 4.885717, 5.245638, 6.572826, 5.019868]

Which is the correct input?

Compiled DDM Integrator function result is off by a factor of 20

The following code:

import psyneulink as pnl
import numpy as np
import pytest

decisionMaker = pnl.DDM(function=pnl.DriftDiffusionIntegrator(starting_point=0,
                                                              threshold=1,
                                                              noise=0.0),
                        reinitialize_when=pnl.AtTrialStart(),
                        output_ports=[pnl.DECISION_VARIABLE, pnl.RESPONSE_TIME],
                        name='DDM')

comp = pnl.Composition()
comp.add_node(decisionMaker)

print(comp.run([.05], termination_processing={pnl.TimeScale.TRIAL: pnl.WhenFinished(decisionMaker)},
         bin_execute=False))

results in an output of

[array([[1.0]], dtype=object), array([[20.0]], dtype=object)]

in Python mode, but

[[0.05], [1.0]]

in compiled mode - off by a factor of exactly 20.

projection.control_signal is None

on branch _instantiate_control_signal_REFACTORED

tests/scheduling/test_system_newsched.py:76: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py:76: in typecheck_invocation_proxy
    result = method(*args, **kwargs)
PsyNeuLink/Components/System.py:474: in system
    context=context)
../../.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py:76: in typecheck_invocation_proxy
    result = method(*args, **kwargs)
PsyNeuLink/Components/System.py:837: in __init__
    control_signals=control_signals)
../../.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py:76: in typecheck_invocation_proxy
    result = method(*args, **kwargs)
PsyNeuLink/Components/Mechanisms/AdaptiveMechanisms/ControlMechanisms/EVCMechanism.py:745: in __init__
    context=self)
../../.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py:76: in typecheck_invocation_proxy
    result = method(*args, **kwargs)
PsyNeuLink/Components/Mechanisms/AdaptiveMechanisms/ControlMechanisms/ControlMechanism.py:286: in __init__
    context=self)
PsyNeuLink/Components/Mechanisms/Mechanism.py:938: in __init__
    context=context)
PsyNeuLink/Components/ShellClasses.py:109: in __init__
    context=context)
PsyNeuLink/Components/Component.py:703: in __init__
    self._instantiate_attributes_after_function(context=context)
PsyNeuLink/Components/Mechanisms/AdaptiveMechanisms/ControlMechanisms/EVCMechanism.py:1217: in _instantiate_attributes_after_function
    super()._instantiate_attributes_after_function(context=context)
PsyNeuLink/Components/Mechanisms/AdaptiveMechanisms/ControlMechanisms/ControlMechanism.py:400: in _instantiate_attributes_after_function
    self._take_over_as_default_controller(context=context)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = (EVCMechanism EVCMechanism-1), context = ' INITIALIZING EVCMechanism-1 | EVCMechanism INITIALIZING : Component.__init__'

    def _take_over_as_default_controller(self, context=None):
    
        # FIX 5/23/17: INTEGRATE THIS WITH ASSIGNMENT OF control_signals
        # FIX:         (E.G., CHECK IF SPECIFIED ControlSignal ALREADY EXISTS)
        # Check the parameterStates of the system's mechanisms for any ControlProjections with deferred_init()
        for mech in self.system.mechanisms:
            for parameter_state in mech._parameter_states:
                for projection in parameter_state.afferents:
                    # If projection was deferred for init, initialize it now and instantiate for self
                    if projection.value is DEFERRED_INITIALIZATION and projection.init_args['sender'] is None:
                        # FIX 5/23/17: MODIFY THIS WHEN (param, ControlProjection) tuple
                        # FIX:         IS REPLACED WITH (param, ControlSignal) tuple
                        # Add projection itself to any params specified in the ControlProjection for the ControlSignal
                        #    (cached in the ControlProjection's control_signal attrib)
>                       projection.control_signal.update({MODULATORY_PROJECTIONS: [projection]})
E                       AttributeError: 'NoneType' object has no attribute 'update'

PsyNeuLink/Components/Mechanisms/AdaptiveMechanisms/ControlMechanisms/ControlMechanism.py:433: AttributeError

running from tests/scheduling/test_system_newsched.py

self = <test_system_newsched.TestInit object at 0x10744a240>

    def test_create_scheduler_from_system_StroopDemo(self):
        Color_Input = TransferMechanism(name='Color Input', function=Linear(slope=0.2995))
        Word_Input = TransferMechanism(name='Word Input', function=Linear(slope=0.2995))
    
        # Processing Mechanisms (Control)
        Color_Hidden = TransferMechanism(
            name='Colors Hidden',
            function=Logistic(gain=(1.0, ControlProjection)),
        )
        Word_Hidden = TransferMechanism(
            name='Words Hidden',
            function=Logistic(gain=(1.0, ControlProjection)),
        )
        Output = TransferMechanism(
            name='Output',
            function=Logistic(gain=(1.0, ControlProjection)),
        )
    
        # Decision Mechanisms
        Decision = DDM(
            function=BogaczEtAl(
                drift_rate=(1.0),
                threshold=(0.1654),
                noise=(0.5),
                starting_point=(0),
                t0=0.25,
            ),
            name='Decision',
        )
        # Outcome Mechanisms:
        Reward = TransferMechanism(name='Reward')
    
        # Processes:
        ColorNamingProcess = process(
            default_input_value=[0],
            pathway=[Color_Input, Color_Hidden, Output, Decision],
            name='Color Naming Process',
        )
    
        WordReadingProcess = process(
            default_input_value=[0],
            pathway=[Word_Input, Word_Hidden, Output, Decision],
            name='Word Reading Process',
        )
    
        RewardProcess = process(
            default_input_value=[0],
            pathway=[Reward],
            name='RewardProcess',
        )
    
        # System:
        mySystem = system(
            processes=[ColorNamingProcess, WordReadingProcess, RewardProcess],
            controller=EVCMechanism,
            enable_controller=True,
            # monitor_for_control=[Reward, (PROBABILITY_UPPER_THRESHOLD, 1, -1)],
>           name='EVC Gratton System',
        )

Excessive Calls to `parameter.__getattr__`

From profiling psyneulink code, I found that parameter.__getattr__ was called many times.

For example, in the following code:
from psyneulink.core.components import component

There were about 300,000 calls to parameter.__getattr__.

In addition, creating components in psyneulink seems to also increase the number of calls to
parameter.__getattr__, up to the order of millions of calls.

Testing shows that these calls represent at least 40% (if not more) of the total time spent when creating nodes and importing psyneulink.

These counts seem excessive; I'm wondering if there may be a bug buried deep in the Parameters class, or potential for significant optimization.
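
A minimal sketch (standard library only; the component created is illustrative) of how such a profile might be collected:

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
import psyneulink as pnl           # profile the import itself
mech = pnl.TransferMechanism()     # ...and the creation of a component
profiler.disable()

stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats('__getattr__')   # show only the __getattr__ entries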

Extra invalid afferents to controlled parameter states

When creating a Mechanism with controlled parameters, ControlProjections will be assigned to or created for the ParameterState, but they don't appear to be connected or used afterwards. They also leave invalid projections:

import numpy as np
import psyneulink as pnl
Decision = pnl.DDM(
    function=pnl.DriftDiffusionAnalytical(
        drift_rate=(
            1.0,
            pnl.ControlProjection(
                function=pnl.Linear,
                control_signal_params={
                    pnl.ALLOCATION_SAMPLES: np.arange(0.1, 1.01, 0.3)
                },
            ),
        )
    )
)
print(Decision.parameter_states[0].all_afferents[0].sender)
# <class 'psyneulink.core.components.states.modulatorysignals.controlsignal.ControlSignal'>

In this small example, it doesn't really make sense to have a projection from a class.

These projections also persist even after a controller is assigned, as seen in tests/composition/test_control.py::TestModelBasedOptimizationControlMechanisms::test_nested_evc

These projections can be identified in _instantiate_projections_to_state when proj_sender is None.

Adding matrix to process pathway: unhashable type

Traceback (most recent call last):
  File "Scripts/TEST SCRIPTS/Mixed NN & DDM script.py", line 34, in <module>
    myDDM])
  File "/Users/kmantel/.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py", line 76, in typecheck_invocation_proxy
    result = method(*args, **kwargs)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Process.py", line 517, in process
    context=context)
  File "/Users/kmantel/.pyenv/versions/pton/lib/python3.5/site-packages/typecheck/decorators.py", line 76, in typecheck_invocation_proxy
    result = method(*args, **kwargs)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Process.py", line 875, in __init__
    context=context)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/ShellClasses.py", line 90, in __init__
    context=context)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Component.py", line 695, in __init__
    self._instantiate_attributes_before_function(context=context)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Process.py", line 919, in _instantiate_attributes_before_function
    self._instantiate_pathway(context=context)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Process.py", line 993, in _instantiate_pathway
    self._standardize_config_entries(pathway=pathway, context=context)
  File "/Users/kmantel/Documents/PsyNeuLink/PsyNeuLink/Components/Process.py", line 1137, in _standardize_config_entries
    self.runtime_params_dict[pathway[i]] = None
TypeError: unhashable type: 'numpy.ndarray'

Supposedly a one-line fix

from PsyNeuLink.Components.Mechanisms.ProcessingMechanisms.DDM import *
from PsyNeuLink.Components.Mechanisms.ProcessingMechanisms.TransferMechanism import *
from PsyNeuLink.Components.Process import process
from PsyNeuLink.Globals.Keywords import *
from PsyNeuLink.Globals.Run import run

import random
random.seed(0)
np.random.seed(0)

random_matrix = get_matrix(RANDOM_CONNECTIVITY_MATRIX, 2, 5)

myInputLayer = TransferMechanism(name='Input Layer',
                        function=Linear(),
                        default_input_value = [0,0])

myHiddenLayer = TransferMechanism(name='Hidden Layer 1',
                         function=Logistic(gain=1.0, bias=0),
                         default_input_value = np.zeros((5,)))

myDDM = DDM(name='My_DDM',
            function=BogaczEtAl(drift_rate=0.5,
                                threshold=1,
                                starting_point=0.0))

myProcess = process(name='Neural Network DDM Process',
                    default_input_value=[0, 0],
                    pathway=[myInputLayer,
                             random_matrix,
                             # RANDOM_CONNECTIVITY_MATRIX,
                             # FULL_CONNECTIVITY_MATRIX,
                             myHiddenLayer,
                             FULL_CONNECTIVITY_MATRIX,
                             myDDM])

myProcess.reportOutputPref = True
myInputLayer.reportOutputPref = True
myHiddenLayer.reportOutputPref = True
myDDM.reportOutputPref = PreferenceEntry(True, PreferenceLevel.INSTANCE)

# myProcess.execute(input=[-1, 2])
# myProcess.run(inputs=[-1, 2])
run(myProcess, [[-1,2],[2,3],[5,5]])
# run(myProcess, inputs=[[-1,2],[2,3],[5,5]])

Tutorial notebook bug

The tutorial notebook is not running fully because of an outdated argument in TransferMechanism (and an import error with IDENTITY_MATRIX, if you fix the first error). I submitted a PR (#452) to fix it.

$ jupyter nbconvert --execute --ExecutePreprocessor.timeout=300 PsyNeuLink\ Tutorial.ipynb
[NbConvertApp] Converting notebook PsyNeuLink Tutorial.ipynb to html
[NbConvertApp] Executing notebook with kernel: python3
[NbConvertApp] ERROR | Error while converting 'PsyNeuLink Tutorial.ipynb'
Traceback (most recent call last):
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 381, in export_single_notebook
    output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 172, in from_filename
    return self.from_file(f, resources=resources, **kw)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 190, in from_file
    return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/html.py", line 84, in from_notebook_node
    return super(HTMLExporter, self).from_notebook_node(nb, resources, **kw)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/templateexporter.py", line 268, in from_notebook_node
    nb_copy, resources = super(TemplateExporter, self).from_notebook_node(nb, resources, **kw)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 132, in from_notebook_node
    nb_copy, resources = self._preprocess(nb_copy, resources)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 309, in _preprocess
    nbc, resc = preprocessor(nbc, resc)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
    return self.preprocess(nb,resources)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 242, in preprocess
    nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 70, in preprocess
    nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
  File "/Users/fabien/.pyenv/versions/3.6.2/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 275, in preprocess_cell
    raise CellExecutionError(msg)
nbconvert.preprocessors.execute.CellExecutionError: An error occurred while executing the following cell:
------------------
# Create the mechanism
logistic_transfer_mechanism = TransferMechanism(default_input_value=[0, 0],
                                                function=Logistic(gain=1,
                                                                  bias=0))

# Package into a process
logistic_transfer_process = process(pathway=[logistic_transfer_mechanism])

# Iterate and plot
xVals = np.linspace(-3, 3, num=51)
y1Vals = np.zeros((51,))
y2Vals = np.zeros((51,))
for i in range(xVals.shape[0]):
    # clarify why multiplying times 2
    output = logistic_transfer_process.execute([xVals[i], xVals[i] * 2])
    y1Vals[i] = output[0]
    y2Vals[i] = output[1]
    # Progress bar
    print("-", end="")
plt.plot(xVals, y1Vals)
plt.plot(xVals, y2Vals)
plt.show()
------------------

TypeError: __init__() got an unexpected keyword argument 'default_input_value'

Exception thrown when executing UserDefinedFunction with default param values

There seems to be a problem with UserDefinedFunction when the wrapped Python function has default parameter values. See the branch bug/udf/param_default_val. I added a test called test_mech_autogenerated_udf_execute that demonstrates this bug. The exception generated is:

self = (ParameterState p), variable = None, execution_id = None
runtime_params = None, context = <ContextFlags.COMMAND_LINE: 512>

    def _execute(self, variable=None, execution_id=None, runtime_params=None, context=None):
        """Call self.function with current parameter value as the variable
    
        Get backingfield ("base") value of param of function of Mechanism to which the ParameterState belongs.
        Update its value in call to state's function.
        """
    
        if variable is not None:
            return super()._execute(variable, execution_id=execution_id, runtime_params=runtime_params, context=context)
        else:
>           variable = getattr(self.source.parameters, self.name).get(execution_id)
E           AttributeError: 'str' object has no attribute 'parameters'
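
For context, an approximate reproduction sketch (not the test from the branch; the function and parameter names are made up) of a UserDefinedFunction whose wrapped Python function has a default parameter value:

import psyneulink as pnl

def my_func(variable, scale=2.0):      # 'scale' has a default value
    return variable * scale

mech = pnl.ProcessingMechanism(
    default_variable=[0, 0],
    function=pnl.UserDefinedFunction(custom_function=my_func,
                                     default_variable=[0, 0])
)
print(mech.execute([1.0, 2.0]))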

Contributions - bugs, optimizations, features

Wondering if specific requests for contributions are identified anywhere? In my (limited) experience, package maintainers typically open issues that outline some of these things (e.g., bugs, features, optimizations), and that is where potential contributors find what they'd like to work on.

Rumelhart scripts and show_graph

@jdcpni
I tried to visualize the models in tests/learning/test_rumelhart_semantic_network.py by uncommenting the lines with show_graph in them. This hits an undefined variable on line 3854 of psyneulink/components/system.py, and I'm not sure what the intent of that line was. Attaching the failed output:

rumelhart showgraph.txt

Create a finer grained TimeScale than what is currently known as TimeStep

Currently, the finest grain TimeScale present in PsyNeuLink is TimeStep. This corresponds to "the execution of all Mechanisms allowed to execute from a single consideration_set of a Scheduler".

This is problematic in cases where a mechanism, such as an LCA, executes multiple times in a given TimeStep. In these cases, all executions of the mechanism in that TimeStep will be logged as occurring at the exact same Time.

To fix this, we should implement one finer-grained TimeScale that represents the number of executions of a Mechanism within a given TimeStep. It was suggested that our Conditions (AfterNCalls, etc.) imply "Call" as the name for this finer-grained TimeScale, but it was also suggested that "Cycle" may be a better name.

As part of the effort of modifying Time in PsyNeuLink, there was also a suggestion that what is currently known as "Pass" should be renamed to "Execution Step" for clarity. We should consider this suggestion.

Unclear TransferFunction ReLU behaviour

The Python version looks like this:

        gain = self.get_current_function_param(GAIN, execution_id)
        bias = self.get_current_function_param(BIAS, execution_id)
        leak = self.get_current_function_param(LEAK, execution_id)

        result = np.maximum(gain * (variable - bias), bias, leak * (variable - bias))

The third argument is unclear: np.maximum treats a third positional argument as its out parameter, so the leak expression is just overwritten and has no effect on the result.
The LLVM version:

        var = builder.load(ptri)
        val = builder.fsub(var, bias)
        val1 = builder.fmul(val, gain)
        val2 = builder.fmul(val, leak)

        val = builder.call(max_f, [val1, bias])

It ignores the leak value (val2) and gives results identical to the Python version.
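
For comparison, a hedged sketch of what the leaky-ReLU computation presumably intends (independent of the actual PsyNeuLink implementation): gain-scaled identity above the bias, leak-scaled below it:

import numpy as np

def leaky_relu(variable, gain=1.0, bias=0.0, leak=0.0):
    x = np.asarray(variable, dtype=float) - bias
    # gain * x where x > 0, gain * leak * x otherwise
    return np.where(x > 0, gain * x, gain * leak * x)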

Determine if there is a more intuitive way to handle the API surrounding values and modulated values of parameters

There is a distinction between the value of a Parameter and its corresponding ParameterPort that comes into play when control signals are used to modulate a Parameter. In this situation, the value of the ParameterPort is changed, and not the value of the Parameter itself.

More often than not, the value of a ParameterPort is what is relevant to the user as it is the actual value that will be used by its owner Mechanism, but the API for accessing it or enabling logging on it is a little clunky.

e.g. here is the code to enable logging on the modulated value of a parameter termination_threshold on mechanism my_mechanism:

my_mechanism.set_log_conditions(['mod_termination_threshold'])

here is the code to enable logging on the base value:

my_mechanism.set_log_conditions(['termination_threshold'])

A user is likely to do the latter while intending the former. We should determine if there is a more intuitive way for this API to work, perhaps by adding an argument to the relevant method call(s) that would toggle between the two.

Pytorchmodelcreator layer biases are not consistent with Pytorch biases

When layer biases are initialized within PsyNeuLink during the creation of a Pytorch model they are set to be zero (see: pytorchmodelcreator.py:84)

However, the standard Pytorch behaviour is to randomly initialize the biases. For example, nn.Linear(1, 1) is equivalent to a single neuron with linear activation, but has a randomly initialized bias.

Current test cases don't catch this, as they avoid using nn.Linear and related Pytorch layer creation methods altogether (instead opting to call low-level torch tensor operations directly).

We should look into ways to correct this inconsistency by allowing biases to be copied from PNL, and add more robust test cases that account for this.
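
A hedged sketch of one way the copy might work (the PNL-side biases are stood in for by a plain numpy array here; this is not the pytorchmodelcreator code):

import numpy as np
import torch

pnl_biases = np.zeros(1)                 # biases as initialized on the PNL side
layer = torch.nn.Linear(1, 1)            # torch initializes its bias randomly
with torch.no_grad():
    # Overwrite torch's random bias with the PNL values
    layer.bias.copy_(torch.from_numpy(pnl_biases).to(layer.bias.dtype))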

Add compiler support for more scheduling conditions

Conditions marked [x] are implemented and tested.

  • class WhileNot(Condition):
  • class Always(Condition):
  • class Never(Condition):
  • class All(Condition):
  • class Any(Condition):
  • class Not(Condition):
  • class NWhen(Condition):
  • class BeforeTimeStep(Condition):
  • class AtTimeStep(Condition):
  • class AfterTimeStep(Condition):
  • class AfterNTimeSteps(Condition):
  • class BeforePass(Condition):
  • class AtPass(Condition):
  • class AfterPass(Condition):
  • class AfterNPasses(Condition):
  • class EveryNPasses(Condition):
  • class BeforeTrial(Condition):
  • class AtTrial(Condition):
  • class AfterTrial(Condition):
  • class AfterNTrials(Condition):
  • class AtRun(Condition):
  • class AfterRun(Condition):
  • class AfterNRuns(Condition):
  • class BeforeNCalls(_DependencyValidation, Condition):
  • class AtNCalls(_DependencyValidation, Condition):
  • class AfterCall(_DependencyValidation, Condition):
  • class AfterNCalls(_DependencyValidation, Condition):
  • class AfterNCallsCombined(_DependencyValidation, Condition):
  • class EveryNCalls(_DependencyValidation, Condition):
  • class JustRan(_DependencyValidation, Condition):
  • class AllHaveRun(_DependencyValidation, Condition):
  • class WhenFinished(_DependencyValidation, Condition):
  • class WhenFinishedAny(_DependencyValidation, Condition):
  • class WhenFinishedAll(_DependencyValidation, Condition):
  • class Threshold(_DependencyValidation, Condition):

Implement Parameter Space Partitioning algo

A modeling toolkit is substantially strengthened by being able to showcase all of a model's possible predictions. While exhaustive sweeps can generate this for small enough model spaces, for more complex models it must be done in a more sophisticated manner. A promising method is Pitt et al.'s Parameter Space Partitioning (Pitt, Kim, Navarro, and Myung, 2006, Psych Review), which efficiently samples the model's parameter space to identify qualitatively different data patterns (where "qualitatively different" is defined by the user). Additional information, including a copy of the paper, a visual demo, and a MATLAB implementation, is available at http://faculty.psy.ohio-state.edu/myung/personal/psp.html.

Convert UniformToNormalDist to use private PRNG

See other functions in the file for how to use a private RandomState. Offending line:

        sample = np.random.rand(1)[0]
        result = ((np.sqrt(2) * erfinv(2 * sample - 1)) * standard_deviation) + mean

Convert BayesGLM to use private PRNG

See other functions in the file for how to use a private RandomState. Offending line:

        phi = np.random.gamma(gamma_shape_n / 2, gamma_size_n / 2)
        return np.random.multivariate_normal(mu_n.reshape(-1,), phi * np.linalg.inv(Lambda_n))

bidirectional interaction?

How can I implement bi-directional projections in PsyNeuLink (e.g. the kind of projections used in the Interactive Activation and Competition model)? I want to get a recurrent system that can do pattern completion. It would be great if there are some working examples, but some high-level suggestions would be good enough! Thanks!
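
A hedged, untested sketch of one high-level approach (names and matrices are illustrative): connect two layers with MappingProjections in both directions inside a single Composition, so activity can flow forward and back across passes (RecurrentTransferMechanism can additionally provide within-layer recurrence):

import psyneulink as pnl

letters = pnl.TransferMechanism(name='letters', default_variable=[0, 0, 0],
                                function=pnl.Logistic)
words = pnl.TransferMechanism(name='words', default_variable=[0, 0],
                              function=pnl.Logistic)

comp = pnl.Composition(name='iac-sketch')
comp.add_node(letters)
comp.add_node(words)
comp.add_projection(pnl.MappingProjection(matrix=pnl.RANDOM_CONNECTIVITY_MATRIX),
                    sender=letters, receiver=words)
comp.add_projection(pnl.MappingProjection(matrix=pnl.RANDOM_CONNECTIVITY_MATRIX),
                    sender=words, receiver=letters)
comp.run(inputs={letters: [[1.0, 0.0, 0.0]]}, num_trials=2)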

Inconsistent shape of instance_defaults.value between ObjectiveMechanism and its function_object

>>> import psyneulink as pnl
>>> A = pnl.components.mechanisms.ObjectiveMechanism(default_variable=[0,0,0])
>>> print(A.function_object.instance_defaults.value)
[ 0.  0.  0.]
>>> print(A.instance_defaults.value)
[[ 0.  0.  0.]]

Using a 2d variable as input does not help:

>>> B = pnl.components.mechanisms.ObjectiveMechanism(default_variable=[[0,0,0]])
>>> print(B.function_object.instance_defaults.value)
[ 0.  0.  0.]
>>> print(B.instance_defaults.value)
[[ 0.  0.  0.]]

This causes problems with the output state specification

>>> print(A.output_states[0]._variable)
('OWNER_VALUE', 0)

Using the function_object value shape would pass 0. instead of [ 0. 0. 0.].
