emer / leabra

Go implementation of Leabra algorithm for biologically-based models of cognition, based on emergent framework (with Python interface)

Home Page: https://emersim.org

License: BSD 3-Clause "New" or "Revised" License


leabra's Introduction

Leabra in Go


This is the Go implementation of the Leabra algorithm for biologically-based models of cognition, based on the Go emergent framework (with optional Python interface).

See Wiki Install for installation instructions, and the Wiki Rationale and History pages for a more detailed rationale for the new version of emergent, and a history of emergent (and its predecessors).

See the ra25 example for a complete working example (intended to be a good starting point for creating your own models), and any of the 26 models in the Comp Cog Neuro sims repository which also provide good starting points. See the etable wiki for docs and example code for the widely-used etable data table structure, and the family_trees example in the CCN textbook sims which has good examples of many standard network representation analysis techniques (PCA, cluster plots, RSA).

See python README and Python Wiki for info on using Python to run models.

Current Status / News

  • Nov 2020: Full Python conversions of CCN sims complete, and eTorch for viewing and interacting with PyTorch models.

  • April 2020: GoGi GUI version 1.0 released, and updated install instructions to use go.mod modules for most users.

  • 12/30/2019: Version 1.0.0 Released! -- CCN textbook simulations are done and hip, deep and pbwm variants are in place and robustly tested.

  • 3/2019: Python interface is up and running! See the python directory in leabra for the README status and how to give it a try. You can run the full examples/ra25 code using Python, including the GUI etc.

Design

  • leabra sub-package provides a clean, well-organized implementation of core Leabra algorithms and Network structures. More specialized modifications such as DeepLeabra or PBWM or PVLV are all (going to be) implemented as additional specialized code that builds on / replaces elements of the basic version. The goal is to make all of the code simpler, more transparent, and more easily modified by end users. You should not have to dig through deep chains of C++ inheritance to find out what is going on. Nevertheless, the basic tradeoffs of code re-use dictate that not everything should be in-line in one massive blob of code, so there is still some inevitable tracking down of function calls etc. The algorithm overview below should be helpful in finding everything.

  • ActParams (in act.go), InhibParams (in inhib.go), and LearnNeurParams / LearnSynParams (in learn.go) provide the core parameters and functions used, including the X-over-X-plus-1 activation function, FFFB inhibition, and the XCal BCM-like learning rule, etc. This function-based organization should be clearer than the purely structural organization used in C++ emergent.

  • There are 3 main levels of structure: Network, Layer and Prjn (projection). The network calls methods on its Layers, and Layers iterate over both Neuron data structures (which have only a minimal set of methods) and the Prjns, to implement the relevant computations. The Prjn fully manages everything about a projection of connectivity between two layers, including the full list of Synapse elements in the connection. There is no "ConGroup" or "ConState" level as was used in C++, which greatly simplifies many things. The Layer also has a set of Pool elements, one for each level at which inhibition is computed: there is always one for the Layer, and then optionally one for each Sub-Pool of units (Pool is the new simpler term for "Unit Group" from C++ emergent).

  • The NetworkStru and LayerStru structs manage all the core structural aspects of things (data structures etc), and then the algorithm-specific versions (e.g., leabra.Network) use Go's anonymous embedding (akin to inheritance in C++) to transparently get all that functionality, while then directly implementing the algorithm code. Almost every step of computation has an associated method in leabra.Layer, so look first in layer.go to see how something is implemented.

  • Each structural element directly has all the parameters controlling its behavior -- e.g., the Layer contains an ActParams field (named Act), etc, instead of using a separate Spec structure as in C++ emergent. The Spec-like ability to share parameter settings across multiple layers etc is instead achieved through a styling-based paradigm -- you apply parameter "styles" to relevant layers instead of assigning different specs to them. This paradigm should be less confusing and less likely to result in accidental or poorly-understood parameter applications. We adopt the CSS (cascading-style-sheets) standard where parameters can be specified in terms of the Name of an object (e.g., #Hidden), the Class of an object (e.g., .TopDown -- where the class name TopDown is manually assigned to relevant elements), and the Type of an object (e.g., Layer applies to all layers). Multiple space-separated classes can be assigned to any given element, enabling a powerful combinatorial styling strategy to be used.
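
To make the selector idea concrete, here is a minimal, self-contained sketch of how a CSS-like selector can be matched against an element's name, classes, and type. The elem type and matches function are illustrative stand-ins only, not the actual params package API:

package main

import (
	"fmt"
	"strings"
)

// elem is a hypothetical stand-in for a network element (layer or projection).
type elem struct {
	Name    string
	Classes []string // space-separated classes assigned by the modeler
	Type    string   // e.g., "Layer", "Prjn"
}

// matches reports whether sel applies to e, following the #Name / .Class / Type convention.
func matches(sel string, e elem) bool {
	switch {
	case strings.HasPrefix(sel, "#"):
		return e.Name == sel[1:]
	case strings.HasPrefix(sel, "."):
		for _, c := range e.Classes {
			if c == sel[1:] {
				return true
			}
		}
		return false
	default:
		return e.Type == sel
	}
}

func main() {
	hid := elem{Name: "Hidden", Classes: []string{"TopDown"}, Type: "Layer"}
	fmt.Println(matches("#Hidden", hid), matches(".TopDown", hid), matches("Layer", hid)) // true true true
}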

  • Go uses interfaces to represent abstract collections of functionality (i.e., sets of methods). The emer package provides a set of interfaces for each structural level (e.g., emer.Layer etc) -- any given specific layer must implement all of these methods, and the structural containers (e.g., the list of layers in a network) are lists of these interfaces. An interface is implicitly a pointer to an actual concrete object that implements the interface. Thus, we typically need to convert this interface into the pointer to the actual concrete type, as in:

func (nt *Network) InitActs() {
	for _, ly := range nt.Layers {
		if ly.IsOff() {
			continue
		}
		ly.(*Layer).InitActs() // ly is the emer.Layer interface -- ly.(*Layer) converts to the concrete *leabra.Layer
	}
}
  • The emer interfaces are designed to support generic access to network state, e.g., for the 3D network viewer, but specifically avoid anything algorithmic. Thus, they should allow viewing of any kind of network, including PyTorch backprop nets.

  • There is also a leabra.LeabraLayer and leabra.LeabraPrjn interface, defined in leabra.go, which provides a virtual interface for the Leabra-specific algorithm functions at the basic level. These interfaces are used in the base leabra code, so that any more specialized version that embeds the basic leabra types will be called instead. See the deep sub-package for an implemented example that does DeepLeabra on top of the basic leabra foundation.

  • Layers have a Shape property, using the etensor.Shape type, which specifies their n-dimensional (tensor) shape. Standard layers are expected to use a 2D Y*X shape (note: dimension order is now outer-to-inner, i.e., RowMajor), and a 4D shape then enables Pools ("unit groups") as hypercolumn-like structures within a layer that can have their own local level of inhibition, and are also used extensively for organizing patterns of connectivity.
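
As a concrete illustration of the outer-to-inner (row-major) ordering, the following small standalone sketch flattens a 2D and a 4D index into the corresponding flat unit offset. It is illustrative only; the actual etensor.Shape type provides equivalent indexing functionality.

package main

import "fmt"

// offset computes the flat row-major index for a multi-dimensional index,
// illustrating the outer-to-inner dimension ordering described above.
func offset(shape, idx []int) int {
	off := 0
	for i := range shape {
		off = off*shape[i] + idx[i]
	}
	return off
}

func main() {
	layer2D := []int{10, 10}     // standard layer: Y (outer) x X (inner)
	layer4D := []int{4, 5, 3, 3} // 4x5 grid of pools, each 3x3 units: PoolY, PoolX, UnitY, UnitX
	fmt.Println(offset(layer2D, []int{2, 3}))       // unit at Y=2, X=3 -> 23
	fmt.Println(offset(layer4D, []int{1, 2, 0, 1})) // pool (1,2), unit (0,1) -> 64
}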

Naming Conventions

There are several changes from the original C++ emergent implementation for how things are named now:

  • Pool <- Unit_Group -- A group of Neurons that share pooled inhibition. Can be entire layer and / or sub-pools within a layer.
  • AlphaCyc <- Trial -- We are now distinguishing more clearly between network-level timing units (e.g., the 100 msec alpha cycle over which learning operates within posterior cortex) and environmental or experimental timing units, e.g., the Trial etc. Please see the TimeScales type for an attempt to standardize the different units of time along these different dimensions. The examples/ra25 example uses trials and epochs for controlling the "environment" (such as it is), while the algorithm-specific code refers to AlphaCyc, Quarter, and Cycle, which are the only time scales that are specifically coded within the algorithm -- everything else is up to the specific model code.

The Leabra Algorithm

Leabra stands for Local, Error-driven and Associative, Biologically Realistic Algorithm, and it implements a balance between error-driven (backpropagation) and associative (Hebbian) learning on top of a biologically-based point-neuron activation function with inhibitory competition dynamics (either via inhibitory interneurons or an approximation thereof), which produce k-Winners-Take-All (kWTA) sparse distributed representations. Extensive documentation is available from the online textbook: Computational Cognitive Neuroscience, which serves as a second edition to the original book: Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain, O'Reilly and Munakata, 2000, Cambridge, MA: MIT Press.

The name is pronounced like "Libra" and is intended to connote the balance of various different factors in an attempt to approach the "golden middle" ground between biological realism and computational efficiency and the ability to simulate complex cognitive function.

The version of Leabra implemented here corresponds to version 8.5 of C++ emergent (cemer).

The basic activation dynamics of Leabra are based on standard electrophysiological principles of real neurons, and in discrete spiking mode we implement exactly the AdEx (adapting exponential) model of Gerstner and colleagues Scholarpedia article on AdEx. The basic leabra package implements the rate code mode (which runs faster and allows for smaller networks), which provides a very close approximation to the AdEx model behavior, in terms of a graded activation signal matching the actual instantaneous rate of spiking across a population of AdEx neurons. We generally conceive of a single rate-code neuron as representing a microcolumn of roughly 100 spiking pyramidal neurons in the neocortex. Conversion factors from the biological units of AdEx to the normalized units used in Leabra are in this google sheet.

The excitatory synaptic input conductance (Ge in the code, known as net input in artificial neural networks) is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a projection level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections. See section on Input Scaling for details.

Inhibition is computed using a feed-forward (FF) and feed-back (FB) inhibition function (FFFB) that closely approximates the behavior of inhibitory interneurons in the neocortex. FF is based on a multiplicative factor applied to the average excitatory net input coming into a layer, and FB is based on a multiplicative factor applied to the average activation within the layer. These simple linear functions do an excellent job of controlling the overall activation levels in bidirectionally connected networks, producing behavior very similar to the more abstract computational implementation of kWTA dynamics implemented in previous versions.

There is a single learning equation, derived from a very detailed model of spike timing dependent plasticity (STDP) by Urakubo, Honda, Froemke, et al (2008), that produces a combination of Hebbian associative and error-driven learning. For historical reasons, we call this the XCAL equation (eXtended Contrastive Attractor Learning), and it is functionally very similar to the BCM learning rule developed by Bienenstock, Cooper, and Munro (1982). The essential learning dynamic involves a Hebbian-like co-product of sending neuron activation times receiving neuron activation, which biologically reflects the amount of calcium entering through NMDA channels, and this co-product is then compared against a floating threshold value. To produce the Hebbian learning dynamic, this floating threshold is based on a longer-term running average of the receiving neuron activation (AvgL in the code). This is the key idea for the BCM algorithm. To produce error-driven learning, the floating threshold is based on a faster running average of activation co-products (AvgM), which reflects an expectation or prediction, against which the instantaneous, later outcome is compared.

Weights are subject to a contrast enhancement function, which compensates for the soft (exponential) weight bounding that keeps weights within the normalized 0-1 range. Contrast enhancement is important for enhancing the selectivity of self-organizing learning, and generally results in faster learning with better overall results. Learning operates on the underlying internal linear weight value. Biologically, we associate the underlying linear weight value with internal synaptic factors such as actin scaffolding, CaMKII phosphorylation level, etc, while the contrast enhancement operates at the level of AMPA receptor expression.

There are various extensions to the algorithm that implement special neural mechanisms associated with the prefrontal cortex and basal ganglia PBWM, dopamine systems PVLV, the Hippocampus, and predictive learning and temporal integration dynamics associated with the thalamocortical circuits DeepLeabra. All of these are (will be) implemented as additional modifications of the core, simple leabra implementation, instead of having everything rolled into one giant hairball as in the original C++ implementation.

Pseudocode as a LaTeX doc for Paper Appendix

You can copy the markdown source of the following section into a file, and run pandoc on it to convert to LaTeX (or other formats) for inclusion in a paper. As this README is always kept updated, it is best to regenerate from this source -- very easy:

curl "https://raw.githubusercontent.com/emer/leabra/master/README.md" -o appendix.md
pandoc appendix.md -f gfm -t latex -o appendix.tex

You can then edit the resulting .tex file to only include the parts you want, etc.

Leabra Algorithm Equations

The pseudocode for Leabra is given here, showing exactly how the pieces of the algorithm fit together, using the equations and variables from the actual code. Compared to the original C++ emergent implementation, this Go version of emergent is much more readable, while also not being too much slower overall.

There are also other implementations of Leabra available:

  • leabra7 Python implementation of the version 7 of Leabra, by Daniel Greenidge and Ken Norman at Princeton.
  • Matlab (link into the cemer C++ emergent source tree) -- a complete implementation of these equations in Matlab, coded by Sergio Verduzco-Flores.
  • Python implementation by Fabien Benureau.
  • R implementation by Johannes Titz.

This repository contains specialized additions to the core algorithm described here:

  • deep has the DeepLeabra mechanisms for simulating the deep neocortical <-> thalamus pathways (wherein basic Leabra represents purely superficial-layer processing)
  • rl has basic reinforcement learning models such as Rescorla-Wagner and TD (temporal differences).
  • pbwm has the prefrontal-cortex basal ganglia working memory model (PBWM).
  • hip has the hippocampus specific learning mechanisms.

Timing

Leabra is organized around the following timing, based on an internally-generated alpha-frequency (10 Hz, 100 msec periods) cycle of expectation followed by outcome, supported by neocortical circuitry in the deep layers and the thalamus, as hypothesized in the DeepLeabra extension to standard Leabra:

  • A Trial lasts 100 msec (10 Hz, alpha frequency), and comprises one sequence of expectation -- outcome learning, organized into 4 quarters.

    • Biologically, the deep neocortical layers (layers 5, 6) and the thalamus have a natural oscillatory rhythm at the alpha frequency. Specific dynamics in these layers organize the cycle of expectation vs. outcome within the alpha cycle.
  • A Quarter lasts 25 msec (40 Hz, gamma frequency) -- the first 3 quarters (75 msec) form the expectation / minus phase, and the final quarter is the outcome / plus phase.

    • Biologically, the superficial neocortical layers (layers 2, 3) have a gamma frequency oscillation, supporting the quarter-level organization.
  • A Cycle represents 1 msec of processing, where each neuron updates its membrane potential etc according to the above equations.
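
Putting these timing levels together, the outer structure of one alpha-cycle trial looks roughly like the sketch below, modeled on the AlphaCyc method in examples/ra25. The method names used here (AlphaCycInit, AlphaCycStart, CycleInc, QuarterFinal, QuarterInc, DWt, WtFmDWt) are assumptions taken from that example, so check the actual code for exact signatures.

// alphaTrial runs one alpha-cycle trial: 4 quarters x 25 cycles = 100 msec.
// Inputs are assumed to have already been applied to the input layers.
func alphaTrial(net *leabra.Network, ltime *leabra.Time, train bool) {
	net.AlphaCycInit()
	ltime.AlphaCycStart()
	for qtr := 0; qtr < 4; qtr++ {
		for cyc := 0; cyc < ltime.CycPerQtr; cyc++ {
			net.Cycle(ltime) // 1 msec of activation updating (see the Cycle method below)
			ltime.CycleInc()
		}
		net.QuarterFinal(ltime) // records ActM after quarter 3, ActP after quarter 4
		ltime.QuarterInc()
	}
	if train {
		net.DWt()     // compute XCAL weight changes from this trial's activations
		net.WtFmDWt() // apply them to the weights
	}
}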

Variables

The leabra.Neuron struct contains all the neuron (unit) level variables, and the leabra.Layer contains a simple Go slice of these Neurons. Optionally, there can be leabra.Pool pools of subsets of neurons that correspond to hypercolumns, and support more local inhibitory dynamics (these used to be called UnitGroups in the C++ version).

  • Act = overall rate coded activation value -- what is sent to other neurons -- typically in range 0-1
  • Ge = total excitatory synaptic conductance -- the net excitatory input to the neuron -- does not include Gbar.E
  • Gi = total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does not include Gbar.I
  • Inet = net current produced by all channels -- drives update of Vm
  • Vm = membrane potential -- integrates Inet current over time
  • Targ = target value: drives learning to produce this activation value
  • Ext = external input: drives activation of unit from outside influences (e.g., sensory input)
  • AvgSS = super-short time-scale activation average -- provides the lowest-level time integration -- for spiking this integrates over spikes before subsequent averaging, and it is also useful for rate-code to provide a longer time integral overall
  • AvgS = short time-scale activation average -- tracks the most recent activation states (integrates over AvgSS values), and represents the plus phase for learning in XCAL algorithms
  • AvgM = medium time-scale activation average -- integrates over AvgS values, and represents the minus phase for learning in XCAL algorithms
  • AvgL = long time-scale average of medium-time scale (trial level) activation, used for the BCM-style floating threshold in XCAL
  • AvgLLrn = how much to learn based on the long-term floating threshold (AvgL) for BCM-style Hebbian learning -- is modulated by level of AvgL itself (stronger Hebbian as average activation goes higher) and optionally the average amount of error experienced in the layer (to retain a common proportionality with the level of error-driven learning across layers)
  • AvgSLrn = short time-scale activation average that is actually used for learning -- typically includes a small contribution from AvgM in addition to mostly AvgS, as determined by LrnActAvgParams.LrnM -- important to ensure that when unit turns off in plus phase (short time scale), enough medium-phase trace remains so that learning signal doesn't just go all the way to 0, at which point no learning would take place
  • ActM = records the traditional posterior-cortical minus phase activation, as activation after third quarter of current alpha cycle
  • ActP = records the traditional posterior-cortical plus phase activation, as activation at end of current alpha cycle
  • ActDif = ActP - ActM -- difference between plus and minus phase acts -- reflects the individual error gradient for this neuron in standard error-driven learning terms
  • ActDel = delta activation: change in Act from one cycle to next -- can be useful to track where changes are taking place
  • ActAvg = average activation (of final plus phase activation state) over long time intervals (time constant = DtParams.AvgTau -- typically 200) -- useful for finding hog units and seeing overall distribution of activation
  • Noise = noise value added to unit (ActNoiseParams determines distribution, and when / where it is added)
  • GiSyn = aggregated synaptic inhibition (from Inhib projections) -- time integral of GiRaw -- this is added with computed FFFB inhibition to get the full inhibition in Gi
  • GiSelf = total amount of self-inhibition -- time-integrated to avoid oscillations

The following are more implementation-level variables used in integrating synaptic inputs:

  • ActSent = last activation value sent (only send when diff is over threshold)
  • GeRaw = raw excitatory conductance (net input) received from sending units (send deltas are added to this value)
  • GeInc = delta increment in GeRaw sent using SendGeDelta
  • GiRaw = raw inhibitory conductance (net input) received from sending units (send deltas are added to this value)
  • GiInc = delta increment in GiRaw sent using SendGeDelta
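
For orientation, a simplified subset of these fields as they appear in a plain Go struct is shown below. This is not the complete leabra.Neuron declaration, just a mirror of some of the variables listed above (float32 is assumed, as elsewhere in the Go code):

// Neuron (simplified subset) -- the actual leabra.Neuron has additional fields.
type Neuron struct {
	Act   float32 // rate-coded activation sent to other neurons
	Ge    float32 // total excitatory conductance (net input)
	Gi    float32 // total inhibitory conductance
	Inet  float32 // net current, drives Vm update
	Vm    float32 // membrane potential
	AvgSS float32 // super-short running average of Act
	AvgS  float32 // short running average (plus-phase signal for XCAL)
	AvgM  float32 // medium running average (minus-phase signal for XCAL)
	AvgL  float32 // long-term average, BCM-style floating threshold
	ActM  float32 // minus-phase activation (end of quarter 3)
	ActP  float32 // plus-phase activation (end of alpha cycle)
	GeRaw float32 // raw excitatory conductance accumulated from senders
}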

Neurons are connected via synapses parameterized with the following variables, contained in the leabra.Synapse struct. The leabra.Prjn contains all of the synaptic connections for all the neurons across a given layer -- there are no Neuron-level data structures in the Go version.

  • Wt = synaptic weight value -- sigmoid contrast-enhanced
  • LWt = linear (underlying) weight value -- learns according to the lrate specified in the connection spec -- this is converted into the effective weight value, Wt, via sigmoidal contrast enhancement (see WtSigParams)
  • DWt = change in synaptic weight, from learning
  • Norm = DWt normalization factor -- reset to max of abs value of DWt, decays slowly down over time -- serves as an estimate of variance in weight changes over time
  • Moment = momentum -- time-integrated DWt changes, to accumulate a consistent direction of weight change and cancel out dithering contradictory changes
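
Again as a simplified orientation (not the complete struct), the synapse-level state corresponds to:

// Synapse (simplified) -- field names mirror the list above; float32 assumed.
type Synapse struct {
	Wt     float32 // effective weight: sigmoid contrast-enhanced version of LWt
	LWt    float32 // linear underlying weight, which is what actually learns
	DWt    float32 // accumulated weight change from learning
	Norm   float32 // DWt normalization factor (running max of |DWt|)
	Moment float32 // momentum: time-integrated DWt changes
}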

Activation Update Cycle (every 1 msec): Ge, Gi, Act

The leabra.Network Cycle method in leabra/network.go looks like this:

// Cycle runs one cycle of activation updating:
// * Sends Ge increments from sending to receiving layers
// * Average and Max Ge stats
// * Inhibition based on Ge stats and Act Stats (computed at end of Cycle)
// * Activation from Ge, Gi, and Gl
// * Average and Max Act stats
// This basic version doesn't use the time info, but more specialized types do, and we
// want to keep a consistent API for end-user code.
func (nt *Network) Cycle(ltime *Time) {
	nt.SendGDelta(ltime) // also does integ
	nt.AvgMaxGe(ltime)
	nt.InhibFmGeAct(ltime)
	nt.ActFmG(ltime)
	nt.AvgMaxAct(ltime)
}

For every cycle of activation updating, we compute the excitatory input conductance Ge, then compute inhibition Gi based on average Ge and Act (from previous cycle), then compute the Act based on those conductances. The equations below are not shown in computational order but rather conceptual order for greater clarity. All of the relevant parameters are in the leabra.Layer.Act and Inhib fields, which are of type ActParams and InhibParams -- in this Go version, the parameters have been organized functionally, not structurally, into three categories.

  • Ge excitatory conductance is actually computed using a highly efficient delta-sender-activation based algorithm, which only does the expensive multiplication of activations * weights when the sending activation changes by a given amount (OptThreshParams.Delta). However, conceptually, the conductance is given by this equation:

    • GeRaw += Sum_(recv) Prjn.GScale * Send.Act * Wt
      • Prjn.GScale is the Input Scaling factor that includes 1/N to compute an average, and the WtScaleParams Abs absolute scaling and Rel relative scaling, which allow one to easily modulate the overall strength of different input projections.
    • Ge += DtParams.Integ * (1/ DtParams.GTau) * (GeRaw - Ge)
      • This does a time integration of excitatory conductance, GTau = 1.4 default, and global integration time constant, Integ = 1 for 1 msec default.
  • Gi inhibitory conductance combines computed and synaptic-level inhibition (if present) -- most of the code is in leabra/inhib.go

    • ffNetin = avgGe + FFFBParams.MaxVsAvg * (maxGe - avgGe)
    • ffi = FFFBParams.FF * MAX(ffNetin - FFFBParams.FF0, 0)
      • feedforward component of inhibition with FF multiplier (1 by default) -- has FF0 offset and can't be negative (that's what the MAX(.. ,0) part does).
      • avgGe is average of Ge variable across relevant Pool of neurons, depending on what level this is being computed at, and maxGe is max of Ge across Pool
    • fbi += (1 / FFFBParams.FBTau) * (FFFBParams.FB * avgAct - fbi)
      • feedback component of inhibition with FB multiplier (1 by default) -- requires time integration to dampen oscillations that otherwise occur -- FBTau = 1.4 default.
    • Gi = FFFBParams.Gi * (ffi + fbi)
      • total inhibitory conductance, with global Gi multiplier -- default of 1.8 typically produces good sparse distributed representations in reasonably large layers (25 units or more).
  • Act activation from Ge, Gi, Gl (most of the code is in leabra/act.go, e.g., the ActParams.ActFmG method). Once a neuron is above threshold (per the condition below), it obeys the "geLin" function, which is linear in Ge:

    • geThr = (Gi * (Erev.I - Thr) + Gbar.L * (Erev.L - Thr)) / (Thr - Erev.E)
    • nwAct = NoisyXX1(Ge * Gbar.E - geThr)
      • geThr = amount of excitatory conductance required to put the neuron exactly at the firing threshold, XX1Params.Thr = .5 default, and NoisyXX1 is the x / (x+1) function convolved with gaussian noise kernel, where x = XX1Params.Gain * (Ge - geThr) and Gain is 100 by default
    • if Act < XX1Params.VmActThr && Vm <= XX1Params.Thr: nwAct = NoisyXX1(Vm - Thr)
      • it is important that the time to first "spike" (above-threshold activation) be governed by membrane potential Vm integration dynamics, but after that point, it is essential that activation drive directly from the excitatory conductance Ge relative to the geThr threshold.
    • Act += (1 / DtParams.VmTau) * (nwAct - Act)
      • time-integration of the activation, using same time constant as Vm integration (VmTau = 3.3 default)
    • Vm += (1 / DtParams.VmTau) * Inet
    • Inet = Ge * (Erev.E - Vm) + Gbar.L * (Erev.L - Vm) + Gi * (Erev.I - Vm) + Noise
      • Membrane potential computed from net current via standard RC model of membrane potential integration. In practice we use normalized Erev reversal potentials and Gbar max conductances, derived from biophysical values: Erev.E = 1, .L = 0.3, .I = 0.25, Gbar's are all 1 except Gbar.L = .2 default.
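
To show how these pieces fit together numerically, here is a small, self-contained sketch of the conductance, FFFB inhibition, and activation equations above, run for a single illustrative neuron treated as its own pool. The parameter values are the defaults quoted above (FF0, MaxVsAvg, and VmActThr values are assumed defaults), and the names and code organization here are illustrative only -- they do not reproduce the actual leabra code.

package main

import (
	"fmt"
	"math"
)

const (
	integ, gTau, vmTau  = 1.0, 1.4, 3.3
	ff, ff0, fb, fbTau  = 1.0, 0.1, 1.0, 1.4 // FF0 = 0.1 is an assumed default
	maxVsAvg, giMult    = 0.0, 1.8           // MaxVsAvg = 0 is an assumed default
	erevE, erevL, erevI = 1.0, 0.3, 0.25
	gbarE, gbarL        = 1.0, 0.2
	thr, gain, vmActThr = 0.5, 100.0, 0.01 // VmActThr = 0.01 is an assumed default
)

// xx1 is the noiseless X/(X+1) rate-code function; the real code uses NoisyXX1,
// which convolves this with a gaussian noise kernel.
func xx1(x float64) float64 {
	if x <= 0 {
		return 0
	}
	x *= gain
	return x / (x + 1)
}

// geInteg time-integrates the excitatory conductance toward its raw value.
func geInteg(ge, geRaw float64) float64 {
	return ge + integ*(1/gTau)*(geRaw-ge)
}

// fffbInhib computes pooled FFFB inhibition from the pool's average/max Ge and
// average Act, returning total Gi and the updated feedback integrator fbi.
func fffbInhib(avgGe, maxGe, avgAct, fbi float64) (gi, newFbi float64) {
	ffNetin := avgGe + maxVsAvg*(maxGe-avgGe)
	ffi := ff * math.Max(ffNetin-ff0, 0)
	newFbi = fbi + (1/fbTau)*(fb*avgAct-fbi)
	return giMult * (ffi + newFbi), newFbi
}

// actFmG updates one neuron's activation and membrane potential from Ge and Gi.
func actFmG(ge, gi, act, vm float64) (newAct, newVm float64) {
	geThr := (gi*(erevI-thr) + gbarL*(erevL-thr)) / (thr - erevE)
	nwAct := xx1(ge*gbarE - geThr)
	if act < vmActThr && vm <= thr { // before the first "spike", drive from Vm dynamics
		nwAct = xx1(vm - thr)
	}
	newAct = act + (1/vmTau)*(nwAct-act)
	inet := ge*(erevE-vm) + gbarL*(erevL-vm) + gi*(erevI-vm)
	newVm = vm + (1/vmTau)*inet
	return newAct, newVm
}

func main() {
	ge, gi, act, vm, fbi := 0.0, 0.0, 0.0, 0.3, 0.0
	for cyc := 0; cyc < 50; cyc++ { // 50 msec of constant raw drive to a 1-neuron "pool"
		ge = geInteg(ge, 0.6)
		gi, fbi = fffbInhib(ge, ge, act, fbi)
		act, vm = actFmG(ge, gi, act, vm)
	}
	fmt.Printf("Act=%.3f  Vm=%.3f  Gi=%.3f\n", act, vm, gi)
}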

Learning

XCAL DWt Function

Learning is based on running-averages of activation variables, parameterized in the leabra.Layer.Learn LearnParams field, mostly implemented in the leabra/learn.go file.

  • Running averages are computed continuously every cycle; note the compounding form. Tau params are in LrnActAvgParams:

    • AvgSS += (1 / SSTau) * (Act - AvgSS)
      • super-short time scale running average, SSTau = 2 default -- this was introduced to smooth out discrete spiking signal, but is also useful for rate code.
    • AvgS += (1 / STau) * (AvgSS - AvgS)
      • short time scale running average, STau = 2 default -- this represents the plus phase or actual outcome signal in comparison to AvgM
    • AvgM += (1 / MTau) * (AvgS - AvgM)
      • medium time-scale running average, MTau = 10 -- this represents the minus phase or expectation signal in comparison to AvgS
    • AvgL += (1 / Tau) * (Gain * AvgM - AvgL); AvgL = MAX(AvgL, Min)
      • long-term running average -- this is computed just once per learning trial, not every cycle like the ones above -- params on AvgLParams: Tau = 10, Gain = 2.5 (this is a key param -- best value can be lower or higher) Min = .2
    • AvgLLrn = ((Max - Min) / (Gain - Min)) * (AvgL - Min)
      • learning strength factor for how much to learn based on AvgL floating threshold -- this is dynamically modulated by strength of AvgL itself, and this turns out to be critical -- the amount of this learning increases as units are more consistently active all the time (i.e., "hog" units). Params on AvgLParams, Min = 0.0001, Max = 0.5. Note that this depends on having a clear max to AvgL, which is an advantage of the exponential running-average form above.
    • AvgLLrn *= MAX(1 - layCosDiffAvg, ModMin)
      • also modulate by time-averaged cosine (normalized dot product) between minus and plus phase activation states in given receiving layer (layCosDiffAvg), (time constant 100) -- if error signals are small in a given layer, then Hebbian learning should also be relatively weak so that it doesn't overpower it -- and conversely, layers with higher levels of error signals can handle (and benefit from) more Hebbian learning. The MAX(ModMin) (ModMin = .01) factor ensures that there is a minimum level of .01 Hebbian (multiplying the previously-computed factor above). The .01 * .05 factors give an upper-level value of .0005 to use for a fixed constant AvgLLrn value -- just slightly less than this (.0004) seems to work best if not using these adaptive factors.
    • AvgSLrn = (1-LrnM) * AvgS + LrnM * AvgM
      • mix in some of the medium-term factor into the short-term factor -- this is important for ensuring that when neuron turns off in the plus phase (short term), that enough trace of earlier minus-phase activation remains to drive it into the LTD weight decrease region -- LrnM = .1 default.
  • Learning equation:

    • srs = Send.AvgSLrn * Recv.AvgSLrn

    • srm = Send.AvgM * Recv.AvgM

    • dwt = XCAL(srs, srm) + Recv.AvgLLrn * XCAL(srs, Recv.AvgL)

      • weight change is sum of two factors: error-driven based on medium-term threshold (srm), and BCM Hebbian based on long-term threshold of the recv unit (Recv.AvgL)
    • XCAL is the "check mark" linearized BCM-style learning function (see figure) that was derived from the Urakubo Et Al (2008) STDP model, as described in more detail in the CCN textbook

      • XCAL(x, th) = (x < DThr) ? 0 : (x > th * DRev) ? (x - th) : (-x * ((1-DRev)/DRev))
      • DThr = 0.0001, DRev = 0.1 defaults, and x ? y : z terminology is C syntax for: if x is true, then y, else z
    • DWtNorm -- normalizing the DWt weight changes is standard in current backprop, using the AdamMax version of the original RMS normalization idea, and benefits Leabra as well, and is On by default, params on DwtNormParams:

      • Norm = MAX((1 - (1 / DecayTau)) * Norm, ABS(dwt))
        • increment the Norm normalization using abs (L1 norm) instead of squaring (L2 norm), and with a small amount of decay: DecayTau = 1000.
      • dwt *= LrComp / MAX(Norm, NormMin)
        • normalize dwt weight change by the normalization factor, but with a minimum to prevent dividing by 0 -- LrComp compensates overall learning rate for this normalization (.15 default) so a consistent learning rate can be used, and NormMin = .001 default.
    • Momentum -- momentum is turned On by default, and has significant benefits for preventing hog units by driving more rapid specialization and convergence on promising error gradients. Parameters on MomentumParams:

      • Moment = (1 - (1 / MTau)) * Moment + dwt
      • dwt = LrComp * Moment
        • increment momentum from new weight change, MTau = 10, corresponding to standard .9 momentum factor (sometimes 20 = .95 is better), with LrComp = .1 compensating for the increased effective learning rate.
    • DWt = Lrate * dwt

      • final effective weight change includes overall learning rate multiplier. For learning rate schedules, just directly manipulate the learning rate parameter -- not using any kind of builtin schedule mechanism.
  • Weight Balance -- this option (off by default but recommended for larger models) attempts to maintain more balanced weights across units, to prevent some units from hogging the representational space, by changing the rates of weight increase and decrease in the soft weight bounding function, as a function of the average receiving weights. All params in WtBalParams:

    • if (Wb.Avg < LoThr): Wb.Fact = LoGain * (LoThr - MAX(Wb.Avg, AvgThr)); Wb.Dec = 1 / (1 + Wb.Fact); Wb.Inc = 2 - Wb.Dec
    • else: Wb.Fact = HiGain * (Wb.Avg - HiThr); Wb.Inc = 1 / (1 + Wb.Fact); Wb.Dec = 2 - Wb.Inc
      • Wb is the WtBalRecvPrjn structure stored on the leabra.Prjn, per each Recv neuron. Wb.Avg = average of recv weights (computed separately and only every N = 10 weight updates, to minimize computational cost). If this average is relatively low (compared to LoThr = .4) then there is a bias to increase more than decrease, in proportion to how much below this threshold they are (LoGain = 6). If the average is relatively high (compared to HiThr = .4), then decreases are stronger than increases, HiGain = 4.
    • A key feature of this mechanism is that it does not change the sign of any weight changes, including not causing weights to change that are otherwise not changing due to the learning rule. This is not true of an alternative mechanism that has been used in various models, which normalizes the total weight value by subtracting the average. Overall this weight balance mechanism is important for larger networks on harder tasks, where the hogging problem can be a significant problem.
  • Weight update equation

    • The LWt value is the linear, non-contrast enhanced version of the weight value, and Wt is the sigmoidal contrast-enhanced version, which is used for sending netinput to other neurons. One can compute LWt from Wt and vice-versa, but numerical errors can accumulate in going back and forth more than necessary, and it is generally faster to just store these two weight values.
    • DWt *= (DWt > 0) ? Wb.Inc * (1-LWt) : Wb.Dec * LWt
      • soft weight bounding -- weight increases exponentially decelerate toward upper bound of 1, and decreases toward lower bound of 0, based on linear, non-contrast enhanced LWt weights. The Wb factors are how the weight balance term shifts the overall magnitude of weight increases and decreases.
    • LWt += DWt
      • increment the linear weights with the bounded DWt term
    • Wt = SIG(LWt)
      • new weight value is sigmoidal contrast enhanced version of linear weight
      • SIG(w) = 1 / (1 + (Off * (1-w)/w)^Gain)
    • DWt = 0
      • reset weight changes now that they have been applied.
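
The sketch below (referenced from the XCAL item above) pulls the core learning equations together for a single synapse: the XCAL check-mark function, the combined error-driven plus Hebbian dwt, soft weight bounding, and the sigmoidal contrast enhancement. DWt normalization, momentum, and weight balance are omitted for brevity. Parameter values are either quoted above or assumed defaults (sigmoid Gain/Off and the learning rate), and the names and organization are illustrative only, not the actual leabra/learn.go code.

package main

import (
	"fmt"
	"math"
)

const (
	dThr, dRev      = 0.0001, 0.1
	sigGain, sigOff = 6.0, 1.0 // WtSigParams Gain/Off (assumed defaults)
	lrate           = 0.04     // assumed default learning rate
)

// xcal is the "check mark" linearized BCM-style learning function.
func xcal(x, th float64) float64 {
	switch {
	case x < dThr:
		return 0
	case x > th*dRev:
		return x - th
	default:
		return -x * ((1 - dRev) / dRev)
	}
}

// sig is the sigmoidal contrast enhancement from the linear weight to the effective weight.
func sig(w float64) float64 {
	return 1 / (1 + math.Pow(sigOff*(1-w)/w, sigGain))
}

// dwt computes the weight change from sender/receiver averages:
// error-driven term (threshold srm) plus BCM Hebbian term (threshold Recv.AvgL).
func dwt(sAvgSLrn, rAvgSLrn, sAvgM, rAvgM, rAvgL, rAvgLLrn float64) float64 {
	srs := sAvgSLrn * rAvgSLrn
	srm := sAvgM * rAvgM
	return lrate * (xcal(srs, srm) + rAvgLLrn*xcal(srs, rAvgL))
}

func main() {
	lwt := 0.5 // linear weight
	d := dwt(0.8, 0.7, 0.3, 0.4, 0.25, 0.005)
	if d > 0 { // soft weight bounding on the linear weight
		d *= 1 - lwt
	} else {
		d *= lwt
	}
	lwt += d
	fmt.Printf("LWt=%.4f  Wt=%.4f\n", lwt, sig(lwt))
}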

Input Scaling

The Ge and Gi synaptic conductances computed from a given projection from one layer to the next reflect the number of receptors currently open and capable of passing current, which is a function of the activity of the sending layer, and total number of synapses. We use a set of equations to automatically normalize (rescale) these factors across different projections, so that each projection has roughly an equal influence on the receiving neuron, by default.

The most important factor to be mindful of for this automatic rescaling process is the expected activity level in a given sending layer. This is set initially to Layer.Inhib.ActAvg.Init, and adapted from there by the various other parameters in that Inhib.ActAvg struct. It is a good idea in general to set that Init value to a reasonable estimate of the proportion of activity you expect in the layer, and in very small networks, it is typically much better to just set the Fixed flag and keep this Init value as such, as otherwise the automatically computed averages can fluctuate significantly and thus create corresponding changes in input scaling. The default UseFirst flag tries to avoid the dependence on the Init values but sometimes the first value may not be very representative, so it is better to set Init and turn off UseFirst for more reliable performance.
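
For a small network, that recommendation translates into something like the following fragment; the field path Layer.Inhib.ActAvg.Init comes from the text above, while the exact flag field names should be verified against the actual ActAvgParams struct:

// ly is a *leabra.Layer in your network configuration code.
ly.Inhib.ActAvg.Init = 0.15  // reasonable estimate of the proportion of active units
ly.Inhib.ActAvg.Fixed = true // do not adapt the average in small networks; use Init as-is
ly.Inhib.ActAvg.UseFirst = false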

Furthermore, we add two tunable parameters that further scale the overall conductance received from a given projection (one in a relative way compared to other projections, and the other a simple absolute multiplicative scaling factor). These are some of the most important parameters to configure in the model -- in particular the strength of top-down "back" projections typically must be relatively weak compared to bottom-up forward projections (e.g., a relative scaling factor of 0.1 or 0.2 relative to the forward projections).

The scaling contributions of these two factors are:

  • GScale = WtScale.Abs * (WtScale.Rel / Sum(all WtScale.Rel))

Thus, all the Rel factors contribute in proportion to their relative value compared to the sum of all such factors across all receiving projections into a layer, while Abs just multiplies directly.
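
For illustration, here is the Rel / Abs contribution computed for a layer that receives a standard forward projection and a weaker top-down "back" projection (projection names are hypothetical); note that the full GScale also includes the 1/N expected-activity factor covered in the Automatic Rescaling section below:

package main

import "fmt"

func main() {
	type wtScale struct{ Abs, Rel float64 }
	prjns := map[string]wtScale{
		"FwdFromInput":   {Abs: 1, Rel: 1},
		"BackFromOutput": {Abs: 1, Rel: 0.2}, // typical weak top-down projection
	}
	relSum := 0.0
	for _, ws := range prjns {
		relSum += ws.Rel
	}
	for name, ws := range prjns {
		// GScale contribution = Abs * (Rel / Sum(all Rel))
		fmt.Printf("%s: %.3f\n", name, ws.Abs*(ws.Rel/relSum))
	}
}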

In general, you want to adjust the Rel factors, to keep the total Ge and Gi levels relatively constant, while just shifting the relative contributions. In the relatively rare case where the overall Ge levels are too high or too low, you should adjust the Abs values to compensate.

Typically the Ge value should be between .5 and 1, to maintain a reasonably responsive neural response, and avoid numerical integration instabilities and saturation that can arise if the values get too high. You can record the Layer.Pools[0].Inhib.Ge.Avg and .Max values at the epoch level to see how these are looking -- this is especially important in large networks, and those with unusual, complex patterns of connectivity, where things might get out of whack.

Automatic Rescaling

Here are the relevant factors that are used to compute the automatic rescaling to take into account the expected activity level on the sending layer, and the number of connections in the projection. The actual code is in leabra/layer.go: GScaleFmAvgAct() and leabra/act.go SLayActScale

  • savg = sending layer average activation
  • snu = sending layer number of units
  • ncon = number of connections
  • slayActN = int(Round(savg * snu)) -- must be at least 1
  • sc = scaling factor, which is roughly 1 / expected number of active sending connections.
  • if ncon == snu: -- full connectivity
    • sc = 1 / slayActN
  • else: -- partial connectivity -- trickier
    • avgActN = int(Round(savg * ncon)) -- avg number of active connections
    • expActN = avgActN + 2 -- add an extra 2 variance around expected value
    • maxActN = MIN(ncon, slayActN) -- can't be more than number active
    • expActN = MIN(expActN, maxActN) -- constrain
    • sc = 1 / expActN

This sc factor multiplies the GScale factor as computed above.
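
A direct transcription of this pseudocode into a standalone Go function, just to make the computation concrete (the real code lives in leabra/act.go as noted above):

package main

import (
	"fmt"
	"math"
)

// slayActScale computes the expected-activity scaling factor described above:
// roughly 1 / (expected number of active sending connections).
func slayActScale(savg float64, snu, ncon int) float64 {
	slayActN := int(math.Round(savg * float64(snu)))
	if slayActN < 1 {
		slayActN = 1
	}
	if ncon == snu { // full connectivity
		return 1 / float64(slayActN)
	}
	// partial connectivity: expected number of active connections, plus a margin of 2,
	// constrained to be no more than the number of connections or active senders
	avgActN := int(math.Round(savg * float64(ncon)))
	expActN := avgActN + 2
	if expActN > ncon {
		expActN = ncon
	}
	if expActN > slayActN {
		expActN = slayActN
	}
	return 1 / float64(expActN)
}

func main() {
	// Sending layer of 100 units at 15% expected activity: full vs. 20-connection partial projection.
	fmt.Printf("full: %.4f  partial: %.4f\n", slayActScale(0.15, 100, 100), slayActScale(0.15, 100, 20))
}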

leabra's People

Contributors

rcoreilly, rgobbel, rohrlich, stephenjread, zycyc


leabra's Issues

Segfault when resizing the window during training

Can be replicated for multiple programs under emer/examples. Go version is 1.14.4; dependencies downloaded with go get -u ./... with GO111MODULE=off.
Error log (ra25):

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x5cf855]

goroutine 51 [running]:
image.(*RGBA).Bounds(0x0, 0xc0045357b0, 0x10, 0x10, 0x1469f20)
	/usr/local/go/src/image/image.go:72 +0x5
image/draw.clip(0x18ccd60, 0x0, 0xc00272f1d0, 0x18c8ea0, 0xc002f6a280, 0xc00272f200, 0x0, 0x0, 0xc00272f220)
	/usr/local/go/src/image/draw/draw.go:75 +0x54
image/draw.DrawMask(0x18ccd60, 0x0, 0x433, 0x57, 0x445, 0x69, 0x18c8ea0, 0xc002f6a280, 0x0, 0x0, ...)
	/usr/local/go/src/image/draw/draw.go:107 +0xab
image/draw.Draw(...)
	/usr/local/go/src/image/draw/draw.go:101
github.com/goki/gi/gi.(*Viewport2D).DrawIntoParent(0xc0026de000, 0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:362 +0x11a
github.com/goki/gi/gi.(*Viewport2D).RenderViewport2D(0xc0026de000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:542 +0x72
github.com/goki/gi/svg.(*Icon).Render2D(0xc0026de000)
	/home/theo/Projects/go/src/github.com/goki/gi/svg/icons.go:128 +0x11b
github.com/goki/gi/gi.(*Node2DBase).Render2DChildren(0xc004788000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:908 +0xa2
github.com/goki/gi/gi.(*Icon).Render2D(0xc004788000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/icon.go:153 +0x61
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc0026d50f0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*Layout).Render2D(0xc0026d50f0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:2107 +0xc3
github.com/goki/gi/gi.(*Node2DBase).Render2DTree(0xc0026d50f0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:869 +0x78
github.com/goki/gi/gi.(*PartsWidgetBase).Render2DParts(...)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/widget.go:853
github.com/goki/gi/gi.(*ButtonBase).Render2D(0xc0026d4c80)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/buttons.go:696 +0xd0
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc00263bdf0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*Layout).Render2D(0xc00263bdf0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:2107 +0xc3
github.com/goki/gi/gi.(*Node2DBase).Render2DTree(0xc00263bdf0)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:869 +0x78
github.com/goki/gi/gi.(*PartsWidgetBase).Render2DParts(...)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/widget.go:853
github.com/goki/gi/gi.(*ButtonBase).Render2D(0xc00263b980)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/buttons.go:696 +0xd0
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc000287180)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*Frame).Render2D(0xc000287180)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/frame.go:167 +0xc4
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc0008b2600)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*TabView).Render2D(0xc0008b2600)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/tabview.go:511 +0xb6
github.com/goki/gi/gi.(*SplitView).Render2D(0xc0005b6000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/splitview.go:418 +0x1c9
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc000465180)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*Frame).Render2D(0xc000465180)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/frame.go:167 +0xc4
github.com/goki/gi/gi.(*Layout).Render2DChildren(0xc000463b80)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:1447 +0xb9
github.com/goki/gi/gi.(*Layout).Render2D(0xc000463b80)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/layout.go:2107 +0xc3
github.com/goki/gi/gi.(*Node2DBase).Render2DChildren(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:908 +0xa2
github.com/goki/gi/gi.(*Viewport2D).Render2D(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:643 +0x62
github.com/goki/gi/gi.(*Node2DBase).Render2DTree(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:869 +0x78
github.com/goki/gi/gi.(*Node2DBase).FullRender2DTree(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/node2d.go:728 +0x89
github.com/goki/gi/gi.(*Viewport2D).FullRender2DTree(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:563 +0x91
github.com/goki/gi/gi.(*Viewport2D).UpdateNodes(0xc000160000)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:837 +0x4d5
github.com/goki/gi/gi.(*Viewport2D).NodeUpdated(0xc000160000, 0x195df40, 0xc0008d2000, 0x1, 0x13a06a0, 0xc00312d1c8)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:749 +0x166
github.com/goki/gi/gi.SignalViewport2D(0x1930040, 0xc000160000, 0x1932140, 0xc0008d2000, 0x1, 0x13a06a0, 0xc00312d1c8)
	/home/theo/Projects/go/src/github.com/goki/gi/gi/viewport.go:703 +0x1d2
github.com/goki/ki/ki.(*Signal).Emit(0xc0008d2058, 0x1932140, 0xc0008d2000, 0x1, 0x13a06a0, 0xc00312d1c8)
	/home/theo/Projects/go/src/github.com/goki/ki/ki/signal.go:173 +0x1a7
github.com/goki/ki/ki.(*Node).UpdateEnd(0xc0008d2000, 0xc0008d2001)
	/home/theo/Projects/go/src/github.com/goki/ki/ki/node.go:1935 +0x20a
github.com/emer/emergent/netview.(*NetView).GoUpdate(0xc00076f800)
	/home/theo/Projects/go/src/github.com/emer/emergent/netview/netview.go:126 +0xf6
main.(*Sim).UpdateView(0x2d754a0, 0x18e2001)
	/home/theo/Projects/go/src/github.com/emer/leabra/examples/ra25/ra25.go:349 +0x92
main.(*Sim).AlphaCyc(0x2d754a0, 0x18dd101)
	/home/theo/Projects/go/src/github.com/emer/leabra/examples/ra25/ra25.go:416 +0x37b
main.(*Sim).TrainTrial(0x2d754a0)
	/home/theo/Projects/go/src/github.com/emer/leabra/examples/ra25/ra25.go:474 +0xa1
main.(*Sim).Train(0x2d754a0)
	/home/theo/Projects/go/src/github.com/emer/leabra/examples/ra25/ra25.go:574 +0x32
created by main.(*Sim).ConfigGui.func4
	/home/theo/Projects/go/src/github.com/emer/leabra/examples/ra25/ra25.go:1263 +0x6f

Node indices in Netview are flipped from indices in input files

I just noticed that when you hover over a node in Netview and it gives you the indices of that node, the values don't match the indices specified in a training data table. For example, node 0,1 in a training data table corresponds to node 1,0 in the Netview.

Panic during network activation update

Clicking on a different tab of the simulation GUI has sometimes caused a panic in updating the UI, but this is the first time I've seen this stack trace.

runtime.fatalpanic at panic.go:847
runtime.gopanic at panic.go:722
sync.(*WaitGroup).Wait at waitgroup.go:132
github.com/emer/leabra/leabra.(*NetworkStru).ThrLayFun at networkstru.go:622
github.com/emer/leabra/leabra.(*Network).SendGDelta at network.go:168
github.com/emer/leabra/leabra.(*Network).Cycle at network.go:157
github.com/emer/leabra/deep.(*Network).Cycle at network.go:126
main.(*Sim).AlphaCyc at audonly.go:575
main.(*Sim).TrainTrial at audonly.go:701
main.(*Sim).Train at audonly.go:1024
runtime.goexit at asm_amd64.s:1357

  • Async stack trace
    main.(*Sim).ConfigGui.func2 at audonly.go:1813

Main window for ra25 is misbehaving

I just updated to the newest version of gi and then updated leabra. When I started ra25, the main window opened up at several times the width of my main external monitor. When I clicked the green button in the upper left, it went into full screen mode and fit the external monitor. But when I then clicked the green button again, the window went totally black and expanded to many times larger than my external monitor.

stty behaviour, pyleabra

I compiled the pyleabra binary from the current version on the ra25.py example. I get a seemingly working GUI and a Python REPL. After quitting both (and confirming that no left-behind process is running), my bash terminal no longer echoes characters typed. Reproduced in X-forwarded sessions on two separate systems. ra25 compiled from go source does not yield this behaviour.

(stty sane resets to normal behaviour, but it seems noteworthy.)

[edit: Python 3.8.5, go 1.13]

env.py, line 378 causes Seg fault

The last line included in the block below appears to cause a seg fault:

# Python type for struct env.FixedTable
class FixedTable(go.GoClass):
  """FixedTable is a basic Env that manages patterns from an etable.Table, with\neither sequential or permuted random ordering, and uses standard Trial / Epoch\nTimeScale counters to record progress and iterations through the table.\nIt also records the outer loop of Run as provided by the model.\nIt uses an IdxView indexed view of the Table, so a single shared table\ncan be used across different environments, with each having its own unique view.\n"""
  def __init__(self, *args, **kwargs):
    """
    handle=A Go-side object is always initialized with an explicit handle=arg
    otherwise parameters can be unnamed in order of field names or named fields
    in which case a new Go object is constructed first
    """
    if len(kwargs) == 1 and 'handle' in kwargs:
      self.handle = kwargs['handle']
    elif len(args) == 1 and isinstance(args[0], go.GoClass):
      self.handle = args[0].handle
    else:
      self.handle = _leabra.env_FixedTable_CTor() # line 378, env.py

Here is the log message after entering this statement while debugging in VSCode:

thazy@macbookpro ~/go/src/github.com/thazy/leabra_demo/ra25/pyra25$ pyleabra -i ra25.py
OpenGL version 4.1 NVIDIA-12.0.24 355.11.10.50.10.103
Waiting for debugger attach
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x503e400]

goroutine 36 [running]:
main.etable_IdxView_Idxs_Get(0xffffffffffffffff, 0x5a3f960)
        /Users/thazy/go/src/github.com/emer/leabra/python/leabra/leabra.go:29395 +0x30
main._cgoexpwrap_a5bdb2a0ec34_etable_IdxView_Idxs_Get(0xffffffffffffffff, 0x17e5c480)
        _cgo_gotypes.go:37142 +0x2b
main._Cfunc_Py_Main(0x3, 0xc00000c020, 0xc000000000)
        _cgo_gotypes.go:675 +0x4d
main.GoPyMainRun.func2(0x3, 0xc00000c020, 0x2)
        /Users/thazy/go/src/github.com/emer/leabra/python/leabra/leabra.go:227 +0xbd
main.GoPyMainRun()
        /Users/thazy/go/src/github.com/emer/leabra/python/leabra/leabra.go:227 +0x190
main.main.func1()
        /Users/thazy/go/src/github.com/emer/leabra/python/leabra/leabra.go:162 +0x20
github.com/goki/gi/gimain.Main.func1(0x5cdc160, 0x791ae60)
        /Users/thazy/go/src/github.com/goki/gi/gimain/gimain.go:30 +0x24
github.com/goki/gi/oswin/driver/glos.Main.func1()
        /Users/thazy/go/src/github.com/goki/gi/oswin/driver/glos/app.go:88 +0x40
created by github.com/goki/gi/oswin/driver/glos.Main
        /Users/thazy/go/src/github.com/goki/gi/oswin/driver/glos/app.go:87 +0x87
thazy@macbookpro ~/go/src/github.com/thazy/leabra_demo/ra25/pyra25$ 

Got here by calling self.ConfigPats() in ra25.py (from ../leabra/examples/ra25/); NOTE: etable.py also needs from leabra import env to get to env.py (also filed as a separate issue)

self.ConfigPats() call in ra25.py causes hangup so GUI doesn't open

Using a fresh copy of ra25.py, pyleabra -i ra25.py opens and runs correctly. Replacing self.OpenPats() with self.ConfigPats() (in self.Config()) causes a hang so that the GUI doesn't open. The log message shows something about a missing handle in the _leabra built-in module:

thazy@macbookpro ~/go/src/github.com/thazy/leabra_demo/ra25$ pyleabra -i ra25.py 
OpenGL version 4.1 NVIDIA-12.0.24 355.11.10.50.10.103
Traceback (most recent call last):
  File "ra25.py", line 1393, in <module>
    main(sys.argv[1:])
  File "ra25.py", line 1337, in main
    TheSim.Config()
  File "ra25.py", line 339, in Config
    self.ConfigPats()
  File "ra25.py", line 766, in ConfigPats
    patgen.PermutedBinaryRows(dt.Cols[1], 6, 1, 0)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/leabra/patgen.py", line 67, in PermutedBinaryRows
    _leabra.patgen_PermutedBinaryRows(tsr.handle, nOn, onVal, offVal, goRun)
AttributeError: 'int' object has no attribute 'handle'

Accounting for habituation and sensitization using Leabra

Hello,

I am not sure if I'm missing something, but is there any literature or models of habituation and sensitization using leabra? For instance, Gluck et al mention a model for habituation and sensitization in Sea Aplysia in their Learning and Memory (2008) book. Would that be doable with Leabra?

I had gone through the CoCoNeu book a couple months ago, and don't remember running into this at that time. At the current moment too, I couldn't find anything with a simple search for 'habituation' or 'sensitization'. I'm not sure if I just need to go through it again to be able to "see" how habituation and sensitization can be accounted for by Leabra; so would be glad to be pointed in the right direction!

ra25 runtime error on Windows 10

Hi there,

I've been following the installation instructions in order to get Leabra working on Windows 10. But when I try to run the ra25 example simulation (after compiling it with no issues), the following runtime error appears:

NThreads: 1     go max procs: 6 num cpu:6
Exception 0xc0000005 0x0 0x0 0x7ffd650389aa
PC=0x7ffd650389aa
signal arrived during external code execution

runtime.cgocall(0x7ff66bd208d0, 0xc0007ed800)
        C:/Program Files/Go/src/runtime/cgocall.go:157 +0x3e fp=0xc0007ed7d8 sp=0xc0007ed7a0 pc=0x7ff66ad0939e
github.com/goki/vulkan._Cfunc_callVkCreateDescriptorPool(0x217218c0218, 0x217211fc580, 0x0, 0xc0007de1b0)
        _cgo_gotypes.go:8255 +0x55 fp=0xc0007ed800 sp=0xc0007ed7d8 pc=0x7ff66b0a6115
github.com/goki/vulkan.CreateDescriptorPool.func1(0x217218c0218, 0x217211fc580, 0x0, 0xc0007de1b0)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vulkan.go:734 +0xb1 fp=0xc0007ed840 sp=0xc0007ed800 pc=0x7ff66b0d7711
github.com/goki/vulkan.CreateDescriptorPool(0x217218c0218, 0xc0001fd0e0?, 0x0, 0xc0007de1b0)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vulkan.go:734 +0x45 fp=0xc0007ed878 sp=0xc0007ed840 pc=0x7ff66b0d7625
github.com/goki/vgpu/vgpu.(*Vars).DescLayout(0xc0002d67c8, 0x217218c0218)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vgpu/vars.go:320 +0x410 fp=0xc0007edaa0 sp=0xc0007ed878 pc=0x7ff66b1263d0
github.com/goki/vgpu/vgpu.(*Vars).Config(0xc0002d67c8)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vgpu/vars.go:169 +0x1cb fp=0xc0007edbb0 sp=0xc0007edaa0 pc=0x7ff66b1254cb
github.com/goki/vgpu/vgpu.(*Memory).Config(0xc0002d6798, 0x7ff66b129506?)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vgpu/memory.go:100 +0x2b fp=0xc0007edbd8 sp=0xc0007edbb0 pc=0x7ff66b11326b
github.com/goki/vgpu/vgpu.(*System).Config(0xc0002d6710)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vgpu/system.go:267 +0x34 fp=0xc0007edc48 sp=0xc0007edbd8 pc=0x7ff66b11ff74
github.com/goki/vgpu/vdraw.(*Drawer).ConfigSys(0xc0002d6710)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vdraw/config.go:106 +0x988 fp=0xc0007eddc0 sp=0xc0007edc48 pc=0x7ff66b12e808
github.com/goki/vgpu/vdraw.(*Drawer).ConfigSurface(0xc0002d6710, 0xc00059a370, 0xc00089d930?)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/vdraw/vdraw.go:51 +0xaf fp=0xc0007ede00 sp=0xc0007eddc0 pc=0x7ff66b130b0f
github.com/goki/gi/oswin/driver/vkos.(*appImpl).NewWindow.func2()
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:249 +0x127 fp=0xc0007ede80 sp=0xc0007ede00 pc=0x7ff66bc6ec87
github.com/goki/gi/oswin/driver/vkos.(*appImpl).mainLoop(0x7ff66c4952e0)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:159 +0x102 fp=0xc0007edf00 sp=0xc0007ede80 pc=0x7ff66bc6da82
github.com/goki/gi/oswin/driver/vkos.Main(0x7ff66e6d4480?)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:81 +0x96 fp=0xc0007edf18 sp=0xc0007edf00 pc=0x7ff66bc6d5b6
github.com/goki/gi/oswin/driver.driverMain(...)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/driver_vkos.go:18
github.com/goki/gi/oswin/driver.Main(...)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/driver.go:27
github.com/goki/gi/gimain.Main(0x7ff66d832ad0)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/gimain/gimain.go:31 +0x50 fp=0xc0007edf38 sp=0xc0007edf18 pc=0x7ff66bd15eb0
main.main()
        C:/Users/Public/leabra/examples/ra25/ra25.go:48 +0x4a fp=0xc0007edf50 sp=0xc0007edf38 pc=0x7ff66bd19dca
runtime.main()
        C:/Program Files/Go/src/runtime/proc.go:271 +0x28b fp=0xc0007edfe0 sp=0xc0007edf50 pc=0x7ff66ad42aab
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0007edfe8 sp=0xc0007edfe0 pc=0x7ff66ad752c1

goroutine 2 gp=0xc00005a700 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00005dfa8 sp=0xc00005df88 pc=0x7ff66ad42eae
runtime.goparkunlock(...)
        C:/Program Files/Go/src/runtime/proc.go:408
runtime.forcegchelper()
        C:/Program Files/Go/src/runtime/proc.go:326 +0xb8 fp=0xc00005dfe0 sp=0xc00005dfa8 pc=0x7ff66ad42d38
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00005dfe8 sp=0xc00005dfe0 pc=0x7ff66ad752c1
created by runtime.init.6 in goroutine 1
        C:/Program Files/Go/src/runtime/proc.go:314 +0x1a

goroutine 3 gp=0xc00005aa80 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00005ff80 sp=0xc00005ff60 pc=0x7ff66ad42eae
runtime.goparkunlock(...)
        C:/Program Files/Go/src/runtime/proc.go:408
runtime.bgsweep(0xc00003a070)
        C:/Program Files/Go/src/runtime/mgcsweep.go:318 +0xdf fp=0xc00005ffc8 sp=0xc00005ff80 pc=0x7ff66ad2ba5f
runtime.gcenable.gowrap1()
        C:/Program Files/Go/src/runtime/mgc.go:203 +0x25 fp=0xc00005ffe0 sp=0xc00005ffc8 pc=0x7ff66ad20325
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00005ffe8 sp=0xc00005ffe0 pc=0x7ff66ad752c1
created by runtime.gcenable in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:203 +0x66

goroutine 4 gp=0xc00005ac40 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x7ff66da24d30?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006ff78 sp=0xc00006ff58 pc=0x7ff66ad42eae
runtime.goparkunlock(...)
        C:/Program Files/Go/src/runtime/proc.go:408
runtime.(*scavengerState).park(0x7ff66e6d1b80)
        C:/Program Files/Go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00006ffa8 sp=0xc00006ff78 pc=0x7ff66ad29409
runtime.bgscavenge(0xc00003a070)
        C:/Program Files/Go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00006ffc8 sp=0xc00006ffa8 pc=0x7ff66ad299b9
runtime.gcenable.gowrap2()
        C:/Program Files/Go/src/runtime/mgc.go:204 +0x25 fp=0xc00006ffe0 sp=0xc00006ffc8 pc=0x7ff66ad202c5
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006ffe8 sp=0xc00006ffe0 pc=0x7ff66ad752c1
created by runtime.gcenable in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:204 +0xa5

goroutine 5 gp=0xc00005b180 m=nil [finalizer wait]:
runtime.gopark(0x0?, 0xc000042030?, 0x0?, 0xc0?, 0x1000000010?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000061e20 sp=0xc000061e00 pc=0x7ff66ad42eae
runtime.runfinq()
        C:/Program Files/Go/src/runtime/mfinal.go:194 +0x107 fp=0xc000061fe0 sp=0xc000061e20 pc=0x7ff66ad1f3a7
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000061fe8 sp=0xc000061fe0 pc=0x7ff66ad752c1
created by runtime.createfing in goroutine 1
        C:/Program Files/Go/src/runtime/mfinal.go:164 +0x3d

goroutine 18 gp=0xc0000841c0 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006bf50 sp=0xc00006bf30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00006bfe0 sp=0xc00006bf50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006bfe8 sp=0xc00006bfe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 19 gp=0xc000084380 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00006df50 sp=0xc00006df30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00006dfe0 sp=0xc00006df50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00006dfe8 sp=0xc00006dfe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 20 gp=0xc000084540 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000433f50 sp=0xc000433f30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000433fe0 sp=0xc000433f50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000433fe8 sp=0xc000433fe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 6 gp=0xc00005b500 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000071f50 sp=0xc000071f30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000071fe0 sp=0xc000071f50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 21 gp=0xc000084700 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc000435f50 sp=0xc000435f30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc000435fe0 sp=0xc000435f50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc000435fe8 sp=0xc000435fe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 34 gp=0xc000482000 m=nil [GC worker (idle)]:
runtime.gopark(0xa583c803984c8?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc00042ff50 sp=0xc00042ff30 pc=0x7ff66ad42eae
runtime.gcBgMarkWorker()
        C:/Program Files/Go/src/runtime/mgc.go:1310 +0xe5 fp=0xc00042ffe0 sp=0xc00042ff50 pc=0x7ff66ad22465
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc00042ffe8 sp=0xc00042ffe0 pc=0x7ff66ad752c1
created by runtime.gcBgMarkStartWorkers in goroutine 1
        C:/Program Files/Go/src/runtime/mgc.go:1234 +0x1c

goroutine 7 gp=0xc0004821c0 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc0006e5ea0 sp=0xc0006e5e80 pc=0x7ff66ad42eae
runtime.chanrecv(0xc000078240, 0xc0006e5fa8, 0x1)
        C:/Program Files/Go/src/runtime/chan.go:583 +0x3cd fp=0xc0006e5f18 sp=0xc0006e5ea0 pc=0x7ff66ad0ba2d
runtime.chanrecv2(0x0?, 0x0?)
        C:/Program Files/Go/src/runtime/chan.go:447 +0x12 fp=0xc0006e5f40 sp=0xc0006e5f18 pc=0x7ff66ad0b652
github.com/emer/leabra/leabra.(*NetworkStru).ThrWorker(0xc0000c4700, 0x0)
        C:/Users/Public/leabra/leabra/networkstru.go:716 +0xab fp=0xc0006e5fc0 sp=0xc0006e5f40 pc=0x7ff66bc541ab
github.com/emer/leabra/leabra.(*NetworkStru).StartThreads.gowrap1()
        C:/Users/Public/leabra/leabra/networkstru.go:700 +0x25 fp=0xc0006e5fe0 sp=0xc0006e5fc0 pc=0x7ff66bc54045
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0006e5fe8 sp=0xc0006e5fe0 pc=0x7ff66ad752c1
created by github.com/emer/leabra/leabra.(*NetworkStru).StartThreads in goroutine 1
        C:/Users/Public/leabra/leabra/networkstru.go:700 +0xe7

goroutine 8 gp=0xc000482540 m=nil [chan receive]:
runtime.gopark(0xc0002960c0?, 0x217596bd128?, 0x60?, 0x0?, 0x217596b0108?)
        C:/Program Files/Go/src/runtime/proc.go:402 +0xce fp=0xc0005f1940 sp=0xc0005f1920 pc=0x7ff66ad42eae
runtime.chanrecv(0xc0000789c0, 0x0, 0x1)
        C:/Program Files/Go/src/runtime/chan.go:583 +0x3cd fp=0xc0005f19b8 sp=0xc0005f1940 pc=0x7ff66ad0ba2d
runtime.chanrecv1(0xc0005f1a08?, 0x0?)
        C:/Program Files/Go/src/runtime/chan.go:442 +0x12 fp=0xc0005f19e0 sp=0xc0005f19b8 pc=0x7ff66ad0b632
github.com/goki/gi/oswin/driver/vkos.(*appImpl).RunOnMain(0x7ff66c4952e0, 0xc000308000)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:97 +0x72 fp=0xc0005f1a18 sp=0xc0005f19e0 pc=0x7ff66bc6d652
github.com/goki/gi/oswin/driver/vkos.(*appImpl).NewWindow(0x7ff66c4952e0, 0xc00060a400?)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:242 +0x3ce fp=0xc0005f1ae8 sp=0xc0005f1a18 pc=0x7ff66bc6e40e
github.com/goki/gi/gi.NewWindow({0x7ff66c7b0888, 0x4}, {0x7ff66c7dca2f, 0x18}, 0xc00060a400)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/gi/window.go:345 +0x129 fp=0xc0005f1b48 sp=0xc0005f1ae8 pc=0x7ff66b39b529
github.com/goki/gi/gi.NewMainWindow({0x7ff66c7b0888, 0x4}, {0x7ff66c7dca2f, 0x18}, 0x640, 0x4b0)
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/gi/window.go:393 +0x138 fp=0xc0005f1ba0 sp=0xc0005f1b48 pc=0x7ff66b39b978
github.com/emer/emergent/egui.(*GUI).MakeWindow(0x7ff66e6d4ca8, {0x7ff66c7151a0, 0x7ff66e6d4480}, {0x7ff66c7b0888, 0x4}, {0x7ff66c7dca2f, 0x18}, {0x7ff66c81187e, 0x72})
        C:/Users/bdkn9463/go/pkg/mod/github.com/emer/[email protected]/egui/gui.go:67 +0xd2 fp=0xc0005f1c18 sp=0xc0005f1ba0 pc=0x7ff66bd0ab52
main.(*Sim).ConfigGui(0x7ff66e6d4480)
        C:/Users/Public/leabra/examples/ra25/ra25.go:776 +0x6b fp=0xc0005f1f88 sp=0xc0005f1c18 pc=0x7ff66bd1d5cb
main.guirun()
        C:/Users/Public/leabra/examples/ra25/ra25.go:56 +0x26 fp=0xc0005f1fa0 sp=0xc0005f1f88 pc=0x7ff66bd19e06
main.main.func1()
        C:/Users/Public/leabra/examples/ra25/ra25.go:49 +0xf fp=0xc0005f1fb0 sp=0xc0005f1fa0 pc=0x7ff66bd1f2cf
github.com/goki/gi/gimain.Main.func1({0x0?, 0x0?})
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/gimain/gimain.go:32 +0x13 fp=0xc0005f1fc0 sp=0xc0005f1fb0 pc=0x7ff66bd15ef3
github.com/goki/gi/oswin/driver/vkos.Main.func1()
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:78 +0x28 fp=0xc0005f1fe0 sp=0xc0005f1fc0 pc=0x7ff66bc76628
runtime.goexit({})
        C:/Program Files/Go/src/runtime/asm_amd64.s:1695 +0x1 fp=0xc0005f1fe8 sp=0xc0005f1fe0 pc=0x7ff66ad752c1
created by github.com/goki/gi/oswin/driver/vkos.Main in goroutine 1
        C:/Users/bdkn9463/go/pkg/mod/github.com/goki/[email protected]/oswin/driver/vkos/app.go:77 +0x8a
rax     0x0
rbx     0x0
rcx     0x217218c2230
rdx     0x10
rdi     0x217218c2230
rsi     0x217218c21d0
rbp     0x6c0b7ff7c0
rsp     0x6c0b7ff480
r8      0x40
r9      0x1
r10     0x0
r11     0x20fffcfadeefbb01
r12     0x40
r13     0x80
r14     0x10
r15     0x0
rip     0x7ffd650389aa
rflags  0x10202
cs      0x33
fs      0x53
gs      0x2b

A very similar error popped up when I tried running the widgets example during the GoGi installation process.

Judging from a quick glance at this error message, the issue seems to be related to Vulkan somehow. I did try a fresh installation of the Vulkan SDK, but unfortunately the error persisted. I also saw some recent discussion about an outdated go.mod file in the GitHub issues, but I'm not sure it applies to me, since I've been using the latest versions of the Leabra files (as of 3/29/24).

Vulkan error: incompatible driver occurred when running the ra25 example

I installed leabra strictly according to the wiki, but when running the ra25 example, the error below occurred:

panic: vulkan error: vulkan error: incompatible driver (-9) on /Users/wjn/gopath/pkg/mod/github.com/goki/[email protected]/vgpu/errors.go:23 (0x1005b631e)
NewError: pc, _, _, ok := runtime.Caller(0)

The compilation went well, and the code can run without the GUI. Could you give me some instructions to resolve this problem?

`etable.py` needs `from leabra import env`

After adding a call to self.ConfigPats() in the ra25.py demo project (from .../leabra/examples/ra25/), entering the function call patgen.PermutedBinaryRows(dt.Cols[1], 6, 1, 0) (~line 766) ends up in etable.py, where env is a missing variable. When I add the `from leabra import env` line locally and then do `make install` in .../leabra/python/ (important: do not do a plain `make`, since that appears to restore the env-less version), it fixes the problem, only to seg fault further downstream (see .../emer/leabra/ Issue #1).

confirm XCAL check-mark equations with plot

it's been a while -- double-check.

also, add a note in the README about AvgSLrn and how extensive experiments with different rate constants weren't able to match the performance of the current setup.
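For the check-mark confirmation, here is a standalone sketch of the function in its standard form; the constant names and default values are assumptions (roughly mirroring XCalParams), not copied from learn.go. Running it prints x / dwt pairs that can be plotted directly.

// Sketch of the XCal "check mark" function: zero below a small threshold,
// a negative-going segment up to the reversal point dRev*thrP, then the
// linear (x - thrP) segment above it.
package main

import "fmt"

const (
    dThr = 0.0001 // minimum coproduct needed to drive any weight change (assumed default)
    dRev = 0.1    // reversal point, as a proportion of the threshold thrP (assumed default)
)

func xcal(x, thrP float32) float32 {
    switch {
    case x < dThr:
        return 0
    case x > thrP*dRev:
        return x - thrP
    default:
        return -x * (1 - dRev) / dRev // continuous with (x - thrP) at x = dRev*thrP
    }
}

func main() {
    thrP := float32(0.5)
    for x := float32(0); x <= 1.0; x += 0.05 {
        fmt.Printf("%.2f\t%.4f\n", x, xcal(x, thrP)) // two columns, ready for plotting
    }
}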

leabra.Prjn does not implement emer.Prjn properly

Looks like the return types on three methods are not right. Replacing lines 64-67 of leabra/prjn.go seems to fix it:

func (pj *Prjn) SetClass(cls string)                   { pj.Cls = cls }
func (pj *Prjn) SetPattern(pat prjn.Pattern) emer.Prjn { pj.Pat = pat; return pj }
func (pj *Prjn) SetType(typ emer.PrjnType)             { pj.Typ = typ }

I've also started using this pattern to verify that my code is implementing interfaces correctly:

 var _ emer.Prjn = (*Prjn)(nil) // for verification that the struct implements the interface
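For context: that blank-identifier assignment is the standard Go idiom for a compile-time interface check. It compiles to nothing and has no runtime cost, but the build fails immediately if *Prjn ever stops satisfying emer.Prjn.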

Python install

Intel Mac, macOS 12.3
Go installed and Python 3.9

All requirements seem to be satisfied, but make fails with:
...
--- Processing package: github.com/emer/etable/convolve ---

--- Processing package: github.com/emer/etable/efile ---
2022/05/08 19:18:00 internal error: package "fmt" without types was imported from "github.com/emer/etable/eplot"
make: *** [gen] Error 1

$

I can attach the entire output if you'd like.

Mistake in ra25 code

The current ra25 code has a simple mistake in the ConfigPats and OpenPats functions, copied below. ConfigPats saves the generated file with a .csv extension and the etable.Comma setting, whereas OpenPats opens a file with a .tsv extension and the etable.Tab setting; presumably they should be consistent. I hit this when I used ConfigPats to generate the structure of my training data and then tried to use OpenPats to read the training data back in later, which didn't work: it failed to find the file. When I made them consistent, everything was fine.

func (ss *Sim) ConfigPats() {
    dt := ss.Pats
    dt.SetMetaData("name", "TrainPats")
    dt.SetMetaData("desc", "Training patterns")
    sch := etable.Schema{
        {"Name", etensor.STRING, nil, nil},
        {"Input", etensor.FLOAT32, []int{5, 5}, []string{"Y", "X"}},
        {"Output", etensor.FLOAT32, []int{5, 5}, []string{"Y", "X"}},
    }
    dt.SetFromSchema(sch, 25)

    patgen.PermutedBinaryRows(dt.Cols[1], 6, 1, 0)
    patgen.PermutedBinaryRows(dt.Cols[2], 6, 1, 0)
    dt.SaveCSV("random_5x5_25_gen.csv", etable.Comma, etable.Headers)
}

func (ss *Sim) OpenPats() {
    dt := ss.Pats
    dt.SetMetaData("name", "TrainPats")
    dt.SetMetaData("desc", "Training patterns")
    err := dt.OpenCSV("random_5x5_25.tsv", etable.Tab)
    if err != nil {
        log.Println(err)
    }
}
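A minimal consistent fix (a sketch only; standardizing on the .tsv / etable.Tab side and a single file name, though which convention to keep is a judgment call) would change just the save and open calls:

// In ConfigPats: write with the same name, extension, and delimiter that OpenPats uses.
dt.SaveCSV("random_5x5_25_gen.tsv", etable.Tab, etable.Headers)

// In OpenPats: read back the file that ConfigPats actually wrote.
err := dt.OpenCSV("random_5x5_25_gen.tsv", etable.Tab)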

Emergent v1.4.31 seems to break Leabra

Hi there,

I'm currently setting up a new model in a separate directory outside the main Leabra folder. When I initially tried to compile it, I got these errors:

# github.com/emer/leabra/leabra
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:167:45: not enough arguments in call to ly.LeabraLay.UnitVal1D
        have (int, int)
        want (int, int, int)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:190:37: not enough arguments in call to ly.LeabraLay.UnitVal1D
        have (int, int)
        want (int, int, int)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:232:37: not enough arguments in call to ly.LeabraLay.UnitVal1D
        have (int, int)
        want (int, int, int)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:250:38: not enough arguments in call to ly.LeabraLay.UnitVal1D
        have (int, int)
        want (int, int, int)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:360:26: cannot use ly (variable of type *Layer) as emer.Layer value in argument to emer.SendNameTry: *Layer does not implement emer.Layer (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:363:30: cannot use ly (variable of type *Layer) as emer.Layer value in argument to emer.SendNameTypeTry: *Layer does not implement emer.Layer (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:366:26: cannot use ly (variable of type *Layer) as emer.Layer value in argument to emer.RecvNameTry: *Layer does not implement emer.Layer (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\layer.go:369:30: cannot use ly (variable of type *Layer) as emer.Layer value in argument to emer.RecvNameTypeTry: *Layer does not implement emer.Layer (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\network.go:43:9: cannot use &Layer{} (value of type *Layer) as emer.Layer value in return statement: *Layer does not implement emer.Layer (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\network.go:48:9: cannot use &Prjn{} (value of type *Prjn) as emer.Prjn value in return statement: *Prjn does not implement emer.Prjn (missing method AddClass)
..\..\bdkn9463\go\pkg\mod\github.com\emer\[email protected]\leabra\network.go:48:9: too many errors

The go.mod file that I initially generated set the required Emergent version to v1.4.31, but the compilation errors went away when I set it back to v1.3.53 (the version specified in the Leabra repository's go.mod file). This was just something I tried randomly, and I'm not sure why it worked.
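For anyone hitting the same thing, the workaround amounts to pinning emergent in the new model's go.mod to the version the Leabra repository itself requires (a sketch; v1.3.53 is the version mentioned above):

// in the new model's go.mod: pin emergent to match Leabra's own requirement
require github.com/emer/emergent v1.3.53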

Instructions about nogui should note that sims cannot be run nogui without modification

Given that the various web pages encourage people to use some of the sims as potential starting points for projects, it would be good to explicitly note that while the ra25 project can be run nogui, the sims cannot because they lack the necessary code.

In hindsight it is obvious, but the ability to run nogui is not a general characteristic of an emergent program; it requires something like the specific code found in the ra25 project, as sketched below.
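For reference, the relevant piece in ra25 has roughly the following shape (a sketch, not copied verbatim; a sim whose main unconditionally starts the GUI has no way to run headless):

// Sketch of the kind of entry point nogui running requires, following the
// general shape of the ra25 example: any command-line arguments mean run
// headless via CmdArgs, otherwise start the GUI event loop.
func main() {
    TheSim.New()
    TheSim.Config()
    if len(os.Args) > 1 {
        TheSim.CmdArgs() // parse flags and run training without a window
    } else {
        gimain.Main(func() {
            guirun()
        })
    }
}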

WtFmDWt call timing issues

ss.Net.WtFmDWt() is called at the start of AlphaCyc, so the previous DWt call leaves the weight changes visible in the GUI.

However, for simulations like the hippocampus (and also the weight priming model from ccn sims) where we test and score item-level performance after each epoch of training, the last item does not benefit from learning! This systematically slows the measured learning performance!

The specific sims can be fixed, but it would be good to have a more general fix for this issue.
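In a specific sim, the workaround can be as simple as flushing the pending weight changes before the epoch-end test pass; a sketch, assuming an ra25-style Sim with a TestAll method (the wrapper name is hypothetical):

// Per-sim workaround sketch (Sim, Net, and TestAll assumed to follow the ra25
// example): apply the accumulated DWt now rather than waiting for the next
// AlphaCyc, so the last trained item is also reflected in the test scores.
func (ss *Sim) TestAtEpochEnd() {
    ss.Net.WtFmDWt() // flush pending weight changes into the weights
    ss.TestAll()     // item-level scoring now sees fully updated weights
}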

Probably the best solution is just for the NetView to save all the synapse-level vars in a separate data structure at the point of the update call, without any history, and be done with it.

slice bounds out of range error in hip and hip_bench

When training runs through the last epoch, it shows the attached error. I believe it's caused by the ss.LogRun function.

panic: runtime error: slice bounds out of range [1:0]

goroutine 45 [running]:
main.(*Sim).LogRun(0x2a64240, 0xc0009ab770)
        C:/Users/Liu/OneDrive/UCD_Neuroscience/Modeling/leabra/examples/hip/hip.go:1527 +0x9d8
main.(*Sim).RunEnd(0x2a64240)
        C:/Users/Liu/OneDrive/UCD_Neuroscience/Modeling/leabra/examples/hip/hip.go:645 +0x45
main.(*Sim).TrainTrial(0x2a64240)
        C:/Users/Liu/OneDrive/UCD_Neuroscience/Modeling/leabra/examples/hip/hip.go:626 +0x1f2
main.(*Sim).TrainRun(0x2a64240)
        C:/Users/Liu/OneDrive/UCD_Neuroscience/Modeling/leabra/examples/hip/hip.go:794 +0x45
created by main.(*Sim).ConfigGui.func12
        C:/Users/Liu/OneDrive/UCD_Neuroscience/Modeling/leabra/examples/hip/hip.go:1736 +0x76
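If the LogRun windowing is indeed the cause, a minimal guard would look like the following sketch (purely hypothetical, not the actual hip.go code): "slice bounds out of range [1:0]" typically means the window start ends up past the end of an empty or too-short log.

// Hypothetical guard: clamp the start of an N-row trailing window so it never
// goes negative when the log has fewer rows than requested.
func lastNRows(rows, n int) (st, end int) {
    end = rows
    st = rows - n
    if st < 0 {
        st = 0 // fewer rows than requested: use whatever exists
    }
    return st, end
}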

TD algorithm not reflecting negative externally-delivered reinforcements (i.e., NegPV)

In the td.go file of the pbwm package, the function on the RewInteg layer does not reflect NegPV values clamped on the Rew layer. Note how the plus-phase Act takes only the nrn.Ge value as its current Reward value, which presumably reflects the net input from the Reward layer only, and Ge is positive-rectified? Here is the relevant function:

func (ly *TDRewIntegLayer) ActFmG(ltime *leabra.Time) {
    rply, _ := ly.RewPredLayer()
    if rply == nil {
        return
    }
    rpActP := rply.Neurons[0].ActP
    rpAct := rply.Neurons[0].Act
    for ni := range ly.Neurons {
        nrn := &ly.Neurons[ni]
        if nrn.IsOff() {
            continue
        }
        if ltime.Quarter == 3 { // plus phase
            nrn.Act = nrn.Ge + ly.RewInteg.Discount*rpAct
        } else {
            nrn.Act = rpActP // previous actP
        }
    }
}

ra25 can't find file

I recently reinstalled everything, starting with Go 1.13.5, and then rebuilt ra25. Now when I go to run ra25 I get the following message, despite the fact that the "missing" file is in the same directory as the binary. ra25 opens, but when I try to initialize it, it crashes. Is it a problem with the path? I get the same problem when I try to run the binary from the ra25 examples folder. It used to work before I reinstalled a version of Go that uses go.mod.

(base) StephenReadsMBP:~ read$ /Users/read/go/bin/ra25 ; exit;
2020/01/12 16:25:48 open random_5x5_25.tsv: no such file or directory
2020/01/12 16:25:48 open random_5x5_25.tsv: no such file or directory
OpenGL version 4.1 INTEL-12.10.14
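For what it's worth, the file name is a relative path, so Go resolves it against the current working directory rather than the directory containing the binary; running the binary from a different directory therefore fails to find the file. A hypothetical helper (name assumed, not part of ra25; needs the "os" and "path/filepath" imports) that resolves the file next to the executable instead:

// Hypothetical helper: resolve a data file relative to the running executable
// rather than the current working directory.
func exeRelPath(name string) string {
    exe, err := os.Executable()
    if err != nil {
        return name // fall back to the plain relative path
    }
    return filepath.Join(filepath.Dir(exe), name)
}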
