mlech26l / ncps

PyTorch and TensorFlow implementation of NCP, LTC, and CfC wired neural models

Home Page: https://www.nature.com/articles/s42256-020-00237-3

License: Apache License 2.0

Python 100.00%
ncp recurrent-neural-network nature-machine-intelligence tensorflow keras cfc

ncps's People

Contributors

jm12138, kwyip, mlech26l, oidlichtnwoada, raminmh, shuboyang


ncps's Issues

Getting a weird error

When implementing the LTCCell as in the examples, I get an error like this: 2022-01-25 20:03:29.675637: F tensorflow/core/framework/tensor.cc:681] Check failed: IsAligned() ptr = 0x29613b360
My implementation looks like this:

wiring = kncp.wirings.FullyConnected(nn1lstm, 64)
ltc_cell = LTCCell(wiring)
model = Sequential()
model.add(RNN(ltc_cell, return_sequences=True, input_shape=(config.num_steps, config.input_size)))
model.add(Dense(config.output_size))
model.compile(optimizer=optimizer, loss=loss_function, metrics=['Accuracy'])
model.build()

I am using tensorflow-macos v2.7 on a Mac M1 with Metal.
What can I do?
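For reference, the basic pattern from the repository's examples looks roughly like the sketch below (a paraphrase of the sinusoidal example, assuming the kerasncp package; it may help to compare the input_shape handling against the snippet above):

    import tensorflow as tf
    from tensorflow import keras
    import kerasncp as kncp
    from kerasncp.tf import LTCCell

    # Minimal sketch: fully connected wiring with 8 units, sequences of 2-feature inputs.
    wiring = kncp.wirings.FullyConnected(8)
    ltc_cell = LTCCell(wiring)
    model = keras.models.Sequential([
        keras.layers.InputLayer(input_shape=(None, 2)),      # (time, features)
        keras.layers.RNN(ltc_cell, return_sequences=True),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")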

Unable to save model to file

Hi there,

I have created a simple NCP model based on the examples on your GitHub page.
The model trains correctly, but for some reason I am unable to save the model afterwards.

I have tried the following commands in my Python script:

model.save('ncp-model.h5')
Result: Program terminates with the following error message: NotImplementedError: Layer LTCCell has arguments in __init__ and therefore must override get_config.

model.save('ncp-model') (i.e. no h5 extension).
Result: Program generates the following error message: WARNING:tensorflow:From /home/anyone/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-11-27 11:48:26.703819: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /home/anyone/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.

In both of the above cases, a saved model file is created on my hard drive, but TensorFlow cannot load it back in again.

Details of my setup are as follows:

  • HP Omen laptop, 32 GB system memory
  • NVIDIA GTX-1070 GPU, 8 GB VRAM
  • Ubuntu 18.04
  • TensorFlow 2.3.1
  • Python 3.7

Do you have any idea what the problem could be? Any assistance you can provide would be greatly appreciated!
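One workaround that often helps until get_config is implemented for the cell (a minimal sketch using only standard Keras calls, not a fix from this repo): save and restore just the weights, and rebuild the architecture from the same Python code.

    # Save only the weights; the architecture is reconstructed in code before loading.
    model.save_weights('ncp-model-weights.h5')

    # ... later, after building an identical `model` again:
    model.load_weights('ncp-model-weights.h5')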

Thanks & Regards,
Brendan.

Compatibility with TFLite fused LSTM operations

This is a follow-on from issue https://github.com/mlech26l/keras-ncp/issues/20, but with broader implications and applications. A related issue is discussed here

I am attempting to deploy an NCP model on specialized hardware. To do so, I need to convert a trained NCP model to .tflite. While it seems possible to run a straightforward conversion without error, the resulting .tflite model does not contain a fused LSTM operator.

It might be too large of an ask, but is there any possibility of supporting the fused LSTM API? I am a bit out of my depth on how to even begin that process.
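For context, the conversion step itself is just the generic TFLite converter API; a sketch, assuming `model` is the trained Keras model produced by train_hello_ncp_tflite.py:

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("hello_ncp.tflite", "wb") as f:
        f.write(tflite_model)

As far as I understand, the converter only emits the fused UnidirectionalSequenceLSTM op when it recognizes a stock keras.layers.LSTM; a custom RNN cell such as LTCCell is lowered to a generic WHILE loop instead, which is the op that nntool rejects below.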

In order to demonstrate what I hope to achieve, and what is not working, I've included a zip file with the following contents:

hello_ncp.zip

  • hello_ncp_data.py: data for training NCP model based on the time series example
  • train_hello_ncp_tflite.py: script for training the NCP (or vanilla LSTM) and converting to .tflite
  • environment.yml: conda environment to reproduce my dependencies
  • requirements.txt: list of resulting dependencies from environment.yml

To run a successful training and tflite conversion (because it uses a vanilla LSTM instead of NCP):

conda env create -f environment.yml
conda activate hello_ncp
python train_hello_ncp_tflite.py  --substitute-lstm    # uses LSTM instead of NCP, saves to hello_ncp.tflite
git clone git@github.com:GreenWaves-Technologies/gap_sdk.git    # toolset for deploying tflite models on specialized hardware
source ./gap_sdk/configs/ai_deck.sh 
nntool
open hello_ncp.tflite

The following will fail at the last step, because the NCP was not successfully converted to a fused LSTM operation during the tflite conversion:

conda env create -f environment.yml
conda activate hello_ncp
python train_hello_ncp_tflite.py   # uses NCP, saves to hello_ncp.tflite
git clone git@github.com:GreenWaves-Technologies/gap_sdk.git    # toolset for deploying tflite models on specialized hardware
source ./gap_sdk/configs/ai_deck.sh 
nntool
set debug true
open hello_ncp.tflite

Result (I can provide further traceback if helpful):

ValueError: no handler found for WHILE
EXCEPTION of type 'ValueError' occurred with message: 'no handler found for WHILE'

GPU support for NCP model training?

Hi there,

Is it possible to train an NCP model in a TensorFlow Docker container with GPU support?

I have been getting good results by training my NCP model with the CPU-only version of TensorFlow, but I would really like to train on the GPU in order to reduce training time.

Is it possible to do this? I have tried it once using a TensorFlow Docker container with GPU support, but I got the following error message: ModuleNotFoundError: No module named 'kerasncp'.

Thanks,
Brendan.

How can I obtain the 193 GB dataset?

Hello, your excellent work has inspired me a lot, so I am reproducing this paper. I hope to obtain the 193 GB dataset to complete this work. If possible, please send it to my email: [[email protected]]. Thank you for your reply.

dimension of the input

Hi,

We are following your work, and I have a question about how to reshape our image sequence, which has shape (10000, 384, 640, 3), to fit the input requirements of the LTC model. Are (10000, 1, 384, 640, 3) and (10000, 6, 384, 640, 3) equivalent as input data?

Thanks for your reply.
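To make the shapes concrete, here is a small plain-NumPy sketch (an illustration, not code from the repo). The second axis is the time dimension that the LTC unrolls over, so (10000, 1, 384, 640, 3) and (10000, 6, 384, 640, 3) are not interchangeable:

    import numpy as np

    frames = np.zeros((10000, 384, 640, 3), dtype=np.float32)   # stand-in for the real frames

    # Option A: treat every frame as its own length-1 sequence -> (10000, 1, 384, 640, 3)
    single_step = frames[:, np.newaxis]

    # Option B: group consecutive frames into sequences of 6 -> (1666, 6, 384, 640, 3)
    seq_len = 6
    n_seq = frames.shape[0] // seq_len
    sequences = frames[: n_seq * seq_len].reshape(n_seq, seq_len, 384, 640, 3)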

Stacking wirings.FullyConnected with nn.Linear layers slows the training by 120x

Before describing my issue, I would like to thank the authors for the incredible paper "Neural Circuit Policies", where you show how artificial LTC neurons can perform autonomous driving robustly with a wiring architecture inspired by C. elegans.

Motivated by the provided Keras code, I tried stacking up the wirings.FullyConnected() layer after two linear layers. Here is a piece of code:

class QNetwork_w_LTC(nn.Module):
    def __init__(self, env):
        super(QNetwork_w_LTC, self).__init__()
        self.fc1 = nn.Linear(np.array(env.single_observation_space.shape).prod() + np.prod(env.single_action_space.shape), 64)
        self.fc2 = nn.Linear(64, 64)
        ###########################
        self.wiring = kncp.wirings.FullyConnected(units=16, output_dim=1)
        self.ltc_cell = LTCCell(wiring=self.wiring, in_features=5)
        ##########################
        # self.fc3 = nn.Linear(64, 1)

    def forward(self, x, a):
        x = torch.cat([x, a], 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        #######################
        x = x.unsqueeze(-1)
        ltc_sequence = RNNSequence(self.ltc_cell)
        x = ltc_sequence.forward(x)
        #######################
        return x

In place of the generic Q-network used in Deep RL algos like this:

class QNetwork(nn.Module):
    def __init__(self, env):
        super(QNetwork, self).__init__()
        self.fc1 = nn.Linear(np.array(env.single_observation_space.shape).prod() + np.prod(env.single_action_space.shape), 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 1)

    def forward(self, x, a):
        x = torch.cat([x, a], 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

But during training, the vanilla nn.Linear network took 3 hours for 1.5 million timesteps, while the nn.Linear layers stacked with the wirings.FullyConnected() layer took 5 hours for just 20k timesteps.

According to my guess, the decrease in speed is due to the following reasons:

  1. Maybe I am using many more parameters than I should. (I really could not understand what output_dim and in_features are used for.)
  2. Maybe implementing the wirings.py file in PyTorch, i.e. making every matrix/tensor a torch.Tensor, could increase the speed?

Could you please point out whether there are any other options to speed up the code?

Thanks in advance for helping me out!
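One restructuring worth trying (a sketch only; it assumes the kerasncp PyTorch wrappers used in the snippet above, i.e. a cell with the usual (input, state) -> (output, new_state) convention and a state_size attribute): build the recurrent wrapper once in __init__ instead of on every forward pass, and feed the LTC all 64 features as a single time step so in_features matches.

    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import kerasncp as kncp
    from kerasncp.torch import LTCCell

    class RNNSequence(nn.Module):
        # Minimal stand-in for the sequence-unrolling helper from the repo's PyTorch example.
        def __init__(self, cell):
            super().__init__()
            self.cell = cell

        def forward(self, x):                             # x: (batch, time, features)
            hx = torch.zeros(x.size(0), self.cell.state_size, device=x.device)
            outputs = []
            for t in range(x.size(1)):
                out, hx = self.cell(x[:, t], hx)
                outputs.append(out)
            return torch.stack(outputs, dim=1)

    class QNetworkLTC(nn.Module):
        def __init__(self, env):
            super().__init__()
            in_dim = int(np.array(env.single_observation_space.shape).prod()
                         + np.prod(env.single_action_space.shape))
            self.fc1 = nn.Linear(in_dim, 64)
            self.fc2 = nn.Linear(64, 64)
            wiring = kncp.wirings.FullyConnected(units=16, output_dim=1)
            # Constructed once here, not inside forward().
            self.ltc_sequence = RNNSequence(LTCCell(wiring=wiring, in_features=64))

        def forward(self, x, a):
            x = torch.cat([x, a], 1)
            x = F.relu(self.fc2(F.relu(self.fc1(x))))   # (batch, 64)
            return self.ltc_sequence(x.unsqueeze(1))     # add a time axis: (batch, 1, 64)

Even with this change, some slowdown relative to a plain nn.Linear head is expected, since the LTC cell integrates an ODE over several internal solver steps on every call.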

Issues on recurrent connections in command layer for CfC

The closed-form solution of the ODE needs the current state of the input (f(I(t))) to compute the current state of the neuron x(t), so I am curious how the recurrent connections in the 'command neuron' layer of your NCP are implemented. The problem is that a command neuron cannot access its own current state to compute itself. So I checked the source code, and it seems that the recurrent part in the command layer does not work, because the command-layer-to-command-layer part of the adjacency matrix does not actually take part in the computation. So how is the recurrent connection implemented under the CfC solution?

The matrix circled in red (screenshot attached) does not include the command-to-command neuron connections.

The numpy version

Hi, which NumPy version should be used?
When allow_pickle=True, it reports: OSError: Failed to interpret file 'D:/Workspace/Pycharm/keras-ncp/kerasncp\datasets\icra2020_lidar_collision_avoidance.npz' as a pickle.
When allow_pickle=False, it reports: ValueError: Cannot load file containing pickled data when allow_pickle=False

about WiredCfcCell and CfcCell

Hi, I've read your recent paper, "Closed-form Continuous-time Neural Networks". It is really excellent work on how to translate a trained LTC network into its closed-form variant and construct a CfC neural network. But I cannot find open-source code for WiredCfcCell and CfcCell in keras-ncp. So, is the code open? If it is open source, can I get a copy of the source code for academic research?
My email is [email protected].
Thanks again.

Evaluate model

Thank you for your nice work. I am still working on the LTC network and would like to ask about model evaluation. I trained an LTC network with 64 neurons and already have the best model weights. However, I would like to drop some connections of the LTC network (for example, randomly drop 20% of the wiring connections). For training I used fully connected wiring. When testing the model, I tried to switch to a random wiring instead of the fully connected one. However, I got exactly the same result when testing with the random wiring as with the original fully connected wiring and the same model. It looks like the wiring is fixed once the model is built, so it only ever uses the fully connected wiring (because that is what I trained with). I wonder if there is any way to drop some connections among the LTC neurons while evaluating the model.
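One post-hoc experiment that might approximate this (a sketch using only generic Keras weight manipulation, not an API of this repo; which tensor actually holds the synaptic weights depends on the cell implementation, so inspect the weight names first): zero out a random fraction of the trained synaptic weight tensor before evaluating.

    import numpy as np

    # `rnn_layer` is a hypothetical handle to the keras.layers.RNN wrapping the trained LTCCell.
    for var in rnn_layer.weights:
        print(var.name, var.shape)           # locate the synaptic weight tensor (e.g. a '.../w:0' variable)

    weights = rnn_layer.get_weights()
    idx = 0                                   # index of the synaptic weight tensor found above (assumption)
    keep = (np.random.rand(*weights[idx].shape) > 0.2).astype(weights[idx].dtype)
    weights[idx] = weights[idx] * keep        # drop roughly 20% of the connections
    rnn_layer.set_weights(weights)
    # then evaluate the model as usual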

About the 193g dataset

Hi, I've read your paper and it's really nice work! I want to do some experiments with NCP on autonomy. So would you please send me a download link for the dataset?
Thanks again.
[email protected]

'NoneType' object cannot be interpreted as an integer

Hey everyone, I am trying to implement NCP for a time series model, defined below.

class Model:
    def __init__(self, model_name, x_train):
        self.model_name = model_name
        self.model = self.buildModel(x_train)
        print("> New model initialized: ", model_name)

    def buildModel(self, x_train):
        ncp_wiring = kncp.wirings.NCP(
            inter_neurons=20,              # Number of inter neurons
            command_neurons=10,            # Number of command neurons
            motor_neurons=5,               # Number of motor neurons
            sensory_fanout=4,              # How many outgoing synapses each sensory neuron has
            inter_fanout=5,                # How many outgoing synapses each inter neuron has
            recurrent_command_synapses=6,  # How many recurrent synapses are in the command neuron layer
            motor_fanin=4,                 # How many incoming synapses each motor neuron has
        )
        # Overwrite some of the initialization ranges
        ltc_cell = LTCCell(ncp_wiring, initialization_ranges={"w": (0.2, 2.0)})
        height, width, channels = (78, 200, 3)
        model = Sequential()
        x = len(x_train[0])

        model.add(LSTM(256, input_shape=(1, x), return_sequences=True, activation="relu"))
        model.add(Dropout(0.2))
        model.add(BatchNormalization())

        model.add(LSTM(128, input_shape=(1, x), return_sequences=True, activation="relu"))
        model.add(Dropout(0.2))
        model.add(BatchNormalization())

        model.add(LSTM(128, input_shape=(1, x), activation="relu"))
        model.add(Dropout(0.2))
        model.add(BatchNormalization())

        model.add(RNN(ltc_cell))

        model.add(Dense(32, activation="relu"))
        model.add(Dropout(0.2))

        model.add(Dense(2, activation="softmax"))
        # model.add(keras.layers.RNN(ltc_cell, return_sequences=True))

        optimizer = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
        model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)

        return model

I then get the following error,

File "F:\Dropbox\AIML EDUCATION\AIMLEDUCATION\Model.py", line 19, in __init__
self.model = self.buildModel(x_train)

File "F:\Dropbox\AIML EDUCATION\AIMLEDUCATION\Model.py", line 51, in buildModel
model.add(RNN(ltc_cell))
File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-
packages\tensorflow\python\training\tracking\base.py", line 530, in _method_wrapper
result = method(self, *args, **kwargs)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\sequential.py", line 217, in add
output_tensor = layer(self.outputs[0])

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\layers\recurrent.py", line 659, in call
return super(RNN, self).call(inputs, **kwargs)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\base_layer.py", line 977, in call
input_list)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\base_layer.py", line 1115, in _functional_construction_call inputs, input_masks, args, kwargs)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\base_layer.py", line 848, in _keras_tensor_symbolic_call return self._infer_output_signature(inputs, args, kwargs, input_masks)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\base_layer.py", line 886, in _infer_output_signature
self._maybe_build(inputs)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\engine\base_layer.py", line 2659, in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\keras\layers\recurrent.py", line 577, in build
self.cell.build(step_input_shape)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\kerasncp\tf\ltc_cell.py", line 131, in build
self._wiring.build(input_dim)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\kerasncp\wirings\wirings.py", line 150, in build
super().build(input_shape)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\kerasncp\wirings\wirings.py", line 44, in build
self.set_input_dim(input_dim)

File "C:\Users\Denis\anaconda3\envs\tf1\lib\site-packages\kerasncp\wirings\wirings.py", line 55, in set_input_dim
[input_dim, self.units], dtype=np.int32

TypeError: 'NoneType' object cannot be interpreted as an integer

Any help would be appreciated.

AttributeError: 'WiredCfCCell' object has no attribute 'register_module'

When using the 'AutoNCP' method, I ran into the error in the title. I then went into the file "...\ncps\torch\wired_cfc_cell.py" and found the call to 'self.register_module', which does not seem to be declared anywhere (or I just can't find it). I wonder where 'register_module' comes from. I would appreciate it if you could help me.
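For what it's worth, register_module does not need to be declared inside WiredCfCCell: it appears to be inherited from torch.nn.Module, where recent PyTorch releases provide it as an alias of add_module. On an older PyTorch the attribute is missing, which would produce exactly this AttributeError (please verify against your installed version):

    import torch
    import torch.nn as nn

    print(torch.__version__)
    print(hasattr(nn.Module, "register_module"))   # True on recent PyTorch releases

    class Demo(nn.Module):
        def __init__(self):
            super().__init__()
            # Where available, this is equivalent to self.add_module("proj", nn.Linear(4, 2)).
            self.register_module("proj", nn.Linear(4, 2))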

NCP and LTC mixture

If I understand correctly, this repo is tied to both the LTC and NCP papers.
But the LTC in its paper is described in the general case, without the wirings class that is used in NCP.
In this repo the LTC cell is strongly based on the NCP architecture.
Can a different network wiring be used with the LTC cell using the code from this repo?

Example of stacking LTC with convolutional layers on pytorch version

Hello! Thank you for providing such great work for many beginners like me to learn from. I have recently been using the PyTorch framework to build an LTC model, but I can never get a convolutional layer and the LTC to work well together; it always reports a data-dimension mismatch error, which is very frustrating. I would appreciate an example of stacking convolutional layers with LTC in the PyTorch framework. Looking forward to your reply!
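A rough sketch of one way to wire this up (not code from the repo; it assumes an LTC/CfC module `ltc` that consumes tensors of shape (batch, time, features), as in the PyTorch snippets elsewhere in these issues): fold the time axis into the batch axis for the convolutional feature extractor, then unfold it again before the recurrent part.

    import torch
    import torch.nn as nn

    class ConvLTC(nn.Module):
        def __init__(self, ltc, feat_dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            self.ltc = ltc   # e.g. an LTC sequence module expecting (batch, time, feat_dim)

        def forward(self, x):                                    # x: (batch, time, 3, H, W)
            b, t = x.shape[:2]
            f = self.features(x.reshape(b * t, *x.shape[2:]))    # conv sees (batch*time, 3, H, W)
            f = f.reshape(b, t, -1)                              # back to (batch, time, feat_dim)
            return self.ltc(f)                                   # recurrence runs over the time axis

The usual dimension-mismatch errors come from feeding the 5-D video tensor directly into Conv2d (which expects 4-D input) or a 4-D tensor into the recurrent layer (which expects 3-D); the reshape pair above is what keeps both sides consistent.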

unable to train NCP using gradient-based reinforcement learning

I am currently trying to train an NCP architecture using a Q-learning approach on the OpenAI Gym CartPole-v0 environment. I'm running into problems, however, as the agent simply refuses to learn, or at least takes far longer than a regular feed-forward neural network with deep Q-learning. For reference, I am using the PyTorch implementation, and you can have a look at my code here if you would like.

I am not utilising the recurrent aspect of the NCP at the moment; however, I don't think that's what's causing the problem, as I tried a simple supervised learning test and that seemed to work fine. I also tried a regular feed-forward neural network in place of the NCP and that worked fine, and as mentioned it was fairly quick to achieve good results. In my code that's just the DQNetwork class.

Is there some nuance to how I should approach this that I have missed, or have I perhaps made a silly error :P? Thank you

Reproducibility issue: low rewards in Atari examples

Thanks for your work and contribution. However, we were unable to reproduce the results when trying to replicate Atari Behavior Cloning and Atari Reinforcement Learning (PPO). In both cases we used the code exactly as cloned from the repo.

In the Atari Behavior Cloning task, the training loss steadily declines during training. However, the Mean Return (putting the model in a real environment and running closed loops) remains pretty low. After 50 epochs of training, the Mean Return is essentially the same as for the initial model. We also visualized the behavior of the trained model and found that it performs badly in Breakout. Here's part of the training log:

Details

(ncps) E:\ncps_experiment>python atari_torch.py
A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
[Powered by Stella]
C:\Users\Admin\anaconda3\envs\ncps\lib\site-packages\gym\utils\seeding.py:138: DeprecationWarning: WARN: Function hash_seed(seed, max_bytes) is marked as deprecated and will be removed in the future.
deprecation(
C:\Users\Admin\anaconda3\envs\ncps\lib\site-packages\gym\utils\seeding.py:175: DeprecationWarning: WARN: Function _bigint_from_bytes(bytes) is marked as deprecated and will be removed in the future.
deprecation(
2023-05-11 02:08:34,419 WARNING deprecation.py:47 -- DeprecationWarning: FrameStack has been deprecated. This will raise an error in the future!
loss=0.488: 100%|
Epoch 1, val_loss=0.5465, val_acc=82.52%
Mean return 1.8 (n=10)

loss=0.331: 100%|
Epoch 2, val_loss=0.8403, val_acc=67.58%
Mean return 1.8 (n=10)

loss=0.2709: 100%|
Epoch 3, val_loss=2.126, val_acc=29.59%
Mean return 0.5 (n=10)
......
loss=0.05224: 100%|
Epoch 48, val_loss=0.831, val_acc=70.96%
Mean return 1.4 (n=10)

loss=0.04968: 100%|
Epoch 49, val_loss=1.643, val_acc=56.48%
Mean return 0.0 (n=10)

loss=0.04885: 100%|
Epoch 50, val_loss=2.886, val_acc=52.69%
Mean return 0.6 (n=10)

The situation in Atari Reinforcement Learning (PPO) is almost the same. The policy reward just cannot grow steadily as shown in the tutorial. After 100k steps of sampling, the policy reward merely reached 5.0. Here's part of the training log:

Details

Ran 0.0 hours
sampled 4k steps
policy reward: 1.1
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 0.5 hours
sampled 164k steps
policy reward: 1.6
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 1.0 hours
sampled 348k steps
policy reward: 3.5
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 1.5 hours
sampled 540k steps
policy reward: 5.4
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 2.0 hours
sampled 732k steps
policy reward: 4.5
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 2.5 hours
sampled 916k steps
policy reward: 5.3
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

Ran 3.0 hours
sampled 1108k steps
policy reward: 4.5
saved checkpoint 'rl_ckpt/ALE/Breakout-v5'

We followed the tutorial and installed the specified versions of gym, ray, and ale-py. We wonder if it has to do with the versions of other packages. Here are the conda environments we used for behavior cloning and reinforcement learning, respectively.
behavior cloning env

packages in environment at C:\Users\Admin\anaconda3\envs\ncps:

Name Version Build Channel

_ipyw_jlab_nb_ext_conf 0.1.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
absl-py 1.4.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
alabaster 0.7.12 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
ale-py 0.7.4 pypi_0 pypi
anaconda-client 1.11.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
anaconda-project 0.11.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
anyio 3.5.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
appdirs 1.4.4 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
argon2-cffi 21.3.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
argon2-cffi-bindings 21.2.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
arrow 1.2.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
astroid 2.11.7 py39haa95532_0 https://repo.anaconda.com/pkgs/main
astropy 5.1 py39h080aedc_0 https://repo.anaconda.com/pkgs/main
astunparse 1.6.3 pypi_0 pypi
atomicwrites 1.4.0 py_0 https://repo.anaconda.com/pkgs/main
attrs 21.4.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
automat 20.2.0 py_0 https://repo.anaconda.com/pkgs/main
autopep8 1.6.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
autorom 0.4.2 pypi_0 pypi
autorom-accept-rom-license 0.6.1 pypi_0 pypi
babel 2.9.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
backcall 0.2.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
backports 1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
backports.tempfile 1.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
backports.weakref 1.0.post1 py_1 https://repo.anaconda.com/pkgs/main
bcrypt 3.2.0 py39h2bbff1b_1 https://repo.anaconda.com/pkgs/main
beautifulsoup4 4.11.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
binaryornot 0.4.4 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
bitarray 2.5.1 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
bkcharts 0.2 py39haa95532_1 https://repo.anaconda.com/pkgs/main
black 22.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
blas 1.0 mkl https://repo.anaconda.com/pkgs/main
bleach 4.1.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
blosc 1.21.0 h19a0ad4_1 https://repo.anaconda.com/pkgs/main
bokeh 2.4.3 py39haa95532_0 https://repo.anaconda.com/pkgs/main
boto3 1.24.28 py39haa95532_0 https://repo.anaconda.com/pkgs/main
botocore 1.27.28 py39haa95532_0 https://repo.anaconda.com/pkgs/main
bottleneck 1.3.5 py39h080aedc_0 https://repo.anaconda.com/pkgs/main
brotli 1.0.9 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
brotli-bin 1.0.9 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
brotlipy 0.7.0 py39h2bbff1b_1003 https://repo.anaconda.com/pkgs/main
bzip2 1.0.8 he774522_0 https://repo.anaconda.com/pkgs/main
ca-certificates 2023.01.10 haa95532_0 defaults
cachetools 5.3.0 pypi_0 pypi
certifi 2022.12.7 py39haa95532_0 defaults
cffi 1.15.1 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
cfitsio 3.470 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
chardet 4.0.0 py39haa95532_1003 https://repo.anaconda.com/pkgs/main
charls 2.2.0 h6c2663c_0 https://repo.anaconda.com/pkgs/main
charset-normalizer 2.0.4 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
click 8.0.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
cloudpickle 2.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
clyent 1.2.2 py39haa95532_1 https://repo.anaconda.com/pkgs/main
colorama 0.4.5 py39haa95532_0 https://repo.anaconda.com/pkgs/main
colorcet 3.0.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
comtypes 1.1.10 py39haa95532_1002 https://repo.anaconda.com/pkgs/main
conda-content-trust 0.1.3 py39haa95532_0 https://repo.anaconda.com/pkgs/main
conda-pack 0.6.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
conda-package-handling 1.9.0 py39h8cc25b3_0 https://repo.anaconda.com/pkgs/main
conda-repo-cli 1.0.20 py39haa95532_0 https://repo.anaconda.com/pkgs/main
conda-verify 3.4.2 py_1 https://repo.anaconda.com/pkgs/main
constantly 15.1.0 pyh2b92418_0 https://repo.anaconda.com/pkgs/main
cookiecutter 1.7.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
cryptography 37.0.1 py39h21b164f_0 https://repo.anaconda.com/pkgs/main
cssselect 1.1.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
cuda-cccl 12.1.109 0 nvidia
cuda-cudart 11.7.99 0 nvidia
cuda-cudart-dev 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-libraries-dev 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvrtc-dev 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
curl 7.84.0 h2bbff1b_0 https://repo.anaconda.com/pkgs/main
cycler 0.11.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
cython 0.29.32 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
cytoolz 0.11.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
daal4py 2021.6.0 py39h757b272_1 https://repo.anaconda.com/pkgs/main
dal 2021.6.0 h59b6b97_874 https://repo.anaconda.com/pkgs/main
dask 2022.7.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
dask-core 2022.7.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
dataclasses 0.8 pyh6d0b6a4_7 https://repo.anaconda.com/pkgs/main
datashader 0.14.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
datashape 0.5.4 py39haa95532_1 https://repo.anaconda.com/pkgs/main
debugpy 1.5.1 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
decorator 5.1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
defusedxml 0.7.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
diff-match-patch 20200713 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
dill 0.3.4 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
distlib 0.3.6 pypi_0 pypi
distributed 2022.7.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
dm-tree 0.1.8 pypi_0 pypi
docutils 0.18.1 py39haa95532_3 https://repo.anaconda.com/pkgs/main
entrypoints 0.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
et_xmlfile 1.1.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
fftw 3.3.9 h2bbff1b_1 https://repo.anaconda.com/pkgs/main
filelock 3.12.0 pypi_0 pypi
flake8 4.0.1 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
flask 1.1.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
flatbuffers 23.5.9 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
freetype 2.10.4 hd328e21_0 https://repo.anaconda.com/pkgs/main
frozenlist 1.3.3 pypi_0 pypi
fsspec 2022.7.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
future 0.18.2 py39haa95532_1 https://repo.anaconda.com/pkgs/main
gast 0.4.0 pypi_0 pypi
gensim 4.1.2 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
giflib 5.2.1 h62dcd97_0 https://repo.anaconda.com/pkgs/main
glob2 0.7 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
google-auth 2.18.1 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
greenlet 1.1.1 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
grpcio 1.54.0 pypi_0 pypi
gym 0.23.1 pypi_0 pypi
gym-notices 0.0.8 pypi_0 pypi
h5py 3.7.0 py39h3de5c98_0 https://repo.anaconda.com/pkgs/main
hdf5 1.10.6 h1756f20_1 https://repo.anaconda.com/pkgs/main
heapdict 1.0.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
holoviews 1.15.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
hvplot 0.8.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
hyperlink 21.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
icc_rt 2022.1.0 h6049295_2 https://repo.anaconda.com/pkgs/main
icu 58.2 ha925a31_3 https://repo.anaconda.com/pkgs/main
idna 3.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
imagecodecs 2021.8.26 py39hc0a7faf_1 https://repo.anaconda.com/pkgs/main
imageio 2.19.3 py39haa95532_0 https://repo.anaconda.com/pkgs/main
imagesize 1.4.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
importlib-metadata 4.11.3 py39haa95532_0 https://repo.anaconda.com/pkgs/main
importlib-resources 5.12.0 pypi_0 pypi
importlib_metadata 4.11.3 hd3eb1b0_0 https://repo.anaconda.com/pkgs/main
incremental 21.3.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
inflection 0.5.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
iniconfig 1.1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
intake 0.6.5 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
intel-openmp 2021.4.0 haa95532_3556 https://repo.anaconda.com/pkgs/main
intervaltree 3.1.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
ipykernel 6.15.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
ipython 7.31.1 py39haa95532_1 https://repo.anaconda.com/pkgs/main
ipython_genutils 0.2.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
ipywidgets 7.6.5 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
isort 5.9.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
itemadapter 0.3.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
itemloaders 1.0.4 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
itsdangerous 2.0.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jax 0.4.10 pypi_0 pypi
jdcal 1.4.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jedi 0.18.1 py39haa95532_1 https://repo.anaconda.com/pkgs/main
jellyfish 0.9.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
jinja2 2.11.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jinja2-time 0.2.0 pyhd3eb1b0_3 https://repo.anaconda.com/pkgs/main
jmespath 0.10.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
joblib 1.1.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jpeg 9e h2bbff1b_0 https://repo.anaconda.com/pkgs/main
jq 1.6 haa95532_1 https://repo.anaconda.com/pkgs/main
json5 0.9.6 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jsonschema 4.16.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
jupyter 1.0.0 py39haa95532_8 https://repo.anaconda.com/pkgs/main
jupyter_client 7.3.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
jupyter_console 6.4.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
jupyter_core 4.11.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
jupyter_server 1.18.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
jupyterlab 3.4.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
jupyterlab_pygments 0.1.2 py_0 https://repo.anaconda.com/pkgs/main
jupyterlab_server 2.10.3 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
keras 2.12.0 pypi_0 pypi
keyring 23.4.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
kiwisolver 1.4.2 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
lazy-object-proxy 1.6.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
lcms2 2.12 h83e58a3_0 https://repo.anaconda.com/pkgs/main
lerc 3.0 hd77b12b_0 https://repo.anaconda.com/pkgs/main
libaec 1.0.4 h33f27b4_1 https://repo.anaconda.com/pkgs/main
libarchive 3.6.1 hebabd0d_0 https://repo.anaconda.com/pkgs/main
libbrotlicommon 1.0.9 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
libbrotlidec 1.0.9 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
libbrotlienc 1.0.9 h2bbff1b_7 https://repo.anaconda.com/pkgs/main
libclang 16.0.0 pypi_0 pypi
libcublas 11.10.3.66 0 nvidia
libcublas-dev 11.10.3.66 0 nvidia
libcufft 10.7.2.124 0 nvidia
libcufft-dev 10.7.2.124 0 nvidia
libcurand 10.3.2.106 0 nvidia
libcurand-dev 10.3.2.106 0 nvidia
libcurl 7.84.0 h86230a5_0 https://repo.anaconda.com/pkgs/main
libcusolver 11.4.0.1 0 nvidia
libcusolver-dev 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libcusparse-dev 11.7.4.91 0 nvidia
libdeflate 1.8 h2bbff1b_5 https://repo.anaconda.com/pkgs/main
libiconv 1.16 h2bbff1b_2 https://repo.anaconda.com/pkgs/main
liblief 0.11.5 hd77b12b_1 https://repo.anaconda.com/pkgs/main
libnpp 11.7.4.75 0 nvidia
libnpp-dev 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libnvjpeg-dev 11.8.0.2 0 nvidia
libpng 1.6.37 h2a8f88b_0 https://repo.anaconda.com/pkgs/main
libsodium 1.0.18 h62dcd97_0 https://repo.anaconda.com/pkgs/main
libspatialindex 1.9.3 h6c2663c_0 https://repo.anaconda.com/pkgs/main
libssh2 1.10.0 hcd4344a_0 https://repo.anaconda.com/pkgs/main
libtiff 4.4.0 h8a3f274_0 https://repo.anaconda.com/pkgs/main
libuv 1.44.2 h2bbff1b_0 defaults
libwebp 1.2.2 h2bbff1b_0 https://repo.anaconda.com/pkgs/main
libxml2 2.9.14 h0ad7f3c_0 https://repo.anaconda.com/pkgs/main
libxslt 1.1.35 h2bbff1b_0 https://repo.anaconda.com/pkgs/main
libzopfli 1.0.3 ha925a31_0 https://repo.anaconda.com/pkgs/main
llvmlite 0.38.0 py39h23ce68f_0 https://repo.anaconda.com/pkgs/main
locket 1.0.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
lxml 4.9.1 py39h1985fb9_0 https://repo.anaconda.com/pkgs/main
lz4 3.1.3 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
lz4-c 1.9.3 h2bbff1b_1 https://repo.anaconda.com/pkgs/main
lzo 2.10 he774522_2 https://repo.anaconda.com/pkgs/main
m2-msys2-runtime 2.5.0.17080.65c939c 3 https://repo.anaconda.com/pkgs/msys2
m2-patch 2.7.5 2 https://repo.anaconda.com/pkgs/msys2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 https://repo.anaconda.com/pkgs/msys2
markdown 3.3.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
markupsafe 2.0.1 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
matplotlib 3.5.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
matplotlib-base 3.5.2 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
matplotlib-inline 0.1.6 py39haa95532_0 https://repo.anaconda.com/pkgs/main
mccabe 0.6.1 py39haa95532_2 https://repo.anaconda.com/pkgs/main
menuinst 1.4.19 py39h59b6b97_0 https://repo.anaconda.com/pkgs/main
mistune 0.8.4 py39h2bbff1b_1000 https://repo.anaconda.com/pkgs/main
mkl 2021.4.0 haa95532_640 https://repo.anaconda.com/pkgs/main
mkl-service 2.4.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
mkl_fft 1.3.1 py39h277e83a_0 https://repo.anaconda.com/pkgs/main
mkl_random 1.2.2 py39hf11a4ad_0 https://repo.anaconda.com/pkgs/main
ml-dtypes 0.1.0 pypi_0 pypi
mock 4.0.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
mpmath 1.2.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
msgpack-python 1.0.3 py39h59b6b97_0 https://repo.anaconda.com/pkgs/main
msys2-conda-epoch 20160418 1 https://repo.anaconda.com/pkgs/msys2
multipledispatch 0.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
munkres 1.1.4 py_0 https://repo.anaconda.com/pkgs/main
mypy_extensions 0.4.3 py39haa95532_1 https://repo.anaconda.com/pkgs/main
nbclassic 0.3.5 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
nbclient 0.5.13 py39haa95532_0 https://repo.anaconda.com/pkgs/main
nbconvert 6.4.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
nbformat 5.5.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
ncps 0.0.7 pypi_0 pypi
nest-asyncio 1.5.5 py39haa95532_0 https://repo.anaconda.com/pkgs/main
networkx 2.8.4 py39haa95532_0 https://repo.anaconda.com/pkgs/main
nltk 3.7 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
nose 1.3.7 pyhd3eb1b0_1008 https://repo.anaconda.com/pkgs/main
notebook 6.4.12 py39haa95532_0 https://repo.anaconda.com/pkgs/main
numba 0.55.1 py39hf11a4ad_0 https://repo.anaconda.com/pkgs/main
numexpr 2.8.3 py39hb80d3ca_0 https://repo.anaconda.com/pkgs/main
numpy 1.22.0 pypi_0 pypi
numpydoc 1.4.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
oauthlib 3.2.2 pypi_0 pypi
olefile 0.46 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
openjpeg 2.4.0 h4fc8c34_0 https://repo.anaconda.com/pkgs/main
openpyxl 3.0.10 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
openssl 1.1.1t h2bbff1b_0 defaults
opt-einsum 3.3.0 pypi_0 pypi
packaging 21.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pandas 1.4.4 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
pandocfilters 1.5.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
panel 0.13.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
param 1.12.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
paramiko 2.8.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
parsel 1.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
parso 0.8.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
partd 1.2.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
pathlib 1.0.1 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
pathspec 0.9.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
patsy 0.5.2 py39haa95532_1 https://repo.anaconda.com/pkgs/main
pep8 1.7.1 py39haa95532_1 https://repo.anaconda.com/pkgs/main
pexpect 4.8.0 pyhd3eb1b0_3 https://repo.anaconda.com/pkgs/main
pickleshare 0.7.5 pyhd3eb1b0_1003 https://repo.anaconda.com/pkgs/main
pillow 9.2.0 py39hdc2b20a_1 https://repo.anaconda.com/pkgs/main
pip 22.2.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pkginfo 1.8.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
platformdirs 3.5.0 pypi_0 pypi
plotly 5.9.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pluggy 1.0.0 py39haa95532_1 https://repo.anaconda.com/pkgs/main
poyo 0.5.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
prometheus_client 0.14.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
prompt-toolkit 3.0.20 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
prompt_toolkit 3.0.20 hd3eb1b0_0 https://repo.anaconda.com/pkgs/main
protego 0.1.16 py_0 https://repo.anaconda.com/pkgs/main
protobuf 3.20.3 pypi_0 pypi
psutil 5.9.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
ptyprocess 0.7.0 pyhd3eb1b0_2 https://repo.anaconda.com/pkgs/main
py 1.11.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
py-lief 0.11.5 py39hd77b12b_1 https://repo.anaconda.com/pkgs/main
pyasn1 0.4.8 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pyasn1-modules 0.2.8 py_0 https://repo.anaconda.com/pkgs/main
pycodestyle 2.8.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pycosat 0.6.3 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
pycparser 2.21 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pyct 0.4.8 py39haa95532_1 https://repo.anaconda.com/pkgs/main
pycurl 7.45.1 py39hcd4344a_0 https://repo.anaconda.com/pkgs/main
pydispatcher 2.0.5 py39haa95532_2 https://repo.anaconda.com/pkgs/main
pydocstyle 6.1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pyerfa 2.0.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
pyflakes 2.4.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pygments 2.11.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pyhamcrest 2.0.2 pyhd3eb1b0_2 https://repo.anaconda.com/pkgs/main
pyjwt 2.4.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pylint 2.14.5 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pyls-spyder 0.4.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pynacl 1.5.0 py39h8cc25b3_0 https://repo.anaconda.com/pkgs/main
pyodbc 4.0.34 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
pyopenssl 22.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pyparsing 3.0.9 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pyqt 5.9.2 py39hd77b12b_6 https://repo.anaconda.com/pkgs/main
pyrsistent 0.18.0 py39h196d8e1_0 https://repo.anaconda.com/pkgs/main
pysocks 1.7.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pytables 3.6.1 py39h56d22b6_1 https://repo.anaconda.com/pkgs/main
pytest 7.1.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
python 3.9.13 h6244533_1 https://repo.anaconda.com/pkgs/main
python-dateutil 2.8.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
python-fastjsonschema 2.16.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
python-libarchive-c 2.9 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
python-lsp-black 1.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
python-lsp-jsonrpc 1.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
python-lsp-server 1.3.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
python-slugify 5.0.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
python-snappy 0.6.0 py39hd77b12b_3 https://repo.anaconda.com/pkgs/main
python-version 0.0.2 pypi_0 pypi
pytorch 2.0.0 py3.9_cuda11.7_cudnn8_0 pytorch
pytorch-cuda 11.7 h16d0643_3 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2022.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
pyviz_comms 2.0.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
pywavelets 1.3.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
pywin32 302 py39h2bbff1b_2 https://repo.anaconda.com/pkgs/main
pywin32-ctypes 0.2.0 py39haa95532_1000 https://repo.anaconda.com/pkgs/main
pywinpty 2.0.2 py39h5da7b33_0 https://repo.anaconda.com/pkgs/main
pyyaml 6.0 py39h2bbff1b_1 https://repo.anaconda.com/pkgs/main
pyzmq 23.2.0 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
qdarkstyle 3.0.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
qstylizer 0.1.10 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
qt 5.9.7 vc14h73c81de_0 https://repo.anaconda.com/pkgs/main
qtawesome 1.0.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
qtconsole 5.2.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
qtpy 2.2.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
queuelib 1.5.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
ray 2.1.0 pypi_0 pypi
regex 2022.7.9 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
requests 2.28.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
requests-file 1.5.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
requests-oauthlib 1.3.1 pypi_0 pypi
rope 0.22.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
rsa 4.9 pypi_0 pypi
rtree 0.9.7 py39h2eaa2aa_1 https://repo.anaconda.com/pkgs/main
ruamel.yaml 0.17.21 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
ruamel.yaml.clib 0.2.6 py39h2bbff1b_1 https://repo.anaconda.com/pkgs/main
ruamel_yaml 0.15.100 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
s3transfer 0.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
scikit-image 0.19.2 py39hf11a4ad_0 https://repo.anaconda.com/pkgs/main
scikit-learn 1.0.2 py39hf11a4ad_1 https://repo.anaconda.com/pkgs/main
scikit-learn-intelex 2021.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
scipy 1.9.1 py39he11b74f_0 https://repo.anaconda.com/pkgs/main
scrapy 2.6.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
seaborn 0.11.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
send2trash 1.8.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
service_identity 18.1.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
setuptools 63.4.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
sip 4.19.13 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
six 1.16.0 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
smart_open 5.2.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
snappy 1.1.9 h6c2663c_0 https://repo.anaconda.com/pkgs/main
sniffio 1.2.0 py39haa95532_1 https://repo.anaconda.com/pkgs/main
snowballstemmer 2.2.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sortedcollections 2.1.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sortedcontainers 2.4.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
soupsieve 2.3.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinx 5.0.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-applehelp 1.0.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-devhelp 1.0.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-htmlhelp 2.0.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-jsmath 1.0.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-qthelp 1.0.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
sphinxcontrib-serializinghtml 1.1.5 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
spyder 5.2.2 py39haa95532_1 https://repo.anaconda.com/pkgs/main
spyder-kernels 2.2.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
sqlalchemy 1.4.39 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
sqlite 3.39.3 h2bbff1b_0 https://repo.anaconda.com/pkgs/main
statsmodels 0.13.2 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
sympy 1.10.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
tabulate 0.8.10 py39haa95532_0 https://repo.anaconda.com/pkgs/main
tbb 2021.6.0 h59b6b97_0 https://repo.anaconda.com/pkgs/main
tbb4py 2021.6.0 py39h59b6b97_0 https://repo.anaconda.com/pkgs/main
tblib 1.7.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
tenacity 8.0.1 py39haa95532_1 https://repo.anaconda.com/pkgs/main
tensorboard 2.12.3 pypi_0 pypi
tensorboard-data-server 0.7.0 pypi_0 pypi
tensorboardx 2.6 pypi_0 pypi
tensorflow-estimator 2.12.0 pypi_0 pypi
tensorflow-intel 2.12.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.31.0 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
terminado 0.13.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
testpath 0.6.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
text-unidecode 1.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
textdistance 4.2.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
threadpoolctl 2.2.0 pyh0d69192_0 https://repo.anaconda.com/pkgs/main
three-merge 0.1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
tifffile 2021.7.2 pyhd3eb1b0_2 https://repo.anaconda.com/pkgs/main
tinycss 0.4 pyhd3eb1b0_1002 https://repo.anaconda.com/pkgs/main
tk 8.6.12 h2bbff1b_0 https://repo.anaconda.com/pkgs/main
tldextract 3.2.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
toml 0.10.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
tomli 2.0.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
tomlkit 0.11.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
toolz 0.11.2 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
torchvision 0.15.0 pypi_0 pypi
tornado 6.1 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
tqdm 4.64.1 py39haa95532_0 https://repo.anaconda.com/pkgs/main
traitlets 5.1.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
twisted 22.2.0 py39h2bbff1b_1 https://repo.anaconda.com/pkgs/main
twisted-iocpsupport 1.0.2 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
typing-extensions 4.3.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
typing_extensions 4.3.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
tzdata 2022c h04d1e81_0 https://repo.anaconda.com/pkgs/main
ujson 5.4.0 py39hd77b12b_0 https://repo.anaconda.com/pkgs/main
unidecode 1.2.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
urllib3 1.26.11 py39haa95532_0 https://repo.anaconda.com/pkgs/main
vc 14.2 h21ff451_1 https://repo.anaconda.com/pkgs/main
virtualenv 20.23.0 pypi_0 pypi
vs2015_runtime 14.27.29016 h5e58377_2 https://repo.anaconda.com/pkgs/main
w3lib 1.21.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
watchdog 2.1.6 py39haa95532_0 https://repo.anaconda.com/pkgs/main
wcwidth 0.2.5 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
webencodings 0.5.1 py39haa95532_1 https://repo.anaconda.com/pkgs/main
websocket-client 0.58.0 py39haa95532_4 https://repo.anaconda.com/pkgs/main
werkzeug 2.0.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
wheel 0.37.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
widgetsnbextension 3.5.2 py39haa95532_0 https://repo.anaconda.com/pkgs/main
win_inet_pton 1.1.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
win_unicode_console 0.5 py39haa95532_0 https://repo.anaconda.com/pkgs/main
wincertstore 0.2 py39haa95532_2 https://repo.anaconda.com/pkgs/main
winpty 0.4.3 4 https://repo.anaconda.com/pkgs/main
wrapt 1.14.1 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
xarray 0.20.1 pyhd3eb1b0_1 https://repo.anaconda.com/pkgs/main
xlrd 2.0.1 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
xlsxwriter 3.0.3 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
xlwings 0.27.15 py39haa95532_0 https://repo.anaconda.com/pkgs/main
xz 5.2.6 h8cc25b3_0 https://repo.anaconda.com/pkgs/main
yaml 0.2.5 he774522_0 https://repo.anaconda.com/pkgs/main
yapf 0.31.0 pyhd3eb1b0_0 https://repo.anaconda.com/pkgs/main
zeromq 4.3.4 hd77b12b_0 https://repo.anaconda.com/pkgs/main
zfp 0.5.5 hd77b12b_6 https://repo.anaconda.com/pkgs/main
zict 2.1.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
zipp 3.8.0 py39haa95532_0 https://repo.anaconda.com/pkgs/main
zlib 1.2.12 h8cc25b3_3 https://repo.anaconda.com/pkgs/main
zope 1.0 py39haa95532_1 https://repo.anaconda.com/pkgs/main
zope.interface 5.4.0 py39h2bbff1b_0 https://repo.anaconda.com/pkgs/main
zstd 1.5.2 h19a0ad4_0 https://repo.anaconda.com/pkgs/main

reinforcement learning env

packages in environment at C:\Users\Admin\anaconda3\envs\tf2:

Name Version Build Channel

absl-py 1.4.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
ale-py 0.7.4 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
autorom 0.4.2 pypi_0 pypi
autorom-accept-rom-license 0.6.1 pypi_0 pypi
blas 1.0 mkl defaults
ca-certificates 2023.01.10 haa95532_0 defaults
cachetools 5.3.0 pypi_0 pypi
certifi 2023.5.7 py39haa95532_0 defaults
charset-normalizer 3.1.0 pypi_0 pypi
click 8.0.4 pypi_0 pypi
cloudpickle 2.2.1 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
contourpy 1.0.7 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
distlib 0.3.6 pypi_0 pypi
dm-tree 0.1.8 pypi_0 pypi
filelock 3.12.0 pypi_0 pypi
flatbuffers 23.5.9 pypi_0 pypi
fonttools 4.39.4 pypi_0 pypi
frozenlist 1.3.3 pypi_0 pypi
future 0.18.3 pypi_0 pypi
gast 0.4.0 pypi_0 pypi
google-auth 2.18.1 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.54.2 pypi_0 pypi
gym 0.23.1 pypi_0 pypi
gym-notices 0.0.8 pypi_0 pypi
h5py 3.8.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
imageio 2.29.0 pypi_0 pypi
importlib-metadata 6.6.0 pypi_0 pypi
importlib-resources 5.12.0 pypi_0 pypi
intel-openmp 2023.1.0 h59b6b97_46319 defaults
jsonschema 4.17.3 pypi_0 pypi
keras 2.10.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
lazy-loader 0.2 pypi_0 pypi
libclang 16.0.0 pypi_0 pypi
libffi 3.4.4 hd77b12b_0 defaults
lz4 4.3.2 pypi_0 pypi
markdown 3.4.3 pypi_0 pypi
markupsafe 2.1.2 pypi_0 pypi
matplotlib 3.7.1 pypi_0 pypi
mkl 2023.1.0 h8bd8f75_46356 defaults
mkl-service 2.4.0 py39h2bbff1b_1 defaults
mkl_fft 1.3.6 py39hf11a4ad_1 defaults
mkl_random 1.2.2 py39hf11a4ad_1 defaults
msgpack 1.0.5 pypi_0 pypi
ncps 0.0.7 pypi_0 pypi
networkx 3.1 pypi_0 pypi
numpy 1.24.3 pypi_0 pypi
numpy-base 1.23.5 py39h46c4fa8_1 defaults
oauthlib 3.2.2 pypi_0 pypi
openssl 1.1.1t h2bbff1b_0 defaults
opt-einsum 3.3.0 pypi_0 pypi
packaging 23.1 pypi_0 pypi
pandas 2.0.1 pypi_0 pypi
pillow 9.5.0 pypi_0 pypi
pip 23.0.1 py39haa95532_0 defaults
pkgutil-resolve-name 1.3.10 pypi_0 pypi
platformdirs 3.5.1 pypi_0 pypi
protobuf 3.19.6 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
pyrsistent 0.19.3 pypi_0 pypi
python 3.9.16 h6244533_2 defaults
python-dateutil 2.8.2 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pywavelets 1.4.1 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
ray 2.1.0 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.20.0 pypi_0 pypi
scipy 1.9.1 pypi_0 pypi
setuptools 66.0.0 py39haa95532_0 defaults
six 1.16.0 pypi_0 pypi
sqlite 3.41.2 h2bbff1b_0 defaults
tabulate 0.9.0 pypi_0 pypi
tbb 2021.8.0 h59b6b97_0 defaults
tensorboard 2.10.1 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorboardx 2.6 pypi_0 pypi
tensorflow 2.10.0 pypi_0 pypi
tensorflow-estimator 2.10.0 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.31.0 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
tifffile 2023.4.12 pypi_0 pypi
tqdm 4.65.0 pypi_0 pypi
typing-extensions 4.6.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.16 pypi_0 pypi
vc 14.2 h21ff451_1 defaults
virtualenv 20.23.0 pypi_0 pypi
vs2015_runtime 14.27.29016 h5e58377_2 defaults
werkzeug 2.3.4 pypi_0 pypi
wheel 0.38.4 py39haa95532_0 defaults
wrapt 1.15.0 pypi_0 pypi
zipp 3.15.0 pypi_0 pypi

getting issues while saving model

----> 1 model.save('location.keras')

File /opt/conda/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback..error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.traceback)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb

File /opt/conda/lib/python3.10/site-packages/ncps/tf/wired_cfc_cell.py:122, in WiredCfCCell.get_config(self)
120 seralized["mode"] = self.mode
121 seralized["activation"] = self._activation
--> 122 seralized["backbone_units"] = self.hidden_units
123 seralized["backbone_layers"] = self.hidden_layers
124 seralized["backbone_dropout"] = self.hidden_dropout

AttributeError: 'WiredCfCCell' object has no attribute 'hidden_units'

the difference between NCP and Linear

Hello, sorry to bother you. I want to know whether all the Linear layers in a neural network can be replaced by the NCP model to optimize the network structure.

200GB data :)

Hi, I really appreciate your awesome work
I would like to test your good work on autonomous driving.
Would you mind giving the link to the data? [email protected]

Thanks in advance!

Questions about ltc_cell.py

def _sigmoid(self, v_pre, mu, sigma):
    v_pre = torch.unsqueeze(v_pre, -1)  # For broadcasting
    mues = v_pre - mu
    x = sigma * mues
    return torch.sigmoid(x)

Why is data multiplied by sigma rather than divided by sigma in the process of normalization?

In file: ltc_example_sinusoidal.ipynb: error NameError: name 'wirings' is not defined

Both on my PC and on your Colab, I get the same error when executing the Python notebook ltc_example_sinusoidal.ipynb.
This is the error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-1-912bdff07f27> in <module>
----> 1 ncp_arch = wirings.AutoNCP(8,1)
      2 
      3 ncp_model = keras.models.Sequential(
      4     [
      5         keras.layers.InputLayer(input_shape=(None, 2)),

NameError: name 'wirings' is not defined

Can you, please, advise?
Thank you very much
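In case it is just a missing import: the cell uses `wirings` without defining it, so with the current ncps package the following import (a guess based on the package layout) would need to run first; with the older kerasncp package the wirings live under kncp.wirings instead.

    from ncps import wirings

    ncp_arch = wirings.AutoNCP(8, 1)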

Questions about the input dimension

Hi, I have run into a very confusing problem in my recent reproduction work. My data shape is number_frames x 160 x 320 x 3, and I applied some convolutions before the NCP, but it keeps reporting an error that the model needs one more dimension than the one I am feeding in, so the shapes do not match. I think the first dimension of my data is the time dimension. I would like to know if you have any suggestions on this problem. Looking forward to your reply!

TypeError: SequenceLearner.optimizer_step() missing 1 required positional argument: 'closure'

When I try to run the pt_example, this error happens.

Error displaying widget: model not found

TypeError                                 Traceback (most recent call last)
Cell In[15], line 2
      1 # Train the model for 400 epochs (= training steps)
----> 2 trainer.fit(model=learn, train_dataloaders=dataloader)

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:520, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    518 model = _maybe_unwrap_optimized(model)
    519 self.strategy._lightning_module = model
--> 520 call._call_and_handle_interrupt(
    521     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    522 )

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py:44, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     42         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
     43     else:
---> 44         return trainer_fn(*args, **kwargs)
     46 except _TunerExitException:
     47     _call_teardown_hook(trainer)

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:559, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    549 self._data_connector.attach_data(
    550     model, train_dataloaders=train_dataloaders, val_dataloaders=val_dataloaders, datamodule=datamodule
    551 )
    553 ckpt_path = self._checkpoint_connector._select_ckpt_path(
    554     self.state.fn,
    555     ckpt_path,
    556     model_provided=True,
    557     model_connected=self.lightning_module is not None,
    558 )
--> 559 self._run(model, ckpt_path=ckpt_path)
    561 assert self.state.stopped
    562 self.training = False

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:935, in Trainer._run(self, model, ckpt_path)
    930 self._signal_connector.register_signal_handlers()
    932 # ----------------------------
    933 # RUN THE TRAINER
    934 # ----------------------------
--> 935 results = self._run_stage()
    937 # ----------------------------
    938 # POST-Training CLEAN UP
    939 # ----------------------------
    940 log.debug(f"{self.__class__.__name__}: trainer tearing down")

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:978, in Trainer._run_stage(self)
    976         self._run_sanity_check()
    977     with torch.autograd.set_detect_anomaly(self._detect_anomaly):
--> 978         self.fit_loop.run()
    979     return None
    980 raise RuntimeError(f"Unexpected state {self.state}")

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py:201, in _FitLoop.run(self)
    199 try:
    200     self.on_advance_start()
--> 201     self.advance()
    202     self.on_advance_end()
    203     self._restarting = False

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py:354, in _FitLoop.advance(self)
    352 self._data_fetcher.setup(combined_loader)
    353 with self.trainer.profiler.profile("run_training_epoch"):
--> 354     self.epoch_loop.run(self._data_fetcher)

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py:133, in _TrainingEpochLoop.run(self, data_fetcher)
    131 while not self.done:
    132     try:
--> 133         self.advance(data_fetcher)
    134         self.on_advance_end()
    135         self._restarting = False

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py:218, in _TrainingEpochLoop.advance(self, data_fetcher)
    215 with trainer.profiler.profile("run_training_batch"):
    216     if trainer.lightning_module.automatic_optimization:
    217         # in automatic optimization, there can only be one optimizer
--> 218         batch_output = self.automatic_optimization.run(trainer.optimizers[0], kwargs)
    219     else:
    220         batch_output = self.manual_optimization.run(kwargs)

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py:185, in _AutomaticOptimization.run(self, optimizer, kwargs)
    178         closure()
    180 # ------------------------------
    181 # BACKWARD PASS
    182 # ------------------------------
    183 # gradient update with accumulated gradients
    184 else:
--> 185     self._optimizer_step(kwargs.get("batch_idx", 0), closure)
    187 result = closure.consume_result()
    188 if result.loss is None:

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py:261, in _AutomaticOptimization._optimizer_step(self, batch_idx, train_step_and_backward_closure)
    258     self.optim_progress.optimizer.step.increment_ready()
    260 # model hook
--> 261 call._call_lightning_module_hook(
    262     trainer,
    263     "optimizer_step",
    264     trainer.current_epoch,
    265     batch_idx,
    266     optimizer,
    267     train_step_and_backward_closure,
    268 )
    270 if not should_accumulate:
    271     self.optim_progress.optimizer.step.increment_completed()

File /opt/conda/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py:142, in _call_lightning_module_hook(trainer, hook_name, pl_module, *args, **kwargs)
    139 pl_module._current_fx_name = hook_name
    141 with trainer.profiler.profile(f"[LightningModule]{pl_module.__class__.__name__}.{hook_name}"):
--> 142     output = fn(*args, **kwargs)
    144 # restore current_fx when nested context
    145 pl_module._current_fx_name = prev_fx_name

TypeError: SequenceLearner.optimizer_step() missing 1 required positional argument: 'closure'
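
One hedged observation (not an official fix): PyTorch Lightning 2.x removed the optimizer_idx argument from the optimizer_step hook, so an override written against the Lightning 1.x signature no longer matches the call in the traceback and closure is reported as missing. A minimal sketch of an override matching the 2.x hook signature:

import pytorch_lightning as pl

class SequenceLearner(pl.LightningModule):
    # Only the hook is shown here; training_step, configure_optimizers, etc.
    # stay as in the original example.
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure=None):
        # Custom logic (e.g. learning-rate warm-up) could go here.
        optimizer.step(closure=optimizer_closure)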

pt_example can't be run using GPU

I want to run the example on a GPU, so I set the following parameters in pt_example.py:

trainer = pl.Trainer(
    logger=pl.loggers.CSVLogger("log"),
    max_epochs=400,
    progress_bar_refresh_rate=1,
    gradient_clip_val=1,  # Clip gradient to stabilize training
    gpus=1,
)

However, it fails with the following error:

GPU available: True, used: True
TPU available: None, using: 0 TPU cores
/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:50: UserWarning: you defined a validation_step but have no val_dataloader. Skipping validation loop
  warnings.warn(*args, **kwargs)

  | Name  | Type        | Params
--------------------------------------
0 | model | RNNSequence | 350   
--------------------------------------
350       Trainable params
0         Non-trainable params
350       Total params
0.001     Total estimated model params size (MB)
Epoch 0:   0%|                                                                                                                | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "pt_example.py", line 131, in <module>
    trainer.fit(learn, dataloader)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 513, in fit
    self.dispatch()
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in dispatch
    self.accelerator.start_training(self)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 111, in start_training
    self._results = trainer.run_train()
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 644, in run_train
    self.train_loop.run_training_epoch()
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 492, in run_training_epoch
    batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 650, in run_training_batch
    self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 434, in optimizer_step
    using_lbfgs=is_lbfgs,
  File "pt_example.py", line 79, in optimizer_step
    optimizer.optimizer.step(closure=closure)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/torch/optim/adam.py", line 66, in step
    loss = closure()
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 645, in train_step_and_backward_closure
    split_batch, batch_idx, opt_idx, optimizer, self.trainer.hiddens
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 738, in training_step_and_backward
    result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 293, in training_step
    training_step_output = self.trainer.accelerator.training_step(args)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 157, in training_step
    return self.training_type_plugin.training_step(*args)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 122, in training_step
    return self.lightning_module.training_step(*args, **kwargs)
  File "pt_example.py", line 46, in training_step
    y_hat = self.model.forward(x)
  File "pt_example.py", line 32, in forward
    new_output, hidden_state = self.rnn_cell.forward(inputs, hidden_state)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/kerasncp/torch/ltc_cell.py", line 255, in forward
    next_state = self._ode_solver(inputs, states, elapsed_time)
  File "/home/lksgcc/.pyenv/versions/anaconda3-5.0.1/envs/p36env/lib/python3.6/site-packages/kerasncp/torch/ltc_cell.py", line 186, in _ode_solver
    sensory_w_activation *= self._params["sensory_sparsity_mask"]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

When I changed the wiring into

wiring = kncp.wirings.NCP(
    inter_neurons=12,  # Number of inter neurons
    command_neurons=8,  # Number of command neurons
    motor_neurons=out_features,  # Number of motor neurons
    sensory_fanout=4,  # How many outgoing synapses has each sensory neuron
    inter_fanout=4,  # How many outgoing synapses has each inter neuron
    recurrent_command_synapses=4,  # How many recurrent synapses are in the
    # command neuron layer
    motor_fanin=6,  # How many incoming synapses has each motor neuron
)

The error still exists.
Thanks!
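
A generic illustration (not the library's actual code) of why this device mismatch can happen: tensors stored in a plain Python dict do not follow .to(device), while tensors registered as module buffers do. Under that assumption, the fix is to make sure every constant mask lives on the same device as the inputs:

import torch
import torch.nn as nn

class CellWithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffers are moved together with the module by .to(device).
        self.register_buffer("sparsity_mask", torch.ones(4, 4))

cell = CellWithBuffer()
if torch.cuda.is_available():
    cell = cell.to("cuda")
    print(cell.sparsity_mask.device)  # cuda:0, i.e. it matches the module's device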

hope to obtain 193G data set

Hi, your excellent work has inspired us a lot, so I am reproducing the paper. I hope to obtain the 193 GB data set to complete this work. If possible, please send it to my email: [email protected]. Thanks for your reply.

Compatibility with GAP8 nntool

I am working to port an NCP model to a micro-robotics platform, specifically a GAP8 processor onboard the Crazyflie AI-Deck. The GAP8 developers have provided a toolset and workflow for porting Keras .h5 models into a format that can be run onboard the GAP8 processor; however, I can't get it to work with a keras-ncp model.

I can successfully convert my trained model to .tflite using a slightly modified version of h5_to_tflite.py. However, when I attempt to open the .tflite model with nntool, I get the error

Traceback (most recent call last):
  File "/home/ross/miniconda3/envs/hello_ncp/lib/python3.7/site-packages/cmd2/cmd2.py", line 1661, in onecmd_plus_hooks
    stop = self.onecmd(statement, add_to_history=add_to_history)
  File "/home/ross/miniconda3/envs/hello_ncp/lib/python3.7/site-packages/cmd2/cmd2.py", line 2081, in onecmd
    stop = func(statement)
  File "/home/ross/miniconda3/envs/hello_ncp/lib/python3.7/site-packages/cmd2/decorators.py", line 223, in cmd_wrapper
    return func(cmd2_app, args)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/interpreter/commands/open.py", line 118, in do_open
    self.__open_graph(args)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/interpreter/commands/open.py", line 92, in __open_graph
    G = create_graph(graph_file, opts=opts)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/importer.py", line 52, in create_graph
    graph = importer.create_graph(filename, opts)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/tflite.py", line 103, in create_graph
    self._import_tflite_graph(G, model, opts)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/tflite.py", line 154, in _import_tflite_graph
    self._provisional_outputs, opts)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/tflite.py", line 245, in _import_nodes
    node, all_nodes=all_nodes, G=G, opts=opts, importer=self)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/handlers/handler.py", line 65, in handle
    return ver_handle(node, **kwargs)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/handlers/backend/fill.py", line 56, in version_1
    return cls._common(node, **kwargs)
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/handlers/backend/fill.py", line 40, in _common
    shape = list(cls._verify_constant(inputs[0]))
  File "/home/ross/Projects/AIIA/crazyflie/gap_sdk/tools/nntool/importer/tflite2/handlers/backend_handler.py", line 63, in _verify_constant
    raise ValueError("expected node %s to be constant input" % inp[0].name)
ValueError: expected node CONCATENATION_0_6 to be constant input
EXCEPTION of type 'ValueError' occurred with message: 'expected node CONCATENATION_0_6 to be constant input'

I've opened an issue with the GAP8 developers in the hopes the problem (and fix) might be on their side (see here), however I am also opening an issue here because it is not clear where the fundamental problem lies. My suspicion is that the incompatibility lies in the LTCCell definition, but I don't know enough about either codebase to debug it on my own.

Not an issue, but a suggestion.

Hi Mathias! I'm a great fan of what you and Ramin are doing, and I am currently doing my master's thesis in robotics based on the CfC. Playing around with the ncps package I noticed a pretty extreme improvement in training time using impala-cnn in the atari-tf example (BC). The convolution block I used was pretty simple:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, ReLU, Flatten, Dense

class impalaConvLayer(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size, strides, padding='valid', use_bias=False):
        super(impalaConvLayer, self).__init__()
        self.conv = Conv2D(
            filters=filters,
            kernel_size=kernel_size,
            strides=strides,
            padding=padding,
            use_bias=use_bias,
            kernel_initializer=tf.keras.initializers.VarianceScaling(
                scale=2.0, mode='fan_out', distribution='truncated_normal'
            ),
        )
        self.bn = BatchNormalization(momentum=0.99, epsilon=0.001)
        self.relu = ReLU()

    @tf.function
    def call(self, inputs):
        x = self.conv(inputs)
        x = self.bn(x)
        x = self.relu(x)
        return x

class ImpalaConvBlock(tf.keras.models.Sequential):
    def __init__(self):
        super(ImpalaConvBlock, self).__init__(layers=[
            impalaConvLayer(filters=16, kernel_size=8, strides=4),
            impalaConvLayer(filters=32, kernel_size=4, strides=2),
            impalaConvLayer(filters=32, kernel_size=3, strides=1),
            Flatten(),
            Dense(units=256, activation='relu'),
        ])

As I believe training time on weak computers often discourages students, I think making the example run faster could be wise. What do you think about using impala-cnn before the CfC? Is there something I've overlooked that makes this a bad idea?

Anyways, keep up the good work! What you've accomplished is really inspiring!

Robin

GPU performance issues

Hi!
Thanks for great work!

I've got this model:

ncp_wiring = kncp.wirings.NCP(
    inter_neurons=40,
    command_neurons=14,
    motor_neurons=2*6,
    sensory_fanout=20,
    inter_fanout=10,
    recurrent_command_synapses=10,
    motor_fanin=10,
)
ltc_cell = LTCCell(ncp_wiring)

model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(300, 16)))
model.add(tf.keras.layers.RNN(ltc_cell, return_sequences=False))
model.add(tf.keras.layers.Reshape([6, 2]))

At inference, on a machine with tf 2.3.0 (no GPU), 4 cores, and 14 GB RAM, I get an inference time of about 0.11 s.
With the same tf version with GPU support, on a machine with 4 x NVIDIA Tesla K80 GPUs, 24 cores, and 224 GB RAM, the inference time is about 0.64 s.

tf gives no specific warnings about GPU incompatibilities with the model.

Do your models perform well with GPUs?

Inquiry on LTC Models for Financial Time-Series Prediction

Hello everyone,

I hope you're having a great day. I am currently working on a personal learning project involving financial time-series prediction using LTC models. Specifically, I aim to predict the price of natural gas as a case study.

I am trying to understand the "ltc_example_sinusoidal.ipynb" Google Colab notebook. There are a few aspects that I'm struggling to grasp, and I would greatly appreciate any insights or clarifications.

https://colab.research.google.com/drive/1IvVXVSC7zZPo5w-PfL3mk1MC3PIPw7Vs?usp=sharing

In the section "Plotting the prediction of the trained model," I encountered the line "prediction = model(data_x).numpy()". It seems that the prediction is made on the same initial data that was used for training. Shouldn't predictions be made on a new set of data to properly test the model's performance? Am I misunderstanding something here?

For my specific use case, I am preparing a dataset containing daily natural gas prices in the US for the last 20 years, along with other related variables like daily average temperature, production levels, storage data, etc. I plan to use these vectors as input features (data_x), similar to the example in the notebook: "data_x = np.stack([np.sin(np.linspace(0, 3 * np.pi, N)), np.cos(np.linspace(0, 3 * np.pi, N))], axis=1)". The target variable (data_y) will be the price of the next day or week.

a. Should I provide the entire historical dataset for 20 years and let the model run on it, or would it be more appropriate to use year-long batches for training?

b. Is there any guideline or indication of how many neurons of each type I should use in the model relative to the dataset size or the number of expected cause-consequence patterns?

In theory, the LTC model should have learned the causal relations within the dataset. My intention is to use the trained model to predict the gas price for the next day or week by calling the function "prediction = model(NEWdata_x).numpy()". The NEWdata_x would represent the set of vectors from the last year.

Based on your experience with LTC models, does this approach make sense for financial time-series prediction?

I apologize if some of these questions sound basic; I'm relatively new to this area of study. Any insights would be greatly appreciated. Thank you for your time and support!
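
Not an authoritative answer, but regarding the first question: a chronological hold-out set is the usual way to test a time-series model instead of predicting on the training data. A minimal sketch with placeholder arrays (the real features and targets would replace the random ones):

import numpy as np

N = 1000
data_x = np.random.randn(N, 3).astype(np.float32)   # placeholder feature vectors
data_y = np.random.randn(N, 1).astype(np.float32)   # placeholder next-day targets

split = int(N * 0.8)                                 # no shuffling for time series
train_x, train_y = data_x[None, :split], data_y[None, :split]   # add batch axis
test_x, test_y = data_x[None, split:], data_y[None, split:]
# model.fit(train_x, train_y, ...) trains on the first 80%, then
# prediction = model(test_x).numpy() evaluates on data the model has never seen.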

Interpretation of the overall network architecture in the paper

Hello! In your article "Neural circuit policies enabling auditable autonomy", you give the number of units in the last fully-connected layer of the convolution head as one, which does not match the network structure diagram in the article with 32 sensory neurons. I would like to know if my understanding of the article is off. I hope to get your advice.

Is the implementation of Eq. 3 inconsistent with the paper?

        numerator = (
            cm_t * v_pre
            + self._params["gleak"] * self._params["vleak"]
            + w_numerator ### Why do you use addition instead of multiplication here?
        )
        denominator = cm_t + self._params["gleak"] + w_denominator

        # Avoid dividing by 0 
        v_pre = numerator / (denominator + self._epsilon)
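
For reference, a hedged transcription of what the quoted code computes (read directly from the snippet, assuming cm_t denotes c_m/Δt and that w_numerator and w_denominator are the synaptic sums with and without the reversal potentials E_j); whether this matches Eq. 3 of the paper is exactly the question raised above:

\[
v_{t+\Delta t} \;=\;
\frac{\frac{c_m}{\Delta t}\, v_t \;+\; g_{\text{leak}}\, v_{\text{leak}} \;+\; \sum_j w_j\,\sigma_j(v_t)\, E_j}
     {\frac{c_m}{\Delta t} \;+\; g_{\text{leak}} \;+\; \sum_j w_j\,\sigma_j(v_t) \;+\; \epsilon}
\]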

Data icra2020_imitation_data

Thank you so much for your incredible work, but I have questions related to data from your repository.
Could you please explain more about the data https://github.com/mlech26l/icra_lds/raw/master/icra2020_imitation_data_packed.npz?
How did you collect the data? In the Google Colaboratory notebook, you said that

Input data is obtained from a Sick LMS 1xx laser rangefinder (LiDAR) mounted on the robot. Output variable is the steering direction as a variable in the range [-1,+1], i.e., -1 corresponding to turning left, 0 going straight, and +1 to turning right. Supervised training data was collected by manually steering the robot around the obstacles on 29 different tracks.

However, in closed issue #5, you said that the dataset generated by the active test runs is available for download from the repository. This confuses me: was this dataset collected with a Sick LMS 1xx laser rangefinder (LiDAR) or generated by your algorithm?

Besides, from the following dataset shapes, I understand there are 678 samples with sequences of length 32. Could you please provide information about the (541, 1) dimensions of x_train? If I am not mistaken, 541 would be the number of features, so what do these features represent?

x_train (678, 32, 541, 1)
y_train (678, 32, 1)

problem loading tensorflow models

I'm using your example code from here. I save the model using
model.save(model_dir), but when loading it with model = keras.models.load_model(model_dir) I get an error: TypeError: ('Keyword argument not understood:', 'adjacency_matrix')
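
A possible workaround (an assumption on my part, not the library's documented behaviour): save only the weights and rebuild the architecture in code when reloading, which sidesteps the get_config/from_config round-trip of the custom cell:

import tensorflow as tf
from ncps import wirings
from ncps.tf import LTC

def build_model():
    wiring = wirings.AutoNCP(16, 1)   # hypothetical sizes
    return tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(None, 2)),
        LTC(wiring, return_sequences=True),
    ])

model = build_model()
model.save_weights("ncp_weights.h5")

restored = build_model()
restored.load_weights("ncp_weights.h5")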

AttributeError: 'FullyConnected' object has no attribute 'sensory_adjacency_matrix'

Hi! I am very interested in your work and am using your LTCCell to model time series. However, the following problem appears when I use it; may I ask why? I am a beginner and I hope to get your help.
Here is my code:

def main(argv):
    wiring = kerasncp.wirings.FullyConnected(8, FLAGS.rnn_units)  # 16 units, 8 motor neurons
    ltc_cell = LTCCell(wiring, FLAGS.emb_dim)

    casflow_inputs = tf.keras.layers.Input(shape=(FLAGS.max_seq, FLAGS.emb_dim))
    bn_casflow_inputs = tf.keras.layers.BatchNormalization()(casflow_inputs)

    gru_2 = tf.keras.layers.Bidirectional(tf.keras.layers.RNN(ltc_cell))(bn_casflow_inputs)

    mlp_1 = tf.keras.layers.Dense(128, activation='relu')(gru_2)
    mlp_2 = tf.keras.layers.Dense(64, activation='relu')(mlp_1)
    outputs = tf.keras.layers.Dense(1)(mlp_2)

There is a bug when running it:

[screenshot of the error traceback]
