aqibsaeed / urban-sound-classification

Urban sound classification using Deep Learning

License: Apache License 2.0

Jupyter Notebook 100.00%
urban-sound-classification neural-network deep-learning audio-classification

urban-sound-classification's People

Contributors: aaqibsaeed, aqibsaeed, davidglavas, lincolnhard

urban-sound-classification's Issues

ValueError: Trying to share variable rnn/multi_rnn_cell/cell_0/lstm_cell/kernel, but specified shape (600, 1200) and found shape (320, 1200).

Hi,

I am trying out the RNN notebook and when executing the code, it gives this error:
ValueError: Trying to share variable rnn/multi_rnn_cell/cell_0/lstm_cell/kernel, but specified shape (600, 1200) and found shape (320, 1200).

when executing this line:
output, state = tf.nn.dynamic_rnn(cell, x, dtype = tf.float32)

I suspect the network is being given the wrong n_input value, but I am not sure what the correct value should be.

The full stack trace is below:

Traceback (most recent call last):
  File "RNNClassifier.py", line 79, in <module>
    prediction = RNN(x, weight, bias)
  File "RNNClassifier.py", line 74, in RNN
    output, state = tf.nn.dynamic_rnn(cell, x, dtype = tf.float32)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn
    dtype=dtype)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 737, in _dynamic_rnn_loop
    swap_memory=swap_memory)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2770, in while_loop
    result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2599, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2549, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 722, in _time_step
    (output, new_state) = call_cell()
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 708, in <lambda>
    call_cell = lambda: cell(input_t, state)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 180, in __call__
    return super(RNNCell, self).__call__(inputs, state)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 441, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 916, in call
    cur_inp, new_state = cell(cur_inp, cur_state)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 180, in __call__
    return super(RNNCell, self).__call__(inputs, state)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 441, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 542, in call
    lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=True)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 1017, in _linear
    initializer=kernel_initializer)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 360, in get_variable
    validate_shape=validate_shape, use_resource=use_resource)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1405, in wrapped_custom_getter
    *args, **kwargs)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 183, in _rnn_get_variable
    variable = getter(*args, **kwargs)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 183, in _rnn_get_variable
    variable = getter(*args, **kwargs)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
    use_resource=use_resource)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 669, in _get_single_variable
    found_var.get_shape()))
ValueError: Trying to share variable rnn/multi_rnn_cell/cell_0/lstm_cell/kernel, but specified shape (600, 1200) and found shape (320, 1200).
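For reference, the mismatched shapes fall directly out of LSTM kernel arithmetic. Assuming the notebook's hyperparameters (n_input = 20 features per frame, n_hidden = 300), the first layer's kernel is (20 + 300, 4 * 300) = (320, 1200), while the second layer's is (300 + 300, 4 * 300) = (600, 1200), so the two layers cannot share one variable -- which is what `rnn_cell.MultiRNNCell([cell] * 2)` asks newer TensorFlow versions to do. A quick sketch of the arithmetic:

```python
# The fused LSTM kernel shape is (input_size + n_hidden, 4 * n_hidden).
# With the hyperparameters assumed above, the two stacked layers need
# kernels of different shapes, so one shared variable cannot serve both.

def lstm_kernel_shape(input_size, n_hidden):
    """Shape of the fused LSTM weight matrix for one layer."""
    return (input_size + n_hidden, 4 * n_hidden)

n_input, n_hidden = 20, 300          # assumed notebook defaults

layer0 = lstm_kernel_shape(n_input, n_hidden)    # fed the raw features
layer1 = lstm_kernel_shape(n_hidden, n_hidden)   # fed layer 0's output

print(layer0)  # (320, 1200) -- the "found" shape
print(layer1)  # (600, 1200) -- the "specified" shape
```

The usual fix is to build an independent cell per layer, e.g. `rnn_cell.MultiRNNCell([rnn_cell.LSTMCell(n_hidden) for _ in range(2)])`, instead of repeating one cell object.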

An error ocurred while starting the kernel

2018 22:29:08.263250: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
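As an aside, this AVX line is an informational log, not an error: the prebuilt TensorFlow binary simply does not use every instruction set the CPU supports. If the kernel still fails to start, the cause lies elsewhere. The message can be silenced via an environment variable set before tensorflow is imported:

```python
# The AVX message is informational, not a failure. Setting
# TF_CPP_MIN_LOG_LEVEL before importing tensorflow hides INFO logs
# (0 = all, 1 = hide INFO, 2 = hide INFO+WARNING, 3 = errors only).
import os

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
# import tensorflow as tf   # import *after* setting the variable

print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```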

RNN code doesn't run

Hi
I tried to run your RNN code, but I got this error:
Traceback (most recent call last):
File "/home/sam95/PycharmProjects/SED_using_CNN/RNN_SED.py", line 43, in
tr_features,tr_labels = extract_features(parent_dir,tr_sub_dirs)
File "/home/sam95/PycharmProjects/SED_using_CNN/RNN_SED.py", line 25, in extract_features
if(len(sound_clip[start:end]) == window_size):
TypeError: slice indices must be integers or None or have an index method

Can you help me solve this problem?
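This TypeError typically appears when the notebook is run under Python 3, where `/` performs float division, so the window start/end values stop being valid slice indices. A sketch of the windowing helper with an explicit int cast (the half-overlap step is assumed from the repository's feature-extraction code; names are illustrative):

```python
# Under Python 3 the half-overlap step (window_size / 2) yields a float,
# and floats are not valid slice indices. Casting when yielding restores
# the Python 2 behaviour without changing the window placement.
def windows(data, window_size):
    start = 0
    while start < len(data):
        yield int(start), int(start + window_size)
        start += window_size / 2  # float under Python 3 -> cast above

clip = list(range(100))
spans = [(s, e) for s, e in windows(clip, 40)]
print(spans[:3])  # [(0, 40), (20, 60), (40, 80)]
```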

np.int error

I get the error invalid literal for int() with base 10 on the following line. How can I fix it?
return np.array(features), np.array(labels,dtype = np.int)
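That error usually means one of the label strings is not numeric -- for example, a filename split picked the wrong token -- rather than a problem with np.int itself. Separately, np.int was removed in NumPy 1.24, so a concrete dtype is the safer choice. A sketch (the labels list below is hypothetical):

```python
import numpy as np

labels = ["3", "7", "0.wav", "5"]   # hypothetical parsed label strings

# Find the offending entries before converting, instead of letting
# np.array(..., dtype=...) raise "invalid literal for int()".
bad = [s for s in labels if not s.isdigit()]
print(bad)  # ['0.wav'] -- a split picked the wrong filename token

# np.int was removed in NumPy 1.24; use a concrete dtype such as np.int64.
clean = np.array([int(s) for s in labels if s.isdigit()], dtype=np.int64)
print(clean)  # [3 7 5]
```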

Why didn't you use the apply_max_pool() function?

You defined it in your CNN code, but you never call it.
I want to know whether there is no need for it, or whether it was forgotten.
According to the paper you referenced in your blog, there should be a max-pool operation.

Also, when I ran your code on the UrbanSound8K dataset, the loss didn't converge.
Did you get a good result?
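For readers wondering what the omitted operation would do: 2x2 max pooling keeps the largest activation in each non-overlapping window, halving both spatial dimensions. A NumPy illustration of the concept (this is a sketch, not the repository's TensorFlow apply_max_pool):

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling over an (H, W) feature map,
    H and W assumed even -- equivalent in spirit to tf.nn.max_pool
    with ksize=2, stride=2, padding='VALID'."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [9, 1, 2, 3],
                 [0, 5, 4, 1]])
print(max_pool_2x2(fmap))
# [[4 8]
#  [9 4]]
```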

Notebook

@aqibsaeed, is this model able to run in plain Python, without using a notebook?
Thank you

NoBackendError

Just want to raise this issue in case someone else has also encountered it (and to share my solution). I got this no-backend error:

Traceback (most recent call last):
  File "CNNClassifier.py", line 54, in <module>
    features,labels = extract_features(parent_dir,sub_dirs)
  File "CNNClassifier.py", line 25, in extract_features
    sound_clip,s = librosa.load(fn)
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/librosa/core/audio.py", line 107, in load
    with audioread.audio_open(os.path.realpath(path)) as input_file:
  File "/datadrive/xiaoyzhu/python2env/local/lib/python2.7/site-packages/audioread/__init__.py", line 116, in audio_open
    raise NoBackendError()

On Ubuntu, I used

sudo apt-get install libav-tools

and this issue got solved. (On newer Ubuntu releases, where libav-tools is no longer packaged, installing ffmpeg provides an equivalent audioread backend.)

Help me

Please help me with this project. I don't know anything about deep learning, so please help me complete it, and please create video tutorials.

ValueError: Variable rnn/multi_rnn_cell/cell_0/lstm_cell/kernel already exists

Traceback (most recent call last):

File "", line 1, in
prediction = RNN(x, weight, bias)

File "", line 4, in RNN
output, state = tf.nn.dynamic_rnn(cell, x, dtype = tf.float32)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 574, in dynamic_rnn
dtype=dtype)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 737, in _dynamic_rnn_loop
swap_memory=swap_memory)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2770, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2599, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2549, in _BuildLoop
body_result = body(*packed_vars_for_body)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 722, in _time_step
(output, new_state) = call_cell()

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 708, in <lambda>
call_cell = lambda: cell(input_t, state)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__
return super(RNNCell, self).__call__(inputs, state)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py", line 441, in __call__
outputs = self.call(inputs, *args, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 916, in call
cur_inp, new_state = cell(cur_inp, cur_state)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__
return super(RNNCell, self).__call__(inputs, state)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py", line 441, in __call__
outputs = self.call(inputs, *args, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 542, in call
lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units, bias=True)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1017, in _linear
initializer=kernel_initializer)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 360, in get_variable
validate_shape=validate_shape, use_resource=use_resource)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1405, in wrapped_custom_getter
*args, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 183, in _rnn_get_variable
variable = getter(*args, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 183, in _rnn_get_variable
variable = getter(*args, **kwargs)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 352, in _true_getter
use_resource=use_resource)

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 664, in _get_single_variable
name, "".join(traceback.format_list(tb))))

ValueError: Variable rnn/multi_rnn_cell/cell_0/lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
self._traceback = _extract_stack()
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)

IndexError: list index out of range


IndexError Traceback (most recent call last)
in
5 'fold9','fold10'])
6 for sub_dir in sub_dirs:
----> 7 features, labels = extract_features(parent_dir,sub_dir)
8 np.savez("{0}{1}".format(save_dir, sub_dir),
9 features=features,

in extract_features(parent_dir, sub_dirs, file_ext, bands, frames)
12 segment_log_specgrams, segment_labels = [], []
13 sound_clip,sr = librosa.load(fn)
---> 14 label = int(fn.split('/')[2].split('-')[1])
15 for (start,end) in _windows(sound_clip,window_size):
16 if(len(sound_clip[start:end]) == window_size):

IndexError: list index out of range
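The failing line, label = int(fn.split('/')[2].split('-')[1]), hard-codes both the path separator and the directory depth, so it breaks on Windows paths or a different nesting. A more robust sketch (assuming UrbanSound8K's fsID-classID-occurrenceID-sliceID.wav naming) parses the class ID from the basename instead:

```python
import os

def label_of(fn):
    """Parse the UrbanSound8K classID from the file name itself rather
    than from a fixed path component, so directory depth and the OS path
    separator no longer matter."""
    return int(os.path.basename(fn).split('-')[1])

print(label_of("Sound-Data/fold1/7061-6-0-0.wav"))  # 6
```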

Can I run this code in Spyder on my Windows laptop?

I get the error below:

An error ocurred while starting the kernel
2018 23:01:33.118432: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

Problem about the window sizes on different dataset

I'm using a different dataset (538 samples) with the CNN code. The length of each sample differs. I realized that some of the data is never even analyzed, since it doesn't satisfy if(len(eq_data[start:end]) == window_size).

I tried changing the window_size argument, but then I got errors such as the following (with window_size = 256):

Traceback (most recent call last):
File "mac_learn_spec_cnn.py", line 112, in
features,labels = extract_features(parent_dir)
File "mac_learn_spec_cnn.py", line 97, in extract_features
log_specgrams = np.asarray(log_specgrams).reshape(len(log_specgrams),bands,frames,1)
ValueError: cannot reshape array of size 1509480 into shape (1198,60,41,1)

What would you recommend to overcome this problem?
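One way to see the mismatch: the reshape target still uses the frames value that matched the original window size, while the data implies a different frame count. With bands = 60 (the repository's default, assumed here), the numbers in the error factor as follows:

```python
# The flattened spectrogram array must factor exactly as
# (segments, bands, frames, 1). The frame count actually implied by the
# data in the reported ValueError is:
total_values = 1509480   # array size from the ValueError
segments     = 1198      # len(log_specgrams)
bands        = 60        # repository default (assumed)

frames = total_values // (segments * bands)
print(frames)  # 21, not the hard-coded 41
assert segments * bands * frames == total_values

# So the reshape should use the frames implied by the chosen window_size:
# np.asarray(log_specgrams).reshape(len(log_specgrams), bands, frames, 1)
```

In other words, frames must be kept consistent with window_size rather than left at its old value.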

'epoch' is not defined - classification using RNN

Can you help me with this issue, please?


NameError Traceback (most recent call last)
in ()
8 _, c = session.run([optimizer, loss_f],feed_dict={x: batch_x, y : batch_y})
9
---> 10 if epoch % display_step == 0:
11 # Calculate batch accuracy
12 acc = session.run(accuracy, feed_dict={x: batch_x, y: batch_y})

NameError: name 'epoch' is not defined
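The name epoch only exists inside a for epoch in range(...) loop, so this NameError usually means the display check ended up outside that loop (or the loop variable was renamed). A minimal skeleton of the intended control flow, with placeholder functions standing in for the session.run calls:

```python
# Skeleton of the training loop the notebook intends: the display check
# sits *inside* the epoch loop, after the inner batch loop, so `epoch`
# is always in scope. train_step is a stand-in for
# session.run([optimizer, loss_f], feed_dict=...).
training_epochs, display_step, n_batches = 6, 2, 3

def train_step(batch):
    return 0.1            # dummy cost

logged = []
for epoch in range(training_epochs):
    for batch in range(n_batches):
        cost = train_step(batch)
    if epoch % display_step == 0:   # inside the loop -> no NameError
        logged.append(epoch)

print(logged)  # [0, 2, 4]
```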

Accuracy of 0.5619

Hi, I'm a student studying RNNs. I changed the number of training epochs from 3 to 50 in the RNN notebook. After training, the accuracy is about 0.5619. Is this correct?

Urban Sound Classification using CNN.ipynb test accuracy 0.149?

I ran Urban Sound Classification using CNN.ipynb and got a test accuracy of 0.149, while the accuracy in the paper "Environmental sound classification with convolutional neural networks" by Karol J. Piczak is more than 0.5. I wonder whether some of the code is wrong? Thanks.

ValueError: Cannot feed value of shape (0, 0) for Tensor 'Placeholder_81:0', which has shape '(?, 10)'

My Sound-Data directory directly contains fold1, ..., fold10.
However, I get ValueError: Cannot feed value of shape (0, 0) for Tensor 'Placeholder_81:0', which has shape '(?, 10)'

Please help.

It happens here:
_, c = session.run([optimizer, cross_entropy],feed_dict={X: batch_x, Y : batch_y})

I checked some values:

Y
Out[227]: <tf.Tensor 'Placeholder_85:0' shape=(?, 10) dtype=float32>

batch_y
Out[228]: array([], shape=(0, 0), dtype=float64)

labels
Out[229]: array([], shape=(0, 0), dtype=float64)


view spectrogram for CNN similar to FFN

I'm trying to figure out how to view the spectrogram and its deltas for the CNN code similar to the way the spectrogram is viewed in the feedforward network code. Any help would be appreciated.

Dimensionality mismatch in RNN code

Hello,

The dimensions of the extracted features are supposed to be [None, bands, frames], i.e. [None, 20, 41].
However, the placeholder for x expects dimensions [None, n_steps, n_input], which turns out to be [None, 41, 20].
I encountered this error while trying to train the RNN on another dataset. Am I missing something?
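If the features really come out as [None, bands, frames] while the placeholder expects [None, n_steps, n_input] = [None, frames, bands], swapping the last two axes reconciles them (the shape values below are taken from this report):

```python
import numpy as np

bands, frames = 20, 41
features = np.zeros((8, bands, frames))   # [batch, bands, frames]

# The RNN placeholder is [batch, n_steps, n_input] = [batch, frames, bands]:
# each time step is one frame, each input vector is one column of bands.
as_steps = np.transpose(features, (0, 2, 1))

print(as_steps.shape)  # (8, 41, 20)
```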

Error Message in Block with cost-history

Hi, I am getting the following error message:


ValueError Traceback (most recent call last)
in ()
4 sess.run(init)
5 for epoch in range(training_epochs):
----> 6 _, cost = sess.run([optimizer, cost_function], feed_dict={X: train_x, Y: train_y})
7 cost_history = np.append(cost_history, cost)
8

/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
787 try:
788 result = self._run(None, fetches, feed_dict, options_ptr,
--> 789 run_metadata_ptr)
790 if run_metadata:
791 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
973 'Cannot feed value of shape %r for Tensor %r, '
974 'which has shape %r'
--> 975 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
976 if not self.graph.is_feedable(subfeed_t):
977 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (0, 0) for Tensor 'Placeholder_1:0', which has shape '(?, 10)'

How to organize the datasets?

The datasets of UrbanSound8K is organized as follows:
UrbanSound8K
---audio
------fold1
------fold2
------fold3
------fold4
------fold5
------fold6
------fold7
------fold8
------fold9
------fold10
---metadata

So, should I first create a directory named Sound-Data,
and then copy fold1 and fold2 into it as sub-directories?
Is that right?

Thank you.
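Under the layout this thread suggests (a Sound-Data parent directory holding fold sub-directories), the copy step can be scripted. The path names below are assumptions drawn from this thread, not something the repository documents:

```python
import os, shutil

# Copy the folds the notebook expects (fold1 and fold2 here) out of the
# UrbanSound8K/audio tree into a flat Sound-Data directory. Both path
# names are assumptions based on this thread.
src_root = os.path.join("UrbanSound8K", "audio")
dst_root = "Sound-Data"

def assemble(folds=("fold1", "fold2")):
    os.makedirs(dst_root, exist_ok=True)
    for fold in folds:
        src = os.path.join(src_root, fold)
        dst = os.path.join(dst_root, fold)
        if not os.path.isdir(dst):
            shutil.copytree(src, dst)   # Sound-Data/fold1, Sound-Data/fold2
```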

Hardcoded path to sound files in `Urban Sound Classification using NN`

https://github.com/aqibsaeed/Urban-Sound-Classification/blob/3f41ec2094f12107f9d336a12cb853b76f79e264/Urban%20Sound%20Classification%20using%20NN.ipynb does not run correctly. Specifically, the block

cost_history = np.empty(shape=[1],dtype=float)
y_true, y_pred = None, None
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):            
        _,cost = sess.run([optimizer,cost_function],feed_dict={X:train_x,Y:train_y})
        cost_history = np.append(cost_history,cost)
    
    y_pred = sess.run(tf.argmax(y_,1),feed_dict={X: test_x})
    y_true = sess.run(tf.argmax(test_y,1))

yields the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-62c28f30e413> in <module>()
      4     sess.run(init)
      5     for epoch in range(training_epochs):
----> 6         _,cost = sess.run([optimizer,cost_function],feed_dict={X:train_x,Y:train_y})
      7         cost_history = np.append(cost_history,cost)
      8 

/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    787     try:
    788       result = self._run(None, fetches, feed_dict, options_ptr,
--> 789                          run_metadata_ptr)
    790       if run_metadata:
    791         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
    973                 'Cannot feed value of shape %r for Tensor %r, '
    974                 'which has shape %r'
--> 975                 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    976           if not self.graph.is_feedable(subfeed_t):
    977             raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (0, 0) for Tensor u'Placeholder_1:0', which has shape '(?, 10)'

NoBackendError

When I compile, I get this:

raise NoBackendError()

NoBackendError

Error Running the Notebook


Hi, when I tried running the notebook "Urban Sound Classification using CNN.ipynb", I came across this error on Jupyter Notebook. May I know what is wrong?

"AssertionError: " with empty error message when running just the imports of convolutional neural network


AssertionError Traceback (most recent call last)
in
2 import glob
3 import os
----> 4 import librosa
5 import numpy as np
6 from sklearn.model_selection import KFold

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\__init__.py in
10 # And all the librosa sub-modules
11 from ._cache import cache
---> 12 from . import core
13 from . import beat
14 from . import decompose

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\core\__init__.py in
123 """
124
--> 125 from .time_frequency import * # pylint: disable=wildcard-import
126 from .audio import * # pylint: disable=wildcard-import
127 from .spectrum import * # pylint: disable=wildcard-import

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\core\time_frequency.py in
10
11 from ..util.exceptions import ParameterError
---> 12 from ..util.deprecation import Deprecated
13
14 __all__ = ['frames_to_samples', 'frames_to_time',

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\util\__init__.py in
75 """
76
---> 77 from .utils import * # pylint: disable=wildcard-import
78 from .files import * # pylint: disable=wildcard-import
79 from .matching import * # pylint: disable=wildcard-import

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\util\utils.py in
13 from .._cache import cache
14 from .exceptions import ParameterError
---> 15 from .decorators import deprecated
16
17 # Constrain STFT block sizes to 256 KB

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\librosa\util\decorators.py in
7 from decorator import decorator
8 import six
----> 9 from numba.decorators import jit as optional_jit
10
11 __all__ = ['moved', 'deprecated', 'optional_jit']

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\decorators.py in
12 from . import config, sigutils
13 from .errors import DeprecationError, NumbaDeprecationWarning
---> 14 from .targets import registry
15 from .stencil import stencil
16

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\targets\registry.py in
3 import contextlib
4
----> 5 from . import cpu
6 from .descriptors import TargetDescriptor
7 from .. import dispatcher, utils, typing

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\targets\cpu.py in
12 from numba import utils, cgutils, types
13 from numba.utils import cached_property
---> 14 from numba.targets import (
15 callconv, codegen, externals, intrinsics, listobj, setobj, dictimpl,
16 )

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\targets\listobj.py in
1069
1070 @overload_method(types.List, "sort")
-> 1071 def ol_list_sort(lst, key=None, reverse=False):
1072
1073 _sort_check_key(key)

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\core\extending.py in decorate(overload_func)

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\core\typing\templates.py in make_overload_method_template(typ, attr, overload_func, inline)

c:\users\jerkm\appdata\local\programs\python\python38-32\lib\site-packages\numba\core\typing\templates.py in make_overload_attribute_template(typ, attr, overload_func, inline, base)

AssertionError:

RNN Dimensions must be equal, but are 600 and 320 for 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [?,600], [320,1200].


ValueError Traceback (most recent call last)
in ()
----> 1 prediction = RNN(x, weight, bias)
2
3 # Define loss and optimizer
4 loss_f = -tf.reduce_sum(y * tf.log(prediction))
5 optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(loss_f)

in RNN(x, weight, bias)
2 cell = rnn_cell.LSTMCell(n_hidden,state_is_tuple = True)
3 cell = rnn_cell.MultiRNNCell([cell] * 2)
----> 4 output, state = tf.nn.dynamic_rnn(cell, x, dtype = tf.float32)
5 output = tf.transpose(output, [1, 0, 2])
6 last = tf.gather(output, int(output.get_shape()[0]) - 1)

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in dynamic_rnn(cell, inputs, sequence_length, initial_state, dtype, parallel_iterations, swap_memory, time_major, scope)
612 swap_memory=swap_memory,
613 sequence_length=sequence_length,
--> 614 dtype=dtype)
615
616 # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in _dynamic_rnn_loop(cell, inputs, initial_state, parallel_iterations, swap_memory, sequence_length, dtype)
775 loop_vars=(time, output_ta, state),
776 parallel_iterations=parallel_iterations,
--> 777 swap_memory=swap_memory)
778
779 # Unpack final output if not using output tuples.

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name)
2814 loop_context = WhileContext(parallel_iterations, back_prop, swap_memory) # pylint: disable=redefined-outer-name
2815 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, loop_context)
-> 2816 result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
2817 return result
2818

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in BuildLoop(self, pred, body, loop_vars, shape_invariants)
2638 self.Enter()
2639 original_body_result, exit_vars = self._BuildLoop(
-> 2640 pred, body, original_loop_vars, loop_vars, shape_invariants)
2641 finally:
2642 self.Exit()

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
2588 structure=original_loop_vars,
2589 flat_sequence=vars_for_body_with_tensor_arrays)
-> 2590 body_result = body(*packed_vars_for_body)
2591 if not nest.is_sequence(body_result):
2592 body_result = [body_result]

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in _time_step(time, output_ta_t, state)
760 skip_conditionals=True)
761 else:
--> 762 (output, new_state) = call_cell()
763
764 # Pack state if using state tuples

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in <lambda>()
746
747 input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
--> 748 call_cell = lambda: cell(input_t, state)
749
750 if sequence_length is not None:

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.pyc in __call__(self, inputs, state, scope)
181 with vs.variable_scope(vs.get_variable_scope(),
182 custom_getter=self._rnn_get_variable):
--> 183 return super(RNNCell, self).__call__(inputs, state)
184
185 def _rnn_get_variable(self, getter, *args, **kwargs):

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/base.pyc in __call__(self, inputs, *args, **kwargs)
573 if in_graph_mode:
574 self._assert_input_compatibility(inputs)
--> 575 outputs = self.call(inputs, *args, **kwargs)
576
577 if outputs is None:

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.pyc in call(self, inputs, state)
1064 [-1, cell.state_size])
1065 cur_state_pos += cell.state_size
-> 1066 cur_inp, new_state = cell(cur_inp, cur_state)
1067 new_states.append(new_state)
1068

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.pyc in __call__(self, inputs, state, scope)
181 with vs.variable_scope(vs.get_variable_scope(),
182 custom_getter=self._rnn_get_variable):
--> 183 return super(RNNCell, self).__call__(inputs, state)
184
185 def _rnn_get_variable(self, getter, *args, **kwargs):

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/base.pyc in __call__(self, inputs, *args, **kwargs)
573 if in_graph_mode:
574 self._assert_input_compatibility(inputs)
--> 575 outputs = self.call(inputs, *args, **kwargs)
576
577 if outputs is None:

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.pyc in call(self, inputs, state)
609
610 # i = input_gate, j = new_input, f = forget_gate, o = output_gate
--> 611 lstm_matrix = self._linear1([inputs, m_prev])
612 i, j, f, o = array_ops.split(
613 value=lstm_matrix, num_or_size_splits=4, axis=1)

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/rnn_cell_impl.pyc in __call__(self, args)
1187 res = math_ops.matmul(args[0], self._weights)
1188 else:
-> 1189 res = math_ops.matmul(array_ops.concat(args, 1), self._weights)
1190 if self._build_bias:
1191 res = nn_ops.bias_add(res, self._biases)

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
1889 else:
1890 return gen_math_ops._mat_mul(
-> 1891 a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
1892
1893

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
2435 _, _, _op = _op_def_lib._apply_op_helper(
2436 "MatMul", a=a, b=b, transpose_a=transpose_a, transpose_b=transpose_b,
-> 2437 name=name)
2438 _result = _op.outputs[:]
2439 _inputs_flat = _op.inputs

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.pyc in _apply_op_helper(self, op_type_name, name, **keywords)
785 op = g.create_op(op_type_name, inputs, output_types, name=scope,
786 input_types=input_types, attrs=attr_protos,
--> 787 op_def=op_def)
788 return output_structure, op_def.is_stateful, op
789

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
2956 op_def=op_def)
2957 if compute_shapes:
-> 2958 set_shapes_for_outputs(ret)
2959 self._add_op(ret)
2960 self._record_op_seen_by_control_dependencies(ret)

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in set_shapes_for_outputs(op)
2207 shape_func = _call_cpp_shape_fn_and_require_op
2208
-> 2209 shapes = shape_func(op)
2210 if shapes is None:
2211 raise RuntimeError(

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in call_with_requiring(op)
2157
2158 def call_with_requiring(op):
-> 2159 return call_cpp_shape_fn(op, require_shape_fn=True)
2160
2161 _call_cpp_shape_fn_and_require_op = call_with_requiring

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.pyc in call_cpp_shape_fn(op, require_shape_fn)
625 res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
626 input_tensors_as_shapes_needed,
--> 627 require_shape_fn)
628 if not isinstance(res, dict):
629 # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).

/home/houssam/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.pyc in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
689 missing_shape_fn = True
690 else:
--> 691 raise ValueError(err.message)
692
693 if missing_shape_fn:

ValueError: Dimensions must be equal, but are 600 and 320 for 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [?,600], [320,1200].

ValueError: Cannot feed value of shape (0, 0) for Tensor 'Placeholder_1:0', which has shape '(?, 10)'

Traceback (most recent call last):
File "/neuralNet.py", line 160, in
_, cost = sess.run([optimizer, cost_function], feed_dict={X: train_x, Y: train_y})
File "\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
run_metadata_ptr)
File "\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))

I'm running nearly the exact code, with the only addition being import librosa.display.
