Comments (19)
https://drive.google.com/file/d/1xwsQFV7VS4ngEQPy-aGrvO-ZppLsdXKi/view?usp=sharing
I trained ds_cnn_medium with default parameters.
from ml-examples.
See here for training code for a compatible model.
It is possible that the micro speech model needs different input features from what is expected by the existing kws_micronet_m model (and the ds_cnn model linked above). So if the input sizes do not match, there will be issues running the application.
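One quick way to catch such a mismatch is to inspect the tflite file's input tensor before deploying it. A sketch using the TFLite Python interpreter (the tiny stand-in model below exists only so the snippet runs end to end; with a real model you would pass model_path="your_model.tflite" instead):

```python
import tensorflow as tf

# Build a tiny stand-in model so this sketch is self-contained; for a real
# check, construct the interpreter with model_path="ds_cnn_quantized.tflite".
inputs = tf.keras.Input(shape=(49, 10, 1))  # e.g. 49 MFCC frames x 10 coefficients
model = tf.keras.Model(inputs, tf.keras.layers.Flatten()(inputs))
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
detail = interpreter.get_input_details()[0]
print(detail["shape"], detail["dtype"])  # must match what the application feeds in
```

If the printed shape differs from what the application's front end produces, the model will not run correctly in the example.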
I used these scripts to train the model (medium version). It did not work. I am going to check the feature size.
Hi Richard
I trained the ds_cnn model with this configuration (dct_coefficient_count=10):
```shell
python train.py --model_architecture ds_cnn --model_size_info 5 172 10 4 2 1 172 3 3 2 2 172 3 3 1 1 172 3 3 1 1 172 3 3 1 1 --dct_coefficient_count 10 --window_size_ms 40 --window_stride_ms 20 --learning_rate 0.0005,0.0001,0.00002 --how_many_training_steps 10000,10000,10000 --summaries_dir work/DS_CNN/DS_CNN_M/retrain_logs --train_dir work/DS_CNN/DS_CNN_M/training
```
And I moved the weights to this file:
ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp
But it did not work. Is it possible to share how you produced the pretrained model? I can use the pretrained model, but the others do not work.
When you say it doesn't work, do you mean the results of the model when used in the application are not what you expect? If that is the case, then it might be caused by the labels vector here.
The pretrained MicroNet model's label order is different from that produced by the training scripts. Try changing the labels in the application to that order and see if that helps.
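To illustrate what such a reorder means, here is a Python sketch that permutes a model's output scores from one label order into another. Both label lists below are illustrative placeholders, not the actual orders; copy the real ones from the training script's output and from the application's labels vector.

```python
# Placeholder label orders -- replace with the real lists from your checkout.
training_labels = ["_silence_", "_unknown_", "yes", "no", "up", "down",
                   "left", "right", "on", "off", "stop", "go"]
app_labels = ["down", "go", "left", "no", "off", "on",
              "right", "stop", "up", "yes", "_silence_", "_unknown_"]

def remap(scores, src, dst):
    """Return scores permuted from src label order into dst label order."""
    index = {label: i for i, label in enumerate(src)}
    return [scores[index[label]] for label in dst]

scores = list(range(12))  # dummy scores, one per label, in training order
print(remap(scores, training_labels, app_labels))
```

Alternatively, and more simply, change the label strings in the application to match the training order exactly, as suggested above.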
Nope. I trained ds_cnn from the ML Zoo with the same labels and the same output size. I generated the .tflite file, and I converted it with this:
https://github.com/thodoxuan99/KWS_MCU/blob/main/kws_cortex_m/tflite_to_tflu.py
I moved the weights to this place:
ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp
It did not generate anything: empty screen, no signal shown. It did not give any errors when building the project. I did not check the label order, because it did not produce any output.
Would you be able to share the tflite file, and I can try replicating the issue?
Okay, I converted your model with

```shell
xxd -i ds_cnn_quantized.tflite > model_data.cc
```

and copied the contents of the array into ARM-software/ML-examples/tree/main/cmsis-pack-examples/kws/src/kws_micronet_m.tflite.cpp, overwriting the existing model data that is there.
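For anyone without xxd to hand, the same conversion can be sketched in Python (the file and array names below are just examples):

```python
def to_c_array(data: bytes, name: str = "model_data") -> str:
    """Render raw bytes as a C array, similar to `xxd -i` output."""
    hex_bytes = ", ".join(f"0x{b:02x}" for b in data)
    return (f"unsigned char {name}[] = {{ {hex_bytes} }};\n"
            f"unsigned int {name}_len = {len(data)};\n")

# A few stand-in bytes; in practice read your model file instead:
# data = open("ds_cnn_quantized.tflite", "rb").read()
print(to_c_array(b"\x00\x7f", name="g_kws_model"))
```

The emitted array contents can then be pasted over the existing model data in kws_micronet_m.tflite.cpp, keeping the variable names the application already uses.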
It builds but I get the following output when running on Keil Studio Cloud.
```
INFO - Added support to op resolver
INFO - Creating allocator using tensor arena at 0x31000000
INFO - Allocating tensors
ERROR - tensor allocation failed!
ERROR - Failed to initialise model
```
Running it again locally, I believe this is caused by the Softmax operator in your model, which isn't present in the pretrained MicroNet one. TFLite Micro needs to know which operators are present in the model for it to work; otherwise it will throw an error.
I manually registered this operator by editing the local cmsis-pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc
and also edited these 2 lines in the main (as those numbers don't align with your model's input shape). I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc
to change the useSoftmax parameter within the DoPostProcess() function to false.
Inference now runs, albeit with the wrong result, which may simply be down to the new model.
Ideally we should have some way for a user to manually register new operators via the API, in case they have changed the model as you have done. We have a task for this, but I am not sure when it will be completed.
You can make local edits to the cmsis-packs yourself to get things working, but this is probably not a sustainable solution. Instead, I think you have two ways forward:
- You can adjust the retrained model so it doesn't have softmax at the end when you produce your tflite file; this way you don't need to edit the cmsis-packs.
- Switch to using the ML-embedded-evaluation-kit, which is what this cmsis-pack example is based on (and where the cmsis-packs come from). Swapping to it will allow you to more easily modify the KWS use case, change models, and even generate new cmsis-packs if you wish.
Hi Richard
I commented out this line to deactivate the softmax, and I return the output via the "x" variable. My new code looks like this:
```python
# Squeeze before passing to output fully connected layer.
x = tf.reshape(x, shape=(-1, conv_feat[layer_no]))
# Output connected layer.
# output = tf.keras.layers.Dense(units=label_count, activation='softmax')(x)
return tf.keras.Model(inputs, x)
```
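Worth noting: commenting out the Dense line removes the classifier head entirely, not just the softmax. If the aim is only to drop the activation so the model emits raw logits, one option is to keep the final layer without an activation. A sketch, where label_count and the feature width are illustrative stand-ins for the values the training script computes:

```python
import tensorflow as tf

label_count, feat_dim = 12, 172  # illustrative sizes, not the script's real values
inputs = tf.keras.Input(shape=(feat_dim,))
# Keep the final fully connected layer, just without activation='softmax',
# so the model still maps features to one raw score (logit) per label.
logits = tf.keras.layers.Dense(units=label_count)(inputs)
model = tf.keras.Model(inputs, logits)
print(model.output_shape)  # (None, 12)
```

With logits as the output, the application's post-processing can still pick the argmax class even with useSoftmax set to false.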
I trained the model, replaced the weights, and built it (default labels and parameters used).
But now, while it shows the audio signal, it does not show the keyword or any text.
Should I change it to ReLU, or is there something else I need to fix?
BR
Mesut
Make sure that the labels in the application here are changed to match the label order from the retraining here.
It matches, but it does not give any output, not even a wrong keyword. It did not print any text to the screen.
Is there any open-source code for the training script of the MicroNet model that you use?
Hi
Is it possible to share your modifications in this part? Then I can try the same changes and get a result with my own model.
The part in question:
I manually registered this operator by editing the local cmsis-pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc and also editing [these 2 lines](https://github.com/ARM-software/ML-examples/blob/d4816d163ffbddb37e3d5e01cc3351e9452b2abb/cmsis-pack-examples/kws/src/main_wav.cpp#L97) in the main (as those numbers don't align with your model input shape). I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
Apologies for the delay in responding. We don't have an open-source model description of MicroNet, but you can use the training code you already have and add a MicroNet-style model yourself if you wish, by inspecting the tflite file in Netron and implementing it in the right place in the training code. However, that shouldn't be necessary now that you have removed softmax from your model; your model should now work.
To fix the blank output you will also need to make the change I did to these 2 lines, as I think your input shape is different from the included model's.
They should change to this, I believe:

```cpp
const uint32_t numMfccFeatures = 10;
const uint32_t numMfccFrames = 49;
```

The code was originally getting these from the model input shape. Give that a try and hopefully it will no longer give blank output (fingers crossed).
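As a sanity check on those two constants, the frame count follows from the front-end settings passed to train.py. A sketch of the arithmetic, assuming speech-commands-style clips of 1 second sampled at 16 kHz (both assumptions, not stated in the thread):

```python
# Derive the expected MFCC feature-matrix shape from the audio front end.
sample_rate = 16000                  # Hz (assumption: 1 s speech-commands clips)
clip_samples = sample_rate * 1       # 1 second of audio
window = sample_rate * 40 // 1000    # --window_size_ms 40  -> 640 samples
stride = sample_rate * 20 // 1000    # --window_stride_ms 20 -> 320 samples
num_mfcc_frames = (clip_samples - window) // stride + 1
num_mfcc_features = 10               # --dct_coefficient_count 10
print(num_mfcc_frames, num_mfcc_features)  # 49 10
```

So 40 ms windows with a 20 ms stride over 1 s of 16 kHz audio give 49 frames of 10 coefficients each, matching the two constants above.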
Still blank output... Can you also show me these modifications, so I can try the same model which you tried?
- I manually registered this operator by editing the local cmsis-pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc
- I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
I manually registered this operator by editing the local cmsis-pack at ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/MicroNetKwsModel.cc
After line 32, add: this->m_opResolver.AddSoftmax();
so it will look like:

```cpp
this->m_opResolver.AddRelu();
this->m_opResolver.AddSoftmax();
```

In ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/include/MicroNetKwsModel.hpp
at line 49 you need to increase ms_maxOpCnt by 1.
I also had to make edits to ~/.cache/arm/packs/ARM/ml-embedded-eval-kit-uc-api/22.8.0-Beta/source/application/api/use_case/kws/src/KwsProcessing.cc to change the useSoftmax parameter within the DoPostProcess() function to false.
At line 207, change the last parameter to false, so it will be:

```cpp
this->m_labels, 1, false);
```
Did you try running your new model (the one without softmax) on the CS300 FVP, either locally or via Keil Studio Cloud, and what output did you get on the console? I would hope to see some error or output message on the console showing how far execution gets, which might help with debugging.
Hi Richard
I ran the model which I sent via Drive. Now it works. Thanks.
You referred to main_wav here, but I guess it should be main_live.cc:
To fix the blank output you will need to also make the change I did to [these 2 lines](https://github.com/ARM-software/ML-examples/blob/d4816d163ffbddb37e3d5e01cc3351e9452b2abb/cmsis-pack-examples/kws/src/main_wav.cpp#L97) as I think your input shape is different to the included model.
They should change to this I believe:

```cpp
const uint32_t numMfccFeatures = 10;
const uint32_t numMfccFrames = 49;
```
Thanks
I have not used the CS300 FVP or anything like that before. I build to a .bin and run it on the F46. How can I get the CS300 FVP to see the errors?
I would think you should be able to see output over serial for your board as well.
The public CS300 FVPs are available to download here.