arm-software / ml-examples

Arm Machine Learning tutorials and examples

Home Page: https://developer.arm.com/technologies/machine-learning-on-arm

License: Apache License 2.0

Python 0.33% C++ 97.68% C 0.43% Java 0.35% CMake 0.02% Shell 0.06% Jupyter Notebook 1.00% Assembly 0.14%
arm machine-learning python deep-learning deep-neural-networks neural-network ml raspberry-pi raspberry-pi-3

ml-examples's Introduction

ML Examples

Source code for machine learning tutorials and examples used in Arm's ML developer space.

Projects and tutorials

Arm NN Mobilenet on Android

Deploy a quantized TensorFlow Lite MobileNet V2 model on Android using the Arm NN SDK.

Arm Style Transfer on Android

Implement neural style transfer on Android with the Arm NN APIs.

CMSIS pack based examples for Arm Corstone 300

CMSIS project showing keyword spotting (KWS) and object detection on Arm® Corstone™-300.

Ethos-U on Corstone 300

Explore the Arm® Corstone™-300 with Arm® Cortex™-M55 and Arm® Ethos™-U55 NPU.

Multi-Gesture Recognition

Train a convolutional neural network from scratch to recognize multiple gestures in a wide range of conditions with TensorFlow and a Raspberry Pi 4 Model B or Pi 3.

Fire detection on a Raspberry Pi using PyArmNN

Deploy a neural network, trained to recognize images that include fire or flames, on a Raspberry Pi using PyArmNN.

PyTorch to TensorFlow

A Jupyter notebook demonstrating how to convert a model trained in PyTorch to TensorFlow Lite format.

RNN unrolling for TensorFlow Lite

A Jupyter notebook demonstrating how to train a recurrent neural network (RNN) in TensorFlow, and then prepare it for export to TensorFlow Lite format by unrolling it.

Image recognition on Mbed using CMSIS and TFLM

Deploy an image recognition demo on a Discovery STM32F746G board using TensorFlow Lite for Microcontrollers (TFLM) and CMSIS-NN.

Yeah, World

Explore gesture recognition with TensorFlow and transfer learning on the Raspberry Pi 4 Model B, Pi 3 and Pi Zero.

ml-examples's People

Contributors

alexanderefremovarm, burton2000, eanoca01, elm8116, gggekov, gmiodice, jonatanantoni, kshitij-sisodia-arm, mark-arm, matteoarm, mtikekar, navsuda, nextcam, ninaarm, oscarandersson8218, phorsman-arm, robell, sicong-li-arm, wxarm, wxgithub


ml-examples's Issues

Why are the two ways of input pre-processing different?

This is code from the folder CMSIS_5-5.6.0\CMSIS\NN\Examples\IAR\iar_nn_examples\NN-example-cifar10:

/* input pre-processing */
int mean_data[3] = INPUT_MEAN_SHIFT;
unsigned int scale_data[3] = INPUT_RIGHT_SHIFT;
for (int i = 0; i < 32*32*3; i += 3) {
  img_buffer2[i]   = (q7_t)__SSAT( ((((int)image_data[i]   - mean_data[0]) << 7) + (0x1 << (scale_data[0]-1))) >> scale_data[0], 8);
  img_buffer2[i+1] = (q7_t)__SSAT( ((((int)image_data[i+1] - mean_data[1]) << 7) + (0x1 << (scale_data[1]-1))) >> scale_data[1], 8);
  img_buffer2[i+2] = (q7_t)__SSAT( ((((int)image_data[i+2] - mean_data[2]) << 7) + (0x1 << (scale_data[2]-1))) >> scale_data[2], 8);
}

This is the code generated by code_gen.py:

void mean_subtract(q7_t* image_data) {
  for (int i = 0; i < DATA_OUT_CH*DATA_OUT_DIM*DATA_OUT_DIM; i++) {
    image_data[i] = (q7_t)__SSAT(((int)(image_data[i] - mean[i]) >> DATA_RSHIFT), 8);
  }
}

When I replace mean_file in cifar10_m4_train_test.prototxt with mean_value (the data layer before and after is shown below), I get the following generated code.
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "/home/ubuntu/caffe/examples/cifar10/mean.binaryproto"
  }
  data_param {
    source: "/home/ubuntu/caffe/examples/cifar10/cifar10_train_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_value: 101
    mean_value: 102
    mean_value: 103
  }
  data_param {
    source: "/home/ubuntu/caffe/examples/cifar10/cifar10_train_lmdb"
    batch_size: 100
    backend: LMDB
  }
}

void mean_subtract(q7_t* image_data) {
  for (int i = 0; i < DATA_OUT_CH*DATA_OUT_DIM*DATA_OUT_DIM; i += 3) {
    image_data[i]   = (q7_t)__SSAT((((int)image_data[i]   - mean[0]) >> DATA_RSHIFT), 8);
    image_data[i+1] = (q7_t)__SSAT((((int)image_data[i+1] - mean[1]) >> DATA_RSHIFT), 8);
    image_data[i+2] = (q7_t)__SSAT((((int)image_data[i+2] - mean[2]) >> DATA_RSHIFT), 8);
  }
}
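
For anyone comparing the two variants, here is a minimal scalar sketch (assuming q7_t is an int8_t and that __SSAT(x, 8) saturates x to the signed 8-bit range) of what each per-channel expression computes: the IAR example rescales to Q7 and adds a rounding term before shifting, while the code_gen.py version performs a plain truncating shift after the mean subtraction.

#include <algorithm>
#include <cstdint>

typedef int8_t q7_t;

// Stand-in for the CMSIS __SSAT(x, 8) intrinsic: saturate to [-128, 127].
static q7_t ssat8(int x) { return (q7_t)std::max(-128, std::min(127, x)); }

// Variant 1 (NN-example-cifar10): scale to Q7, add a rounding offset, shift.
q7_t preprocess_rounded(int pixel, int mean, unsigned int shift) {
    return ssat8((((pixel - mean) << 7) + (0x1 << (shift - 1))) >> shift);
}

// Variant 2 (code_gen.py output): mean subtraction with a truncating shift.
q7_t preprocess_truncated(int pixel, int mean, unsigned int rshift) {
    return ssat8((pixel - mean) >> rshift);
}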

changing wanted word to only one word - error

Hi,

Why, when I change the wanted words to only one word ("yes", for example), do I get the following error?

Traceback (most recent call last):
  File "test.py", line 199, in <module>
    test()
  File "test.py", line 44, in test
    model.load_weights(FLAGS.checkpoint).expect_partial()
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2297, in load_weights
    status = self._trackable_saver.restore(filepath, options)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\tracking\util.py", line 1339, in restore
    checkpoint=checkpoint, proto_id=0).restore(self._graph_view.root)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\tracking\base.py", line 258, in restore
    restore_ops = trackable._restore_from_checkpoint_position(self)  # pylint: disable=protected-access
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\tracking\base.py", line 978, in _restore_from_checkpoint_position
    tensor_saveables, python_saveables))
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\tracking\util.py", line 309, in restore_saveables
    validated_saveables).restore(self.save_path_tensor, self.options)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\saving\functional_saver.py", line 339, in restore
    restore_ops = restore_fn()
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\saving\functional_saver.py", line 323, in restore_fn
    restore_ops.update(saver.restore(file_prefix, options))
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\saving\functional_saver.py", line 116, in restore
    restored_tensors, restored_shapes=None)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\training\saving\saveable_object_util.py", line 132, in restore
    self.handle_op, self._var_shape, restored_tensor)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 308, in shape_safe_assign_variable_handle
    shape.assert_is_compatible_with(value_tensor.shape)
  File "C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 1161, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (3,) and (12,) are incompatible
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.kernel
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.bias
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.gamma
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.beta

Wrong col_buffer size/type ?

Hi, the attached network causes memory corruption when run a few times on a Cortex-M7. I think the col_buffer size/type is wrong: the generated code allocates a col_buffer of 800, but gdb breaks on writes at almost twice that size. Doubling the col_buffer size fixes the issue for now.

square.pkl.zip

Arm NN output is different from the actual value

I wrote test code to run a TensorFlow Lite model using Arm NN. The input dimensionality of the network is (1,112,112,3). I memset the input to all 100.0 and then run inference, but the output is very different from the actual value. Is there something wrong with my code?

Here is my code:

#define PIC_SIZE 112

void print_vector(vector<float> &src)
{
cout<<"\n";
for(int i=0;i<src.size();i++){
if( ((i % 6) == 0) && (i !=0))
cout<<"\n";
printf(" %8f",src[i]);
}
cout<<"\n";
}

int main(int argc, char** argv)
{
// Initialize face detection model
int i,size;
char *path, *path2;

if (argc < 3)
    std::cout << "param err";

size = PIC_SIZE*PIC_SIZE*3;

std::vector<float> input_array(size);


const char* tflite_file = "mobileface_nonquntize.tflite";
const std::string inputName = "data";
const std::string outputName = "output";

using TContainer = boost::variant<std::vector<float>>;

unsigned int outputNumElements = 128;



using IParser = armnnTfLiteParser::ITfLiteParser;
auto armnnparser(IParser::Create());
armnn::INetworkPtr network = armnnparser->CreateNetworkFromBinaryFile(tflite_file);


// Find the binding points for the input and output nodes 
    
 using BindingPointInfo = armnnTfLiteParser::BindingPointInfo;    
const std::vector<BindingPointInfo> inputBindings  = { armnnparser->GetNetworkInputBindingInfo(0, inputName) };



const std::vector<BindingPointInfo> outputBindings = { armnnparser->GetNetworkOutputBindingInfo(0, outputName) };        

// Output tensor size is equal to the number of model output labels
//const unsigned int outputNumElements = modelOutputLabels.size();
std::vector<TContainer> outputDataContainers = { std::vector<float>(outputNumElements)};


// Optimize the network for a specific runtime compute 
// device, e.g. CpuAcc, GpuAcc CpuRef
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
   {armnn::Compute::CpuAcc, armnn::Compute::CpuRef},  
   runtime->GetDeviceSpec());

   // Load the optimized network onto the runtime device
armnn::NetworkId networkIdentifier;
runtime->LoadNetwork(networkIdentifier, std::move(optNet));



for (int i = 0; i < size; i++){
     input_array[i] = 100.0; 
}


const std::vector<TContainer> inputDataContainers = {input_array}; 

armnn::Status ret = runtime->EnqueueWorkload(networkIdentifier,
      armnnUtils::MakeInputTensors(inputBindings, inputDataContainers),
      armnnUtils::MakeOutputTensors(outputBindings, outputDataContainers));


      
std::vector<float> output1 = boost::get<std::vector<float>>(outputDataContainers[0]);

cout<<"\n";
print_vector(output1);

return 0;
}

mnist_caffe example : prototxt

Hi,
Please excuse my ignorance. If I were to use the mnist_caffe example for my custom ResNet model, at what point would I pass the prototxt file? Obviously the model is passed at line 49 of the code:
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model/lenet_iter_9000.caffemodel",
{ }, // input taken from file if empty
{ "prob" }); // output node

What I don't understand is where to specify the prototxt file.

Will the mnist_caffe example also work for color images? I presume the data part of my prototxt file specifies the channels. Am I right?
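
For what it's worth, a .caffemodel is a serialized Caffe NetParameter, so it embeds the layer topology along with the weights; that is why mnist_caffe never passes a prototxt. A hedged sketch along the lines of the snippet above (the file name here is hypothetical, and the output-node name must match your model):

// The binary .caffemodel carries both topology and weights, so only the
// output node (and optionally the input shapes) need to be named.
armnnCaffeParser::ICaffeParserPtr parser = armnnCaffeParser::ICaffeParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
    "model/my_resnet.caffemodel",   // hypothetical custom model file
    {},                             // input shapes taken from the file if empty
    { "prob" });                    // output node name of your network

As for color images: the channel count is baked into the model by the data/input layer of the prototxt used at training time, so the example should work as long as the input you feed matches that shape.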

Compile error of mnist_tf for arm32 device

I am trying to compile mnist_tf for an arm32 device, and the Makefile is:
ARMNN_LIB = ${HOME}/armnn-devenv/armnn/build
ARMNN_INC = ${HOME}/armnn-devenv/armnn/include

all: mnist_tf

mnist_tf: mnist_tf.cpp mnist_loader.hpp
g++ -O3 -m32 -std=c++14 -I$(ARMNN_INC) mnist_tf.cpp -o mnist_tf -L$(ARMNN_LIB) -larmnn -larmnnTfParser -lpthread

clean:
-rm -f mnist_tf

test: mnist_tf
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./mnist_tf

But I got this error:
/usr/bin/ld: skipping incompatible /home/happyyang/armnn-devenv/armnn/build/libarmnn.so when searching for -larmnn
/usr/bin/ld: cannot find -larmnn
/usr/bin/ld: skipping incompatible /home/happyyang/armnn-devenv/armnn/build/libarmnnTfParser.so when searching for -larmnnTfParser
/usr/bin/ld: cannot find -larmnnTfParser
collect2: error: ld returned 1 exit status
make: *** [mnist_tf] Error 1
I am sure libarmnn.so and libarmnnTfParser.so are OK, because the UnitTests binary runs fine.
Can anyone help me compile it? Thanks.

CMSIS-NN CIFAR10 example for STM32F746G-DISCO does not work?

Hello All,

I am following this Image recognition on Arm Cortex-M with CMSIS-NN guide

I have exactly the same hardware:

STM32F746G-DISCO
STM32F4DIS-CAM

the same software installed

Ubuntu 16.04 LTS
Python 2.7.12
Caffe
GNU Tools for Arm Embedded Processors 7-2017-q4-major

and I am able to reproduce all the steps mentioned in the guide (including building the basic camera app, quantizing the model, and converting the model), except the final one: Deploy transformed model on an Arm Cortex-M processor.

The final call is:

#Run this command in cmsisnn_demo folder
mbed compile -m DISCO_F746NG -t GCC_ARM --source . --source ../ML-examples/cmsisnn-cifar10/code/m7 --source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_with_nn/ --source ../CMSIS_5/CMSIS/NN/Include --source ../CMSIS_5/CMSIS/NN/Source --source ../CMSIS_5/CMSIS/DSP/Include --source ../CMSIS_5/CMSIS/DSP/Source --source ../CMSIS_5/CMSIS/Core/Include -j8

The only difference is in

--source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_with_nn/

the original call refers to

--source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/

But I have not changed the camera_app folder; I just used the camera_with_nn folder already prepared by Arm (Arm provides ready-to-go camera apps without NN and with NN, here).

And mbed finishes with info

[mbed] ERROR: "/usr/bin/python" returned error.

(I know, not very descriptive... but I do not see any ERRORs during the build, only: multiple definition of / first defined here)

So, are there any known issues referring to the above?
Has anyone managed to complete this guide?

Hints more than welcome.

make -f tensorflow/lite/micro/tools/make/Makefile TAGS="DS_CNN Simple_KWS_Test" generate_kws_cortex_m_mbed_project

I am following the tutorial here: tflu-kws-cortex-m
$ make -f tensorflow/lite/micro/tools/make/Makefile TAGS="DS_CNN Simple_KWS_Test" generate_kws_cortex_m_mbed_project
This produces the following error:
make: *** No rule to make target 'tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed/third_party/gemmlowp/fixedpoint/fixedpoint.h', needed by 'generate_kws_cortex_m_mbed_project'. Stop.

How do I resolve this error?

freeze.py for TensorFlow 2.5 version

Hi,
I'm trying to edit the freeze.py from the main project folder (which is TensorFlow 1.x) to make it suitable for this project's files (TensorFlow 2.5). However, when running
python freeze.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 --checkpoint ../Pretrained_models/DS_CNN/DS_CNN_S/ckpt/ds_cnn_0.94_ckpt --output_file ds_cnn.pb

I'm getting the following error

2021-08-05 15:42:18.490308: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-05 15:42:18.491455: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\keras\layers\normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
W0805 15:42:18.584352 17588 deprecation.py:336] From C:\Users\ash_j\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\keras\layers\normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 549, in make_tensor_proto
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 549, in <listcomp>
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\compat.py", line 87, in as_bytes
    (bytes_or_text,))
TypeError: Expected binary or unicode string, got <tensorflow.python.keras.engine.functional.Functional object at 0x000002E6F9A16F08>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "freeze.py", line 248, in <module>
    tf.compat.v1.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\absl\app.py", line 312, in run
    _run_main(main, args)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\absl\app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "freeze.py", line 176, in main
    FLAGS.model_size_info)
  File "freeze.py", line 115, in create_inference_graph
    tf.nn.softmax(logits, name='labels_softmax')
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3701, in softmax_v2
    return _wrap_2d_function(logits, gen_nn_ops.softmax, axis, name)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3613, in _wrap_2d_function
    inputs = ops.convert_to_tensor(inputs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\ops.py", line 1566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 283, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 553, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'tensorflow.python.keras.engine.functional.Functional'> to Tensor. Contents: <tensorflow.python.keras.engine.functional.Functional object at 0x000002E6F9A16F08>. Consider casting elements to a supported type.

Would appreciate any help regarding this.
Thanks

Integration of TensorFlow Lite and CMSIS-NN

We are stuck at a problem that requires the weights and biases from a trained TensorFlow Lite model to be represented in Qm.n format. The trained CNN model in TensorFlow Lite or Brevitas provides int8 weights, biases, and output activations. The model also specifies the scale and zero point, which it uses internally to convert the float32 weights to int8 during computation. A screenshot of the trained weights from one convolution layer was attached to the issue.

CMSIS-NN requires the int8 weights to be represented in Qm.n format. It is essential to know the fixed-point representation (Qm.n), since the CMSIS-NN API calls require us to left-shift the biases and right-shift the output activations for correct computation. This task is performed by the CMSIS-NN API when we provide the exact left-shift and right-shift values in the header file. Screenshots of the header file and the API call were attached to the issue.

The problem is that we do not know how to convert the scale and zero-point values to a Qm.n format. We wondered if there is a way to obtain the Qm.n formats of the weights, biases, and output activations from TensorFlow Lite. The CMSIS-NN help guide provided by Arm mentions scripts that do these conversions directly, like code_gen.py, but unfortunately these helper scripts are no longer available on the GitHub page (https://github.com/ARM-software/ML-examples/blob/master/cmsisnn-cifar10/code_gen.py). It would be great if you could help.
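
As a starting point, and only as an assumption about how the two schemes relate rather than an official conversion: the legacy CMSIS-NN Qm.n arithmetic is the special case of TensorFlow Lite's affine quantization in which the zero point is 0 and the scale is a power of two, so a per-tensor scale s corresponds to n = -log2(s) fractional bits. When s is not an exact power of two, rounding n introduces an approximation error. A minimal sketch:

#include <cmath>
#include <cstdio>

// Map a TFLite per-tensor scale to the nearest Qm.n fractional-bit count n,
// i.e. find n such that 2^-n is closest to the scale. This is exact only when
// the zero point is 0 and the scale is a power of two; otherwise it is an
// approximation whose error should be checked against the float model.
int fractional_bits(float scale) {
    return (int)std::lround(-std::log2((double)scale));
}

int main() {
    float weight_scale = 0.0078125f;  // example: exactly 2^-7
    int n = fractional_bits(weight_scale);
    std::printf("scale %g -> Q%d.%d\n", weight_scale, 7 - n, n);  // q7_t has 7 value bits
    return 0;
}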

Error std::bad_alloc while creating IRuntime instance

I have successfully compiled the binary 'tensorflow_inference' on the board, but I get an error while executing it. According to the logs, the error seems to be related to creating Arm NN's IRuntime instance:

Suspicious Code:

armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

Code:

for (int testImageIndex = 0; testImageIndex < nTests; testImageIndex++)
    {
        std::cout << "Reached here .. 1" << std::endl;
        auto start = std::chrono::system_clock::now();
        std::unique_ptr<MnistImage> input = loadMnistImage(dataDir, testImageIndex);
        std::cout << "Reached here .. 2" << std::endl;
        if (input == nullptr)
        {
            return EXIT_FAILURE;
        }

        /* Import the TensorFlow model */
        armnnTfParser::ITfParserPtr
            parser = armnnTfParser::ITfParser::Create();
        std::cout << "Reached here .. 3" << std::endl;
        armnn::INetworkPtr
            network = parser->CreateNetworkFromTextFile(TENSOR_MODEL,
                                                        {{"Placeholder", {1, 784, 1, 1}}}, {"Softmax"});
        std::cout << "Reached here .. 4" << std::endl;
        /* Find the binding points for the input and output nodes */
        armnnTfParser::BindingPointInfo
            inputBindingInfo = parser->GetNetworkInputBindingInfo("Placeholder");
        std::cout << "Reached here .. 5" << std::endl;
        armnnTfParser::BindingPointInfo
            outputBindingInfo = parser->GetNetworkOutputBindingInfo("Softmax");
        std::cout << "Reached here .. 6" << std::endl;
        armnn::IRuntime::CreationOptions options;
        std::cout << "Reached here .. 7" << std::endl;
        armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
        std::cout << "Reached here .. 8" << std::endl;
        armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
                                                             {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());
        std::cout << "Reached here .. 9" << std::endl;
        /* Load the optimized network onto the runtime device */
        armnn::NetworkId networkIdentifier;
        runtime->LoadNetwork(networkIdentifier, std::move(optNet));
        std::cout << "Reached here .. 10" << std::endl;

Output logs:
Reached here .. 1
Reached here .. 2
Reached here .. 3
Reached here .. 4
Reached here .. 5
Reached here .. 6
Reached here .. 7
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

It would be a great help if anyone could point out a mistake or something I am missing.
Thanks in advance.
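
One thing that stands out in the snippet above (an observation, not a confirmed diagnosis): the parser, network, runtime, and optimized network are all re-created inside the per-image loop, so peak memory grows with nTests and an allocation failure such as std::bad_alloc becomes more likely on a small board. A sketch of the same flow with the one-time setup hoisted out of the loop:

// Hedged sketch: do the expensive setup once, then loop over test images.
armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromTextFile(
    TENSOR_MODEL, {{"Placeholder", {1, 784, 1, 1}}}, {"Softmax"});

armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
    *network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

armnn::NetworkId networkIdentifier;
runtime->LoadNetwork(networkIdentifier, std::move(optNet));

for (int testImageIndex = 0; testImageIndex < nTests; testImageIndex++)
{
    std::unique_ptr<MnistImage> input = loadMnistImage(dataDir, testImageIndex);
    if (input == nullptr) { return EXIT_FAILURE; }
    // ... build the input/output tensors for this image and call
    // runtime->EnqueueWorkload(networkIdentifier, ...) here ...
}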

Error in running of Armnn-mnist

I have built the Arm NN environment by following this guide - https://github.com/ARM-software/armnn/blob/branches/armnn_21_05/BuildGuideCrossCompilation.md#build-armnn

While running the make command for the armnn-mnist example, it shows this error:

g++ -O3 -std=c++14 -I/home/abhimat/armnn-devenv/armnn/include mnist_caffe.cpp -o mnist_caffe -L/home/abhimat/armnn-devenv/build-x86_64/release/armnn -larmnn -larmnnCaffeParser
/usr/bin/ld: cannot find -larmnn
/usr/bin/ld: cannot find -larmnnCaffeParser
collect2: error: ld returned 1 exit status
make: *** [Makefile:7: mnist_caffe] Error 1

The Makefile looks like:

ARMNN_LIB = ${HOME}/armnn-devenv/armnn/build/armnn
ARMNN_INC = ${HOME}/armnn-devenv/armnn/include

all: mnist_caffe mnist_tf

mnist_caffe: mnist_caffe.cpp mnist_loader.hpp
g++ -O3 -std=c++14 -I$(ARMNN_INC) mnist_caffe.cpp -o mnist_caffe -L$(ARMNN_LIB) -larmnn -larmnnCaffeParser

mnist_tf: mnist_tf.cpp mnist_loader.hpp
g++ -O3 -std=c++14 -I$(ARMNN_INC) mnist_tf.cpp -o mnist_tf -L$(ARMNN_LIB) -larmnn -larmnnTfParser -lpthread

clean:
-rm -f mnist_tf mnist_caffe

test: mnist_caffe mnist_tf
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./mnist_caffe
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./mnist_tf

custom keyword - transfer learning

Hello,

I want to use this trained model as a base model; then I will retrain it with a custom dataset. The word that I want to train the model on is not included in the original commands dataset.
Will this transfer learning process work?
The number of utterances for my keyword is not large.

Thank you

How to build this example for Android device

Does the compile command below, from the MNIST example, mean it is being compiled for an x86 device? How can we compile this example for an Android device and then test it?

g++ -O3 -std=c++1y -I$(ARMNN_INC) mnist_tf.cpp -o mnist_tf -L$(ARMNN_LIB) -larmnn -larmnnTfParser

Best,

Fangming

Quantised Mobilenet make error

Hello,
I am trying to cross-compile Arm NN for the Zynq-7000. I have followed the steps at this link:
https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/cross-compiling-arm-nn-for-the-raspberry-pi-and-tensorflow/extracting-arm-nn-on-your-raspberry-pi-and-running-a-sample-program
I tried to run the Makefile for the Quantised MobileNet example on my host machine (Ubuntu 18.04), but I get this error:
/home/lsa/armnn-pi/armnn/build/libarmnn.so: undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
Makefile:14: recipe for target 'mobilenetv1_quant_tflite' failed
make: *** [mobilenetv1_quant_tflite] Error 1

Cifar10 Check failed: mdb_status == 0 (2 vs. 0) No such file or directory

Hello All,

The Troubleshooting section shows a solution for the error:
Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
The solution is:

- - - - - -
Open file
gedit ~/CMSISNN_Webinar/ML-examples/cmsisnn-cifar10/models/cifar10_m7_train_test.prototxt

and check that the mean_file and source entries point to the files in the caffe/examples/cifar10/ directory, as shown below:

# The following lines will have to be updated.
# Note: You will have to replace <path> with the path where you have cloned the Caffe repository

mean_file: "<path>/caffe/examples/cifar10/mean.binaryproto"
source: "<path>/caffe/examples/cifar10/cifar10_train_lmdb"
source: "<path>/caffe/examples/cifar10/cifar10_test_lmdb"

- - - - - -

However, the files mentioned above, i.e.
mean.binaryproto,
cifar10_train_lmdb,
cifar10_test_lmdb
do not exist in caffe/examples/cifar10/

So how can I get them? Should they have been in caffe/examples/cifar10/ already?

Looking at Caffe GitHub repo, I do not see these files.

So even with the paths changed, the error still appears.

Hints more than welcome.

Image Data pre-processing.

Hi, I would like to know the expected way to pre-process the image data before sending it to the conv1 layer for processing.

  1. Is it simple mean subtraction, like:
    for (int i = 0; i < 32*32*3; i += 3) {
      img_buffer2[i]   = (q7_t)((int)image_data[i]   - mean_data[0]);
      img_buffer2[i+1] = (q7_t)((int)image_data[i+1] - mean_data[1]);
      img_buffer2[i+2] = (q7_t)((int)image_data[i+2] - mean_data[2]);
    }

  2. Or, as given in https://github.com/ARM-software/CMSIS_5/blob/develop/CMSIS/NN/Examples/IAR/iar_nn_examples/NN-example-cifar10/arm_nnexamples_cifar10.cpp:
    for (int i = 0; i < 32*32*3; i += 3) {
      img_buffer2[i]   = (q7_t)__SSAT( ((((int)image_data[i]   - mean_data[0]) << 7) + (0x1 << (scale_data[0]-1))) >> scale_data[0], 8);
      img_buffer2[i+1] = (q7_t)__SSAT( ((((int)image_data[i+1] - mean_data[1]) << 7) + (0x1 << (scale_data[1]-1))) >> scale_data[1], 8);
      img_buffer2[i+2] = (q7_t)__SSAT( ((((int)image_data[i+2] - mean_data[2]) << 7) + (0x1 << (scale_data[2]-1))) >> scale_data[2], 8);
    }

deploy tensorflow model using armnn

/tmp/ccMoa1wM.o: In function `main':
mnist_tf.cpp:(.text+0xa54): undefined reference to `armnnTfParser::ITfParser::Create()'
mnist_tf.cpp:(.text+0xa88): undefined reference to `armnn::TensorShape::TensorShape(std::initializer_list<unsigned int>)'
mnist_tf.cpp:(.text+0xc84): undefined reference to `armnn::IRuntime::Create(armnn::IRuntime::CreationOptions const&)'
mnist_tf.cpp:(.text+0xd2c): undefined reference to `armnn::Optimize(armnn::INetwork const&, std::vector<armnn::BackendId> const&, armnn::IDeviceSpec const&, armnn::OptimizerOptions const&, armnn::Optional<std::vector<std::string>&>)'
/tmp/ccMoa1wM.o: In function `armnn::ConstTensor::BaseTensor(armnn::TensorInfo const&, void const*)':
undefined reference to `armnn::BaseTensor<void const*>::BaseTensor(armnn::TensorInfo const&, void const*)'
/tmp/ccMoa1wM.o: In function `armnn::Tensor::BaseTensor(armnn::TensorInfo const&, void*)':
undefined reference to `armnn::BaseTensor<void*>::BaseTensor(armnn::TensorInfo const&, void*)'
/tmp/ccMoa1wM.o: In function `armnn::ConstTensor::ConstTensor(armnn::ConstTensor&&)':
undefined reference to `armnn::BaseTensor<void const*>::BaseTensor(armnn::BaseTensor<void const*> const&)'
/tmp/ccMoa1wM.o: In function `armnn::Tensor::Tensor(armnn::Tensor&&)':
undefined reference to `armnn::BaseTensor<void*>::BaseTensor(armnn::BaseTensor<void*> const&)'
/tmp/ccMoa1wM.o: In function `std::pair<std::string const, armnn::TensorShape>::pair(char const (&)[12], armnn::TensorShape const&)':
undefined reference to `armnn::TensorShape::TensorShape(armnn::TensorShape const&)'
/tmp/ccMoa1wM.o: In function `armnn::ConstTensor::ConstTensor(armnn::ConstTensor const&)':
undefined reference to `armnn::BaseTensor<void const*>::BaseTensor(armnn::BaseTensor<void const*> const&)'
/tmp/ccMoa1wM.o: In function `armnn::Tensor::Tensor(armnn::Tensor const&)':
undefined reference to `armnn::BaseTensor<void*>::BaseTensor(armnn::BaseTensor<void*> const&)'
/tmp/ccMoa1wM.o: In function `std::pair<std::string const, armnn::TensorShape>::pair(std::pair<std::string const, armnn::TensorShape> const&)':
undefined reference to `armnn::TensorShape::TensorShape(armnn::TensorShape const&)'
collect2: error: ld returned 1 exit status

I got this error while trying to build mnist_tf.cpp to load a TensorFlow model.

Can anyone help me, please?

tensorflow.python.framework.errors_impl.NotFoundError: No registered '_FusedConv2D' OpKernel for 'CPU' devices compatible with node {{node conv2d_1/BiasAdd}} . Registered: <no registered kernels> [[conv2d_1/BiasAdd]]

pi@raspberrypi:~/ML-examples/multi-gesture-recognition $ python3 run.py day1/model.h5
Using TensorFlow backend.
2020-06-02 19:07:06.269994: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
pygame 1.9.4.post1
Hello from the pygame community. https://www.pygame.org/contribute.html
Loading model
WARNING:tensorflow:From /home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Now running!
2020-06-02 19:07:18.547858: W tensorflow/core/grappler/utils/graph_view.cc:843] No registered '_FusedConv2D' OpKernel for CPU devices compatible with node {{node conv2d_3/BiasAdd}}. Registered: <no registered kernels>
2020-06-02 19:07:18.550973: W tensorflow/core/grappler/utils/graph_view.cc:843] No registered '_FusedConv2D' OpKernel for CPU devices compatible with node {{node conv2d_2/BiasAdd}}. Registered: <no registered kernels>
2020-06-02 19:07:18.553986: W tensorflow/core/grappler/utils/graph_view.cc:843] No registered '_FusedConv2D' OpKernel for CPU devices compatible with node {{node conv2d_1/BiasAdd}}. Registered: <no registered kernels>
2020-06-02 19:07:18.648451: E tensorflow/core/common_runtime/executor.cc:659] Executor failed to create kernel. Not found: No registered '_FusedConv2D' OpKernel for 'CPU' devices compatible with node {{node conv2d_1/BiasAdd}}. Registered: <no registered kernels>
 [[conv2d_1/BiasAdd]]

Traceback (most recent call last):
  File "run.py", line 93, in <module>
    main()
  File "run.py", line 66, in main
    classes = model.predict(np.array([x]))[1]
  File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1462, in predict
    callbacks=callbacks)
  File "/usr/local/lib/python3.7/dist-packages/keras/engine/training_arrays.py", line 324, in predict_loop
    batch_outs = f(ins_batch)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py", line 3559, in __call__
    self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py", line 3496, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1503, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1458, in __init__
    session._session, options_ptr)
tensorflow.python.framework.errors_impl.NotFoundError: No registered '_FusedConv2D' OpKernel for 'CPU' devices compatible with node {{node conv2d_1/BiasAdd}}. Registered: <no registered kernels>
 [[conv2d_1/BiasAdd]]

pi@raspberrypi:~/ML-examples/multi-gesture-recognition $

Quantization and Code generator Script

Hi all,

I would like to know whether nn_quantizer.py and code_gen.py work for all Caffe models with the specified restrictions.

As an example:

  1. In the NXP MNIST example, they mention that nn_quantizer, as it is, doesn't work for MNIST.
  2. In some other GitHub issues, it is mentioned that code_gen.py does not work as expected for MNIST.

Please guide me through this.

I was planning to develop an NN model for an MCU. If a Caffe model can be converted for the MCU, that would be better for me; otherwise I will have to go with TensorFlow Lite.

Compilation termination

Hello All,
I am following the guide, and I am doing what it says.

I have the same environment:

  • ubuntu 16.04

  • python 2.7.12

  • caffe

  • GNU Arm Embedded Toolchain. Recommended version: gcc-arm-none-eabi-7-2017-q4-major

But when it comes to the last step, I run the commands
mbed compile -m DISCO_F746NG -t GCC_ARM --source . --source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/
mbed compile -m DISCO_F746NG -t GCC_ARM --source . --source ../ML-examples/cmsisnn-cifar10/code/m7 --source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/ --source ../CMSIS_5/CMSIS/NN/Include --source ../CMSIS_5/CMSIS/NN/Source --source ../CMSIS_5/CMSIS/DSP/Include --source ../CMSIS_5/CMSIS/DSP/Source --source ../CMSIS_5/CMSIS/Core/Include -j8

It shows:

fatal error: mbed.h: No such file or directory
arm_sorting.h: No such file or directory

I found a similar issue here, but it doesn't give a solution. So what should I do to fix it?

ImportError: no module named tensorflow

I am following the tutorial on yeah-world and have installed TensorFlow and verified it was installed.
When running

python train.py example/model.h5 example/yeah example/sitting example/random

I received an ImportError: no module named tensorflow.

I corrected this by running:

python3 train.py example/model.h5 example/yeah example/sitting example/random

I am now receiving:

UnicodeDecodeError: 'ascii' codec can't decode byte 0x8d in position 1: ordinal not in range(128)

Example lenet training/deployment prototxt config?

Hello,
I trained the default 10000-iteration model based on the Caffe LeNet example in an x86 Docker image, and converted it from training to deployment form using the example's provided lenet.prototxt with the Python code below.
The resulting my.caffemodel, which has the same size, works with your program and produces the same prediction on an Odroid, but it segfaults on exit, indicating some issue with buffer overflows.

odroid@odroid:~/src/ML-examples/armnn-mnist$ ./mnist_caffe 
Predicted: 7
   Actual: 7
Segmentation fault

Using the provided 9000-iteration model I get no segfaults (GPU and CPU mode). Would it be possible for you to share the detailed modifications you made and the steps to produce the lenet_iter_9000.caffemodel file? This would really help in getting reproducible results.

import caffe  # required for caffe.Net below; missing from the original snippet
import numpy as np
import sys

model = 'lenet.prototxt'
weights = 'lenet_iter_10000.caffemodel'
net = caffe.Net(model, weights, caffe.TEST)
caffe.set_mode_cpu()
net.save('my.caffemodel')

I used the following lenet.prototxt config for deployment from the example under
https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt

'-larmnn' and '-larmnnTfParser' not found

Hi, I followed your steps in ML-examples/armnn-mnist/.
After I executed 'make test', I got the following error messages:

/usr/bin/x86_64-linux-gnu-ld: cannot find -larmnn
/usr/bin/x86_64-linux-gnu-ld: cannot find -larmnnTfParser

Can you please suggest how to fix it?

Error while running test_tflite.py

Hi,
While running test_tflite.py, I got the error below:
"ValueError: Cannot set tensor: Dimension mismatch. Got 3920 but expected 490 for dimension 1 of input 37."

The command I ran:
python test_tflite.py --tflite_path ../Pretrained_models/DS_CNN/DS_CNN_S/ds_cnn_s_quantized.tflite

Error with nn_quantizer.py

Issue:

Attempting to run the cmsisnn-cifar10 ML example.

Command typed:

python3 nn_quantizer.py --model models/cifar10_m4_train_test.prototxt --weights models/cifar10_m4_iter_70000.caffemodel.h5 --save models/cifar10_m4.pkl

Error message:

I0804 16:46:04.856118 22909 net.cpp:257] Network initialization done.
I0804 16:46:04.871187 22909 net.cpp:801] Ignoring source layer cifar
I0804 16:46:04.871275 22909 hdf5.cpp:33] Datatype class: H5T_FLOAT
Traceback (most recent call last):
  File "nn_quantizer.py", line 614, in <module>
    my_model.get_graph_connectivity()
  File "nn_quantizer.py", line 234, in get_graph_connectivity
    for key, value in self.top_blob.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'

Compilation errors while compiling armnn-mnist example

I have followed this link( https://github.com/ARM-software/armnn/blob/master/BuildGuideAndroidNDK.md ) to build ARM NN with Android NDK.

Then I tried to build this example, and I got the errors below. The compile command is "g++ -O3 -std=c++1y -I$(ARMNN_INC) mnist_tf.cpp -o mnist_tf -L$(ARMNN_LIB) -larmnn -larmnnTfParser". I guess we should use "aarch64-linux-android-clang++" instead of "g++" to compile the code, but then there are other errors. Could anyone provide instructions to compile the armnn-mnist example?

In file included from /home/XXXX/armnn-devenv/armnn/include/armnn/ArmNN.hpp:9,
                 from mnist_tf.cpp:13:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetStatusAsCString(armnn::Status)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:25:1: error: body of constexpr function ‘constexpr const char* armnn::GetStatusAsCString(armnn::Status)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetComputeDeviceAsCString(armnn::Compute)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:36:1: error: body of constexpr function ‘constexpr const char* armnn::GetComputeDeviceAsCString(armnn::Compute)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetActivationFunctionAsCString(armnn::ActivationFunction)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:54:1: error: body of constexpr function ‘constexpr const char* armnn::GetActivationFunctionAsCString(armnn::ActivationFunction)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetPoolingAlgorithmAsCString(armnn::PoolingAlgorithm)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:65:1: error: body of constexpr function ‘constexpr const char* armnn::GetPoolingAlgorithmAsCString(armnn::PoolingAlgorithm)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetOutputShapeRoundingAsCString(armnn::OutputShapeRounding)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:75:1: error: body of constexpr function ‘constexpr const char* armnn::GetOutputShapeRoundingAsCString(armnn::OutputShapeRounding)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetPaddingMethodAsCString(armnn::PaddingMethod)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:86:1: error: body of constexpr function ‘constexpr const char* armnn::GetPaddingMethodAsCString(armnn::PaddingMethod)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr unsigned int armnn::GetDataTypeSize(armnn::DataType)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:97:1: error: body of constexpr function ‘constexpr unsigned int armnn::GetDataTypeSize(armnn::DataType)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In instantiation of ‘constexpr bool armnn::StrEqual(const char*, const char (&)[N]) [with int N = 7]’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:112:31: required from here
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:108:1: error: body of constexpr function ‘constexpr bool armnn::StrEqual(const char*, const char (&)[N]) [with int N = 7]’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr armnn::Compute armnn::ParseComputeDevice(const char*)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:128:1: error: body of constexpr function ‘constexpr armnn::Compute armnn::ParseComputeDevice(const char*)’ not a return-statement
}
^
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp: In function ‘constexpr const char* armnn::GetDataTypeName(armnn::DataType)’:
/home/XXXX/armnn-devenv/armnn/include/armnn/TypesUtils.hpp:139:1: error: body of constexpr function ‘constexpr const char* armnn::GetDataTypeName(armnn::DataType)’ not a return-statement
}
^
In file included from /usr/include/c++/4.8/bits/char_traits.h:39:0,
from /usr/include/c++/4.8/ios:40,
from /usr/include/c++/4.8/ostream:38,
from /usr/include/c++/4.8/iostream:39,
from mnist_tf.cpp:6:
/usr/include/c++/4.8/bits/stl_algobase.h: In instantiation of ‘bool std::equal(_IIter1, _IIter1, _IIter2, _BinaryPredicate) [with _IIter1 = const unsigned int*; _IIter2 = const unsigned int*; _BinaryPredicate = const unsigned int*]’:
/home/XXXX/armnn-devenv/armnn/include/armnn/Types.hpp:136:69: required from here
/usr/include/c++/4.8/bits/stl_algobase.h:1062:46: error: ‘__binary_pred’ cannot be used as a function
if (!bool(__binary_pred(*__first1, *__first2)))
^
make: *** [mnist_tf] Error 1

./mnist-caffe cannot load armnn .so libraries

Hello!
I spent some time setting up the Arm NN SDK on my NVIDIA Jetson board and have gone through all the manuals (Compute Library, Caffe, Arm NN, and tests). I am now trying to run an example from ML-examples - mnist. I modified the Makefile and it builds fine. But when I try to run ./mnist-caffe I keep getting error 40 - it cannot load two .so libraries from Arm NN. I tried copying these into the local folder and created symbolic links to them in /usr/lib, but nothing seems to help...

Is there a simple solution to this issue? The alternative is for me to build Arm NN static libraries, but for that I would need instructions...

Thanks!

Compiler error when compiling camera app with Mbed 1.10

Would appreciate advice on this within the next 2-3 days. I am a complete novice, so any bit of information would be useful.

To build a basic camera app for STM32F746-DISCO, I am following the tutorial here: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/image-recognition-on-arm-cortex-m-with-cmsis-nn/single-page.

Due to a missing header in Mbed 2.0 (ARMmbed/mbed-cli#805), I am using Mbed 1.10 instead. (Instead of mbed new cmsisnn_demo --mbedlib, I omitted the --mbedlib so it will use Mbed 1.10).

After installing requirements, I try to compile it with:

mbed compile -m DISCO_F746NG -t GCC_ARM --source . --source ../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/

This produces the following error:

BUILD/DISCO_F746NG/GCC_ARM/ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.o: In function `DCMI_IRQHandler':
/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.cpp:10: multiple definition of `DCMI_IRQHandler'
BUILD/DISCO_F746NG/GCC_ARM/BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.o:/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/./BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.c:404: first defined here
BUILD/DISCO_F746NG/GCC_ARM/ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.o: In function `DMA2_Stream1_IRQHandler':
/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.cpp:13: multiple definition of `DMA2_Stream1_IRQHandler'
BUILD/DISCO_F746NG/GCC_ARM/BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.o:/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/./BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.c:413: first defined here
collect2: error: ld returned 1 exit status
[ERROR] BUILD/DISCO_F746NG/GCC_ARM/ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.o: In function `DCMI_IRQHandler':
/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.cpp:10: multiple definition of `DCMI_IRQHandler'
BUILD/DISCO_F746NG/GCC_ARM/BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.o:/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/./BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.c:404: first defined here
BUILD/DISCO_F746NG/GCC_ARM/ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.o: In function `DMA2_Stream1_IRQHandler':
/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/../ML-examples/cmsisnn-cifar10/camera_demo/camera_app/camera_app.cpp:13: multiple definition of `DMA2_Stream1_IRQHandler'
BUILD/DISCO_F746NG/GCC_ARM/BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.o:/home/dtch009/CMSISNN_Webinar/cmsisnn_demo/./BSP_DISCO_F746NG/Drivers/BSP/STM32746G-Discovery/stm32746g_discovery_camera.c:413: first defined here

TensorFlow model conversion to CMSIS-NN calls

Hello!

Is there any guide available for converting a TensorFlow model to CMSIS-NN calls?
Something similar to this guide for converting a Caffe model to CMSIS-NN calls, but for TensorFlow models.

In this guide there are two Python scripts being used:
nn_quantizer.py for quantization to 8-bit weights and activations
code_gen.py for generating the code consisting of NN function calls

I would like to know if such tools exist.
Or perhaps you can recommend a different approach for using TensorFlow models on Cortex-M devices?

Thank you in advance!
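
At the time of writing, one route this repository itself demonstrates (see the "Image recognition on Mbed using CMSIS and TFLM" project above) is TensorFlow Lite for Microcontrollers, whose reference kernels can be swapped for CMSIS-NN optimized ones at build time. A minimal interpreter sketch, assuming the model has been converted into a C array named g_model and noting that constructor signatures vary between TFLM releases:

#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];    // hypothetical: flatbuffer baked into flash

constexpr int kArenaSize = 64 * 1024;    // sized by trial for the model at hand
static uint8_t tensor_arena[kArenaSize];

int run_inference() {
    const tflite::Model* model = tflite::GetModel(g_model);

    // Register only the ops this particular model actually uses.
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddConv2D();
    resolver.AddFullyConnected();
    resolver.AddSoftmax();

    static tflite::MicroInterpreter interpreter(model, resolver,
                                                tensor_arena, kArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

    TfLiteTensor* input = interpreter.input(0);
    // ... fill input->data.int8 with quantized input data here ...
    if (interpreter.Invoke() != kTfLiteOk) return -1;

    TfLiteTensor* output = interpreter.output(0);
    (void)output;                        // read scores from output->data.int8
    return 0;
}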

couldn't retrieve the checkpoints

Hello,

I'm trying to run the following line
python test.py --model_architecture dnn --model_size_info 128 128 128 --checkpoint /tflu-kws-cortex-m/Pretrained_models/DNN/DNN_S/ckpt/checkpoint

However, I'm getting the following error:

Traceback (most recent call last):
  File "test.py", line 182, in <module>
    test()
  File "test.py", line 44, in test
    model.load_weights(FLAGS.checkpoint).expect_partial()
  File "/home/pi/miniconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 162, in load_weights
    return super(Model, self).load_weights(filepath, by_name)
  File "/home/pi/miniconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1384, in load_weights
    pywrap_tensorflow.NewCheckpointReader(filepath)
  File "/home/pi/miniconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 636, in NewCheckpointReader
    return CheckpointReader(compat.as_bytes(filepattern))
  File "/home/pi/miniconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 648, in __init__
    this = _pywrap_tensorflow_internal.new_CheckpointReader(filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on /tflu-kws-cortex-m/Pretrained_models/DNN/DNN_S/ckpt/checkpoint: Not found: /tflu-kws-cortex-m/Pretrained_models/DNN/DNN_S/ckpt; No such file or directory

Has anyone had the same issue and managed to fix it?

Thank you

How to convert cv::Mat (from opencv) to a data which can be input into arm nn caffe model?

I want to use OpenCV to read the data from a camera, so each camera frame is stored as a cv::Mat.
I want to know how to convert a cv::Mat (from OpenCV) into data that can be fed to an Arm NN Caffe model.
I have studied the mnist_caffe.cpp demo in this repository, but I can't work out how to use the Arm NN SDK to write a demo that implements a detection function, or how to load picture data (rather than MNIST data) with Arm NN. Can you give me some advice? I am very confused. Thank you!
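
Not an official recipe, just a sketch of the usual approach: resize the frame to the network's input size, convert OpenCV's BGR byte order to the channel order the model was trained on, and flatten the result into the contiguous float buffer that an Arm NN input tensor wraps. The function name and the [0, 1] scaling below are assumptions that depend on your model's training pipeline:

#include <opencv2/opencv.hpp>
#include <vector>

// Hedged sketch: turn a camera frame into an NHWC float32 buffer for Arm NN.
std::vector<float> matToTensorData(const cv::Mat& frame, int width, int height)
{
    cv::Mat resized, rgb, floats;
    cv::resize(frame, resized, cv::Size(width, height));
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);  // OpenCV frames are BGR
    rgb.convertTo(floats, CV_32FC3, 1.0 / 255.0);   // assumed [0, 1] scaling
    // convertTo allocated a fresh, contiguous matrix, so one flat copy works.
    const float* begin = floats.ptr<float>(0);
    return std::vector<float>(begin, begin + floats.total() * floats.channels());
}

The resulting vector can then back the input tensor, e.g. via MakeInputTensors with the TensorInfo from GetNetworkInputBindingInfo, as in the Arm NN test snippets elsewhere in these issues.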

predicted classes - custom dataset

Hi,

I retrained the model with my own dataset. I created one class for the wanted word (only one wanted word in my case) and another class for unwanted words, which has random files from the original dataset. I also changed the duration of the files to 1500 milliseconds.
The problems are:
1. The predicted class is always the same class.
2. The model accuracy is different each time I run the model: one run it is around 83%, another time around 17%.

The data is balanced, but I couldn't figure out what the problem is.
24/24 [==============================] - 3s 80ms/step - loss: 0.8777 - accuracy: 0.6862 - val_loss: 1.0598 - val_accuracy: 0.6667

Epoch 2/12
24/24 [==============================] - 2s 65ms/step - loss: 0.4955 - accuracy: 0.8997 - val_loss: 1.0245 - val_accuracy: 0.6667

Epoch 3/12
24/24 [==============================] - 2s 65ms/step - loss: 0.3274 - accuracy: 0.9115 - val_loss: 0.9986 - val_accuracy: 0.6667

Epoch 4/12
24/24 [==============================] - 2s 68ms/step - loss: 0.2524 - accuracy: 0.9258 - val_loss: 0.9799 - val_accuracy: 0.6667

Epoch 5/12
24/24 [==============================] - 2s 66ms/step - loss: 0.2001 - accuracy: 0.9583 - val_loss: 0.9677 - val_accuracy: 0.6667

Epoch 6/12
24/24 [==============================] - 4s 154ms/step - loss: 0.1572 - accuracy: 0.9831 - val_loss: 0.9584 - val_accuracy: 0.6667

Epoch 7/12
24/24 [==============================] - 2s 67ms/step - loss: 0.1263 - accuracy: 0.9831 - val_loss: 0.9561 - val_accuracy: 0.6667

Epoch 8/12
24/24 [==============================] - 2s 68ms/step - loss: 0.1007 - accuracy: 0.9909 - val_loss: 0.9609 - val_accuracy: 0.6667

Epoch 9/12
24/24 [==============================] - 2s 68ms/step - loss: 0.0707 - accuracy: 0.9987 - val_loss: 0.9670 - val_accuracy: 0.6667

Epoch 10/12
24/24 [==============================] - 2s 68ms/step - loss: 0.0557 - accuracy: 0.9987 - val_loss: 0.9830 - val_accuracy: 0.6667

Epoch 11/12
24/24 [==============================] - 2s 69ms/step - loss: 0.0523 - accuracy: 0.9948 - val_loss: 1.0069 - val_accuracy: 0.8333

Epoch 12/12
24/24 [==============================] - 2s 67ms/step - loss: 0.0405 - accuracy: 0.9974 - val_loss: 1.0212 - val_accuracy: 0.1667

1/1 [==============================] - 0s 62ms/step - loss: 1.0094 - accuracy: 0.1667
Final test accuracy: 16.67%

Running testing on validation set...
Predicted tf.Tensor([0 0 0 0 0 0], shape=(6,), dtype=int64)
[1 2 0 2 2 2]
Validation accuracy = 16.67%(N=6)
Running testing on test set...
predicted
tf.Tensor([0 0 0 0 0 0], shape=(6,), dtype=int64)
[2 2 0 1 2 2]
[[1 0 0]
[1 0 0]
[4 0 0]]
Test accuracy = 16.67% (N=6)

Floating point exception (core dumped)

python nn_quantizer.py --model model-conv.prototxt --weights _iter_1000.caffemodel --save model-conv.pkl --iterations 20

Throws me this error:

W0425 07:42:26.264839  1034 _caffe.cpp:172] DEPRECATION WARNING - deprecated use of Python interface
W0425 07:42:26.264870  1034 _caffe.cpp:173] Use this instead (with the named "weights" parameter):
W0425 07:42:26.264874  1034 _caffe.cpp:175] Net('model-conv.prototxt', 1, weights='_iter_1000.caffemodel')
I0425 07:42:26.267058  1034 gpu_memory.cpp:82] GPUMemory::Manager initialized
I0425 07:42:26.518502  1034 net.cpp:462] The NetState phase (1) differed from the phase (0) specified by a rule in layer data
Floating point exception (core dumped)

Problem with numpy in pinet.py

Hello!
I've just been following the Yeah-World tutorial and have done everything it says, except that I changed the code in record.py so it can use a standard USB webcam:

cap = cv2.VideoCapture(0)
frames = []
started = time()
while time() - started < seconds:
    frame = cap.read()
    frames.append(frame)

The only thing I changed was the loop where it records frame by frame. I removed the Picamera dependencies from everything, because I don't have a Picamera currently. After that I tested record.py and it worked! When I tried to train a model based on the standard yeah, sitting, and random recordings, I got this error from pinet.py:

2020-04-13 15:37:00.067761: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Loading tensorflow feature extractor...
WARNING:tensorflow:From /home/pi/ML-examples/yeah-world/pinet.py:28: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From /home/pi/ML-examples/yeah-world/pinet.py:29: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

WARNING:tensorflow:From /home/pi/ML-examples/yeah-world/pinet.py:37: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

Loading example/yeah
Traceback (most recent call last):
  File "train.py", line 101, in <module>
    main()
  File "train.py", line 55, in main
    features = [feature_extractor.features(f) for f in x]
  File "train.py", line 55, in <listcomp>
    features = [feature_extractor.features(f) for f in x]
  File "/home/pi/ML-examples/yeah-world/pinet.py", line 45, in features
    preprocessed = ((np.array(image, dtype=np.float32) / 255.) - 0.5) * 2.
ValueError: setting an array element with a sequence.

I looked up the error, and apparently NumPy was trying to force an unevenly-shaped multidimensional array into a regular one. I don't know if it's failing because my webcam may be giving it the wrong format of data, or because I don't have the right version of NumPy installed. Please forgive me if I did something wrong; this is my first time posting a question/issue.

I have a Raspberry Pi 3B+ running Raspbian GNU/Linux 10 (buster).
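For anyone hitting the same error: the likely culprit is that cv2.VideoCapture.read() returns a (success_flag, frame) tuple rather than a bare frame, so frames becomes a list of tuples that NumPy cannot pack into a regular array. A minimal sketch of the corrected loop (untested; seconds is set here only to make the snippet self-contained):

from time import time
import cv2

seconds = 3  # example recording length; record.py derives this elsewhere
cap = cv2.VideoCapture(0)
frames = []
started = time()
while time() - started < seconds:
    ret, frame = cap.read()   # read() returns (success_flag, frame)
    if ret:                   # keep only frames that were actually captured
        frames.append(frame)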

ARMNN mobilenet quant example has undefined functions

I am trying to run mobilenetv1_quant_tflite.cpp, but in the section where the input image is loaded and preprocessed there seem to be some undefined functions/types. On line 119, NormalizationParameters cannot be found, and neither can PrepareImageTensor. I have all the includes and libraries set. Am I missing something?

How to link mnist_caffe.cpp statically with libarmnn.a and libarmnnCaffeParser.a

I have built static versions of armnn and armnnCaffeParser libraries; libarmnn.a and libarmnnCaffeParser.a respectively.

I am trying to compile and link mnist_caffe.cpp statically with them but getting numerous undefined references errors.

To do static linking: I edited the command in the mnist_caffe target in the armnn-mnist Makefile as follows:

--mnist_caffe target before editing:

mnist_caffe: mnist_caffe.cpp mnist_loader.hpp
       g++ -O3 -std=c++14 -I$(ARMNN_INC) mnist_caffe.cpp -o mnist_caffe -L$(ARMNN_LIB)  -larmnn -larmnnCaffeParser

--mnist_caffe target after editing (I just added the -static flag and used aarch64-linux-gnu-g++ instead of g++, since I am cross-compiling on an x86 machine):

mnist_caffe: mnist_caffe.cpp mnist_loader.hpp
        aarch64-linux-gnu-g++ -O3 -std=c++14 -I$(ARMNN_INC) mnist_caffe.cpp -o mnist_caffe -L$(ARMNN_LIB)  -static -larmnn -larmnnCaffeParser

Note that ARMNN_LIB contains only static versions of the armnn and armnnCaffeParser libraries (i.e. it only has libarmnn.a and libarmnnCaffeParser.a; there are no .so versions of them).

I am only interested in the mnist_caffe target, so I run $ make mnist_caffe. The result is numerous undefined reference errors, which I redirected into the static.txt attached below:
static.txt

My question in a nutshell: how do I produce a working statically-linked mnist_caffe executable, given that I have both libarmnn.a and libarmnnCaffeParser.a?

PS: I am only interested in compiling and linking. In other words, I only care about the output executable mnist_caffe, but I want it to be statically linked, since I will run it on an Arm simulator that only accepts statically-linked executables.
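A note for anyone with the same problem: with -static, the linker no longer pulls in the archives' own dependencies automatically, so every transitive library (protobuf, pthread, dl, ...) has to be listed explicitly, in an order the linker can resolve. A rough sketch of what the target might look like, assuming ArmNN was built against protobuf (the extra libraries, their order, and the PROTOBUF_LIB placeholder are assumptions to adapt to your build):

mnist_caffe: mnist_caffe.cpp mnist_loader.hpp
	# PROTOBUF_LIB is a placeholder: the directory holding an aarch64 libprotobuf.a
	aarch64-linux-gnu-g++ -O3 -std=c++14 -I$(ARMNN_INC) mnist_caffe.cpp -o mnist_caffe \
	    -L$(ARMNN_LIB) -L$(PROTOBUF_LIB) -static \
	    -Wl,--start-group -larmnnCaffeParser -larmnn -Wl,--end-group \
	    -lprotobuf -lpthread -ldl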

Error in compile tflu-kws-cortex-m/kws_cortex_m

I am following the tutorial with these commands:
make -f tensorflow/lite/micro/tools/make/Makefile TAGS="DS_CNN Simple_KWS_Test" generate_kws_cortex_m_mbed_project
cd tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed/

Then I change the mbed-os.lib version to:
https://github.com/ARMmbed/mbed-os/#60cbab381dd2d5d860407b1b789741275012a075

mbed config root .
mbed deploy
mbed compile -m K64F -t GCC_ARM

and then I get an error:

  • Compile [ 99.2%]: arm_rfft_fast_init_f32.c
    Compile [ 99.6%]: arm_rfft_init_f32.c
    Compile [100.0%]: schema_utils.cc
    Link: mbed
    /home/fjw/download/gcc-arm-none-eabi/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: BUILD/K64F/GCC_ARM/mbed-os/platform/source/mbed_sdk_boot.o: in function `__wrap_main': /home/fjw/ML-examples/tflu-kws-cortex-m/tensorflow/tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed/./mbed-os/platform/source/mbed_sdk_boot.c:177: undefined reference to `main'
    collect2: error: ld returned 1 exit status
    [ERROR] /home/fjw/download/gcc-arm-none-eabi/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: BUILD/K64F/GCC_ARM/mbed-os/platform/source/mbed_sdk_boot.o: in function `__wrap_main': /home/fjw/ML-examples/tflu-kws-cortex-m/tensorflow/tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed/./mbed-os/platform/source/mbed_sdk_boot.c:177: undefined reference to `main'
    collect2: error: ld returned 1 exit status

  • [mbed] ERROR: "/home/fjw/anaconda3/bin/python" returned error.
    Code: 1
    Path: "/home/fjw/ML-examples/tflu-kws-cortex-m/tensorflow/tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed"
    Command: "/home/fjw/anaconda3/bin/python -u /home/fjw/ML-examples/tflu-kws-cortex-m/tensorflow/tensorflow/lite/micro/tools/make/gen/linux_x86_64/prj/kws_cortex_m/mbed/mbed-os/tools/make.py -t GCC_ARM -m K64F --source . --build ./BUILD/K64F/GCC_ARM"

How can I solve this problem?

model path in README

It seems to me that the proper model folder is "model" and not "models" as suggested in the README:
ML-examples/armnn-mnist/README.md

Printing predictions

Hi,

I'm trying to print the prediction results and the labels, in addition to accuracy.
I'm not sure what I'm doing wrong here:

for mfcc, label in test_data:
    prediction = tflite_inference(mfcc, tflite_path)
    predicted_indices.append(np.squeeze(tf.argmax(prediction, axis=1)))

strlabel = "C:/tmp/speech_commands_train/conv_labels.txt"
labels_list = [line.rstrip() for line in tf.io.gfile.GFile(strlabel)]

top_k = prediction.argsort()[-5:][::-1]

for node_id in top_k:
    human_string = labels_list[node_id]
    score = predicted_indices[node_id]
    print('%s (score = %.5f)' % (human_string, score))

test_accuracy = calculate_accuracy(predicted_indices, expected_indices)
confusion_matrix = tf.math.confusion_matrix(expected_indices, predicted_indices,
                                            num_classes=model_settings['label_count'])

Error message

human_string = labels_list[node_id]
TypeError: only integer scalar arrays can be converted to a scalar index

Thank you in advance for your help.
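For what it's worth, that TypeError usually means node_id is an array rather than a scalar: if tflite_inference returns prediction with shape (1, num_classes), then prediction.argsort()[-5:][::-1] slices rows, not class indices, and the score lookup should index the score array rather than predicted_indices. A minimal sketch of one possible fix (untested; assumes prediction holds raw scores of shape (1, num_classes)):

import numpy as np

scores = np.squeeze(prediction)        # (1, num_classes) -> (num_classes,)
top_k = scores.argsort()[-5:][::-1]    # indices of the five highest scores
for node_id in top_k:
    human_string = labels_list[node_id]  # node_id is now a scalar int
    print('%s (score = %.5f)' % (human_string, scores[node_id]))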

How can I input this argument "const std::map<std::string, armnn::TensorShape>& inputShapes"

virtual armnn::INetworkPtr CreateNetworkFromBinaryFile(
    const char* graphFile,
    const std::map<std::string, armnn::TensorShape>& inputShapes,
    const std::vector<std::string>& requestedOutputs) = 0;

How can I supply this argument, "const std::map<std::string, armnn::TensorShape>& inputShapes", to CreateNetworkFromBinaryFile?
I found that inputShapes can be {} (i.e. empty) in the MNIST sample:
armnnCaffeParser::ICaffeParserPtr parser = armnnCaffeParser::ICaffeParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
    "model/lenet_iter_9000.caffemodel",
    { },          // input taken from file if empty
    { "prob" });  // output node

I want to know what this argument is used for and how I should use it. Could you give me some examples? Thank you!
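Judging from the comment in the sample ("input taken from file if empty"), inputShapes lets you supply or override the dimensions of named input tensors when the model file does not fully specify them. A minimal sketch, assuming a Caffe model whose input layer is called "data" and expects 1x1x28x28 input (both the layer name and the shape here are placeholders; use your own model's values):

std::map<std::string, armnn::TensorShape> inputShapes;
const unsigned int dims[] = { 1, 1, 28, 28 };       // N, C, H, W (placeholder shape)
inputShapes["data"] = armnn::TensorShape(4, dims);  // "data" is a placeholder layer name

armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
    "model/lenet_iter_9000.caffemodel",
    inputShapes,       // explicit input shapes instead of {}
    { "prob" });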

ML-examples/cmsisnn-cifar10 gives bad predictions

I use this code to convert the dog image to an array:

from PIL import Image, ImageOps
import numpy as np

np.set_printoptions(threshold=np.inf)

def resize_image(image, _width=32, _height=32):
    new_image = Image.open(image)
    new_image = ImageOps.fit(new_image , (_width, _height), Image.ANTIALIAS)
    new_image_rgb = new_image.convert('RGB')
    return np.asarray(new_image_rgb).flatten()

def print_array_for_c(_array):
    print("{",end="")
    for pixel in _array:
        print(pixel,end=",")
    print("}")

print_array_for_c(resize_image('dog.jpg'))

and then feed it into the NN:


/* from the dog image conversion */
#define IMG_DATA {255,255, ...  ,255,}  

uint8_t   image_data[3 * 32 * 32] = IMG_DATA;
q7_t      output_data[10];

char const *class_names[10] = {"airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"};

uint8_t get_max_index(q7_t *array, uint8_t len)
{
    int max = -128;
    int max_index = 0;  /* initialise so the result is defined even if no element exceeds -128 */
    for (int i = 0; i < len; i++)
    {
        if (max < array[i])
        {
            max = array[i];
            max_index = i;
        }
    }
    return max_index;
}

int main()
{
    printf("start execution, wait for the result...\n");

    /* start the execution */
    run_nn((q7_t*)image_data, output_data);

    arm_softmax_q7(output_data, 10, output_data);

    for (uint8_t i = 0; i < 10; i++)
    {
        printf("%d: %d\n", i, output_data[i]);
    }

    printf("It is %s.\n", class_names[get_max_index(output_data, sizeof(output_data)/sizeof(output_data[0]))]);

    return 0;
}

but the prediction is "Plane". I tried a lot of pictures and the predictions were never correct.

Please help me, thanks.
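One thing worth checking, for anyone debugging this: the generated network takes q7_t input (signed, -128..127), but the script above emits raw uint8 pixels (0..255), so the (q7_t*) cast flips every value above 127 to a negative number. A hedged sketch of the usual conversion, assuming the network was quantized to expect zero-centred input (the exact offset depends on how nn_quantizer preprocessed your training data):

import numpy as np

def to_q7(pixels):
    # shift [0, 255] to [-128, 127] so the values survive the signed cast
    return (pixels.astype(np.int16) - 128).astype(np.int8)

print_array_for_c(to_q7(resize_image('dog.jpg')))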

mnist make error

I followed the instructions at https://github.com/ARM-software/armnn/blob/branches/armnn_19_05/BuildGuideCrossCompilation.md.
Everything was OK, and the unit tests ran with no errors.

Then I built armnn-mnist.
My Makefile is:
ARMNN_LIB = ${HOME}/zoey/armnn/build
ARMNN_INC = ${HOME}/zoey/armnn/include
BOOST_ROOT = /home/zoey/armnn-devenv/boost_arm64_install
PROTOBUF = /home/zoey/armnn-devenv/google/x86_64_pb_install/lib

all: mnist_caffe mnist_tf

mnist_caffe: mnist_caffe.cpp mnist_loader.hpp
	aarch64-linux-gnu-g++ -O3 -std=c++14 -I$(ARMNN_INC) -I$(BOOST_ROOT) mnist_caffe.cpp -o mnist_caffe -L$(PROTOBUF) -L$(ARMNN_LIB) -lprotobuf -larmnn -larmnnCaffeParser -lpthread

mnist_tf: mnist_tf.cpp mnist_loader.hpp
	aarch64-linux-gnu-g++ -O3 -std=c++14 -I$(ARMNN_INC) -I$(BOOST_ROOT) mnist_tf.cpp -o mnist_tf -L$(PROTOBUF) -L$(ARMNN_LIB) -lprotobuf -larmnn -larmnnTfParser -lpthread

clean:
	-rm -f mnist_tf mnist_caffe

test: mnist_caffe mnist_tf
	LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./mnist_caffe
	LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./mnist_tf

However, the error happened as follows:

/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible /home/zoey/armnn-devenv/google/x86_64_pb_install/lib/libprotobuf.so while searching for -lprotobuf
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible /home/zoey/armnn-devenv/google/x86_64_pb_install/lib/libprotobuf.a while searching for -lprotobuf
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible //usr/local/lib/libprotobuf.so while searching for -lprotobuf
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: skipping incompatible //usr/local/lib/libprotobuf.a while searching for -lprotobuf
/usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: cannot find -lprotobuf

Thank you for helping me solve this problem.
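For anyone reading along: "skipping incompatible ... libprotobuf" means the linker found protobuf, but built for the wrong architecture: x86_64_pb_install is the host build, while the aarch64 cross-linker needs an aarch64 build of protobuf. A minimal sketch of the fix, assuming you cross-compiled protobuf as in the build guide (the arm64_pb_install path is an assumption; point it at wherever your aarch64 libprotobuf actually lives):

# point PROTOBUF at the aarch64 protobuf build, not the x86_64 host install
PROTOBUF = /home/zoey/armnn-devenv/google/arm64_pb_install/lib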

'RaggedTensor' + size mismatch

Hello ,

When running test.py using python test.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 --dct_coefficient_count 10 --window_size_ms 40 --window_stride_ms 20 --checkpoint ../Pretrained_models/DS_CNN/DS_CNN_S/ckpt/ds_cnn_0.94_ckpt
I get the following errors

Untarring speech_commands_v0.02.tar.gz...
Running testing on validation set...
Traceback (most recent call last):
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1838, in tensor_not_equals
return gen_math_ops.not_equal(self, other, incompatible_shape_error=False)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6573, in not_equal
ctx=_ctx)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6601, in not_equal_eager_fallback
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [])
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\eager\execute.py", line 280, in args_to_matching_eager
ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\eager\execute.py", line 280, in
ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
return func(*args, **kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\ops.py", line 1566, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 339, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 265, in constant
allow_broadcast=True)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 276, in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 301, in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\constant_op.py", line 98, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: TypeError: object of type 'RaggedTensor' has no len()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "test.py", line 182, in
test()
File "test.py", line 48, in test
val_data = audio_processor.get_data(audio_processor.Modes.VALIDATION).batch(FLAGS.batch_size)
File "C:\Users\x\Dropbox\Documents\x\Coding\KWS\tflu-kws-cortex-m\Training\data.py", line 190, in get_data
use_background = (self.background_data != []) and (mode == AudioProcessor.Modes.TRAINING)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 210, in wrapper
result = dispatch(wrapper, args, kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 122, in dispatch
result = dispatcher.handle(args, kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_dispatch.py", line 219, in handle
ragged_tensor_shape.RaggedTensorDynamicShape.from_tensor(y))
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_tensor_shape.py", line 470, in broadcast_dynamic_shape
shape_x = shape_x.broadcast_dimension(axis, shape_y.dimension_size(axis))
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_tensor_shape.py", line 351, in broadcast_dimension
condition, data=broadcast_err, summarize=10)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 247, in wrapped
return _add_should_use_warning(fn(*args, **kwargs),
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 164, in Assert
(condition, "\n".join(data_str)))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'Unable to broadcast: dimension size mismatch in dimension'
1
b'lengths='
0
b'dim_size='
1522930, 988891, 980062, 960000, 978488, 960000
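The last frame of the traceback points at the real problem: `self.background_data != []` compares a tf.RaggedTensor against a Python list, which dispatches to a ragged broadcast that fails on the unequal row lengths shown in the summary. A hedged sketch of one possible workaround in data.py (untested; assumes background_data is either a tf.RaggedTensor or an empty list):

# replace the `!= []` comparison with an explicit emptiness check
has_background = isinstance(self.background_data, tf.RaggedTensor) \
    and self.background_data.nrows() > 0
use_background = has_background and (mode == AudioProcessor.Modes.TRAINING)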

How to use Batch processing

The snippet below, from the ML-examples/armnn-mobilenet-quant/mobilenetv1_quant_tflite.cpp code, is used to load and preprocess the input.

    // Load and preprocess input image
    const std::vector<TContainer> inputDataContainers =
    { PrepareImageTensor<uint8_t>(programOptions.imagePath,
            inputTensorWidth, inputTensorHeight,
            normParams,
            inputTensorBatchSize,
            inputTensorDataLayout) };

programOptions.imagePath takes the path of a single image.

There is an option for batch processing via the "inputTensorBatchSize" argument of "PrepareImageTensor".

I am able to run it successfully for a single image.

How can I use the above code for batch processing with multiple images? Please help.

Thanks in advance.
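I haven't run this, but since the network reads its input from one contiguous container, one approach is to prepare each image separately and concatenate the resulting buffers, creating the network with inputTensorBatchSize equal to the number of images. A rough sketch (assumes TContainer is, as in this example, a variant over std::vector<uint8_t> for the quantized model, and that imagePaths is your own list of image files):

// prepare each image with batch size 1 and append its bytes to one buffer
std::vector<uint8_t> batchData;
for (const std::string& path : imagePaths)
{
    TContainer single = PrepareImageTensor<uint8_t>(path,
            inputTensorWidth, inputTensorHeight,
            normParams,
            1,                        // batch size 1 per image
            inputTensorDataLayout);
    const auto& img = boost::get<std::vector<uint8_t>>(single);
    batchData.insert(batchData.end(), img.begin(), img.end());
}
// one container holding the whole batch; the input tensor must have been
// sized with inputTensorBatchSize == imagePaths.size()
const std::vector<TContainer> inputDataContainers = { batchData };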

Running TfLite model using armnn

Hi,
How can I run my TfLite model using armnn?

I am using the quantized MobileNet model that armnn uses to test the TfLiteParser, but my output looks like this:

0
2.57139e-39
0
0
0
0
9.18355e-41
0
0
0
0
0
0
0
0
2.35099e-38
0
2.35099e-38
0
0
0
2.40609e-38
3.67342e-40
0
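Those tiny denormal values (e.g. 2.57139e-39) are what you typically see when a buffer of quantized bytes is reinterpreted as floats. For a quantized MobileNet, the output tensor is usually uint8, so it's worth binding a uint8 output buffer and dequantizing yourself. A minimal sketch, assuming the usual TfLiteParser flow (the output layer name here is the conventional one for quantized MobileNet V1 and is an assumption; use your model's actual name):

// bind a uint8 buffer for the quantized output instead of a float one
armnnTfLiteParser::BindingPointInfo outputBinding =
    parser->GetNetworkOutputBindingInfo(0, "MobilenetV1/Predictions/Reshape_1");
std::vector<uint8_t> outputData(outputBinding.second.GetNumElements());
armnn::OutputTensors outputTensors =
    { { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };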
