

License: Other

CMake 1.45% C++ 68.29% C 12.19% Shell 0.21% Cuda 1.05% Fortran 8.78% JavaScript 0.05% CSS 0.04% Python 7.55% Makefile 0.12% Batchfile 0.04% Jupyter Notebook 0.13% ANTLR 0.02% C# 0.06%
iot-device sensor edge-machine-learning resource-constrained-ml machine-learning-algorithms microsoft-research machine-learning tensorflow deep-learning edge-computing

edgeml's Introduction

The Edge Machine Learning library

This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.

Machine learning models for edge devices need to have a small footprint in terms of storage, prediction latency, and energy. One instance of where such models are desirable is resource-scarce devices and sensors in the Internet of Things (IoT) setting. Making real-time predictions locally on IoT devices without connecting to the cloud requires models that fit in a few kilobytes.

Contents

Algorithms that shine in this setting in terms of both model size and compute:

  • Bonsai: Strong and shallow non-linear tree-based classifier.
  • ProtoNN: Prototype-based k-nearest neighbors (kNN) classifier.
  • EMI-RNN: Training routine to recover the critical signature from time-series data for faster and more accurate RNN predictions.
  • Shallow RNN: A meta-architecture for training RNNs that can be applied to streaming data.
  • FastRNN & FastGRNN - FastCells: Fast, Accurate, Stable and Tiny (Gated) RNN cells.
  • DROCC: Deep Robust One-Class Classification for training robust anomaly detectors.
  • RNNPool: An efficient non-linear pooling operator for RAM-constrained inference.

These algorithms can train models for classical supervised learning problems with memory requirements that are orders of magnitude lower than other modern ML algorithms. The trained models can be loaded onto edge devices such as IoT devices/sensors, and used to make fast and accurate predictions completely offline.
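To illustrate how such models can fit in a few kilobytes, here is a minimal NumPy sketch in the spirit of ProtoNN (purely illustrative; the names and layout below are assumptions, not the edgeml API): a handful of learned prototypes with per-prototype label vectors stand in for the full training set, and prediction is an RBF-weighted vote.

```python
import numpy as np

# Purely illustrative ProtoNN-style scorer (not the edgeml API): a few
# learned prototypes with soft label vectors stand in for the training
# set, so the whole "model" is a handful of small matrices.
def protonn_predict(x, prototypes, proto_labels, gamma=1.0):
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distance to each prototype
    sim = np.exp(-gamma * d2)                    # RBF similarity
    scores = sim @ proto_labels                  # similarity-weighted label vote
    return int(np.argmax(scores))

# Two prototypes, two classes: the model is just 8 floats.
B = np.array([[0.0, 0.0], [1.0, 1.0]])   # prototypes
Z = np.array([[1.0, 0.0], [0.0, 1.0]])   # per-prototype label scores
print(protonn_predict(np.array([0.9, 1.1]), B, Z))  # prints 1
```

The library learns the prototypes and label matrices jointly with sparsity constraints; the sketch only shows why inference needs so little memory.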

A tool that adapts models trained by the above algorithms for inference with fixed-point arithmetic:

  • SeeDot: Floating-point to fixed-point quantization tool.
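The core idea SeeDot automates, representing reals as scaled integers so inference needs only integer arithmetic, can be sketched in a few lines (illustrative only; this is not SeeDot's API, which generates fixed-point code from a trained model):

```python
# Illustrative fixed-point arithmetic in the spirit of what SeeDot automates
# (not SeeDot's API; SeeDot generates fixed-point inference code).
F = 8  # fractional bits: x is stored as the integer round(x * 2**F)

def to_fixed(x):
    return int(round(x * (1 << F)))

def to_float(q):
    return q / (1 << F)

def fixed_mul(a, b):
    # The product of two Q8 values carries 16 fractional bits;
    # shifting right by F returns it to the working scale.
    return (a * b) >> F

a, b = to_fixed(0.75), to_fixed(-1.5)
print(to_float(fixed_mul(a, b)))  # prints -1.125, using integer ops only
```

Choosing the scale F per variable to balance overflow against precision is exactly the problem the SeeDot compiler solves.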

Applications demonstrating use cases of these algorithms:

  • GesturePod: Gesture recognition pipeline for microcontrollers.
  • MSC-RNN: Multi-scale cascaded RNN for analyzing Radar data.

Organization

  • The tf directory contains the edgeml_tf package which specifies these architectures in TensorFlow, and examples/tf contains sample training routines for these algorithms.
  • The pytorch directory contains the edgeml_pytorch package which specifies these architectures in PyTorch, and examples/pytorch contains sample training routines for these algorithms.
  • The cpp directory has training and inference code for Bonsai and ProtoNN algorithms in C++.
  • The applications directory has code/demonstrations of applications of the EdgeML algorithms.
  • The tools/SeeDot directory has the quantization tool to generate fixed-point inference code.
  • The c_reference directory contains the inference code (floating-point or quantized) for various algorithms in C.

Please see install/run instructions in the README pages within these directories.

Details and project pages

For details, please see our project page, Microsoft Research page, the ICML '17 publications on the Bonsai and ProtoNN algorithms, the NeurIPS '18 publications on EMI-RNN and FastGRNN, the PLDI '19 publication on the SeeDot compiler, the UIST '19 publication on GesturePod, the BuildSys '19 publication on MSC-RNN, the NeurIPS '19 publication on Shallow RNNs, the ICML '20 publication on DROCC, and the NeurIPS '20 publication on RNNPool.

Also check out the ELL project, which can provide optimized binaries for some of the ONNX models trained by this library.

Contributors:

Code for algorithms, applications and tools contributed by:

Contributors to this project. New contributors are welcome.

Please email us your comments, criticism, and questions.

If you use software from this library in your work, please use the BibTeX entry below for citation.

@misc{edgeml04,
   author = {{Dennis, Don Kurian and Gaurkar, Yash and Gopinath, Sridhar and Goyal, Sachin 
              and Gupta, Chirag and Jain, Moksh and Jaiswal, Shikhar and Kumar, Ashish and
              Kusupati, Aditya and  Lovett, Chris and Patil, Shishir G and Saha, Oindrila and
              Simhadri, Harsha Vardhan}},
   title = {{EdgeML: Machine Learning for resource-constrained edge devices}},
   url = {https://github.com/Microsoft/EdgeML},
   version = {0.4},
}

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

edgeml's People

Contributors

aayan636, adityakusupati, aigen, anilkagak2, aron123, dependabot[bot], harsha-simhadri, kant, kimsangyeon-dgu, lamarrr, lovettchris, metastableb, microsoft-github-policy-service[bot], microsoftopensource, mj10, mr-yamraj, msftgits, oindrilasaha, pawankartiks, punitvara, pushkalkatara, saching007, shikharj, shishirpatil, siddharthdivi, sridhargopinath, suiyengar, suvadeep-iitb, t-chgupt


edgeml's Issues

Fixing pre-processing script for google speech datasets

@metastableB, in the process_google.py file, I think the right way to create multiple versions of the datasets with different classes is to base them on the classes listed in the labelmap, rather than including everything and mapping the unneeded classes to 0. This should be changed to support simple cases such as google-30, where the classes are only the 30 keywords and there is no noise class.

https://github.com/microsoft/EdgeML/blob/pytorch/pytorch/examples/SRNN/process_google.py#L35.

Testing pytorch implementations.

Leaving this issue here to keep track of developments.

  • Need to re-test every algorithm with the runner suites.
  • Re-run the notebooks and push updates whenever that makes sense.
  • Test the SRNN updates, with and without dropout.

Application to Extreme Multi-label Classification?

Hi,

Thank you for this great work. I am curious how to apply EdgeML to extreme multi-label classification (XML) problems. Your ProtoNN wiki and paper suggest it is a good fit, but it is not clear how to get started. In the C++ code, what should the data format be for multi-label problems? In the TensorFlow version, how should the model be updated for multi-label problems (how to define the loss and output layer, etc.)? Also, is only ProtoNN suitable for XML problems, or can the other algorithms be adapted too?

Cheers!

Can you shed some light on your ICML paper experiments with CUReT and Chars datasets?

Looking at your paper "Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things," it isn't clear to me how you set up the experiments for CUReT and Chars.

For example, for CUReT, Table 1 lists the total number of images as 4204 + 1403 = 5607, but if one downloads the dataset from http://www.robots.ox.ac.uk/~vgg/research/texclass/, the number of images is 5612.

The setup for the Chars dataset is also unclear to me. Table 1 lists the total number of images as 4397 + 1886 = 6283, but downloading the dataset from http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/, the number of images is 7705.

Can you please specify how you ran these experiments such that we can have a fair comparison with the algorithm proposed in your paper?

Documentation about your tests

Hello. I plan to try this paper on a microcontroller, and I wonder if you have documentation about loading the model/prediction code on an ATmega328P.
I have everything compiled and working on a PC, and while I saw the pull request for Arduino, I would like to do this without their IDE.

Question: have you looked at posits (a.k.a. unum type 3)

So, this is just a general curiosity question from someone who at best may fool with this library on his Arduinos at some point.

I found this repo through this article on Bonsai, which linked to the paper.

The part where it mentioned that 8-bit fixed-point math was used to avoid floating-point overhead struck me as interesting. I can only assume that this comes with trade-offs, but I know next to nothing about machine learning, so have no idea what those are.

However, it just made me wonder if you were aware of posits (which come in 8-bit variants as well), and how they would fit in this picture.

Cheers,
Job

Tracking macOS support

I've been sitting on the sidelines hoping someone would tackle macOS support, but it hasn't happened yet, so here's my start.

```
Jacobs-Air:~ jacobrosenthal$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.3.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Jacobs-Air:~ jacobrosenthal$
```

Yeah, we're clang masquerading as gcc on macOS; we need gcc 5.

```
brew install gcc5
```

I already have it:

```
Jacobs-Air:~ jacobrosenthal$ gcc-5 --version
gcc-5 (Homebrew GCC 5.5.0) 5.5.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```

Ah, config.mk already uses CC=g++-5.

Try to make:

```
Jacobs-MacBook-Air:EdgeML jacobrosenthal$ make -j
In file included from utils.h:7:0,
                 from utils.cpp:4:
pre_processor.h:7:17: fatal error: mkl.h: No such file or directory
```

MKL doesn't appear to be in brew yet (it was problematic on macOS? maybe not now?):
https://github.com/Homebrew/homebrew-science/issues/2840
I had to get it manually:
https://software.intel.com/en-us/get-started-with-mkl-for-osx

Thus, in config.mk:

```
MKL_ROOT=/opt/intel/mkl
```

and `$(MKL_ROOT)/lib/intel64` becomes `$(MKL_ROOT)/lib`, and `/opt/intel/mkl/lib/intel64/` becomes `/opt/intel/mkl/lib/` everywhere.

Then a bunch of this:

```
/opt/intel/mkl/include/mkl_service.h:30:20: fatal error: stdlib.h: No such file or directory
```

You probably won't need to, but my brewed gcc5 was broken, and I had to:

```
brew remove gcc5 && brew install gcc5
```

Then, in src/common/goldfoil.cpp, feenableexcept isn't available. There is some discussion about fixing that, but I'll comment it out for now:
https://stackoverflow.com/questions/247053/enabling-floating-point-interrupts-on-mac-os-x-intel

Then this I can't figure out:

```
/var/folders/hv/syzcx3w10850txk2z6tbnb9m0000gn/T//ccsNs2oT.s:60644:11: note: change section name to "__const"
  .section __DATA,__const_coal,coalesced
g++-5 -o ../../libProtoNN.so -shared -fPIC ProtoNNModel.o ProtoNNHyperParams.o ProtoNNParams.o ProtoNNTrainer.o ProtoNNPredictor.o ProtoNNFunctions.o cluster.o -lc
Undefined symbols for architecture x86_64:
  "EdgeML::loadMinMax(...)", referenced from: EdgeML::ProtoNN::ProtoNNPredictor::normalize() in ProtoNNPredictor.o
  "EdgeML::saveMinMax(...)", referenced from: EdgeML::ProtoNN::ProtoNNTrainer::normalize() in ProtoNNTrainer.o
  "EdgeML::l2Normalize(...)", ...
  "EdgeML::parallelExp(...)", ...
  "EdgeML::ResultStruct::scaleAndAdd(...)", "EdgeML::ResultStruct::scale(float)", "EdgeML::ResultStruct::ResultStruct()", ...
  "EdgeML::computeMinMax(...)", "EdgeML::minMaxNormalize(...)", ...
  "EdgeML::denseExportStat(...)", "EdgeML::sparseExportStat(...)", ...
  "EdgeML::exportDenseMatrix(...)", "EdgeML::exportSparseMatrix(...)", ...
  "EdgeML::global_log_info(...)", "EdgeML::global_log_trace(...)", "EdgeML::global_log_warning(...)", "EdgeML::global_log_diagnostic(...)", ...
  "EdgeML::getTopKScoresBatch(...)", "EdgeML::writeMatrixInASCII(...)", "EdgeML::computeModelSizeInkB(...)", ...
  "EdgeML::sequentialQuickSelect(...)", "EdgeML::mm(...)" (several overloads), ...
  "EdgeML::Data::finalizeData()", "EdgeML::Data::feedDenseData(...)", "EdgeML::Data::feedSparseData(...)", "EdgeML::Data::loadDataFromFile(...)", "EdgeML::Data::Data(...)", ...
  "EdgeML::Timer::nextTime(...)", "EdgeML::Timer::Timer(...)", "EdgeML::Timer::~Timer()", ...
  "EdgeML::FileIO::Data::Data(...)", "EdgeML::Logger::Logger(...)", "EdgeML::Logger::~Logger()", ...
  "EdgeML::safeDiv(float const&, float const&)", "EdgeML::evaluate(...)", "EdgeML::randPick(...)", "EdgeML::maxAbsVal(...)", ...
  "___cilkrts_cilk_for_32", referenced from: EdgeML::densekmeans::updateMinDistSqToCenters(...), EdgeML::sparsekmeans::kmeans(...), EdgeML::densekmeans::lloydsIter(...) in cluster.o
```

EMI-RNN notebooks error with "Cannot take the length of Shape with unknown rank."

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-a98b2336be21> in <module>()
     12     print("y_cap shape is:", y_cap.shape)
     13 
---> 14     emiTrainer(y_cap, y_batch)

/Users/jacobrosenthal/Downloads/EdgeML/tf/edgeml/trainer/emirnnTrainer.py in __call__(self, predicted, target)
    101             assert self.trainOp is not None
    102             return self.lossOp, self.trainOp
--> 103         self.__validateInit(predicted, target)
    104         assert self.__validInit is True
    105         if self.graph is None:

/Users/jacobrosenthal/Downloads/EdgeML/tf/edgeml/trainer/emirnnTrainer.py in __validateInit(self, predicted, target)
     76     def __validateInit(self, predicted, target):
     77         msg = 'Predicted/Target tensors have incorrect dimension'
---> 78         assert len(predicted.shape) == 4, msg
     79         assert predicted.shape[3] == self.numOutput, msg
     80         assert predicted.shape[2] == self.numTimeSteps, msg

/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py in __len__(self)
    482     """Returns the rank of this shape, or raises ValueError if unspecified."""
    483     if self._dims is None:
--> 484       raise ValueError("Cannot take the length of Shape with unknown rank.")
    485     return len(self._dims)
    486 

ValueError: Cannot take the length of Shape with unknown rank.
```

Bonsai for CIFAR10

Hi,

A couple of questions on applying Bonsai to CIFAR10, referring to your paper at http://harsha-simhadri.org/EdgeML/files/Bonsai.pdf

  1. Table 1 refers to a dataset called CIFAR10-2. What does the -2 signify, and is this dataset the same as
    the one from https://www.cs.toronto.edu/~kriz/cifar.html ?
    The reason I ask is that Table 1 lists the number of features as 400, whereas CIFAR10 should have 32x32x3 = 3072 features.

  2. Would it be possible for you to share the Bonsai training parameters for CIFAR10, similar to the USPS scripts in your repository?

Thanks!

tensorflow data files

A number of the directions call for loading from a test.npy or train.npy file. The provided text files appear to contain lines of the form "label featurenum:float ...", and doing np.load on the .txt file fails with an UnpicklingError. Any help?

Also, what/where is the CUReT data?
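The "label featurenum:float ..." layout described above looks like the LibSVM/SVMlight text format, which np.load cannot read (it only handles .npy/.npz and pickles). A minimal parser sketch under that assumption; the sample line, the feature count, and the 1-indexed-features convention are mine:

```python
import numpy as np

def parse_libsvm_line(line, num_features):
    """Parse one 'label idx:val idx:val ...' line into (label, dense vector)."""
    parts = line.split()
    label = float(parts[0])
    x = np.zeros(num_features)
    for token in parts[1:]:
        idx, val = token.split(":")
        x[int(idx) - 1] = float(val)  # assumes features are 1-indexed
    return label, x

label, x = parse_libsvm_line("3 1:0.5 4:2.0", 5)  # label=3.0, x=[0.5, 0, 0, 2.0, 0]
```

Stacking the parsed vectors row-wise then gives the dense matrix that np.save can turn into the expected .npy file.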

Fix warnings

mmaped.cpp(247): warning C4244: 'argument': conversion from 'LABEL_TYPE' to 'Eigen::EigenBase::Index', possible loss of data
mmaped.cpp(258): warning C4018: '<': signed/unsigned mismatch
mmaped.cpp(267): warning C4018: '>=': signed/unsigned mismatch
mmaped.cpp(350): warning C4244: '=': conversion from 'LABEL_TYPE' to 'featureCount_t', possible loss of data
mmaped.cpp(371): warning C4018: '<=': signed/unsigned mismatch
mmaped.cpp(476): warning C4018: '<=': signed/unsigned mismatch
mmaped.cpp(566): warning C4018: '<=': signed/unsigned mismatch
mmaped.cpp(672): warning C4018: '<=': signed/unsigned mismatch
mmaped.cpp(761): warning C4018: '<=': signed/unsigned mismatch
utils.cpp(314): warning C4717: 'EdgeML::exportDenseMatrix': recursive on all control paths, function will cause runtime stack overflow

BonsaiIngestTest.cpp(33): warning C4244: '=': conversion from 'double' to 'int', possible loss of data
BonsaiIngestTest.cpp(61): warning C4018: '<': signed/unsigned mismatch
BonsaiIngestTest.cpp(66): warning C4018: '<': signed/unsigned mismatch
BonsaiIngestTest.cpp(120): warning C4018: '<': signed/unsigned mismatch
BonsaiIngestTest.cpp(130): warning C4018: '<': signed/unsigned mismatch
BonsaiFunctions.cpp(838): warning C4996: 'localtime': This function or variable may be unsafe. Consider using localtime_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.

SeeDot Example Error

Hello, Development Team! I was trying the new SeeDot tool, following the example provided, but when I use the command:
python SeeDot.py -a protonn --train ../../tf/examples/ProtoNN/usps10/train.npy --test ../../tf/examples/ProtoNN/usps10/test.npy --model ../../tf/examples/ProtoNN/usps10/output -o arduino
it gives me an error.
I am using a Python 3.5 environment.
Packages: Antlr4 4.7.2, Numpy 1.16.2, Scikit-learn 0.20.3, jupyter 1.0.0, numpy 1.14.5, pandas 0.23.4, scipy 1.1.0, tensorflow 1.12.2.

Ubuntu 16.04 with gcc 1.7.4 and make 4.1.

The Output of the command:

Executing on protonn for Arduino

Train file: ../../tf/examples/ProtoNN/usps10/train.npy
Test file: ../../tf/examples/ProtoNN/usps10/test.npy
Model directory: ../../tf/examples/ProtoNN/usps10/output


Collecting profile data

Generating input files for float training dataset...done

Build...success
Execution...success
Accuracy is 94.787%


Performing search to find the best scaling factor

Generating input files for fixed training dataset...done

Testing with max scale factor of 0
Generating code...failed!

Testing with max scale factor of -1
Generating code...completed
seedot_fixed.cpp: In function ‘int seedotFixed(MYINT**)’:
seedot_fixed.cpp:44:69: error: invalid conversion from ‘int’ to ‘const MYINT* {aka const short int*}’ [-fpermissive]
 SparseMatMul(&Widx[0], 256, X, 128, 128, &tmp5[0][0], 128, &Wval[0]);
In file included from seedot_fixed.cpp:7:0:
library.h:18:6: note: initializing argument 2 of ‘void SparseMatMul(const MYINT*, const MYINT*, MYINT**, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void SparseMatMul(const MYINT *Aidx, const MYINT *Aval, MYINT **B, MYINT *C, MYINT K, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:44:69: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 SparseMatMul(&Widx[0], 256, X, 128, 128, &tmp5[0][0], 128, &Wval[0]);
In file included from seedot_fixed.cpp:7:0:
library.h:18:6: note: initializing argument 4 of ‘void SparseMatMul(const MYINT*, const MYINT*, MYINT**, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void SparseMatMul(const MYINT *Aidx, const MYINT *Aval, MYINT **B, MYINT *C, MYINT K, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:44:43: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 SparseMatMul(&Widx[0], 256, X, 128, 128, &tmp5[0][0], 128, &Wval[0]);
In file included from seedot_fixed.cpp:7:0:
library.h:18:6: note: initializing argument 6 of ‘void SparseMatMul(const MYINT*, const MYINT*, MYINT**, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void SparseMatMul(const MYINT *Aidx, const MYINT *Aval, MYINT **B, MYINT *C, MYINT K, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:44:61: error: invalid conversion from ‘const MYINT* {aka const short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 SparseMatMul(&Widx[0], 256, X, 128, 128, &tmp5[0][0], 128, &Wval[0]);
In file included from seedot_fixed.cpp:7:0:
library.h:18:6: note: initializing argument 8 of ‘void SparseMatMul(const MYINT*, const MYINT*, MYINT**, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void SparseMatMul(const MYINT *Aidx, const MYINT *Aval, MYINT **B, MYINT *C, MYINT K, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:51:10: error: invalid conversion from ‘const MYINT* {aka const short int*}’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 MatSub(&B[i][0][0], 1, &tmp6[0][0], 1, 25, 128, 1, &tmp5[0][0]);
In file included from seedot_fixed.cpp:7:0:
library.h:8:6: note: initializing argument 1 of ‘void MatSub(MYINT*, const MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatSub(MYINT *A, const MYINT *B, MYINT *C, MYINT I, MYINT J, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:51:65: error: invalid conversion from ‘int’ to ‘const MYINT* {aka const short int*}’ [-fpermissive]
 MatSub(&B[i][0][0], 1, &tmp6[0][0], 1, 25, 128, 1, &tmp5[0][0]);
In file included from seedot_fixed.cpp:7:0:
library.h:8:6: note: initializing argument 2 of ‘void MatSub(MYINT*, const MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatSub(MYINT *A, const MYINT *B, MYINT *C, MYINT I, MYINT J, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:51:54: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 MatSub(&B[i][0][0], 1, &tmp6[0][0], 1, 25, 128, 1, &tmp5[0][0]);
In file included from seedot_fixed.cpp:7:0:
library.h:8:6: note: initializing argument 8 of ‘void MatSub(MYINT*, const MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatSub(MYINT *A, const MYINT *B, MYINT *C, MYINT I, MYINT J, MYINT shrA, MYINT shrB, MYINT shrC);
seedot_fixed.cpp:56:44: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 Transpose(25, &tmp8[0][0], &tmp6[0][0], 1);
In file included from seedot_fixed.cpp:7:0:
library.h:26:6: note: initializing argument 1 of ‘void Transpose(MYINT*, MYINT*, MYINT, MYINT)’
 void Transpose(MYINT *A, MYINT *B, MYINT I, MYINT J);
seedot_fixed.cpp:56:30: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 Transpose(25, &tmp8[0][0], &tmp6[0][0], 1);
In file included from seedot_fixed.cpp:7:0:
library.h:26:6: note: initializing argument 3 of ‘void Transpose(MYINT*, MYINT*, MYINT, MYINT)’
 void Transpose(MYINT *A, MYINT *B, MYINT I, MYINT J);
seedot_fixed.cpp:60:82: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 MatMulNN(&tmp6[0][0], 4, 5, &tmp10[0][0], 1, &tmp8[0][0], 1, 25, 2, &tmp9[0], 0);
In file included from seedot_fixed.cpp:7:0:
library.h:10:6: note: initializing argument 2 of ‘void MatMulNN(MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulNN(MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:60:82: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 MatMulNN(&tmp6[0][0], 4, 5, &tmp10[0][0], 1, &tmp8[0][0], 1, 25, 2, &tmp9[0], 0);
In file included from seedot_fixed.cpp:7:0:
library.h:10:6: note: initializing argument 3 of ‘void MatMulNN(MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulNN(MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:60:48: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 MatMulNN(&tmp6[0][0], 4, 5, &tmp10[0][0], 1, &tmp8[0][0], 1, 25, 2, &tmp9[0], 0);
In file included from seedot_fixed.cpp:7:0:
library.h:10:6: note: initializing argument 6 of ‘void MatMulNN(MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulNN(MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:60:71: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 MatMulNN(&tmp6[0][0], 4, 5, &tmp10[0][0], 1, &tmp8[0][0], 1, 25, 2, &tmp9[0], 0);
In file included from seedot_fixed.cpp:7:0:
library.h:10:6: note: initializing argument 10 of ‘void MatMulNN(MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulNN(MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:64:62: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 ScalarMul(128, &tmp10[0][0], &tmp11[0][0], &tmp7, 1, 128, 1);
In file included from seedot_fixed.cpp:7:0:
library.h:28:6: note: initializing argument 1 of ‘void ScalarMul(MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void ScalarMul(MYINT *A, MYINT *B, MYINT *C, MYINT I, MYINT J, MYINT shrA, MYINT shrB);
seedot_fixed.cpp:64:46: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 ScalarMul(128, &tmp10[0][0], &tmp11[0][0], &tmp7, 1, 128, 1);
In file included from seedot_fixed.cpp:7:0:
library.h:28:6: note: initializing argument 4 of ‘void ScalarMul(MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT)’
 void ScalarMul(MYINT *A, MYINT *B, MYINT *C, MYINT I, MYINT J, MYINT shrA, MYINT shrB);
seedot_fixed.cpp:79:88: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 MatMulCN(&Z[i][0][0], 0, 128, &tmp16[0], 0, 10, 1, &tmp17[0][0], &tmp15[0][0], 128, 1);
In file included from seedot_fixed.cpp:7:0:
library.h:12:6: note: initializing argument 3 of ‘void MatMulCN(const MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulCN(const MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:79:54: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 MatMulCN(&Z[i][0][0], 0, 128, &tmp16[0], 0, 10, 1, &tmp17[0][0], &tmp15[0][0], 128, 1);
In file included from seedot_fixed.cpp:7:0:
library.h:12:6: note: initializing argument 8 of ‘void MatMulCN(const MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulCN(const MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:79:68: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 MatMulCN(&Z[i][0][0], 0, 128, &tmp16[0], 0, 10, 1, &tmp17[0][0], &tmp15[0][0], 128, 1);
In file included from seedot_fixed.cpp:7:0:
library.h:12:6: note: initializing argument 9 of ‘void MatMulCN(const MYINT*, MYINT*, MYINT*, MYINT*, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT, MYINT)’
 void MatMulCN(const MYINT *A, MYINT *B, MYINT *C, MYINT *tmp, MYINT I, MYINT K, MYINT J, MYINT shrA, MYINT shrB, MYINT H1, MYINT H2);
seedot_fixed.cpp:90:36: error: invalid conversion from ‘int’ to ‘MYINT* {aka short int*}’ [-fpermissive]
 ArgMax(1, &tmp19, 10, &tmp18[0][0]);
In file included from seedot_fixed.cpp:7:0:
library.h:24:6: note: initializing argument 1 of ‘void ArgMax(MYINT*, MYINT, MYINT, MYINT*)’
 void ArgMax(MYINT *A, MYINT I, MYINT J, MYINT *index);
seedot_fixed.cpp:90:12: error: invalid conversion from ‘MYINT* {aka short int*}’ to ‘MYINT {aka short int}’ [-fpermissive]
 ArgMax(1, &tmp19, 10, &tmp18[0][0]);
In file included from seedot_fixed.cpp:7:0:
library.h:24:6: note: initializing argument 2 of ‘void ArgMax(MYINT*, MYINT, MYINT, MYINT*)’
 void ArgMax(MYINT *A, MYINT I, MYINT J, MYINT *index);
make: *** [seedot_fixed.o] Error 1
Build...success
Execution...success
Traceback (most recent call last):
  File "SeeDot.py", line 94, in <module>
    obj.run()
  File "SeeDot.py", line 89, in run
    obj.run()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 325, in run
    return self.runForFixed()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 260, in runForFixed
    res = self.findBestScalingFactor()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 185, in findBestScalingFactor
    res = self.performSearch()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 136, in performSearch
    Common.Version.Fixed, Common.DatasetType.Training, Common.Target.X86, i)
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 118, in runOnce
    acc = self.predict(version, datasetType)
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/main.py", line 106, in predict
    acc = obj.run()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/predictor.py", line 128, in run
    acc = self.execute()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/predictor.py", line 109, in execute
    return self.executeForLinux()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/predictor.py", line 102, in executeForLinux
    acc = self.readStatsFile()
  File "/home/nelsonmartins/EdgeML/Tools/SeeDot/seedot/predictor.py", line 121, in readStatsFile
    return float(stats[0])
IndexError: list index out of range

Run ProtoNN

I am trying to run the ProtoNN algorithm on my Ubuntu 16.04 machine, but I am unable to work out what inputs to provide to the code and in what format. Could you provide a short set of instructions on how to run ProtoNN on the training and testing data specified in the README?
Thanks

Error in EdgeML/tf/examples/FastCells/helpermethods.py

In the method saveMeanStd(mean, std, currDir), lines 267-268 read:
np.save(os.path.join(dataDir, 'mean.npy'), mean)
np.save(os.path.join(dataDir, 'std.npy'), std)

Error: the variable dataDir is undefined.

I guess it should be:
np.save(os.path.join(currDir, 'mean.npy'), mean)
np.save(os.path.join(currDir, 'std.npy'), std)

Shall I raise a PR?
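The proposed fix can be sketched in isolation; the helper below reproduces the corrected two lines (the sample arrays and the temporary directory are only for illustration):

```python
import os
import tempfile
import numpy as np

def saveMeanStd(mean, std, currDir):
    # Corrected version: write to currDir (the actual argument)
    # instead of the undefined dataDir.
    np.save(os.path.join(currDir, 'mean.npy'), mean)
    np.save(os.path.join(currDir, 'std.npy'), std)

outDir = tempfile.mkdtemp()
saveMeanStd(np.zeros(3), np.ones(3), outDir)  # writes mean.npy and std.npy
```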

IHT routine failing due to numpy call on torch tensor

The IHT routine fails in BonsaiTrainer due to the use of np.copy() on a Torch tensor.

Epoch Number: 32

Classification Train Loss: 0.1960460032439894
Training accuracy (Classification): 0.9822221928172641
Test accuracy 0.931241
MarginLoss + RegLoss: 0.18808786571025848 + 0.13880659639835358 = 0.32689446210861206


Epoch Number: 33
Traceback (most recent call last):
  File "bonsai_example.py", line 94, in <module>
    main()
  File "bonsai_example.py", line 89, in main
    dataDir, currDir)
  File "/home/pushkalkatara/pytorch/lib/python3.5/site-packages/edgeml-0.2.1-py3.5.egg/edgeml_pytorch/trainer/bonsaiTrainer.py", line 337, in train
  File "/home/pushkalkatara/pytorch/lib/python3.5/site-packages/edgeml-0.2.1-py3.5.egg/edgeml_pytorch/trainer/bonsaiTrainer.py", line 129, in runHardThrsd
  File "/home/pushkalkatara/pytorch/lib/python3.5/site-packages/numpy/lib/function_base.py", line 792, in copy
    return array(a, order=order, copy=True)
  File "/home/pushkalkatara/pytorch/lib/python3.5/site-packages/torch/tensor.py", line 458, in __array__
    return self.numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
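A minimal reproduction of the failure and the fix the error message itself suggests (the tensor shape is arbitrary):

```python
import numpy as np
import torch

W = torch.randn(3, 3, requires_grad=True)
# np.copy(W) raises: "Can't call numpy() on Variable that requires grad."
W_np = np.copy(W.detach())  # detach from the autograd graph before copying
```

In the IHT step this keeps the thresholding math in NumPy without touching gradients; the detached copy shares no graph state with W.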

FastGRNN Arduino/C++ code

Hello, I have a brief question about the results in the NIPS paper.

Did you use SeeDot to generate the FastRNN/FastGRNN code for the Arduino, or is it something you built from scratch that is not yet available in the repo?

Thank you.

bat script

Add an equivalent .bat file, similar to the existing .sh file, for running the ProtoNN/Bonsai tests.

SeeDot Question

Hi, I'm using ProtoNN with the SeeDot tool for my C program, but I have some doubts regarding the tool.

  1. In my program I am using 32-bit variables, so I changed WordLength from 16 to 32 in common.py and changed the define to INT32 in datatypes.h in the predictor folder. I just wanted to know if anything else needs to be changed.
  2. The size of the exponential tables stays constant between 16 bits and 32 bits; does it need to be changed in the code? In the paper, the variable T has the value 6, i.e. 6 bits are read to index a table.
  3. Continuing with the exponential: I follow the theory in the paper, but in the code I do not understand the calculations of the variables tmp12, tmp13, tmp14.
    Currently, the precision of the C model is very close to the Python model even though it ignores 20 bits of the 32-bit variables in the exponential calculation. I wanted to understand these calculations so that I can implement a third table covering more bits of the variable (greater precision?) while keeping the size of the other exponential tables.
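The two-table scheme the questions above refer to can be sketched as follows, as I understand it from the paper: an argument with 2*T significant bits is split into its high and low T-bit halves, and since exp(-(h + l)) = exp(-h) * exp(-l), two tables of 2**T entries replace one table of 2**(2*T) entries. This also shows why a third table would extend coverage to 3*T bits. T = 6 as in the paper; everything else here is an illustrative reconstruction, not SeeDot's actual code:

```python
import math

T = 6  # bits read per table lookup, as in the paper
expA = [math.exp(-(i << T)) for i in range(1 << T)]  # indexed by the high T bits
expB = [math.exp(-i) for i in range(1 << T)]         # indexed by the low T bits

def exp_neg(x):
    """exp(-x) for a non-negative integer x with up to 2*T significant bits."""
    return expA[x >> T] * expB[x & ((1 << T) - 1)]
```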

Error in run_ProtoNNPredict_usps10.sh

There is a simple issue in the script that runs ProtoNNPredict on the example dataset. I have highlighted it in the attached image: the number of iterations should be 20, not 2. Please fix this in one of the next commits so that the script runs immediately after cloning and building the project.

EdgeML on Raspberry Pi 3

I have tried to run Bonsai and ProtoNN on a Raspberry Pi 3, but there was a problem with the MKL installation: MKL only runs on Intel processors, not ARM processors.

You mention that you have tested on an Arduino Uno, but I wonder how that is possible.

Remove hardcodings

Remove hardcoded file paths such as:

std::ifstream ifs("/home/t-vekusu/ICMLDatasets/multiclass/mnist/train");

#pragma comment(lib, "../../../../Libraries/MKL/Win/Microsoft.MachineLearning.MklImports.lib")

How to predict using bonsai or protoNN in python

Hello, I trained a Bonsai model and a ProtoNN model using the Python code presented in the examples. I would like to know whether there is a way to run only prediction in Python with a trained model.
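For ProtoNN, prediction in plain Python reduces to the paper's scoring rule once the trained matrices are available: project the input with W, compute RBF similarities to the prototypes B, and aggregate the prototype label vectors Z. The sketch below follows that rule with made-up shapes and random stand-ins for the trained matrices; loading them from the exported files is left to the reader:

```python
import numpy as np

def protonn_predict(x, W, B, Z, gamma):
    """ProtoNN scoring: W is d_hat x d, B is d_hat x m prototypes,
    Z is L x m prototype label vectors, gamma is the RBF width."""
    wx = W @ x                                   # project input into d_hat dims
    d2 = ((B - wx[:, None]) ** 2).sum(axis=0)    # squared distance to each prototype
    sim = np.exp(-(gamma ** 2) * d2)             # RBF similarity per prototype
    scores = Z @ sim                             # aggregate label vectors
    return int(np.argmax(scores))                # predicted class index

rng = np.random.default_rng(0)  # stand-ins for trained W, B, Z
pred = protonn_predict(rng.normal(size=8), rng.normal(size=(4, 8)),
                       rng.normal(size=(4, 5)), rng.normal(size=(3, 5)), 1.0)
```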

Refactoring out serialization code

Move the serialization code out of Bonsai model. One option is to have a serialization base class and different strategies for custom export which takes in a Bonsai model.

process_google.py MemoryError

The script process_google.py reads all the .wav files into a single numpy array, which can cause a MemoryError on many systems.
Using disk storage via numpy.memmap, storing the array as HDF5 with PyTables, or using a library such as Pandas might solve this issue.
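The memmap suggestion can be sketched like this: write each clip's features straight into a disk-backed .npy file instead of accumulating everything in RAM. The shapes, the path, and the featurize stand-in are made up for illustration:

```python
import os
import tempfile
import numpy as np

n_clips, n_feats = 1000, 99 * 32  # illustrative sizes, not the real dataset's
path = os.path.join(tempfile.mkdtemp(), "features.npy")

# open_memmap creates a valid .npy file backed by disk, not RAM
out = np.lib.format.open_memmap(path, mode="w+", dtype=np.float32,
                                shape=(n_clips, n_feats))
for i in range(n_clips):
    out[i] = np.full(n_feats, i, dtype=np.float32)  # stand-in for featurize(wav[i])
out.flush()

features = np.load(path, mmap_mode="r")  # later consumers can also read lazily
```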

Confusion about Parsing Protocol

Hello, I am working on cross-compiling ProtoNNPredict for a MIPS device, the Onion Omega 2+ (an MT7688 board). It hangs while the model file, "test.txt", is memory-mapped and being read and interpreted. It has been hard to debug this without knowing exactly how the parser works. Can you explain it to me with reference to the problem variables index_value and labelValue?

SeeDot: ProtoNN X86 generated code (32-bit) gives segmentation fault

I'm working on using the X86 backend instead of the Arduino backend for the SeeDot compiler. Edit: Here I'm using ProtoNN. I'm able to generate C++ code, but I get a segmentation fault on the following line in seedot_fixed.cpp:

SparseMatMul(&Zidx[0], &Zval[0], X, &tmp6[0][0], 257, 128, 128, 2);

If I instead use 256, such that the line then becomes

SparseMatMul(&Zidx[0], &Zval[0], X, &tmp6[0][0], 256, 128, 128, 2);

then it works.

If we then assume that I have generated an executable called main and run this with the following command

./main protonn fixed training

then I get the following output

#test points = 2007
Correct predictions = 352
Accuracy = 17.539

although, at the end of the training, I'm presented with the following output

Execution...success
Accuracy is 89.537%

Edit: If I use Bonsai instead of ProtoNN, I get an accuracy of 26.5% (instead of 17.5% as above), and the segmentation fault is gone.

Some questions:

  • What is the reason for the segmentation fault?
  • How come the accuracy is low (the first number I mentioned) and not equal to the second?
  • I might have made some mistakes on the way to generating the X86 code and would be very glad for any directions on how to do this correctly. I had to make several changes to SeeDot.py and main.py in the seedot directory, as they were quite hardcoded for the Arduino platform.

Input Values in Arduino Sketch

Hello!
I am using the SeeDot to generate the code for the Arduino and I have some questions regarding the Arduino Sketch.
1.) Where do the test values in the array X, stored in the device's flash memory, come from?
2.) How do the input values need to be presented? I am using Bonsai to create the model, and the training and test data are presented as integer values. When using the Arduino sketch, do the input values need to be scaled? If so, which scaling factor should be used? The data stored in the array X has different values than any training/test data used to create the model.

Thank you very much!
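On the scaling question above: as I understand SeeDot-style fixed-point code, a real input x is stored as the integer round(x * 2^scale), where the scale is chosen by SeeDot's search, so the concrete value 12 below is only an example, not the factor any particular sketch uses:

```python
def to_fixed(x, scale):
    """Quantize a real value to the integer a fixed-point X array would store."""
    return int(round(x * (1 << scale)))

def to_float(q, scale):
    """Recover the approximate real value from its fixed-point integer."""
    return q / (1 << scale)

q = to_fixed(0.3, 12)   # 0.3 * 4096 rounds to 1229
```

This is why the X array's integers look unlike the raw training data: they are the training features after quantization with the compiler-chosen scale.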
