
Harold-API

This project is the fruit of my internship at Harold Waste.

It summarizes in-depth research into deep learning models for performing a classification task over waste images.

Waste images are large and feature-rich, so classifying them reliably requires sophisticated models.

To that end, this work uses autoencoders, Restricted Boltzmann Machines, and transfer learning from pre-trained models to classify waste images.

Labeled data is not always easy to obtain, so two options can be considered:

  • perform data augmentation on an existing labeled data set
  • scrape additional images from the web
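As a concrete illustration of the first option, the snippet below generates label-preserving variants of a single image. This is a minimal NumPy sketch of the idea only, not the project's actual augmentation code, and the image here is a random stand-in:

```python
import numpy as np

def augment(image, rng):
    """Return simple label-preserving variants of one image array."""
    noisy = image + rng.normal(scale=0.05, size=image.shape)  # Gaussian noise
    return [
        np.fliplr(image),          # horizontal flip
        np.flipud(image),          # vertical flip
        np.rot90(image),           # 90-degree rotation
        np.clip(noisy, 0.0, 1.0),  # noise injection, clipped to the valid range
    ]

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))   # stand-in for one waste image
variants = augment(image, rng)    # 4 new samples per original image
```

Each transformation keeps the class label valid while enlarging the training set, which is the point of augmentation when labeled data is scarce.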

Models such as autoencoders, RBMs and DBNs (Deep Belief Networks: stacks of RBMs) are trained via a specific process. First, the model is trained on unlabeled data to learn to map its input to its output, in other words to reconstruct its input from the hidden code, the so-called features.

Then the model is fine-tuned on labeled data. This combination is called semi-supervised learning.

Finally, the model is tested on unseen data in order to measure its accuracy and efficiency.
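The stages above can be sketched end-to-end with a tiny fully connected autoencoder. This is a minimal NumPy illustration on random stand-in data, not the project's actual models: the encoder is pre-trained to reconstruct unlabeled inputs, then fine-tuned together with a logistic classification head on a small labeled set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 unlabeled samples, 40 labeled, 16 features each.
X_unlab = rng.normal(size=(200, 16))
X_lab = rng.normal(size=(40, 16))
y_lab = rng.integers(0, 2, size=40)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1) Unsupervised pre-training: reconstruct the input through a hidden code.
h_dim, lr = 8, 0.1
W1 = rng.normal(scale=0.1, size=(16, h_dim))  # encoder
W2 = rng.normal(scale=0.1, size=(h_dim, 16))  # decoder
for _ in range(200):
    H = sigmoid(X_unlab @ W1)        # hidden code ("features")
    err = H @ W2 - X_unlab           # reconstruction error
    gW2 = H.T @ err / len(X_unlab)
    gH = err @ W2.T * H * (1 - H)
    W1 -= lr * (X_unlab.T @ gH / len(X_unlab))
    W2 -= lr * gW2

# 2) Supervised fine-tuning: logistic head on the learned code.
Wc = rng.normal(scale=0.1, size=(h_dim,))
for _ in range(200):
    H = sigmoid(X_lab @ W1)
    grad_logits = sigmoid(H @ Wc) - y_lab      # cross-entropy gradient
    gH = np.outer(grad_logits, Wc) * H * (1 - H)
    Wc -= lr * (H.T @ grad_logits / len(X_lab))
    W1 -= lr * (X_lab.T @ gH / len(X_lab))     # encoder is fine-tuned too

# 3) Evaluation (here on the labeled set for brevity; the project
#    measures accuracy on unseen test data instead).
preds = (sigmoid(sigmoid(X_lab @ W1) @ Wc) > 0.5).astype(int)
acc = (preds == y_lab).mean()
```

The key point is that `W1` is shaped by the large unlabeled set before the small labeled set is ever used, which is what makes the semi-supervised setup attractive when labels are scarce.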

A small guide to the code:

  • Folder Data contains a first sub-folder with the waste images in their original form and a second sub-folder with the data produced by augmentation

  • Folder Models contains two sub-folders:

    • Clustering: the clustering model, a k-means approach built with the scikit-learn library
    • Deep_Learning: Python scripts for the models developed during my internship (fully connected autoencoder, convolutional autoencoder, denoising autoencoder, RBM, DBN, convolutional neural network, VGG16 with transfer learning, ...)
  • Folder Application contains graphs of the trained models, .h5 files storing the models' weights, .json files storing the models' architectures, and a Python file detailing the training process and the clustering results

  • Folder Services collects the utility functions used to build the models and to run the clustering and classification tasks
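For the VGG16 transfer-learning variant, the pattern is roughly the following. This is a sketch assuming the Keras API; `weights=None` is used so the snippet runs without downloading the ImageNet weights that real training would load, and the input size and number of waste classes are hypothetical:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Convolutional base; real training would pass weights="imagenet".
base = VGG16(weights=None, include_top=False, input_shape=(128, 128, 3))
base.trainable = False  # freeze the base so only the new head is trained

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),  # hypothetical: 5 waste classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The Application folder's convention: architecture as JSON, weights as HDF5.
architecture_json = model.to_json()
# model.save_weights("model.weights.h5")  # uncomment to write the weight file
```

Freezing the pre-trained base and training only a small classification head is what lets transfer learning work with a modest labeled waste data set.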

Training the deep learning models was done on FloydHub, a Platform-as-a-Service for training and deploying deep learning models in the cloud.

FloydHub offers flexible pricing for GPU and CPU instances, and it makes it easy to manage a workspace, upload and attach data, and run Jupyter notebooks.

harold-api's People

Contributors: wassimabida

Watchers: James Cloos
