
This project is a fork of cipheraxat/reco.ai.


A Human Activity Recognising Web-App to classify your different Static and Dynamic Activities based on accelerometer and gyroscope readings.

Home Page: https://youtu.be/5ydDAwN0fu4

License: MIT License


reco.ai's Introduction

Whatcha doin'?

A deep-learning-based Human Activity Recognition web application that classifies static and dynamic activities from accelerometer and gyroscope readings.


Check out the project demonstration and explanation video here: https://youtu.be/5ydDAwN0fu4


Preview

Functionalities

  • Predicts the activity a person is performing from gyroscope and accelerometer readings.
  • The static and dynamic activities considered are:
  1. Walking
  2. Walking upstairs
  3. Walking downstairs
  4. Sitting
  5. Standing
  6. Laying
  • Minimal UI with an eye-soothing design.


Abstract

Mobile phones, or smartphones, have gradually become the central communication device in people’s lives. These tiny technological marvels, equipped with a set of sensors, are able to function simultaneously as a platform for human activity recognition. The embedded sensors, such as the accelerometer, digital compass, gyroscope, GPS and camera, enable applications spanning various domains and pave the way to easier, more accessible human activity recognition. With such capabilities, smartphones have immense applications in healthcare.

Smart environments can be developed to support people with risk factors in living independently. We propose a categorization of six human activities:

  1. Walking
  2. Walking upstairs
  3. Walking downstairs
  4. Sitting
  5. Standing
  6. Laying
We divide these into two broad categories:
  1. Static (standing, sitting, laying)
  2. Dynamic (walking, walking upstairs, walking downstairs)
We classify the activities performed by the person using the sensor data and report the results, challenges and project prospects.

TARGET AUDIENCE

The application targets people who need monitoring and help with their daily activities, especially those with loss of cognitive autonomy. It aims to determine a person's activities from sensor data. The primary application is in healthcare, especially for monitoring people undergoing physiotherapy or physical trauma recovery, and the elderly.

DATASET

The dataset used was the Human Activity Recognition Using Smartphones Data Set.

Description:

The data was obtained from experiments carried out on a group of 30 people aged 19-48 years. The activities performed were walking, walking upstairs, walking downstairs, sitting, standing and laying. Each subject wore a smartphone on the waist, and the embedded accelerometer and gyroscope were used to capture linear acceleration and angular velocity along all three axes at a rate of 50 Hz.


Attributes:

For each record in the dataset, the following are provided:
  1. Triaxial acceleration from the accelerometer (total acceleration) and the estimated body acceleration.
  2. Triaxial angular velocity from the gyroscope.
  3. A 561-feature vector with time and frequency domain variables.
  4. Its activity label.
  5. An identifier of the subject who carried out the experiment.

Sensor readings:

Accelerometer and Gyroscope readings
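
For concreteness, here is a minimal sketch of loading the raw inertial signals into arrays suitable for a recurrent model. It assumes the standard "UCI HAR Dataset" directory layout, in which each row of a signal file is one fixed-length window of 128 readings; adjust paths and names if your copy differs.

```python
import numpy as np

# The nine raw signal channels (assumed standard UCI HAR file names).
SIGNALS = [
    "body_acc_x", "body_acc_y", "body_acc_z",
    "body_gyro_x", "body_gyro_y", "body_gyro_z",
    "total_acc_x", "total_acc_y", "total_acc_z",
]

def load_split(split, root="UCI HAR Dataset"):
    """Return (X, y) with X of shape (windows, 128 timesteps, 9 channels)."""
    channels = [
        np.loadtxt(f"{root}/{split}/Inertial Signals/{name}_{split}.txt")
        for name in SIGNALS
    ]
    X = np.stack(channels, axis=-1)                                  # (n, 128, 9)
    y = np.loadtxt(f"{root}/{split}/y_{split}.txt").astype(int) - 1  # labels 0..5
    return X, y

X_train, y_train = load_split("train")
X_test, y_test = load_split("test")
```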

SUGGESTED SKELETAL ARCHITECTURE

Activity recognition systems have three main components:
  1. A low-level sensing module that continuously gathers relevant information about activities using microphones, accelerometers, light sensors, and so on.
  2. A feature processing and selection module that processes the raw sensor data into features that help discriminate between activities.
  3. A classification module that uses the features to infer what activity an individual or group of individuals is engaged in.
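
A minimal sketch of this three-stage pipeline is shown below; the function names and the simple per-channel statistics are illustrative assumptions, not the project's actual feature set.

```python
import numpy as np

def sense(raw_window):
    """Low-level sensing: one window of raw readings, shape (timesteps, channels)."""
    return np.asarray(raw_window, dtype=float)

def extract_features(window):
    """Feature processing: reduce each channel to simple statistics (illustrative)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def classify(features, model):
    """Classification: any trained classifier exposing a predict() method."""
    return model.predict(features.reshape(1, -1))[0]
```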

MODEL

The model is a deep learning network built on LSTM layers. It has been implemented with Keras 2.4.2 on a TensorFlow backend.


The model uses LSTM layers for two main reasons:

  1. To avoid vanishing and exploding gradients
  2. To learn effectively from noisy, hard-to-interpret data


The dataset is large and noisy; LSTMs handle this well, removing the need to clean the data and attenuate noise manually before training. With data of this size, plain gradient-descent training of a recurrent network is also prone to vanishing or exploding gradients, which LSTM units mitigate.

Architecture: the model is trained for 30 epochs with a mini-batch size of 16. It uses an LSTM layer with 32 hidden units and tanh activation, followed by a dropout layer with rate 0.5 to reduce the risk of overfitting, and ends with a dense output layer with a sigmoid activation. As the classification is multiclass, the loss used is categorical cross-entropy. The model is compiled with the RMSprop optimizer and evaluated on the accuracy metric.


An input of 9 signal channels per time step feeds the 32-unit LSTM, forming a compact recurrent learning network that ends in a fully connected (dense) layer combining all hidden activations into the final class outputs.
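
A minimal Keras sketch matching this description is given below. It is a reconstruction from the text above, not the project's exact code; the 128-step window length and one-hot label encoding are assumptions based on the standard dataset distribution.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

TIMESTEPS, CHANNELS, NUM_CLASSES = 128, 9, 6   # 128-step windows are an assumption

model = Sequential([
    LSTM(32, activation="tanh", input_shape=(TIMESTEPS, CHANNELS)),  # 32 hidden units
    Dropout(0.5),                                # reduce overfitting
    Dense(NUM_CLASSES, activation="sigmoid"),    # output layer as described above
])

model.compile(loss="categorical_crossentropy", optimizer="rmsprop",
              metrics=["accuracy"])

# Train with mini-batches of 16 for 30 epochs, as described; y_train must be
# one-hot encoded (e.g. with keras.utils.to_categorical).
# model.fit(X_train, y_train, batch_size=16, epochs=30, validation_split=0.1)
```

With categorical cross-entropy, a softmax output is the more conventional choice; the sigmoid activation above simply follows the description in this README.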

CHALLENGES

  1. Intraclass variation and interclass similarity
  2. Recognition in real-world settings, viz. complex environmental factors and multiple subjects
  3. Data Security
  4. Availability of technology
  5. Scalability
  6. Adaptability

PROJECT PROSPECTS

Ambient Assisted Living

Ambient Assisted Living (AAL) environments encompass technical systems and Internet of Things (IoT) tools to support seniors in their daily routines. They aim to enable seniors to live independently and safely for as long as possible when faced with declining physical or cognitive capacities. The demarcation between traditional and smart systems is automation: many existing commercial projects are based on predefined rules and actions, which can only be changed manually, thereby reducing efficiency. By integrating IoT and artificial intelligence, we get a futuristic amalgamation, Ambient Intelligence, which has immense prospects to help the ailing.

Visual human activity recognition

Smart environments that include cameras and sensors can capture images, which can then be used for surveillance and monitoring. This is a viable addition to the project that makes it even more serviceable and gives it a commercial use case.

REFERENCES

https://github.com/STRCWearlab/DeepConvLSTM
http://www.mdpi.com/1424-8220/16/1/115/html



Contributors

  • Hritik Bhandari
  • Akshat Anand
  • Pratyaksh Bhalla
  • Gayathri Seetharaman

License

This project is licensed under the MIT License.

Made with ❤️ by DS Community SRM

