opendatacam / opendatacam-mobile

OpenDataCam mobile app for Android

Home Page: https://play.google.com/store/apps/details?id=com.opendatacam

License: MIT License

Java 0.60% CSS 0.01% HTML 0.05% JavaScript 0.06% CMake 0.76% C 55.17% C++ 42.97% Python 0.39%

opendatacam-mobile's Introduction

OpenDataCam – An open source tool to quantify the world

OpenDataCam is an open source tool that helps quantify the world. Using computer vision, OpenDataCam understands and quantifies moving objects. The simple setup allows everybody to count moving objects from cameras and videos.

People use OpenDataCam for many different use cases. It is especially popular for traffic studies (modal split, turn count, etc.), but OpenDataCam detects 50+ common objects out of the box and can be used for many more things. And in case it does not detect what you are looking for, you can always train your own model.

OpenDataCam uses machine learning to detect objects in videos and camera feeds. It then follows the objects as they move across the scene. Define counters via the easy-to-use UI or API, and every time an object crosses a counter, OpenDataCam increments its count.
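The counting mechanism can be reduced to a segment-intersection test: each tracked object contributes a short movement segment per frame, and a counter increments when that segment crosses the counter line. A minimal sketch with hypothetical helper names (not OpenDataCam's actual implementation):

```javascript
// Minimal sketch of line-crossing counter logic (hypothetical, not the
// actual OpenDataCam code). An object is counted when the segment between
// its previous and current position crosses the counter line.
function cross(o, a, b) {
  // 2D cross product of (a - o) and (b - o): the sign tells which side of
  // the line through o and a the point b lies on.
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

function segmentsIntersect(p1, p2, q1, q2) {
  const d1 = cross(q1, q2, p1);
  const d2 = cross(q1, q2, p2);
  const d3 = cross(p1, p2, q1);
  const d4 = cross(p1, p2, q2);
  // Proper crossing: endpoints on strictly opposite sides, both ways.
  // (Collinear touches are ignored in this sketch.)
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

// Counter line, and one tracked object moving across it between two frames
const counter = { a: { x: 0, y: 5 }, b: { x: 10, y: 5 } };
const prev = { x: 4, y: 2 };
const curr = { x: 5, y: 8 };

console.log(segmentsIntersect(prev, curr, counter.a, counter.b)); // true
```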

Demo Videos

👉 UI Walkthrough (2 min, OpenDataCam 3.0) 👉 UI Walkthrough (4 min, OpenDataCam 2.0) 👉 IoT Happy Hour #13: OpenDataCam 3.0
OpenDataCam 3.0 Demo OpenDataCam IoT

Features

OpenDataCam comes feature packed; the highlights are:

  • Multiple object classes
  • Fine grained counter logic
  • Trajectory analysis
  • Real-time or pre-recorded video sources
  • Run on small devices in the field or data centers in the cloud
  • You own the data
  • Easy to use API

🎬 Get Started, quick setup

The quickest way to get started with OpenDataCam is to use the existing Docker Images.

Prerequisites

Installation

# Download install script
wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.2/docker/install-opendatacam.sh

# Give exec permission
chmod +x install-opendatacam.sh

# Note: You will be asked for sudo password when installing OpenDataCam

# Install command for Jetson Nano
./install-opendatacam.sh --platform nano

# Install command for Jetson Xavier / Xavier NX
./install-opendatacam.sh --platform xavier

# Install command for a Laptop, Desktop or Server with NVIDIA GPU
./install-opendatacam.sh --platform desktop

This command will download and start a Docker container on the machine. After it finishes, the Docker container starts a webserver on port 8080 and runs a demo video.

Note: The Docker container is started in auto-restart mode, so if you reboot your machine it will automatically start OpenDataCam on startup. To stop it, run docker-compose down in the same folder as the install script.

Use OpenDataCam

Open your browser at http://[IP_OF_JETSON]:8080. (If you are running with the Jetson connected to a screen, try http://localhost:8080.)

You should see a video of a busy intersection where you can immediately start counting.

Next Steps

Now you can…

  • Drag'n'drop a video file into the browser window to have OpenDataCam analyze this file
  • Change the video input to run from a USB cam or other cameras
  • Use custom neural network weights

and much more. See Configuration for a full list of configuration options.

🔌 API Documentation

In order to solve use cases that aren't covered by our OpenDataCam base app, you might be able to build on top of our API instead of forking the project.

https://opendatacam.github.io/opendatacam/apidoc/

🗃 Data export documentation

🛠 Development notes

See Development notes

💰️ Funded by the community

  • @rantgithub funded work to add Polygon counters and to improve the counting lines

📫️ Contact

Please ask any questions you have around OpenDataCam in the GitHub Discussions. Bugs, feature requests and anything else regarding the development of OpenDataCam are tracked in GitHub Issues.

For business inquiries or professional support requests please contact Valentin Sawadski or visit OpenDataCam for Professionals.

💌 Acknowledgments

opendatacam-mobile's People

Contributors: b-g, tdurand, vsaw

opendatacam-mobile's Issues

Implementing Object detection on Android with NCNN

Starting to gather some research / progress notes on GitHub issues, for future reference + to help me organize the craziness in my head.

After benchmarking plenty of example apps / frameworks etc., I came to the conclusion, at this time of writing / state of my knowledge, that the fastest and most portable way to run YOLO on Android (and iOS if the Android app is successful) is to use the NCNN framework from Tencent: https://github.com/Tencent/ncnn (Tencent being a company of the scale of Google in China). The license is MIT.

There is a very nice example app showcasing several neural networks : https://github.com/cmdbug/YOLOv5_NCNN

Think of NCNN as a framework to run a neural network (like darknet, TensorFlow or PyTorch), but super optimized for CPU inference on mobile phones.

It is very, very optimized for Android and iPhone CPUs. I get 17 FPS for YOLOv4-tiny on a Xiaomi Mi 8 (a 200€ phone); for comparison, with TensorFlow Lite I think I get 2 FPS, so this is crazy magic. The interesting thing is that it also aims to support lots of platforms (https://github.com/Tencent/ncnn#supported-platform-matrix), maybe something to watch in the future to run on Raspberry Pis, Jetsons, the web... Right now it is not very performant on GPUs, but they are working on it. Another very impressive demo is that you can ship NCNN on the web via WebAssembly: https://github.com/nihui/ncnn-webassembly-nanodet (there it runs NanoDet, a new lightweight YOLO-like neural network that is almost as accurate but faster: https://github.com/RangiLyu/nanodet).

The other super great news about being able to run YOLOv4-tiny on mobile is that you can train custom weights the same way and then convert them to NCNN-"compatible" weights, and we already know that YOLOv4-tiny is accurate enough.

The code from https://github.com/cmdbug/YOLOv5_NCNN is licensed GPLv3, but I think the actual YOLOv4-tiny C++ code, which is the only part we need, is available as MIT code here: https://github.com/Tencent/ncnn/blob/master/examples/yolov4.cpp. So this is to be investigated to determine the future license of OpenDataCam mobile.

Here is how to integrate it on Android:

  • NCNN comes as a binary file you include in the app
  • You can write C++ code that uses NCNN, e.g. YOLOv4-tiny.cpp
  • You call this C++ code from Java / Kotlin
  • This gives you a YOLOv4.detect() function that takes an image as input and returns a list of boxes
  • Then you hook into the Android camera API to feed frames to it
  • You build some "bridge" to be able to launch all this machinery from an Android WebView, render your UI on top of the camera preview, and get the frame data in real time

This sounds super easy 😁, but I spent the last 2 weeks really understanding how to do this in practice, by studying the code of https://github.com/cmdbug/YOLOv5_NCNN, Android app development and the camera APIs.

The good news is that I have it mostly figured out 🎉. I ended up writing my own "glue" code using the latest version of the CameraX API, which simplifies things a bit and also supports things the example app does not, like camera / device orientation (it only worked in portrait). It is still a bit buggy though: the coordinates of the boxes are a bit off; there is still some aspect ratio magic I need to figure out.

I also put together a working WebView bridge using Capacitor (https://capacitorjs.com/), and I'm able to render an HTML canvas on top of the camera preview which draws the boxes...

The whole thing seems as performant as the native Android demo app, so I guess the "core" proof of concept is mostly under control now... Demo to try soon!
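The "aspect ratio magic" mentioned above usually comes down to mapping boxes from the analyzed frame into the letterboxed or cropped preview. A hypothetical sketch of the letterbox case (illustrative names, not the actual opendatacam-mobile code):

```javascript
// Hypothetical sketch: map a detection box from camera-frame coordinates to
// canvas/preview coordinates when the preview letterboxes the frame
// (uniform scale, centered). Not the actual opendatacam-mobile code.
function frameToView(box, frame, view) {
  // Uniform scale that fits the whole frame inside the view
  const scale = Math.min(view.w / frame.w, view.h / frame.h);
  const offX = (view.w - frame.w * scale) / 2; // horizontal letterbox bars
  const offY = (view.h - frame.h * scale) / 2; // vertical letterbox bars
  return {
    x: box.x * scale + offX,
    y: box.y * scale + offY,
    w: box.w * scale,
    h: box.h * scale,
  };
}

// 640x480 analyzed frame shown in a 1080x1920 portrait preview
const mapped = frameToView(
  { x: 320, y: 240, w: 100, h: 50 },
  { w: 640, h: 480 },
  { w: 1080, h: 1920 }
);
console.log(mapped);
```

A "crop to fill" preview would use Math.max instead of Math.min and negative offsets; which one applies depends on the preview view's scale mode.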

Change hardcoded path to app into something relative

Right now we hardcode the path to the app in android with this for NeDB and for Next.js

/data/data/com.opendatacam

This works if the app is installed on internal storage, but I think it can fail if the app is installed on the SD card.

Need to see how to make this robust.

Improve assets copy to android filesystem on first install & on update

Problem

In order to use nodejs-mobile, you first need to copy the Node.js app files to the phone filesystem; you can't start Node.js directly from the APK file. (This has to be done when the app is first installed, and any time the Node.js app changes, i.e. on each app update.)

Right now we were using the "recommended" method from nodejs-mobile, which is to ship the app + node_module folder in the assets directory and recursively copy this to the phone file system https://code.janeasystems.com/nodejs-mobile/android/node-project-folder#copy-the-nodejs-project-at-runtime-and-start-from-there .

The problem is that with a big node_modules folder with quite a lot of dependencies, like OpenDataCam's, this means copying 24,723 files and 4,709 folders, 142 MB in total. And on top of adding 142 MB to the app size, copying lots of small files is slow.

"Out of the box", the first app install (and each update) takes 2min30s.

Two things we are looking to improve here:

  • App first install and subsequent update time: i.e. copy files faster
  • App dependency size: reduce that 142 MB and have fewer files to copy

copy files faster

Turns out there was a low-hanging fruit here: instead of copying all the files recursively, which is slow, we create a zip of the Node.js app at build time and then unzip it on the phone (idea from Stack Overflow: https://stackoverflow.com/a/42415755).

I've tried that: c1fcca4, and we go from 2min30s to around 20 seconds. So this is already a huge win 🙌

The only downside is that we need to integrate the zipping into the build process. For now I'm doing it manually with this command:

zip -0 -r ../nodejs-project.zip .

Have less files (and less MBs) to copy

Having 24,723 files in 4,709 folders for 142 MB of node_modules is a bit crazy when the client-side part of the app (what is actually loaded in the browser) is at most 5 MB. The server-side part obviously has Node.js as a dependency, but that is not even included in those 142 MB (it is 142 MB + the 30-50 MB of Node.js). My guess is that it might also be only 5-20 MB in the end.

Normally we never care about this, because we do the npm install on the server and the size of the node_modules folder doesn't matter; what matters is how much you send to the client (see explanation here: vercel/next.js#14339). Here we also want to reduce it, to:

  • avoid having a huge app of 300 MB (which is not that high priority)
  • speed up the time to copy files at install time by reducing the number and weight of files

One tool at our service for this is npm prune --production, which you can run after npm run build to delete all the node_modules dependencies listed as "devDependencies" in package.json: https://github.com/opendatacam/opendatacam/blob/master/package.json#L43

Out of the box this already reduces it a bit, down to 115 MB (121,226,417 bytes) and 22,298 files, 4,231 folders.

What I need to do next is to improve this, by:

  • putting more things in devDependencies; some packages may be listed in the normal dependencies but are not needed at runtime
  • removing some dependencies, for example mongodb: #7
  • maybe cherry-picking node_modules dependencies more finely than npm prune does (which is, I think, just copy / do not copy); here is an analysis of the dependency tree. For example, our node-moving-things-tracker is 10 MB because it ships MOT17 benchmark files, while Node.js really uses 3 JavaScript files, maybe 100 kB max 😉

(Screenshots: dependency tree analysis)

Persistence Layer : MongoDB -> NeDB

No, we won't run MongoDB on android (for now ;-) )

https://github.com/louischatriot/nedb sounds perfect; we already used it for the Twitter bot. It is lightweight and it was working very well (@b-g recommended this back in the day 🙌)

TODO:

Potential mitigations for CPU throttling (thermal heating)

Problem

Life is unfair 😁.

I was happily running OpenDataCam and playing with it in the street, and after 2 min I noticed some FPS drops (from 20-17 FPS to 15 FPS), then from 15 to 12, and then from 12 to 9 FPS (this is over 10 min). It was around 25°C in the afternoon.

My first thought was "shit", the Node.js dependency is affecting long-term runs. But then I did the test with the "native" benchmark app and got the same problem.

The issue is caused by CPU thermal throttling, which is also well known to mobile game developers: the device overheats and the CPU needs to throttle to avoid damage.

I ran a benchmark over 15 min this morning (temperature more like 16°C outside) with https://play.google.com/store/apps/details?id=skynet.cputhrottlingtest&hl=fr&gl=US, and it held up way better...

(Screenshot: CPU throttling test results)

What can we do about it ?


  • Test, test and test, and get an idea of the real performance achievable depending on outside temperature / cooling / raw CPU performance

  • Add an alert in the app telling the user that performance is dropping because of CPU throttling and that they should cool the device to keep max performance; there is an API for this: https://developer.android.com/ndk/reference/group/thermal

  • Investigate NCNN settings / Android settings: is it possible to have a "constant FPS" mode that does not run the CPU at full speed but rather prioritizes being consistent over several hours? Tencent/ncnn#1901 (comment)
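The alert idea could be driven by the thermal API linked above, or, as a portable fallback, by a simple FPS watchdog that flags sustained drops below the best average seen so far. A hypothetical sketch (FpsWatchdog is an invented name, not an existing API):

```javascript
// Hypothetical FPS watchdog: warn when the recent average FPS drops well
// below the best sustained average seen so far (a throttling heuristic,
// not the Android thermal status API linked above).
class FpsWatchdog {
  constructor(windowSize = 30, dropRatio = 0.7) {
    this.windowSize = windowSize; // samples in the moving average
    this.dropRatio = dropRatio;   // warn below 70% of the best average
    this.samples = [];
    this.bestAvg = 0;
  }

  // Call once per measured FPS sample; returns true if throttling suspected.
  sample(fps) {
    this.samples.push(fps);
    if (this.samples.length > this.windowSize) this.samples.shift();
    const avg = this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
    if (avg > this.bestAvg) this.bestAvg = avg;
    return this.bestAvg > 0 && avg < this.bestAvg * this.dropRatio;
  }
}

const dog = new FpsWatchdog(5);
[17, 17, 17, 17, 17].forEach((f) => dog.sample(f)); // healthy baseline
let throttled = false;
[9, 9, 9, 9].forEach((f) => { throttled = dog.sample(f); });
console.log(throttled); // true: sustained drop from ~17 to ~9 FPS
```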

Figure out LICENSE

This will be open source, but we need to figure out which license. Mostly depends on #2...

Create config UI

Expose some settings:

  • ncnn yolo target input ( currently 320x192 ), could increase for better precision but lower framerate
  • ncnn GPU mode with vulkan (slower on most phones, but for future ..)

later

  • camera controls ?
  • neural network switching, make it possible to run full YOLO or lightweight nano det...
  • neural network weights & label..

See how those configurable settings at runtime could integrate with global OpenDataCam config

Data view : Display tiny label with emoji

Right now we are displaying 6 mobility classes, but ideally we would display the 6 most-counted classes based on the recorded data. I'm testing on my desk and would love to see some laptop / coffee / chair OpenMojis ;-)

TODO:

  • need to map all the COCO classes to an emoji in config.json
  • rewrite the code a bit to display the 6 most counted items
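A sketch of both TODOs together: pick the 6 most-counted classes from the recorded data and attach an emoji label. The emoji map here is illustrative; the real mapping would cover all COCO classes in config.json:

```javascript
// Hypothetical sketch: show the 6 most-counted classes with an emoji label
// instead of a hardcoded list of mobility classes. The map below is
// illustrative only; the real one would map all COCO classes in config.json.
const CLASS_EMOJI = {
  car: '🚗',
  person: '🚶',
  bicycle: '🚲',
  laptop: '💻',
  chair: '🪑',
  cup: '☕',
  dog: '🐕',
};

function topClasses(counts, n = 6) {
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1]) // most counted first
    .slice(0, n)
    .map(([name, count]) => ({
      name,
      count,
      emoji: CLASS_EMOJI[name] || '❓', // fallback for unmapped classes
    }));
}

// Desk test data: laptops and coffee instead of cars
const counts = { laptop: 12, cup: 9, chair: 5, person: 3, dog: 1 };
console.log(topClasses(counts).map((c) => `${c.emoji} ${c.name}: ${c.count}`));
```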

Build Error? Who can help me!!!

Build command failed.
Error while executing process C:\Users\RD\AppData\Local\Android\Sdk\cmake\3.10.2.4988404\bin\ninja.exe with arguments {-C D:\AndroidStudioProjects\opendatacam-mobile\android\app.cxx\cmake\debug\arm64-v8a nodejsmobile yolov4}
ninja: Entering directory `D:\AndroidStudioProjects\opendatacam-mobile\android\app.cxx\cmake\debug\arm64-v8a'

ninja: error: 'D:/AndroidStudioProjects/opendatacam-mobile/android/app/src/main/cpp/libnode/bin/arm64-v8a/libnode.so', needed by 'D:/AndroidStudioProjects/opendatacam-mobile/android/app/build/intermediates/cmake/debug/obj/arm64-v8a/libnodejsmobile.so', missing and no known rule to make it

description of OpenDataCam ( 75 words max )

As part of the challenge, we need to have a description of 75 words max that will be on the mygalileosolution website... @b-g, your help with this would be nice 😉. We need to submit by the end of the week.

They pre-wrote one:

OpenDataCam is an open source tool to automatically count moving objects. The AI-powered solution can process and detect objects in any live video feed. With the help of a user-friendly interface, it's possible to count objects at a certain location or visualize the trajectory on which an object moved through the frame.

But I think this can be improved, as counting is just one of the many use cases, and to reflect that it is now multi-platform.

My try at this:

OpenDataCam is an open source tool to quantify the world. It can detect, track and count objects on any video feed using AI. It is designed to be an accessible, affordable and energy-efficient solution that runs locally on smartphones, desktops and IoT devices, without requiring your data to be sent to an external server.

App crashing on Android 12 (Huawei P20, Build 12.0.0.227)

When running it for the first time after install, everything seems OK. But any further start ends with a crash of the application (the app has permission to use the camera, location and storage).
Are there any plans to fix it, and also to support Android 13? In the Play Store it is shown as not supported on Android 13.

regards
Peter Dressel

Longterm running performance

Made some tests running opendatacam-beta-2.apk over hours with a Xiaomi Redmi Note 9 Pro.

The good news is that the phone doesn't get very hot, even over several hours without a fan attached. However, I still couldn't figure out how to run the ODC app for days without the FPS going down.

Run 1:

Time       FPS
Fri 16:00  10
Fri 16:30  8-9
Fri 17:40  9-10
Fri 18:45  9
Fri 19:35  9
Sat 7:30   6
Sat 9:30   6

Run 2:

Time       FPS
Sat 17:00  9-10
Sat 17:45  8-9
Sat 18:45  8
Sun 7:30   7-8
Sun 9:40   7

Run 3:

Time       FPS
Sat 17:00  8
Sat 20:00  8
Mon 7:30   6

Observations

  • Battery life is quite good!
  • I had problems keeping the phone powered with generic USB-C power supplies, but it works with the original one
  • Gut feeling is that the lighting has something to do with the framerate ... much better during daylight.
  • CPU monitor apps seem to affect the performance by 1 to 2 FPS

Benchmark / Testing : Beta test

Once the first alpha / beta is ready, gather a list of testers and build a list of "tested" devices with the FPS performance for each of them in a table.

I think Google Play has a nice way to do this, where I just push a new build of the app and it gets distributed to the beta tester list, with a form to fill in feedback.

Using Node.js on mobile

This started as a casual question from @b-g: "are you gonna run node.js on android?", which I answered: "no no, will not, don't think it is possible..."

And yet here I am 😂, exploring the https://github.com/JaneaSystems/nodejs-mobile framework: https://code.janeasystems.com/nodejs-mobile

Turns out you can run Node.js on mobile .. I'm evaluating it on the node-mobile branch : https://github.com/opendatacam/opendatacam-mobile/tree/node-mobile

After battling a lot to integrate it alongside the NCNN framework (it is also a C++ dependency), and spending something like 6 hours on obscure and frustrating CMake build errors yesterday, I got a first version running (not OpenDataCam yet, but Node.js).

The idea behind this is to:

  • Be able to reuse code from the main github.com/opendatacam/opendatacam, to really evaluate if this will be a win for future maintenance. The other option I am considering for now is to port all the Node.js code to the client side and have Next.js export a static app, which from what I see could also be done in a way that lets us maintain a common part. Either way, if we run Node.js on mobile we also need to diverge from the main OpenDataCam code / build adapters for mobile (not launching darknet, not using MongoDB, etc.)

  • Use the mobile device with the OpenDataCam app as a "server", like we do with the Jetson. This is a big feature I didn't think about at first: if we can run the Node.js server on Android, then we can connect to OpenDataCam from another device on the same network and operate it remotely, like we do now. This enables more use cases.

  • Be faster to deliver the app, as the first version deadline is a bit time sensitive. This is not really a clear upside; I think it is the same amount of work to port the Node.js code to client-only as to adapt it to work on mobile.

Potential/ Confirmed Drawbacks (for now)

  • Adds 50 MB to the app size. Not that problematic, I think, as NCNN and the YOLO weights are at least 50 MB already, so the app won't be lightweight anyway.

  • At app install, and for each subsequent update, we need to copy the app assets so node-mobile can access them. From first tests this seems slow, as we are copying the whole node_modules folder: at least 30s right now, but I'm sure this can be optimized.

  • Next.js is not really made for a node-mobile deployment, so right now we need to hardcode the path the files are copied to in order to start:

    const app = next({ dir: "/data/data/com.opendatacam/files/nodejs-project" })

    That was hard to figure out, and I'm not sure it is portable; it would need more work.

  • More CPU consumption. Node.js running means less power for other things; not sure if this affects the YOLO performance.

  • If a web (client-side) version of OpenDataCam with the neural network running via WebAssembly is something we want to do at some point, already working on a client-side-only version of OpenDataCam would be a big step towards it.

  • More complexity, more problems: fewer dependencies is always best ;-)

Init screen

Note for my future self 😁

I added the loading screen, so now we have a much cleaner experience. The transition is still not very good when we load the localhost:8080 view (the app) once the Node.js app has started: there is a quick flicker of about 1 second where we jump from the "fake" loading screen (a simple index.html file) to the "real" one served by Node.

The solution for this is to entirely get rid of the "real" loading screen: by the time Node.js has started on Android, YOLO has already started, so we could directly render the main UI.

But because of some shortcuts I took in the implementation, this is not super straightforward to do: we expect the main UI to be rendered on the client and not on the server / via static generation. There is some reliance on the window object that needs to be removed, and then we could bypass it.

See some window reliance : https://github.com/opendatacam/opendatacam/blob/master/utils/colors.js#L29

Then we should be able to set isListeningToYOLO to true at init: https://github.com/opendatacam/opendatacam/blob/master/statemanagement/app/AppStateManagement.js#L30, and it should render the UI: https://github.com/opendatacam/opendatacam/blob/master/components/MainPage.js#L92
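Removing the window reliance usually means guarding browser-only globals so the same component can also render on the server (or in a static export) without crashing. A minimal sketch of the assumed pattern (getDisplayWidth is an invented helper, not the actual colors.js code):

```javascript
// Hypothetical sketch: guard browser-only globals so components can also
// render on the server or in a static export without crashing.
// getDisplayWidth is an illustrative name, not the real OpenDataCam API.
function getDisplayWidth() {
  // `window` only exists in the browser; fall back on the server.
  if (typeof window !== 'undefined') {
    return window.innerWidth;
  }
  return 1280; // server-side fallback
}

console.log(getDisplayWidth()); // 1280 when run in Node (no window)
```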

UI : Counter view improvements

  • It is very hard for the user to realize that you have to double-click to finish setting up a counter. Make it more obvious that one is currently in edit mode.
  • The counter text (number) of a polygon should be centered
  • While setting up a polygon counter, I sometimes got some blue glitches (see screenshot)
  • Counter details stats: can't be read if the area of interest is close to the upper edge of the screen

(Screenshots)

Review opendatacam-beta-2.apk

Hi @tdurand! Very nice! Here is some feedback for the nitty gritty details :)

General

  • Framerate 6-10 FPS
  • On the very first run ODC got stuck ... the 2nd run was fine though
  • Is there a way to get better confirmation on pressing the ODC buttons, e.g. tactile feedback from Android when pressing data downloads?

Welcome

  • you can't download the logs

Live view

  • Labels in the detection bbox look blurry; the canvas seems not to be retina
  • Sometimes an image seems to be missing in the background; note the icon in the top-left corner and the white border (2nd screenshot)

Counter

  • Crash: if the user does not finish setting up a counter and presses record
  • Crash: if the user does not finish setting up a counter and presses delete
  • Text "crossing vehicles increase counter by 1" should be "crossing objects increase counter by 1"
  • All buttons in the bottom-left corner should be bigger
  • Naming-counter popup: the OK button is way too small
  • It is very hard for the user to realize that you have to double-click to finish setting up a counter. Make it more obvious that one is currently in edit mode.
  • Config download not working
  • The counter text (number) of a polygon should be centered
  • While setting up a polygon counter, I sometimes got some blue glitches (see screenshot)
  • Counter details stats: can't be read if the area of interest is close to the upper edge of the screen
  • Counter details stats: yes, it would be nice to show the most recent classes and not hardcode the current 6

(Screenshots)

Data

  • Delete button is way too tiny
  • JSON download kind of works, but the file names are ugly and sometimes I had to press the download link several times
  • CSV download not working

Menu

  • Get rid of the sticky footer
  • The version string does not match the Android app's. Idea: could you also add some device info there? It would be nice if we could point people and testers to a place in our app where they can get platform and runtime info.
  • Tracker accuracy info popup not readable due to overlap with the menu

UI : Make canvas resolution retina

Right now it is a bit blurry. We need to work out the devicePixelRatio and the ctx.scale(), but it's not straightforward with the current implementation.
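The devicePixelRatio fix boils down to sizing the canvas backing store in physical pixels while keeping the CSS size in logical pixels, then calling ctx.scale() once. A sketch of the math as a pure function (hypothetical helper, not the current implementation):

```javascript
// Hypothetical sketch of retina-aware canvas sizing: the backing store
// (canvas.width/height) is scaled by devicePixelRatio while the CSS size
// stays in logical pixels. In the app this would run on the overlay canvas;
// here it is a pure function so the math is easy to check.
function retinaCanvasSize(cssWidth, cssHeight, devicePixelRatio) {
  return {
    // canvas.width / canvas.height (backing store, physical pixels)
    width: Math.round(cssWidth * devicePixelRatio),
    height: Math.round(cssHeight * devicePixelRatio),
    // pass to ctx.scale(scale, scale) so drawing code keeps using CSS pixels
    scale: devicePixelRatio,
  };
}

console.log(retinaCanvasSize(360, 640, 3)); // e.g. a 1080x1920 phone screen
```

In the browser this would be wired up as `canvas.width = size.width`, `canvas.style.width = cssWidth + 'px'`, and `ctx.scale(size.scale, size.scale)`.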

Switch Lenses Cam Setting?

Hi @tdurand, was playing around with the current alpha ... nice!

But I noticed that currently it is not possible to switch to the wide-angle lens of my Android phone, which is not ideal: e.g. it prevents ODC from detecting things on the sidewalk if the phone is mounted on the passenger seat (next to the driver), as the normal/default lens is too tele. Hence currently we can only address use cases in a vehicle which "look" ahead or behind, but not sideways.

(Screenshot)
