code4sac / trash-ai
Web based trash image classification
Home Page: https://www.trashai.org
License: MIT License
We should document some of the following to encourage and guide contributions:
Currently, any models we build must be trained on someone's local computer. This can be an issue because not everyone who contributes to this project has a modern GPU or the resources to train the model.
Ideally, we would be able to train models in AWS to support model development and deployment.
What can we use in AWS or other cloud-based solutions to support model development (SageMaker, maybe)? How do we provide access to train models? Can we build GitHub Actions scripts to support model building on merges to specific branches like staging and production?
Access to model training hardware can be difficult.
Use cloud-based solutions to train models for project members.
Alternatively, we could forgo cloud solutions and require a project member to train any models that get developed.
We'd like to improve the model over time.
One way we can do that is to let users view images that have no annotations and draw and label trash in those images in the browser. We could also allow modifying existing bounding boxes and labels.
Ideally, we would also upload the image and the new annotations to an S3 bucket so that we can use that data for future model refinement, but this can be addressed in a future issue to reduce the scope of this one. (If it is not addressed in this ticket, open a new one when this is closed.)
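For the in-browser annotation editor described above, one small piece is keeping a drawn box inside the image. A minimal sketch (the function name and box shape are illustrative, not the actual code):

```javascript
// Hypothetical helper for an in-browser annotation editor: normalize a box
// drawn up/left (negative width/height) and clamp it to the image bounds
// before it is labeled and saved.
function normalizeBox({ x, y, width, height }, imgW, imgH) {
  // Flip negative spans so width/height are always positive.
  if (width < 0) { x += width; width = -width; }
  if (height < 0) { y += height; height = -height; }
  // Clamp both corners to the image rectangle.
  const x1 = Math.max(0, Math.min(x, imgW));
  const y1 = Math.max(0, Math.min(y, imgH));
  const x2 = Math.max(0, Math.min(x + width, imgW));
  const y2 = Math.max(0, Math.min(y + height, imgH));
  return { x: x1, y: y1, width: x2 - x1, height: y2 - y1 };
}
```

The same normalized shape could be reused when editing existing bounding boxes.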
Submit events to google analytics to track how many images are processed
If possible also track:
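A minimal sketch of how such an event could be submitted, assuming gtag.js is loaded on the page (the event and parameter names here are illustrative, not a settled schema):

```javascript
// Hypothetical helper: build a Google Analytics event payload for a batch of
// processed images. Event name and parameter names are illustrative.
function buildProcessedEvent(imageCount) {
  return {
    name: 'images_processed',
    params: { image_count: imageCount },
  };
}

// In the browser, the payload could be sent via gtag.js when available:
// if (typeof gtag === 'function') {
//   const evt = buildProcessedEvent(12);
//   gtag('event', evt.name, evt.params);
// }
```

Keeping the payload builder separate from the `gtag` call makes it easy to unit test without a browser.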
The journal submission requires fairly detailed deployment instructions for someone who is at least familiar with the coding language we are using. I think they will only review the local deployment, as AWS deployment is a little outside the scope of the journal. Could we go through the local deployment https://github.com/code4sac/trash-ai/blob/production/docs/localdev.md and make sure that someone starting with nothing downloaded on their device could get this up and running? Maybe this has already been done; if so, let me know!
Currently we use YOLOv5.
Can we try training the model on a different framework?
How does it impact accuracy?
Are there other ways to improve accuracy of the model (e.g., more epochs)?
Might it be useful to have a model selector when using TrashAI?
Acceptance Criteria:
frontend/src/components/about.vue
We're using Multer and multer-s3, which we should be able to configure to allow multiple files to be uploaded at once for user convenience.
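The change is roughly a switch from Multer's single-file middleware to its array middleware. A hedged sketch, with the Multer usage shown in comments (the route, field name, and limit are illustrative, not the actual backend configuration):

```javascript
// Sketch of switching an Express upload route from one file to many,
// assuming Multer with multer-s3 storage:
//
// const multer = require('multer');
// const upload = multer({ storage: s3Storage });
//
// Before: one file per request, available as req.file
// app.post('/upload', upload.single('image'), handler);
//
// After: up to MAX_FILES per request, available as req.files
// app.post('/upload', upload.array('images', MAX_FILES), handler);

const MAX_FILES = 20; // hypothetical per-request limit

// Reject oversized or empty batches before they reach storage.
function validateBatchSize(fileCount) {
  return fileCount > 0 && fileCount <= MAX_FILES;
}
```

Multer enforces the max count itself, but an explicit check lets the server return a friendly error rather than a generic failure.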
From the recruitment standpoint:
The duplicate key warning is triggered in the console when dragging and dropping the same image file in two or more separate drag-and-drop operations.
The attached log file contains the full stack trace.
vue.runtime.esm.js:619 [Vue warn]: Duplicate keys detected: 'Screen Shot 2022-02-07 at 10.38.15 AM.png'. This may cause an update error.
found in
---> <TestUpload> at src/components/test/upload.vue
<VCard>
<Src/pages/index.vue> at src/pages/index.vue
<Nuxt>
<VMain>
<VApp>
<NavBase> at src/components/nav/base.vue
<Default> at src/layouts/innertab.vue
<Root>
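The warning above happens because the filename is used as the `v-for` key, and the same filename can be dropped twice. One fix is to derive a key that stays unique across drops; a minimal sketch (the helper name and key format are illustrative):

```javascript
// Hypothetical helper: derive a unique v-for key per upload, even when the
// same filename is dropped in separate drag-and-drop operations.
let uploadCounter = 0;

function makeUploadKey(fileName) {
  uploadCounter += 1;
  return `${fileName}-${uploadCounter}`;
}
```

The component would then bind `:key="upload.key"` instead of `:key="upload.name"`, so repeated filenames no longer collide.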
After uploading a file on mobile, it is not clear that the user must open the hamburger menu in the top left of the app to reach the upload page and the summary page. We could design a better user experience by making these options explicit on the initial page, so that after uploading, users know which pages to visit to access their insights.
Once we have a serialized model from #7, import the model into Node.js and create a simple demo script that invokes the model with an image and returns the results.
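A hedged sketch of what the demo script could look like, assuming a TensorFlow.js export (the paths, APIs, and output shape may differ depending on how the model from #7 is actually serialized, so the model-loading part is shown only in comments):

```javascript
// Sketch of invoking a serialized detection model from Node.js, assuming a
// TensorFlow.js GraphModel export:
//
// const fs = require('fs');
// const tf = require('@tensorflow/tfjs-node');
// const model = await tf.loadGraphModel('file://./model/model.json');
// const input = tf.node.decodeImage(fs.readFileSync('trash.jpg')).expandDims(0);
// const raw = await model.executeAsync(input);

// Pure post-processing sketch: turn one raw detection row into the shape the
// demo script could print. Field names are illustrative.
function toAnnotation([x, y, w, h, score], label) {
  return { label, score, bbox: { x, y, width: w, height: h } };
}
```

Separating post-processing from inference keeps the demo script testable without the model weights.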
Tracking issue for:
The journal we are submitting to requires that the repo contain some automated tests ensuring the code follows some standard logic. I can't find the tests currently, but they are likely in here somewhere. Could someone point me to them? I will add a link to the README so that it is obvious for the reviewers.
Use a CI/CD platform (preferably GitHub Actions) to manage the deployment of the trash-ai web application to AWS when there is a commit on the main branch.
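A minimal workflow sketch for this; the job steps, secret names, and region are illustrative placeholders, not the project's actual deployment configuration:

```yaml
# Hypothetical .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - run: ./deploy.sh   # placeholder for the actual deploy step
```

Branch filters could be extended to staging/production branches if per-environment deploys are wanted later.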
Reviewer 1: While the video walkthrough is great, it would probably be best to have a bit more written instruction on usage in the README/on the website. I think the setup documentation could also be improved a bit. The local setup seems to refer to specifics of WSL a lot where it could be more system-agnostic, and while the usage of GH workflows is novel, the AWS deployment instructions are a bit nonstandard in that regard plus seem to assume that you're deploying it from the repo itself/as a repo owner.
Instructions are overly specific and not enough information on the website.
System agnostic deployment instructions and more information on the website.
NA
We need to create a more useful/informative labeling system for issues. This will allow existing and new contributors to have more direction/autonomy as they work on and complete issues.
Here's a link to how Hack for LA is already doing something like this:
This comes from our coauthor Kris Haamer, who tested the local deployment on his Mac and was able to get it running, but had a few challenges.
Add additional metadata from EXIF around geography, in addition to the bounding box and label data.
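EXIF stores GPS coordinates as degrees/minutes/seconds plus a hemisphere reference, which usually needs converting to signed decimal degrees before storage. A small sketch of that conversion (an EXIF parsing library would supply the raw values; the function name is illustrative):

```javascript
// Hypothetical helper: convert EXIF GPS degrees/minutes/seconds plus a
// hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees, so
// location can be stored alongside the bounding box and label data.
function dmsToDecimal([degrees, minutes, seconds], ref) {
  const sign = ref === 'S' || ref === 'W' ? -1 : 1;
  return sign * (degrees + minutes / 60 + seconds / 3600);
}
```

For example, a Sacramento-area longitude west of the meridian comes out negative.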
Tracking issue for:
Use the demonstrated method from #8 to invoke the model when an image is received on the /upload route. Return the results (or any errors) to the client.
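One small design decision here is a consistent response shape covering both success and failure. A hedged sketch (the field names are illustrative, not an agreed API contract):

```javascript
// Hypothetical helper: wrap either the model's annotations or an error into
// a single response shape for the /upload route.
function buildUploadResponse(annotations, error) {
  if (error) {
    return { ok: false, error: String(error) };
  }
  return { ok: true, annotations };
}
```

The route handler could then always respond with this object and let the client branch on `ok`.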
@createHernandez recommended we add issue templates. Love the idea! I saw this on another repo recently and was really impressed by the experience, and the data they got from the issues was much higher quality.
I initially signed up for this, but I don't have access to change the settings and would need it to get this going. I ended up deploying it on another repo in about 5 minutes using these instructions with mostly the vanilla settings. If someone gives me settings access, I'd be happy to set it up; just let me know.
The upload link in the header is visually unpolished. Restyle it to match the rest of the app.
Currently we store all uploaded images and their classifications so that we have data to improve the model over time. It should be clear as a user that this is the case.
We want to track how many people use TrashAI.
Add both front-end and back-end into project
There should be a single docker-compose file in the root of the project that manages the backend and front-end web apps. This should also support hot reloading when actively developing.
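A sketch of what that root compose file could look like; service names, ports, and directory paths are illustrative and would need to match the actual frontend and backend layout:

```yaml
# Hypothetical root docker-compose.yml
version: "3.8"
services:
  frontend:
    build: ./frontend
    ports:
      - "8080:8080"
    volumes:
      - ./frontend:/app      # bind mount enables hot reloading
      - /app/node_modules    # keep container deps out of the host mount
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    volumes:
      - ./backend:/app
      - /app/node_modules
```

With this in place, `docker-compose up` from the repo root would start both apps, and source edits on the host trigger each dev server's reload.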
Get the app in a demo ready state so that we can present it to stakeholders and gather feedback on the future direction of the project.
The new image layout is nice, but we don't have the classification piece wired in. Should be simple enough to make a switch to enable / disable classification.
Recommending some updates to the About tab:
About Trash AI
Welcome to Trash AI!
This is an open source project developed and maintained by Code For Sacramento in partnership with Win Cowger from The Moore Institute for Plastic Pollution Research and Walter Yu from CALTRANS. There have also been many contributors to the code base.
What is it?
Trash AI allows you to upload an image containing trash and get back data about the trash in the image, including the classification of trash and bounding box of where the trash is in the image.
How does it work?
Trash AI builds a model using the YOLOv5 toolset trained on the TACO dataset. The model takes an image containing trash and returns a list of annotations and bounding boxes for trash within the image. The model is imported into the front-end Vue.js application, where it is invoked when an image is uploaded. The Vue application then displays the results of the model on the image.
How can I use Trash AI?
Trash AI is open source and free to use however you see fit. You may classify images and download the data. You may copy and modify the code for your own use. If you are using the tool for research purposes, we recommend you QA/QC the output; we make no guarantee about the accuracy of the model for your specific images. Our overall goal is a generalizable AI, but there is still more work to do to get us there.
Disclaimer about uploaded images
The current version of Trash AI and the model we are using is just a start! When you upload an image, we are storing the image and the classification in an effort to expand the trash dataset and improve the model over time.
Reporting issues and improvements
If you would like to report an issue or request a feature, please open a Github Issue in our repository.
How to contribute
Open a Github Pull Request.
How to cite
We are working on a manuscript for the software, but in the meantime please cite as:
"Code for Sacramento, Trash AI, www.trashai.org, 2022."
@wincowgerDEV would like to have the manuscript reviewed by 11/15/2022. If you have been assigned, please comment after you have reviewed the manuscript.
I think we should provide a test image for people to upload if they don't have one they feel comfortable using right away, since we currently save all uploaded images. That way they can see a good example and be incentivized to use the tool.
This should include:
Reviewer 1 says: Even with the current structure, I'll also note that the deployment scripts could probably be simplified and consolidated a bit. When taking a brief look, I felt I had to jump around to find all the different bits and pieces, and there were a lot of calls between different tools (e.g., why use makefiles vs. just having a single docker-compose in the root and using docker-compose up?).
Challenging deployment
Make deployment scripts easier to follow.
NA
I think trashai.org should default to opening the About tab first (not the Upload tab), so that people can't say they didn't see the information about the tool, e.g., that uploads are saved, or that it wasn't obvious what the tool did.
When updating the deploy_map zone, the change is not picked up and the IAM deploy permissions are not adjusted.
Change SSM parameter setup for the zone id variable
There don't seem to be any automated tests. While the desired behavior here is relatively clear/trivial, there are no formal instructions to verify behavior.
Need to add formal instructions to verify behavior and add clarity on the automated tests currently implemented.
NA
The page should be able to process at least 100 images at a time. After processing, the user should be presented with the metadata for the images. The browser should not display the processed images.
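Processing 100+ images without freezing the page usually means chunking the work. A sketch of one approach (the helper name and batch size are illustrative; `processOne` stands in for whatever runs the model and returns only metadata):

```javascript
// Hypothetical batching helper: process files in small chunks so the browser
// stays responsive with 100+ images. `processOne` would run the model on one
// file and return only its metadata; no processed image is retained.
async function processInBatches(files, processOne, batchSize = 10) {
  const metadata = [];
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    metadata.push(...(await Promise.all(batch.map(processOne))));
  }
  return metadata;
}
```

Awaiting between batches yields control back to the event loop, so the UI can keep updating a progress indicator.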
Now that we are getting close to a functional web application that can classify images, we should do a pass on the look and feel of the application to improve the user experience.
We can perhaps reach out to other local groups if we need help with this task.
We need to add a confirmation message or some visual way for users to know that the pictures they are uploading have uploaded successfully.
Visiting https://trashai.org/ does not resolve; you have to hit the www version, https://www.trashai.org/.
We should either redirect non-www to www OR host the app at non-www and redirect www to it.
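If the redirect is handled in the app rather than at the DNS/CDN layer, the core logic is small. A sketch assuming www is chosen as canonical (helper name is illustrative):

```javascript
// Hypothetical redirect helper: map a request host to the canonical www
// host, so https://trashai.org/ resolves to https://www.trashai.org/.
const CANONICAL_HOST = 'www.trashai.org';

function canonicalRedirect(host, path) {
  if (host === CANONICAL_HOST) {
    return null; // already canonical, no redirect needed
  }
  return `https://${CANONICAL_HOST}${path}`;
}
```

In practice this is often done with a 301 at the CDN or DNS level (e.g., a CloudFront/S3 redirect) instead of in application code; either way the mapping is the same.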