frc2706 / trackerboxreloaded

2D Vision re-written in Java

License: MIT License

Languages: Java 87.77%, Shell 12.09%, Batchfile 0.14%
Topics: java, networktables, first-robotics-competition, opencv

trackerboxreloaded's Introduction

TrackerboxReloaded

Attribution and license

2D Vision re-written in Java

We release our software under the MIT license in the hopes that other teams use and/or modify our software.

Our one request is that if you do find our code helpful, please send us an email at [email protected] letting us know. We love to hear when we've helped somebody, and we'd like to be able to measure our impact on the community.

Thanks, and enjoy!

Team contact: [email protected] Supervising mentor: Mike Ounsworth

The base code for this was from the WPILib samples: https://github.com/wpilibsuite/VisionBuildSamples/tree/master/Java

Below is their original readme.


Java sample vision system

This is the WPILib sample build system for building Java-based vision targeting to run on systems other than the roboRIO. It currently supports the following platforms:

  • Windows
  • Raspberry Pi running Raspbian
  • Generic Armhf devices (such as the BeagleBone Black or the Jetson)

It has been designed to be easy to set up and use, and only needs a few minor settings to pick which system you want it to run on. It includes samples for interfacing with NetworkTables and CsCore from any device, along with performing OpenCV operations.

Choosing which system to build for

As there is no way to autodetect which system you want to build for (for example, building for a Raspberry Pi from a Windows desktop), you have to select the target system manually. To do this, open the build.gradle file. Near the top, at line 10, a group of comments explains what to do. In short, there are 3 lines that start with ext.buildType =; to select a device, just uncomment the system you want to build for.

Note that it is easy to switch which system you target: just change which build type is uncommented. When you do this, you will have to run gradlew clean to clear out any old artifacts.
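The relevant section of build.gradle looks roughly like this (a sketch only; the exact buildType strings and comment text may differ between versions of the sample):

```groovy
// Select which system to build for by uncommenting exactly one of the
// following lines. After switching, run `gradlew clean` before rebuilding.

//ext.buildType = "windows"
ext.buildType = "arm-raspbian"   // Raspberry Pi running Raspbian
//ext.buildType = "arm"          // generic armhf device (BeagleBone Black, Jetson, ...)
```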

Choosing the camera type

This sample includes 2 ways to get a camera image. The first way is from a stream coming from the roboRIO, which is created with CameraServer.getInstance().startAutomaticCapture();. This is the only method supported on Windows. The second way is to open a USB camera directly on the device. This will likely allow higher resolutions, but it is only supported on Linux devices.

To select between the types, open the Main.java file in src/main/java, and scroll down to the line that says "Selecting a Camera". Follow the directions there to select one.
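The two options look roughly like this (a sketch against the 2017-era WPILib/cscore Java classes; the stream URL and camera names are placeholders, so check the comments in your copy of Main.java for the real ones):

```java
// Option 1: consume the stream the roboRIO is already serving.
// Works on all platforms, including Windows.
HttpCamera camera = new HttpCamera("roborio",
    "http://roborio-2706-frc.local:1181/?action=stream");  // placeholder URL

// Option 2: open a USB camera attached to this device directly.
// Linux only, but usually allows higher resolutions.
// UsbCamera camera = new UsbCamera("usbcam", 0);  // 0 = /dev/video0
// camera.setResolution(640, 480);
```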

Building and running on the local device

If you are building for the same platform you plan to run on, you can use gradlew run to run the code directly. You can also run gradlew build to build it; the output files will be placed into output\. From there, you can run either the .bat file on Windows or the shell script on Unix to run your project.

Building for another platform

If you are building for another platform, trying to run gradlew run will not work, because the OpenCV binaries will not be set up correctly. In that case, when you run gradlew build, a zip file is placed in output\. This zip contains the built jar, the OpenCV library for your selected platform, and either a .bat file or a shell script to run everything. All you have to do is copy this file to the target system, extract it, then run the .bat file or shell script to start your program.

What this gives you

This sample gets an image either from a USB camera or from an already existing stream. It then restreams the input image in its raw form so that it is viewable on another system. It also creates an OpenCV sink from the camera, which allows us to grab OpenCV images, and an output stream for an OpenCV image, so that you can, for instance, stream an annotated image. The default sample just performs a color conversion from BGR to HSV, but from there it is easy to plug in your own OpenCV processing. It is also possible to run a pipeline generated from GRIP. Finally, a connection to NetworkTables is set up, so you can send data about the targets back to your robot.
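The core loop described above might look roughly like this (a sketch against the 2017-era cscore + OpenCV Java APIs; the names, port number, and resolution are illustrative, and the snippet is not runnable without those native libraries):

```java
// Sketch of the sample's processing loop (cscore + OpenCV).
CvSink imageSink = new CvSink("CV Image Grabber");
imageSink.setSource(camera);                       // camera set up earlier

CvSource imageSource = new CvSource("CV Image Source",
    VideoMode.PixelFormat.kMJPEG, 640, 480, 30);
MjpegServer cvStream = new MjpegServer("CV Image Stream", 1186);
cvStream.setSource(imageSource);                   // restream processed frames

Mat inputImage = new Mat();
Mat hsv = new Mat();
while (true) {
  long frameTime = imageSink.grabFrame(inputImage); // returns 0 on error
  if (frameTime == 0) continue;

  // Default processing: convert BGR -> HSV. Replace this with your own
  // OpenCV code or a GRIP-generated pipeline.
  Imgproc.cvtColor(inputImage, hsv, Imgproc.COLOR_BGR2HSV);
  imageSource.putFrame(hsv);
}
```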

Other configuration options

The build script provides a few other configuration options. These include selecting the main class name, and providing an output name for the project. Please see the build.gradle file for where to change these.

trackerboxreloaded's People

Contributors: finlaywashere, ounsworth, matedwards

Watchers: Jaime Yu, James Cloos, Ken James, Kevin Lam

trackerboxreloaded's Issues

Use match number from FMS in img log filenames

When connected to FMS during a match, we can get the match number / name from NetworkTables.

NetworkTables key: logging-level/match
value looks like: "VictoriaPark/Practice-1-0"

Goal: prepend this to the filenames of logged images so we know which event / match number the image was from.
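A minimal sketch of that prepending step, assuming the match value has already been read from NetworkTables elsewhere; the helper name is hypothetical, and the "/" in the value is swapped for "-" so the result stays a single valid filename:

```java
// MatchPrefix.java - prepend the FMS match identifier to a logged image name.
public class MatchPrefix {

    // matchValue is the raw NetworkTables value, e.g. "VictoriaPark/Practice-1-0".
    // Replace "/" with "-" so the result doesn't look like a directory path.
    public static String prefixedName(String matchValue, String baseName) {
        String safe = matchValue.replace("/", "-");
        return safe + "_" + baseName;
    }

    public static void main(String[] args) {
        // prints "VictoriaPark-Practice-1-0_raw_0001.png"
        System.out.println(prefixedName("VictoriaPark/Practice-1-0", "raw_0001.png"));
    }
}
```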

Spec: decide which camera

Pi ribbon vs USB camera (lifecam 3000) vs ethernet

We need to do some testing on the pi ribbon camera / USB camera and decide if it's good enough. Questions include (but are not limited to):

  • Does the Pi camera do annoying things like auto-whitebalance? (some googling will probably answer this)
  • What's the colour depth like on the ribbon camera? Is it "rich" enough? (probably need to look at the two camera feeds side-by-side).
  • What's the overall quality of the image like (sharpness, lens distortion, etc)?
  • How bad is motion blur on the ribbon camera? (We can probably google "rolling shutter" vs "full-frame", and we can come up with a way to test it: take a still while waving the camera around.)
  • Other relevant questions?

Use NetworkTables for network communication with roboRIO

NetworkTables is a really cool system whereby the roboRIO and any other devices on the network can chatter data back and forth without needing to worry about IP addresses or writing any socket code.

The Bling team has re-written the bling code to use networktables (but it's python). See their code here:
https://github.com/FRC2706/blingServer/blob/master/blingServer.py

The official FIRST documentation for programming with networktables:
https://wpilib.screenstepslive.com/s/3120/m/7912/l/80205-writing-a-simple-networktables-program-in-c-and-java-with-a-java-client-pc-side

UPDATE: the guide above seems to be out of date, since it uses the deprecated (no longer supported) APIs. Here is the javadoc for the new APIs, which are a bunch more complicated ... but we can figure them out :)
http://first.wpi.edu/FRC/roborio/release/docs/java/edu/wpi/first/networktables/NetworkTable.html

More Chief Delphi posts with useful sample code about the new version of NetworkTables:

https://www.chiefdelphi.com/forums/showpost.php?s=a557d4e433cf4515f9adc76cdf33031b&p=1726159&postcount=5

https://www.chiefdelphi.com/forums/showpost.php?s=a557d4e433cf4515f9adc76cdf33031b&p=1726159&postcount=7
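Using the newer NetworkTableInstance-based API from the javadoc above, a client might be sketched like this (the table and key names are placeholders, not the team's actual ones, and this isn't runnable without the WPILib NetworkTables jar):

```java
// Sketch of a NetworkTables client using the newer API.
NetworkTableInstance inst = NetworkTableInstance.getDefault();
inst.startClientTeam(2706);            // connect to the roboRIO by team number

NetworkTable table = inst.getTable("vision");        // placeholder table name
NetworkTableEntry targetX = table.getEntry("targetX"); // placeholder key

targetX.setDouble(123.4);              // publish a value to the robot
double x = targetX.getDouble(0.0);     // read it back; 0.0 if absent
```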

Make it analyse images

Look at past code and attempt to analyse an image using the same sort of commands. We may want to communicate with the person doing the calculations to figure out the best commands to use.

Dump sample images to disk

As a debugging thing, dump images every 10? seconds to the Pi's hard drive, or a USB stick so we can analyze after a match what the vision camera was seeing and if it needs to be re-calibrated.

We would need to first detect whether the USB stick is inserted, then dump:

  • properties file (only on start-up?)
  • raw image
  • binary mask

(credit to Brian for the idea!)

Do some science on whether auto-whitebalance is harmful

High-end made-for-robotics cameras all allow you to disable auto white-balance and auto-exposure. Lower-end ones, like many of the Raspberry Pi cams and USB webcams, do not give you the option.

Here is some good reading material on the theory of white-balancing:
https://www.ptgrey.com/tan/11109

Let's do some science to determine if auto white-balance is harmful. How best to design an experiment? Maybe calibrate last year's trackerbox to some brightly-coloured object and see if it performs better with auto white-balance turned on or off?

Relates to #6 .

Pi system clock doesn't work: re-think img name timestamps

Brian found that one of the ways Pis keep their cost down is that there is no battery to keep the clock consistent across reboots. This means that using the datetime as part of the file name, or sorting by modification time, is no good.

Possible solution:

  • Use an incrementing integer in the file name; on file write, look for the highest numbered file in the folder and +1 to it.
  • Put that number first in the file name (rather than "raw", "binary", "output") so that we can properly sort by filename and the triplets group properly.
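That scheme could be sketched like this, operating on the list of existing filenames; the helper and the NNNN_kind.png naming pattern are assumptions for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// LogIndex.java - pick the next image index by scanning existing filenames.
public class LogIndex {
    // Matches names whose leading characters are digits, e.g. "0007_raw.png".
    private static final Pattern LEADING_NUMBER = Pattern.compile("^(\\d+)_.*");

    // Returns 1 + the highest leading number found, or 1 if no name matches.
    public static int nextIndex(String[] existingNames) {
        int highest = 0;
        for (String name : existingNames) {
            Matcher m = LEADING_NUMBER.matcher(name);
            if (m.matches()) {
                highest = Math.max(highest, Integer.parseInt(m.group(1)));
            }
        }
        return highest + 1;
    }

    public static void main(String[] args) {
        String[] files = {"0001_raw.png", "0001_binary.png", "0002_raw.png"};
        // Number-first names sort correctly, keeping raw/binary/output triplets
        // together. Prints "0003_raw.png".
        System.out.printf("%04d_raw.png%n", nextIndex(files));
    }
}
```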

Dump match images to USB drive

While it's running, dump maybe 1 frame every 5 seconds to a USB stick so we can see if it's falling out of calibration over the course of the competition weekend.

We would need to first detect whether the USB stick is inserted, then dump:

  • properties file (only on start-up?)
  • raw image
  • binary mask

(credit to Brian for the idea!)

Improve code

Make the code more usable, and make it easier to use OpenCV commands in.

Fix Filters Acting Weird

It seems we are having trouble with the HSV filters. We need to verify that they were done correctly and ensure they work.
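One common source of "weird" HSV filter behaviour worth ruling out: in OpenCV's 8-bit images, hue runs 0-179 (half of the usual 0-359 scale), while saturation and value run 0-255. A sketch of a threshold to double-check against (the bounds are placeholders, not the team's calibrated values, and this isn't runnable without the native OpenCV library):

```java
// Sketch: threshold an HSV image with OpenCV's Java bindings.
Mat hsv = new Mat();
Imgproc.cvtColor(inputImage, hsv, Imgproc.COLOR_BGR2HSV);

// NOTE: 8-bit hue is 0-179, not 0-359; S and V are 0-255.
// Placeholder bounds for a green target:
Scalar lower = new Scalar(40, 100, 100);
Scalar upper = new Scalar(80, 255, 255);

Mat binaryMask = new Mat();
Core.inRange(hsv, lower, upper, binaryMask);  // white where in range
```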
