mrozycki / rustmas

21.0 4.0 0.0 812 KB

Christmas lights controller capable of displaying 3D animations

Rust 96.81% Shell 0.09% HTML 0.17% CSS 2.93%
christmas-lights christmas-tree computer-vision creative-coding rust ws2812 bevy

rustmas's People

Contributors

dependabot[bot], krzmaz, m-sz, mrozycki, pnwkw


rustmas's Issues

Implement better frame synchronization

Currently the gap between consecutive frames is 1/30 s, but the delay only starts after the frame has been generated and sent to be displayed. As a result, the actual frame rate of the animations is lower than 30 FPS.

Implement a better frame sync system, where the time taken to generate and display a frame is taken into account when computing the delay before the next frame. This includes advancing the animation by more than 1/30 s if the last frame took longer than that to produce.
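A minimal sketch of the timing arithmetic (function names are illustrative, not existing rustmas code): the sleep shrinks by however long the frame took, and the animation clock advances by the actual wall time between frames.

```rust
use std::time::Duration;

/// How long to sleep after a frame so that frames are spaced `target` apart.
/// If generating and sending the frame already took longer than `target`,
/// there is nothing left to sleep.
fn frame_sleep(elapsed: Duration, target: Duration) -> Duration {
    target.saturating_sub(elapsed)
}

/// How much to advance the animation clock for the next frame: the actual
/// wall time between frames, i.e. at least one target interval, but more if
/// the last frame was slow to produce.
fn animation_advance(elapsed: Duration, target: Duration) -> Duration {
    elapsed.max(target)
}
```

The render loop would then sleep for `frame_sleep(...)` and poll the animation with a time advanced by `animation_advance(...)`, instead of a fixed 1/30 s.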

Detect incorrectly placed lights using distance from their neighbors in 2D

Similarly to #85, we can detect incorrectly detected lights at the 2D capture stage. This should be done before normalization, so that obviously wrong points do not throw off normalization. However, incorrectly detected lights in this case should not be completely thrown away, but instead be ignored when calculating bounds during normalization, so that a stray point does not completely change the bounds. The points should still be normalized and returned with lowered accuracy.

We can use multiple strategies for such detection. In the scope of this task, calculate the average distance between lights that are next to each other on the cable, then find any pairs of lights whose distance exceeds a threshold based on that average. Those measurements should be ignored when calculating bounds for normalization.
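A sketch of the pair-distance check, assuming the points come in cable order (the function name and threshold handling are illustrative):

```rust
/// Returns the indices of lights whose distance to the previous light on the
/// cable exceeds `factor` times the average consecutive-pair distance. The
/// caller can exclude these points when computing normalization bounds while
/// still normalizing and returning them with lowered accuracy.
fn outlier_pairs(points: &[(f64, f64)], factor: f64) -> Vec<usize> {
    if points.len() < 2 {
        return Vec::new();
    }
    let dists: Vec<f64> = points
        .windows(2)
        .map(|w| ((w[1].0 - w[0].0).powi(2) + (w[1].1 - w[0].1).powi(2)).sqrt())
        .collect();
    let avg = dists.iter().sum::<f64>() / dists.len() as f64;
    dists
        .iter()
        .enumerate()
        .filter(|(_, d)| **d > factor * avg)
        .map(|(i, _)| i + 1) // flag the later light of the pair
        .collect()
}
```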

Improve documentation

  • mention that webui depends on rustup target add wasm32-unknown-unknown
  • remove $ from commands for easier copying
  • document the endpoint change in webui to http://127.0.0.1:8081

Retry on transient failures in configurator

Currently the configurator will fail immediately upon a transient failure, such as an issue connecting to the lights while setting a frame. This means the whole process can crash due to a momentary problem. The option to save partial results mitigates this somewhat, but ideally a transient failure should not stop the process at all.

In the scope of this task, implement retries for configurator tasks that might transiently fail (e.g. setting lights, possibly taking a picture). Retry a few times with an increasing delay; after several failures in a row, prompt the user to check the connection and press a key to continue.
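A synchronous sketch of the retry helper (the configurator's real code paths are async, so tokio's sleep would be used instead; names are illustrative):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retries `op` up to `attempts` times (must be at least 1), sleeping
/// `base_delay * attempt` between tries, i.e. a linearly increasing delay.
/// Returns the first success or the last error; in the real configurator the
/// caller would then prompt the user to check the connection and continue.
fn retry<T, E>(
    attempts: u32,
    base_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for attempt in 1..=attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if attempt < attempts {
                    sleep(base_delay * attempt);
                }
            }
        }
    }
    Err(last_err.unwrap())
}
```

A `base_delay` of zero makes the helper easy to exercise in tests.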

Properly handle unstable/lost connection to remote lights

Currently the animator will exit the moment a single request to remote lights fails, which might happen relatively frequently with a spotty connection. Additionally, turning the lights off and on again (e.g. for the night) should not require a restart of the server.

The remote light client should therefore retry failed requests, ideally with a sensible backoff, so that the network is not bombarded with failed requests for hours if the lights are turned off.

Interpolate missing lights in 3D

If a light is known to be missing (e.g. it was not correctly detected by the capturer, or it was discarded after detection based on other metrics), its position could be estimated by interpolating its location from the position of neighboring lights.

In the scope of this task, implement an interpolation strategy for points in 3D. At a minimum, a run of missing lights should end up evenly spaced between the two nearest known lights. Ideally, the derivative of light positions would also be interpolated and taken into account.
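A sketch of the minimal strategy: linear interpolation over runs of missing points, with missing lights represented as None (derivative-aware interpolation left out; names are illustrative).

```rust
/// Fills runs of `None` by linearly interpolating between the nearest known
/// neighbors, spacing the missing lights evenly along the segment. Leading or
/// trailing runs, which lack a known neighbor on one side, are left as `None`.
fn interpolate(points: &mut [Option<[f64; 3]>]) {
    let mut i = 0;
    while i < points.len() {
        if points[i].is_some() {
            i += 1;
            continue;
        }
        // find the run of missing points [start, end)
        let start = i;
        while i < points.len() && points[i].is_none() {
            i += 1;
        }
        let end = i; // first known index after the run, or len
        if start == 0 || end == points.len() {
            continue; // no known neighbor on one side
        }
        let a = points[start - 1].unwrap();
        let b = points[end].unwrap();
        let segments = (end - start + 1) as f64;
        for (k, p) in (start..end).enumerate() {
            let t = (k + 1) as f64 / segments;
            points[p] = Some([
                a[0] + (b[0] - a[0]) * t,
                a[1] + (b[1] - a[1]) * t,
                a[2] + (b[2] - a[2]) * t,
            ]);
        }
    }
}
```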

Pedal support

Add support to switch animations using foot pedals connected to the Raspberry Pi. Each pedal should have an animation (and a set of parameters) associated with it, and should switch to that animation on press. What animation and parameters a given pedal switches to should be configurable beforehand.

Implement combined light client

Create an implementation of LightClient which accepts multiple other light client implementations and sends every frame to all of them. A possible use case is having an animation show up both in the visualiser and on the tree at the same time.
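A sketch under simplifying assumptions: the project's actual LightClient trait is async with its own frame and error types, so the trait here is an illustrative synchronous stand-in.

```rust
use std::sync::{Arc, Mutex};

type Frame = Vec<(u8, u8, u8)>;

/// Simplified stand-in for the project's `LightClient` trait.
trait LightClient {
    fn display_frame(&self, frame: &Frame) -> Result<(), String>;
}

/// Sends every frame to all wrapped clients, e.g. the visualiser and the
/// physical tree. One client failing does not stop the frame from reaching
/// the others; the first error is reported afterwards.
struct CombinedLightClient {
    clients: Vec<Box<dyn LightClient>>,
}

impl LightClient for CombinedLightClient {
    fn display_frame(&self, frame: &Frame) -> Result<(), String> {
        let mut first_err = None;
        for client in &self.clients {
            if let Err(e) = client.display_frame(frame) {
                first_err.get_or_insert(e);
            }
        }
        match first_err {
            Some(e) => Err(e),
            None => Ok(()),
        }
    }
}

/// Test double that records every frame it receives.
struct Recorder(Arc<Mutex<Vec<Frame>>>);

impl LightClient for Recorder {
    fn display_frame(&self, frame: &Frame) -> Result<(), String> {
        self.0.lock().unwrap().push(frame.clone());
        Ok(())
    }
}
```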

Implement Local Light Client for Raspberry Pi communication over pins

For more time-sensitive applications, we might want to connect the lights directly to the Raspberry Pi running the backend with a cable, rather than through WiFi and a Pico. In order to achieve that, we need:

  1. An implementation of LightClient that will communicate with the lights over local Raspberry Pi pins
  2. An option to run the webapi and animator with that new light client implementation

Start animator visualiser from the main thread

Currently the visualiser is started from a separate thread created by tokio::spawn. In order for the visualisation subsystem to acquire the necessary graphics context and window handles, it is best to start it from the main thread instead, avoiding compatibility problems.

Save configuration reference images for better debugging

In WIP code we save just a reference image with all the lights turned on from every perspective, which has to be manually renamed before capturing the next side.

It would be great to have the option (possibly controlled by a CLI flag) to save an image of every single light separately for later review. Currently we're left guessing why a mark might have been left in any given spot, since we don't see the actual image that was fed to OpenCV.

Add a feature gate for visualiser in animator and configurator

Currently running the animator/configurator requires the visualiser, which depends on OpenGL. If you want to run either without the visualiser, you'd still need OpenGL and its libraries installed. Hiding the visualiser behind a feature gate means these projects could be built without the OpenGL dependency.

MIDI controller support

With the recently added Event Generators, we can create a MidiEventGenerator which will capture events through a MIDI interface and redirect them to the animations.

2D coordinates fail to save

When running the configurator with the -s option, after the first perspective is captured, the configurator fails with the following error:

Error: Error(Serialize("cannot serialize tuple container inside struct when writing headers from structs"))

The file that should contain the coordinates only contains the word "inner".

Detect incorrectly detected lights using distance to neighbors in 3D

If the distance from a light to its neighbors exceeds a certain threshold (based on the average of all neighboring light distances), we can assume it was detected incorrectly. It is fine to overshoot slightly on how many lights get discarded, since the interpolation logic at a later stage of detection will be able to place them back into the tree.

Extract CV utilities into a separate crate

As a mitigation for OpenCV constantly rebuilding, the code that directly interfaces with OpenCV should be moved to a separate crate, rustmas-cv. No other crate should use OpenCV directly.

Remove `data=` from the POST data

To align with the recent changes in the neopixel server, those 5 characters need to be removed; otherwise the data will be rejected because it does not divide cleanly into 3 bytes per pixel.

Remove dependency on OpenCV

OpenCV is a large C++ library that has been causing us various issues over time, including problems building on Windows, constant rebuilds despite no changes, etc.

Given that we only use a tiny fraction of what OpenCV has to offer, we might be able to replace that with either native Rust libraries or manually rewrite a couple of OpenCV functions into pure Rust, to avoid those issues in the future.

It seems that using imageproc for image processing and nokhwa for camera control might be enough for our use case.

Detect incorrectly placed lights in 2D using convex hull area

Similarly to #85, we can detect incorrectly detected lights at the 2D capture stage. This should be done before normalization, so that obviously wrong points do not throw off normalization. However, incorrectly detected lights in this case should not be completely thrown away, but instead be ignored when calculating bounds during normalization, so that a stray point does not completely change the bounds. The points should still be normalized and returned with lowered accuracy.

We can use multiple strategies for such detection. In the scope of this task, calculate the convex hull of all the detections (OpenCV has an implementation for that), and for each point on the hull try removing it and calculating how much it decreases the area of the hull. Lights that are far away from the tree should decrease the area of a hull by a significant amount.
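The area-contribution part can be sketched without OpenCV, using the shoelace formula on the hull vertices (names are illustrative; the hull itself would still come from OpenCV's convexHull):

```rust
/// Shoelace formula for the area of a polygon given as an ordered list of
/// vertices, e.g. a convex hull.
fn polygon_area(pts: &[(f64, f64)]) -> f64 {
    let n = pts.len();
    let mut sum = 0.0;
    for i in 0..n {
        let (x1, y1) = pts[i];
        let (x2, y2) = pts[(i + 1) % n];
        sum += x1 * y2 - x2 * y1;
    }
    sum.abs() / 2.0
}

/// For each hull vertex, how much the hull area shrinks when that vertex is
/// removed. A stray light far from the tree accounts for a large share of
/// the area and shows up with a big drop.
fn area_drops(hull: &[(f64, f64)]) -> Vec<f64> {
    let total = polygon_area(hull);
    (0..hull.len())
        .map(|i| {
            let without: Vec<_> = hull
                .iter()
                .enumerate()
                .filter(|(j, _)| *j != i)
                .map(|(_, p)| *p)
                .collect();
            total - polygon_area(&without)
        })
        .collect()
}
```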

Implement speed control and brightness control decorators for the animations

The speed control decorator should accept an existing animation and a time factor, and pass an appropriately scaled time parameter to the underlying animation, so that the animation runs faster or slower. E.g. a time factor of 2 should mean the animation runs twice as fast, and a factor of 0.5 that it runs at half speed.

The brightness control decorator should accept an existing animation and a brightness factor, and then dim the frame returned by the animation by that factor. You can use the already implemented dim method on the color (which is currently implemented incorrectly, but fixing it in this one place will be easier than fixing it in two different places).
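A sketch of both decorators over a simplified animation interface (the trait and its frame signature are illustrative stand-ins for the project's actual API, as is the Clock animation used for demonstration):

```rust
/// Simplified stand-in for the project's animation interface.
trait Animation {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)>;
}

/// Runs the inner animation faster or slower: a factor of 2.0 is twice as
/// fast, 0.5 is half speed.
struct SpeedControlled<A: Animation> {
    inner: A,
    factor: f64,
}

impl<A: Animation> Animation for SpeedControlled<A> {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        self.inner.frame(time * self.factor)
    }
}

/// Dims every channel of the inner animation's frame by `brightness` in [0, 1].
struct BrightnessControlled<A: Animation> {
    inner: A,
    brightness: f64,
}

impl<A: Animation> Animation for BrightnessControlled<A> {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        let brightness = self.brightness;
        let dim = move |c: u8| (c as f64 * brightness).round() as u8;
        self.inner
            .frame(time)
            .into_iter()
            .map(|(r, g, b)| (dim(r), dim(g), dim(b)))
            .collect()
    }
}

/// Demo animation: red channel equals the (truncated) time parameter.
struct Clock;

impl Animation for Clock {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        vec![(time as u8, 0, 0)]
    }
}
```

Because both decorators implement the same trait, they compose: wrapping a SpeedControlled animation in BrightnessControlled works unchanged.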

Implement variable frame rate for animations

Currently each animation is polled for the next frame 30 times a second. For some animations this is unnecessary, for example more traditional animations where the lights change once a second, or the "blank" animation, where the colors don't change at all.

Allow animations to set their own frame rate (including 0 for completely static animations), and only generate new frames when necessary.

Write setup instructions

  • how to build the project
  • how to run the configurator
  • how to start the webapp for testing and development
  • how to deploy the webapp on a Raspberry Pi

Improve light capturing logic

  • return score for each readout (e.g. maximum value in diff picture), so that we can use it to determine which readouts to take into account while merging perspectives
  • detect outliers after capturing every perspective, to ignore them during normalization
  • detect outliers after merging perspectives to fix them based on the position of their neighbors
  • fix undetected (or wrongly detected) lights by positioning them based on the location of neighbors

Make animations into a trait

Currently each animation is generated by a single function which takes a time parameter and a reference to all the points. This means any preprocessing of the points (e.g. rotating the model along an axis) needs to be done on each call to the function.

A better approach would be to have each animation be an implementation of a single trait, allowing the points to be passed in once at the beginning of the animation and any preprocessing done once at that point. This will also make it easier to expose a unified UI for setting animation parameters.
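A sketch of the trait-based design, with an illustrative animation that preprocesses the points once in its constructor (names and signatures are assumptions, not the project's API):

```rust
/// Points go in once, via the constructor, so preprocessing happens a single
/// time instead of on every call, as in the function-based design.
trait Animation {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)>;
}

/// Example: a sweep that precomputes each light's height once (here assumed
/// to be the second coordinate) and lights up everything below sin(t).
struct Sweep {
    heights: Vec<f64>, // preprocessed per-point data
}

impl Sweep {
    fn new(points: &[[f64; 3]]) -> Self {
        // runs once at construction, not on every frame
        Sweep {
            heights: points.iter().map(|p| p[1]).collect(),
        }
    }
}

impl Animation for Sweep {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        self.heights
            .iter()
            .map(|h| if *h < time.sin() { (255, 255, 255) } else { (0, 0, 0) })
            .collect()
    }
}
```

A unified parameter UI then only needs to talk to the trait, not to each animation function individually.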

Implement animation cycle decorator

Implement a decorator for animations, which will accept a collection of animations and a switching period. The decorator should cycle through the given animations, switching to the next one every switching period (e.g. 30 seconds).

Create time-reversible decorator for animations

Certain animations might benefit from the ability to run backwards. Implement a decorator that adds a parameter to the animation, allowing the direction of time flow to be switched. Since the names for the directions might differ depending on the animation (forward/backward, left/right, up/down, clockwise/counterclockwise), the names for the "forward" and "backward" options should be passed into the decorator's constructor. The control should appear as a drop-down list in the UI. Make sure this works correctly with the speed control decorator!

Save and load 2D coordinates pre-normalization

Since we might want to implement better pre-normalization filters in the future (see #86 and #87), we might want to re-run the configurator with previously captured light positions to take advantage of the improved algorithms. Currently the points are saved post-normalization, which means normalization is not re-run when using pre-captured points. Technically normalization can be reversed, but that would also have to be implemented, and saving coordinates pre-normalization is easier.

In the scope of this task, change the 2D coordinates to be saved before normalization, rather than after.
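For reference, the reason pre-normalization saving is simpler: reversing min-max normalization needs the original bounds, which post-normalization files no longer carry. A sketch for one axis (names are illustrative):

```rust
/// Min-max normalization to [-1, 1]. Returns the bounds alongside the
/// normalized values, because reversing the mapping is impossible without
/// them. Assumes at least two distinct values.
fn normalize(xs: &[f64]) -> (Vec<f64>, f64, f64) {
    let min = xs.iter().cloned().fold(f64::INFINITY, f64::min);
    let max = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let norm = xs
        .iter()
        .map(|x| 2.0 * (x - min) / (max - min) - 1.0)
        .collect();
    (norm, min, max)
}

/// Inverse mapping, only possible when `min` and `max` were preserved.
fn denormalize(norm: &[f64], min: f64, max: f64) -> Vec<f64> {
    norm.iter().map(|n| (n + 1.0) / 2.0 * (max - min) + min).collect()
}
```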

Implement 2D configurator

After Christmas, the lights can be used as a 2D backdrop, in which case capturing 3D coordinates would be not only difficult (or even impossible) but also pointless. Implement a new flow for 2D capture, where we capture only a single perspective and use it to produce normalised 2D coordinates.

Save single-light images to a subfolder of the capture

With the -s option in the configurator, we save a lot of files into a single folder. This means that finding the (generally more useful) aggregate files (e.g. 2D coordinates CSV file) in a sea of single-light images is not very convenient.

Instead the files should be stored like this:

* 2022-12-05T19:24:07/
    * single-lights/
        * 000.jpg
        * ...
    * front.csv
    * reference.jpg

This will make it much easier to find the aggregate results while also giving the option to access single light files when needed.
