mrozycki / rustmas
Christmas lights controller capable of displaying 3D animations
Currently the gap between consecutive frames is 1/30 s, but that delay only starts after the frame has been generated and sent to be displayed. That means that the actual frame rate of the animations is lower than 30 FPS.
Implement a better frame sync system, where the time taken to generate and display the frame is taken into account when computing the delay before the next frame. This includes advancing the animation by more time if the last frame took more than 1/30 s to produce.
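As a rough sketch of the pacing logic (the function name and the exact split between sleeping and advancing the animation clock are illustrative assumptions, not the project's actual code), the delay computation could look like this:

```rust
use std::time::Duration;

// Target gap between frames (30 FPS).
const FRAME_BUDGET: Duration = Duration::from_nanos(1_000_000_000 / 30);

// Given how long the current frame took to generate and send, return
// (how long to sleep before the next frame, how far to advance the
// animation clock). If the frame overran the budget, don't sleep at
// all and advance the animation by the full time that passed.
fn next_delay(elapsed: Duration) -> (Duration, Duration) {
    if elapsed >= FRAME_BUDGET {
        (Duration::ZERO, elapsed)
    } else {
        (FRAME_BUDGET - elapsed, FRAME_BUDGET)
    }
}
```

A frame loop would measure the generation time, sleep for the first value, and advance the animation's time parameter by the second.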
The animation should only be locked by the animation controller for the time it takes to generate the next frame, and released immediately when the frame is generated. Currently the lock is held all the way until the next frame is generated, which can cause contention with animation switching.
Similarly to #85, we can detect incorrectly placed lights at the 2D capture stage. This should be done before normalization, so that obviously wrong points do not throw off normalization. However, incorrectly detected lights in this case should not be thrown away completely, but instead be ignored when calculating bounds during normalization, so that a stray point does not completely change the bounds. The points should still be normalized and returned with lowered accuracy.
We can use multiple strategies for such detection. In the scope of this task, calculate average distance between lights that are next to each other on the cable. Find any pairs of lights for which the distance is above a certain threshold based on the precalculated average. Those measurements should be ignored when calculating bounds for normalization.
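A minimal sketch of this strategy (the function names and the threshold factor are hypothetical, not the project's actual API):

```rust
// A detected light position in 2D capture space.
type Point = (f64, f64);

fn dist(a: Point, b: Point) -> f64 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)).sqrt()
}

// Flag lights whose distance to a cable neighbour exceeds `factor`
// times the average neighbour distance. The returned indices should
// be ignored when computing normalization bounds.
fn outlier_indices(points: &[Point], factor: f64) -> Vec<usize> {
    if points.len() < 2 {
        return Vec::new();
    }
    let gaps: Vec<f64> = points.windows(2).map(|w| dist(w[0], w[1])).collect();
    let avg = gaps.iter().sum::<f64>() / gaps.len() as f64;
    let mut flagged = Vec::new();
    for (i, &g) in gaps.iter().enumerate() {
        if g > factor * avg {
            // Both ends of an oversized gap are suspect.
            for idx in [i, i + 1] {
                if flagged.last() != Some(&idx) {
                    flagged.push(idx);
                }
            }
        }
    }
    flagged
}
```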
webui
on rustup target add wasm32-unknown-unknown
Remove the $ from commands for easier copying.
http://127.0.0.1:8081
Use html datalist to provide some warm white color options in the color picker UI.
Currently the configurator will fail immediately upon a transient failure, such as an issue connecting to the lights to set a frame. This means the process can crash due to a transient failure. This is somewhat mitigated by the option to save partial results, but ideally it should not stop the process at all.
In the scope of this task, implement retries on tasks that might transiently fail in the configurator (e.g. setting lights, maybe taking a picture?). Retry a few times with an increasing delay. On several failures in a row, prompt the user to check the connection and press a key to continue.
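A generic retry helper along these lines could look as follows (a sketch only; the function name, attempt count, and doubling delay are assumptions):

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry `op` up to `max_attempts` times, doubling the delay between
// attempts. On success returns Ok; after the final failure returns the
// last error, so the caller can prompt the user to check the connection.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // increasing delay between retries
                attempt += 1;
            }
        }
    }
}
```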
Currently the animator will exit the moment a single request to remote lights fails, which might happen relatively frequently with a spotty connection. Additionally, turning the lights off and on again (e.g. for the night) should not require a restart of the server.
The remote light client should therefore retry failed requests, ideally with a sensible backoff, so that the network is not bombarded with failed requests for hours if the lights are turned off.
If a light is known to be missing (e.g. it was not correctly detected by the capturer, or it was discarded after detection based on other metrics), its position could be estimated by interpolating its location from the position of neighboring lights.
In the scope of this task, implement an interpolation strategy for points in 3D. At a minimum, a run of missing lights should end up evenly spaced between the nearest two known lights. Ideally, you should also interpolate the derivative of light positions and take that into account.
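A minimal sketch of the evenly-spaced case (the type alias and function name are hypothetical; derivative-aware interpolation is omitted):

```rust
// A light position in 3D; `None` marks a light that was not detected.
type Point3 = (f64, f64, f64);

// Fill runs of missing points by spacing them evenly between the two
// nearest known neighbours (plain linear interpolation). Runs at the
// very start or end of the cable have only one known neighbour and
// are left untouched here; they would need extrapolation instead.
fn interpolate_missing(points: &mut [Option<Point3>]) {
    let mut i = 0;
    while i < points.len() {
        if points[i].is_some() {
            i += 1;
            continue;
        }
        // The run of missing points spans indices i..end_idx.
        let mut end_idx = i;
        while end_idx < points.len() && points[end_idx].is_none() {
            end_idx += 1;
        }
        let before = i.checked_sub(1).and_then(|p| points[p]);
        let after = points.get(end_idx).copied().flatten();
        if let (Some(a), Some(b)) = (before, after) {
            let n = end_idx - i; // number of missing points in the run
            for k in 0..n {
                let t = (k + 1) as f64 / (n + 1) as f64;
                points[i + k] = Some((
                    a.0 + t * (b.0 - a.0),
                    a.1 + t * (b.1 - a.1),
                    a.2 + t * (b.2 - a.2),
                ));
            }
        }
        i = end_idx;
    }
}
```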
Add support to switch animations using foot pedals connected to the Raspberry Pi. Each pedal should have an animation (and a set of parameters) associated with it, and should switch to that animation on press. What animation and parameters a given pedal switches to should be configurable beforehand.
Create an implementation of LightClient which will accept multiple other light client implementations and send each frame to all of them. A possible use-case is to have an animation show up both in the visualiser and on the tree.
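A sketch of the fan-out pattern, assuming a simplified synchronous stand-in for the LightClient trait (the real trait is presumably async and fallible, but the composition is the same):

```rust
use std::cell::Cell;
use std::rc::Rc;

// Simplified stand-in for the project's LightClient trait.
trait LightClient {
    fn display_frame(&mut self, frame: &[(u8, u8, u8)]);
}

// Sends every frame to all wrapped clients, e.g. the visualiser and
// the physical tree at the same time.
struct CombinedLightClient {
    clients: Vec<Box<dyn LightClient>>,
}

impl LightClient for CombinedLightClient {
    fn display_frame(&mut self, frame: &[(u8, u8, u8)]) {
        for client in &mut self.clients {
            client.display_frame(frame);
        }
    }
}

// Test double that counts how many frames it received.
struct CountingClient(Rc<Cell<usize>>);

impl LightClient for CountingClient {
    fn display_frame(&mut self, _frame: &[(u8, u8, u8)]) {
        self.0.set(self.0.get() + 1);
    }
}
```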
On form data change, set a dirty flag. When the user tries to switch animations and the dirty flag is set, open an alert asking for confirmation. Give the user 3 options: don't switch, discard changes and switch, or save changes and switch.
For more time-sensitive applications, we might want to connect the lights to the Raspberry Pi running the backend directly with a cable, rather than through WiFi and Pico. In order to achieve that, we need:
Currently the visualiser is started from a separate thread created by tokio::spawn. In order for the visualisation subsystem to acquire the necessary graphics context and window handles it is best to start it from the main thread instead, to avoid compatibility problems.
In WIP code we save just a reference image with all the lights turned on from every perspective, which has to be manually renamed before capturing the next side.
It would be great to have the option (controlled by a CLI flag possibly) to save an image of every single light separately, for possible review later. Currently we're left guessing why a mark might have been left in any given spot, since we don't see the actual image that was fed to OpenCV.
Currently running the animator/configurator requires the visualiser, which depends on OpenGL. If you want to run either without the visualiser, you would still need OpenGL and its libraries installed. Hiding the visualiser behind a feature gate means you could build these projects without the dependency on OpenGL.
Detect and correct lights that were either not seen at all, or have been incorrectly placed outside of the tree.
With the recently added Event Generators, we can create a MidiEventGenerator which will capture events through a MIDI interface and redirect them to the animations.
When running the configurator with the -s option, after the first perspective is captured, the configurator fails with the following error:
Error: Error(Serialize("cannot serialize tuple container inside struct when writing headers from structs"))
The file that should contain the coordinates only contains the word "inner".
If the distance from a light to its neighbors exceeds a certain threshold (based on the average of all neighboring light distances), then we can assume it was detected incorrectly. We can overshoot a bit on how many lights are discarded, since the interpolation logic at a later stage of detection will be able to place them back into the tree.
Unknown points in the middle of the cable can be interpolated after #84. However, this approach does not work for points that are at the beginning or end of the cable.
Implement extrapolation of light positions at the beginning and end of the cable.
As a mitigation for the constantly rebuilding OpenCV, the code that directly interfaces with OpenCV should be moved to a separate crate, rustmas-cv. No other crate should directly use OpenCV.
To align with the recent changes in the neopixel server, those 5 characters need to be removed; otherwise the data will be rejected due to not dividing cleanly into 3 bytes per pixel.
OpenCV is a large C++ library that has been causing us various issues over time, including problems building on Windows, constant rebuilds despite no changes, etc.
Given that we only use a tiny fraction of what OpenCV has to offer, we might be able to replace that with either native Rust libraries or manually rewrite a couple of OpenCV functions into pure Rust, to avoid those issues in the future.
It seems that using imageproc for image processing and nokhwa for camera control might be enough for our use case.
Similarly to #85, we can detect incorrectly placed lights at the 2D capture stage. This should be done before normalization, so that obviously wrong points do not throw off normalization. However, incorrectly detected lights in this case should not be thrown away completely, but instead be ignored when calculating bounds during normalization, so that a stray point does not completely change the bounds. The points should still be normalized and returned with lowered accuracy.
We can use multiple strategies for such detection. In the scope of this task, calculate the convex hull of all the detections (OpenCV has an implementation for that), and for each point on the hull try removing it and calculating how much it decreases the area of the hull. Lights that are far away from the tree should decrease the area of a hull by a significant amount.
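The area-decrease step can be sketched with the shoelace formula, assuming the hull itself comes from an existing convex hull routine such as OpenCV's (function names here are hypothetical; the hull must be given in vertex order):

```rust
// Shoelace area of a polygon given as an ordered list of vertices
// (e.g. a convex hull). Works for either winding direction.
fn polygon_area(poly: &[(f64, f64)]) -> f64 {
    let n = poly.len();
    let mut twice_area = 0.0;
    for i in 0..n {
        let (x1, y1) = poly[i];
        let (x2, y2) = poly[(i + 1) % n];
        twice_area += x1 * y2 - x2 * y1;
    }
    (twice_area / 2.0).abs()
}

// For each hull vertex, how much the hull area shrinks if that vertex
// is removed. A large decrease indicates a stray detection far from
// the tree.
fn area_decreases(hull: &[(f64, f64)]) -> Vec<f64> {
    let total = polygon_area(hull);
    (0..hull.len())
        .map(|i| {
            let mut reduced = hull.to_vec();
            reduced.remove(i);
            total - polygon_area(&reduced)
        })
        .collect()
}
```

For example, a square hull with one far-away spike vertex loses much more area when the spike is removed than when any square corner is.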
The speed control animation decorator should accept an existing animation and a time factor, and pass an appropriately modified time parameter to the underlying animation, so that the animation runs faster or slower. E.g. a time factor of 2 should mean that the animation is twice as fast, and a time factor of 0.5 should mean that it runs at half speed.
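A sketch of the decorator, assuming a simplified stand-in for the project's animation interface (trait and struct names are hypothetical):

```rust
// Simplified stand-in for the animation interface: an animation maps
// a time in seconds to a frame of RGB colors.
trait Animation {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)>;
}

// Speed control decorator: scales the time parameter before passing
// it on, so a factor of 2.0 runs twice as fast and 0.5 at half speed.
struct SpeedControlled<A: Animation> {
    inner: A,
    factor: f64,
}

impl<A: Animation> Animation for SpeedControlled<A> {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        self.inner.frame(time * self.factor)
    }
}

// Probe animation that records the time it was asked to render.
struct TimeProbe {
    last: f64,
}

impl Animation for TimeProbe {
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        self.last = time;
        Vec::new()
    }
}
```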
The brightness control decorator should accept an existing animation and a brightness factor, and then dim the frame returned by the animation by that factor. You can use the already implemented dim method on the color (which is currently implemented incorrectly, but fixing it in this one place will be easier than fixing it in two different places).
Currently each animation is polled for the next frame 30 times a second. For some animations this might be unnecessary, for example more traditional animations, where the lights change once a second, or the "blank" animation, where the colors don't change at all.
Allow animations to set their own frame rate (including 0 for completely static animations), and only generate new frames when necessary.
Currently each animation is generated by a single function which takes a time parameter and a reference to all the points. This means any preprocessing of the points (e.g. rotating the model along an axis) needs to be done on each call to the function.
A better approach would be to have each animation be an implementation of a single trait, allowing the points to be passed in once at the beginning of the animation and any preprocessing done once at that point. This will also make it easier to expose a unified UI for setting animation parameters.
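A sketch of such a trait (all names are hypothetical), with an example animation that precomputes its bounds once at construction instead of on every frame:

```rust
// Sketch of the proposed trait: points are handed over once at
// construction, so any preprocessing happens a single time.
trait Animation {
    // Called once with all light positions; preprocess here.
    fn new(points: Vec<(f64, f64, f64)>) -> Self
    where
        Self: Sized;
    // Called per frame with only the time parameter.
    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)>;
}

// Example implementation: a moving gradient over each light's height,
// with the min/max bounds computed once in `new`.
struct HeightGradient {
    // Normalized heights in [0, 1], precomputed at construction.
    heights: Vec<f64>,
}

impl Animation for HeightGradient {
    fn new(points: Vec<(f64, f64, f64)>) -> Self {
        let min = points.iter().map(|p| p.1).fold(f64::INFINITY, f64::min);
        let max = points.iter().map(|p| p.1).fold(f64::NEG_INFINITY, f64::max);
        let span = (max - min).max(f64::EPSILON);
        HeightGradient {
            heights: points.iter().map(|p| (p.1 - min) / span).collect(),
        }
    }

    fn frame(&mut self, time: f64) -> Vec<(u8, u8, u8)> {
        self.heights
            .iter()
            .map(|&h| {
                let v = ((h + time).fract() * 255.0) as u8;
                (v, v, v)
            })
            .collect()
    }
}
```

This shape also gives every animation a natural place to expose its parameters for a unified settings UI.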
Implement a decorator for animations, which will accept a collection of animations and a switching period. The decorator should cycle through the given animations, switching to the next one every switching period (e.g. 30 seconds).
Certain animations might benefit from the ability to run backwards. Implement a decorator for animations that will add a parameter to the animation and allow the direction of time flow to be switched. Since the names for the directions might differ depending on the animation (forward/backward, left/right, up/down, clockwise/counterclockwise), the names for the "forward" and "backward" options should be passed into the decorator's constructor. The control should appear as a drop-down list in the UI. Make sure this works correctly with the speed control decorator!
Since we might want to implement better pre-normalization filters in the future (see #86 and #87), we might want to re-run the configurator with previously captured light positions to take advantage of the improved algorithms. Currently the points are saved post-normalization, which means normalization is not re-run when using pre-captured points. Technically normalization can be reversed, but that would also have to be implemented, and saving coordinates pre-normalization is easier.
In the scope of this task, change the 2D coordinates to be saved before normalization, rather than after.
After Christmas, the lights can be used as a 2D backdrop, in which case capturing 3D coordinates would not only be difficult (or even impossible) but also pointless. Implement a new flow for 2D capture, where we only capture one perspective and use it to produce normalised 2D coordinates.
With the -s option in the configurator, we save a lot of files into a single folder. This means that finding the (generally more useful) aggregate files (e.g. the 2D coordinates CSV file) in a sea of single-light images is not very convenient.
Instead the files should be stored like this:
* 2022-12-05T19:24:07/
  * single-lights/
    * 000.jpg
    * ...
  * front.csv
  * reference.jpg
This will make it much easier to find the aggregate results while also giving the option to access single light files when needed.