nannou-org / nannou
A Creative Coding Framework for Rust.
Home Page: https://nannou.cc/
Currently we mostly just re-export the events emitted by the winit crate; however, these can be pretty verbose and the pattern matching can get pretty gruelling when all you want to do is check whether a key was pressed.
It would be sweet if there was a function to convert a WindowEvent to a much simpler event, something like:
enum SimpleWindowEvent {
    KeyPressed(Key),
    KeyReleased(Key),
    MouseMoved(Vector2),
    MouseDragged(Vector2, MouseButton),
    MousePressed(Vector2, MouseButton),
    MouseReleased(Vector2, MouseButton),
    MouseEntered(Vector2),
    MouseExited(Vector2),
    Resized(Dimensions),
}
In this case, the template could maybe look like something closer to this:
extern crate nannou;

use nannou::{App, Event, Frame};
use nannou::SimpleWindowEvent::*;

fn main() {
    nannou::run(model, update, draw);
}

struct Model {
    window: nannou::window::Id,
}

fn model(app: &App) -> Model {
    let window = app.new_window().build().unwrap();
    Model { window }
}

fn update(_app: &App, model: Model, event: Event) -> Model {
    match event {
        // Handle window events like mouse, keyboard, resize, etc. here.
        Event::WindowEvent(_id, event) => match nannou::simple_window_event(event) {
            KeyPressed(key) => {
            },
            KeyReleased(key) => {
            },
            MouseMoved(pos) => {
            },
            MouseDragged(pos, button) => {
            },
            MousePressed(pos, button) => {
            },
            MouseReleased(pos, button) => {
            },
            MouseEntered(pos) => {
            },
            MouseExited(pos) => {
            },
            Resized(dims) => {
            },
        },
        // `Update` the model here.
        Event::Update(_update) => {
        },
        _ => (),
    }
    model
}

// Draw the state of your `Model` into the given `Frame` here.
fn draw(_app: &App, model: &Model, frame: Frame) -> Frame {
    // Our app only has one window, so retrieve this part of the `Frame`. Color it grey.
    frame.window(model.window).unwrap().clear_color(0.1, 0.11, 0.12, 1.0);
    // Return the cleared frame.
    frame
}
draw.text("asdfasdfafd")
All examples and nannou run on Rust stable.
clear should take any type that can be converted into an Rgba (this would allow for passing Hsla, Rgba and a bunch of other colour spaces).
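A minimal sketch of what such a generic clear signature could look like. The Rgba and Hsla types here are simplified stand-ins for whatever colour types nannou ends up using (probably from the palette crate); only the Into-based signature is the point.

```rust
// Sketch only: `Rgba` and `Hsla` are simplified placeholder colour types.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Rgba(f32, f32, f32, f32);

struct Hsla(f32, f32, f32, f32);

// A minimal HSL -> RGB conversion so an `Hsla` can be passed to `clear`.
impl From<Hsla> for Rgba {
    fn from(c: Hsla) -> Self {
        let Hsla(h, s, l, a) = c;
        let chroma = (1.0 - (2.0 * l - 1.0).abs()) * s;
        let h6 = (h % 360.0) / 60.0;
        let x = chroma * (1.0 - (h6 % 2.0 - 1.0).abs());
        let (r, g, b) = match h6 as u32 {
            0 => (chroma, x, 0.0),
            1 => (x, chroma, 0.0),
            2 => (0.0, chroma, x),
            3 => (0.0, x, chroma),
            4 => (x, 0.0, chroma),
            _ => (chroma, 0.0, x),
        };
        let m = l - chroma / 2.0;
        Rgba(r + m, g + m, b + m, a)
    }
}

// The proposed signature: accept anything convertible into `Rgba`.
fn clear<C: Into<Rgba>>(color: C) -> Rgba {
    let rgba: Rgba = color.into();
    // A real implementation would clear the frame with `rgba`;
    // here we just return the converted colour.
    rgba
}
```

With this, both clear(Rgba(...)) and clear(Hsla(...)) compile, and any future colour space only needs a From impl.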
Example that shows the basics of Rust's ownership: mutability, move, borrow, mutable borrow, scope.
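Such an example could cover the listed concepts in a few small functions, along these lines:

```rust
/// Move: `String` is not `Copy`, so assignment transfers ownership.
fn moves() -> String {
    let s = String::from("nannou");
    let moved = s;
    // `s` is no longer usable here; `moved` now owns the heap data.
    moved
}

/// Borrow: any number of simultaneous immutable references are allowed.
fn borrows(text: &String) -> usize {
    let r1 = text;
    let r2 = text;
    r1.len() + r2.len()
}

/// Mutable borrow: exactly one at a time, no immutable borrows alongside.
fn mutable_borrow() -> i32 {
    let mut count = 0;
    {
        let r = &mut count; // the mutable borrow lives only in this scope
        *r += 1;
    } // ...so `count` is freely usable again here
    count
}

fn main() {
    let owned = moves();
    println!("{} {} {}", owned, borrows(&owned), mutable_borrow());
}
```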
It would be useful to have a standard directory where stuff that is loaded at runtime (shaders, models, audio, images, etc.) can be stored, so that a simple method like this can be added, allowing the user to easily get access to their data without having to search directories.
The assets term seems to be used fairly thoroughly throughout the Rust ecosystem (at least within the gamedev scene) so we can probably just go with that for now. This would be the equivalent of the bin/data directory often used in oF projects.

The assets directory

During development it generally seems to be easiest to keep the assets directory in the root of the project (at the same level as the Cargo.toml). However, when shipping a binary it's more common to put the assets directory alongside the executable.
The easiest approach might be to use the find_folder crate to first check at the executable level and then recurse through parent directories (until some depth limit) until the assets directory is found. Searching should happen relative to the executable - not to the actual "current directory"; I've made this mistake way too many times heh.
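The parent-walking search described above can be sketched with only the standard library. This is an illustration of the approach, not nannou's actual implementation; the real thing would likely use the find_folder crate and seed the search with std::env::current_exe().

```rust
use std::path::{Path, PathBuf};

/// Search for a directory named `name`, starting from `start` and walking
/// up through parent directories, checking at most `max_depth` extra levels.
fn find_assets_dir(start: &Path, name: &str, max_depth: usize) -> Option<PathBuf> {
    let mut dir = Some(start);
    for _ in 0..=max_depth {
        // Stop when we run out of parent directories.
        let d = dir?;
        let candidate = d.join(name);
        if candidate.is_dir() {
            return Some(candidate);
        }
        dir = d.parent();
    }
    None
}
```

In practice `start` would be the directory containing the executable, keeping the search independent of the current working directory.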
Could possibly refer to Rust's code of conduct. Rust is renowned for its incredibly inclusive community - it would be awesome if we could carry this forward into the nannou ecosystem.
It may be because it's been a while since I've used Rust, but I can't figure out how to run the examples in /examples - some build steps would be great.
I presume I'm doing something wrong because running just cargo build gives me a fatal error. As I work through these issues I would love to help out building this project!
Calling no_loop_mode() should still allow the view() function to run once.
The contents that were drawn into the frame should remain visible and unchanged until loop mode is triggered again.
One of the nicest things about working with OF or Processing is how easy it is to just instantly draw something to the screen. There's no worrying about triangulation, where vertices should be stored, caching, or anything like that - it's as simple as ellipse(x, y, w, h), ofRectangle(x, y, w, h), text("stuff", x, y), etc.
The frustrating thing about these functions is that they're generally pretty inefficient, and when you start running into performance bottlenecks you have to start thinking about meshes and, in turn, nice ways of packing vertices, indices, etc.
It would be really nice if we could offer something that felt this nice to use, but cached all of these commands into a big buffer that gets drawn all in a single graphics call and gets re-used between frames for efficiency.
Imagining something along these lines:
fn draw(app: &App, model: &Model, frame: Frame) -> Frame {
    let ref mut g = app.graphics();

    // Building relative layout.
    let background = draw::background(model.window).color(color::RED).set(g);
    let ellipse = draw::ellipse().on(background).x_y_w_h(10.0, 20.0, 30.0, 40.0).set(g);
    let rectangle = draw::rectangle().wh_of(ellipse).down_from(ellipse, 10.0).set(g);
    draw::text("doot doot").middle_of(rectangle).set(g);

    // Shorthand for common stuff.
    g.clear(color::RED);
    g.ellipse(10.0, 20.0, 30.0, 40.0);
    g.rectangle(40.0, 30.0, 20.0, 10.0);
    g.text("doot doot");

    // Submit the graphics to OpenGL.
    g.draw_to_frame(&frame);
    frame
}
These are the simple examples I made for the introduction to processing class at RMIT.
I am going to port the examples found here => https://github.com/RMIT-Industrial-Design/IntroToProcessingTutorials
Then we can have a wiki page that shows the Processing code and the corresponding nannou code to draw the same thing. This should make it easier for Processing and oF users to understand the difference between the syntax.
Once we have the basics of the framework implemented then it would be great to run some introductory workshops with the local creative coding community in Melbourne.
I know Ben was interested in helping make things like this happen. Could probably use the library at the Docklands to host it. Would make for a great test run, allowing us to evaluate how people are receiving it. Then we can iterate on the ideas, making them more solid, before approaching an overseas festival.
Milestone for basic examples
Rect::from_w_h and Rect::from_x_y_w_h constructors. Similar to from_wh and from_xy_wh, but take each axis as individual arguments rather than via Point2s or Vector2s.
Rect::top_right_of, Rect::shift_down, etc.
Kinda tedious and will be annoying to keep up to date with upstream changes, but will defs make things much simpler and easier to understand for new users.
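A rough sketch of how these constructors and positioning helpers might behave, assuming a centre-based coordinate system. The Rect below is a simplified stand-in; nannou's real Rect is generic and built on its Range/Point2/Vector2 types.

```rust
// Simplified stand-in for nannou's `Rect`: (x, y) is the centre.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Rect { x: f64, y: f64, w: f64, h: f64 }

impl Rect {
    /// Centred at the origin with the given dimensions.
    fn from_w_h(w: f64, h: f64) -> Self {
        Rect { x: 0.0, y: 0.0, w, h }
    }

    /// Centred at (x, y) with the given dimensions.
    fn from_x_y_w_h(x: f64, y: f64, w: f64, h: f64) -> Self {
        Rect { x, y, w, h }
    }

    /// A copy of `self` aligned to the top-right corner of `other`.
    fn top_right_of(self, other: Rect) -> Self {
        Rect {
            x: other.x + other.w / 2.0 - self.w / 2.0,
            y: other.y + other.h / 2.0 - self.h / 2.0,
            ..self
        }
    }

    /// A copy of `self` shifted down by `amount` (y points upwards).
    fn shift_down(self, amount: f64) -> Self {
        Rect { y: self.y - amount, ..self }
    }
}
```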
Currently the user has to specify which window frame they want, which is probably pretty gross and confusing for new users:

frame.window(model.window).unwrap().clear_color(0.1, 0.11, 0.12, 1.0);

Frame should have a simple helper method which just clears all windows:

frame.clear(some_color);
Maybe the app could have a struct that always contains the latest window state.
/// The state of the most recently focused window.
pub struct WindowState {
    /// DPI-agnostic dimensions.
    pub width: f64,
    pub height: f64,
    /// DPI factor.
    pub dpi_factor: f64,
    /// ID of the monitor on which the window resides.
    pub monitor: monitor::Id,
}
Usage might look like this:
let w = app.window.width;
let monitor = app.window.monitor;
Should return something like:
pub struct Mouse {
    /// The unique identifier of the last window currently in focus.
    window: window::Id,
    /// The DPI-agnostic position of the mouse relative to the centre of the window.
    x: f64,
    y: f64,
}
Usage would look something like:
let x = app.mouse.x;
Would be great if, when the user calls set_title() without arguments, nannou could use the name of the file minus the extension. If the user does provide an argument to set_title() then this would be visible instead.
Instead of typing out .rgb(0.5, 0.5, 0.5); it would be nice to have something like .gray(0.5).
We could use cpal for cross platform audio I/O. Currently it only supports audio output, however I believe they're open to adding support for input - we just need someone to need it enough to actually implement it (and then also implement synchronised duplex streams).
Audio is processed on its own thread and so should likely have its own model->update->render architecture in the same way that the graphics thread has its own model->update->draw.
mod audio {
    struct Model {}
    fn model(app: &AppAudio) -> Model {}
    fn update(app: &AppAudio, model: Model, event: AudioEvent) -> Model {}
    fn render(app: &AppAudio, model: &Model, buffer: Buffer) -> Buffer {}
}
One issue that comes to mind with this structure is that the render stage in audio tends to require mutable access to the model in order to step forward the phase of oscillators or step forward a playhead over a buffer of samples. A user could get around this using Cell or RefCell, but this kind of feels like it defeats the purpose.
The AudioEvent enum might look something like this:
enum AudioEvent {
    /// The sample rate of the target `Buffer` has changed.
    SampleRate(f64),
    /// The number of channels per buffer has changed.
    Channels(usize),
    /// The default output device changed.
    DefaultDevice(Device),
    /// A new device was detected.
    DeviceAdded(Device),
    /// A device was removed.
    DeviceRemoved(Device),
}
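The mutable-access concern around the render stage can be seen in a minimal oscillator sketch. The types and the simplified signature below (taking &mut Model directly) are placeholders for illustration, not nannou's actual audio API:

```rust
// Placeholder model: stepping an oscillator's phase mutates state
// on every buffer, which is why `render` wants `&mut Model`.
struct Model {
    phase: f64, // in cycles, wraps in [0.0, 1.0)
    hz: f64,
}

const SAMPLE_RATE: f64 = 44_100.0;

/// Fill `buffer` with a sine wave, stepping the model's phase forward.
fn render(model: &mut Model, buffer: &mut [f64]) {
    for sample in buffer.iter_mut() {
        *sample = (model.phase * 2.0 * std::f64::consts::PI).sin();
        model.phase = (model.phase + model.hz / SAMPLE_RATE) % 1.0;
    }
}
```

With an immutable &Model the phase update would have to go through Cell/RefCell, which is exactly the awkwardness described above.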
Similarly to App, AppAudio should act as a general application context but with methods related to audio rather than windowing. Methods might include:
Come to think of it, the device info might even be more useful on the GUI thread, as it is normally via some sort of GUI interaction that devices are viewed or selected.
I often want something like this for testing applications that are meant to take some OSC input. An app that allowed you to craft arbitrary kinds of OSC messages (e.g. a list of OSC args) and send them (via a big button) to a custom address (typed in via a text box). I thought this might make a good nannou example as it would provide more insight into how to use both GUI and OSC in a more detailed manner, while also providing a useful utility for nannou users.
Every public function has a doc comment with an example that compiles.
@JoshuaBatty was just thinking it would be awesome if we could walk through a bunch of the basics of GLSL in the examples/tutorial too! Maybe as a follow up chapter or separate section or something. Could take some inspiration from Inigo's tutes, tomaka's guide, all the shaders you've been building up over the years. Nannou could be a nice friendly place to experiment with custom glsl, load shaders, plug inputs via uniforms etc.
The template.rs example currently names the three primary functions model, event and view, however the type aliases for these functions within the nannou crate root are ModelFn, UpdateFn and DrawFn. We should probably change UpdateFn to EventFn and DrawFn to ViewFn for consistency with the examples.
rosc is a pure-Rust encoding/decoding crate for OSC that should be perfect for this. That said, it provides a pretty low-level API in the sense that it requires users to manually set up UDP sockets and interact with buffers of bytes (for encoding/decoding).
It would be nice if we could provide something that felt slightly higher level than this - something along the lines of osc::Sender and osc::Receiver types which abstract away some of the tedious byte handling.
Would be amazing to port the examples from the Generative Gestaltung book to work in nannou as well. http://www.generative-gestaltung.de/code
Potentially could then offer links on the website just like they currently have processing and vvvv links.
Would be great if we could have helper methods for calling random with the type inferred as f32.
Also, a range-based random would be helpful. Something like random(0.5, 3.14);
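The interesting part of such a helper is the mapping from a unit-interval sample into the requested range; the sample itself would come from a random source (e.g. the rand crate) in the real implementation. A minimal, deterministic sketch of that mapping:

```rust
/// Map a sample in [0.0, 1.0) into the range [min, max).
fn map_unit_to_range(sample: f32, min: f32, max: f32) -> f32 {
    min + sample * (max - min)
}

// The proposed `random(min, max)` helper could then simply be
// (assuming the `rand` crate as the sample source):
//
// fn random(min: f32, max: f32) -> f32 {
//     map_unit_to_range(rand::random::<f32>(), min, max)
// }
```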
Currently the windowing API is half-baked, and it's not possible to get access to all of the functionality that a glium::Display actually offers and that a user might need.
We should probably avoid giving the user direct access to the glium::Display in the regular case, in order to avoid allowing display.draw to be called outside of the nannou application's draw function call. Instead, we'd probably do best to offer some sort of Window wrapper type that restricted access to certain methods. This might be a pain to keep up to date, but at least we can keep better control over the API that we're offering.
That said, for flexibility we should at least offer a method for gaining access to the underlying glium display. Something that is verbose but well documented, and mentions that it should be avoided unless the user needs access to the underlying display for some sort of functionality that nannou does not provide. Maybe window.inner_glium_display() or something like this.
Window

I think it might make most sense to continue to refer to Windows via a unique ID. The reason I'm leaning towards Ids rather than a direct handle is that a user's window may be closed at any point in time and the API should reflect this. A user probably should not be able to hold a handle to a Window if the window has been previously closed. In the case that we need to access a window directly for some sort of functionality, I imagine something like the following allowing us to borrow the window for the given ID:

let window = app.window(model.window_id).unwrap();

We return an Option here as the window at the given Id may or may not be open.
The window should expose the useful methods from the inner glium::Display, glium::Context and glutin::GlWindow. The following methods should probably not be exposed by the window to avoid breaking the App API:
For the most part it would be awesome if we could avoid the possibility of errors etc. by taking advantage of the type system where possible, however there will inevitably be cases where we'll have to do some checks at runtime.
We should be able to return Result and Option in most of these cases, however there may be times where this too is impractical. E.g. there isn't really a nice way to deliver errors that occur behind the scenes in the code that drives the event loop itself, or if a user attempts to map a range with a magnitude of 0.0.
For these cases it would be nice to use a proper logging setup (rather than println!s everywhere). openFrameworks handles this nicely with an ofLog function, where users can choose what level of logging they want to listen to (errors, warnings, info, etc). Rust has the log crate which would probably be perfect for this and be familiar to existing users.
This is the bottom of my output when I attempt to run cargo run --release --example simple_window, although this happens with every single example (so unfortunately I can't get anything running at the moment). I'm using cargo 0.26.0-nightly (1d6dfea44 2018-01-26). Seems like an error with rusttype?
Compiling syn v0.11.11
Compiling rusttype v0.4.1
Compiling num-bigint v0.1.41
error[E0599]: no method named `units_per_em` found for type `stb_truetype::FontInfo<SharedBytes<'a>>` in the current scope
--> /Users/andrescuervo/.cargo/registry/src/github.com-1ecc6299db9ec823/rusttype-0.4.1/src/lib.rs:361:19
|
361 | self.info.units_per_em()
| ^^^^^^^^^^^^
error: aborting due to previous error
If you want more information on this error, try using "rustc --explain E0599"
error: Could not compile `rusttype`.
warning: build failed, waiting for other jobs to finish...
error: build failed
If we can turn nannou into a not-for-profit organisation then we are eligible to apply for certain grants. For example, there is the organisation grant offered by the Australia Council. http://www.australiacouncil.gov.au/funding/funding-index/arts-projects-organisations/
The grant is between $10k and $100k.
In 2018, the grant rounds will close on:
Guiding principles we should consider and define before we become a not-for-profit.
Steps to become a not-for-profit:
This should be fixed by updating to the latest palette crate.
I'm going to have a go at integrating conrod as a GUI solution in the meantime to see how tight we can get it. It should be possible to remove a load of the noise/boilerplate associated with most conrod apps as we've already decided on a graphics backend (glium) and event source (winit).
Conrod itself doesn't explicitly support the idea of multiple windows and just assumes that each Ui is associated with a single window. This is probably fine for most use cases, but it would be sweet if the actual management of having a Ui per window (if necessary) was hidden from the user. To do this, we could have the app own a ui::Arrangement type, which could act as a map from window::Ids to Uis.
Ui

I'm imagining an API that is similar to the window and audio stream building APIs:

let ui = app.new_ui(window_id).build().unwrap();

where the user can optionally specify some custom theme or non-window dimensions via builder methods if they wish.
Regular conrod apps need to manually convert and pass input events to the Ui. As we have access to both the event stream and the Uis, we could:
1. Convert events to conrod::Inputs behind the scenes.
2. Call Ui::handle_input for each valid input event.
One potential issue with the second step is that if a user wants to hide certain input/events from the Ui (if e.g. some custom game object or something is covering it), this may be more difficult to do if we're automatically submitting inputs to it. Perhaps we can find some nice API for filtering out certain events from the Ui? Otherwise it might be best if we simply offer event conversion instead, so that the user just has to do something like this:
if let Some(input) = window_event.into_ui_input() {
    ui.handle_input(input);
}
where the result of into_ui_input() is only Some if it can be interpreted as some input. This way the user can check the input and decide whether or not they want to submit it.
Another possible solution is to automatically submit user input by default, but add a Ui builder method along the lines of .automatic_input_handling(false) which allows users to opt out of this if they wish to manually filter and submit certain input events. This probably sounds like the most user-friendly option as I'd imagine in most cases submitting all input is probably fine as the Ui most often sits on top anyway.
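The Option-returning conversion pattern can be sketched with self-contained placeholder types (nannou's real WindowEvent and conrod's Input are of course much richer than this):

```rust
// Placeholder event and input types for illustration only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum WindowEvent {
    MouseMoved { x: f64, y: f64 },
    Resized { w: u32, h: u32 },
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum UiInput {
    MouseMoved { x: f64, y: f64 },
}

impl WindowEvent {
    /// Returns `Some` only if the event can be interpreted as UI input,
    /// letting the user inspect and filter before submitting it.
    fn into_ui_input(self) -> Option<UiInput> {
        match self {
            WindowEvent::MouseMoved { x, y } => Some(UiInput::MouseMoved { x, y }),
            // In this sketch, resizes aren't considered UI input.
            WindowEvent::Resized { .. } => None,
        }
    }
}
```

The user's filtering hook is then just an `if let Some(input) = ...` around the submission call.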
Ui

Typically a conrod Ui is drawn via either .draw() or .draw_if_changed(), both of which return a large list of graphics primitives which can be rendered by the user however they wish.
We could add two complementary ui.draw_to_frame(&frame) and ui.draw_to_frame_if_changed(&frame) methods to simplify this for nannou's case.
Seeing as we're going with (0, 0) centre, y pointing upwards for nannou, the conrod coordinate system should feel pretty natural as it uses the same system.
It would also be really nice to have a GUI framework that was built upon the components that come with nannou for consistency, such as event handling, graphics, colours, text layout, math (points, vectors, matrices), etc.
Seeing as so much work has already gone into it, it would be worth experimenting with how tightly we can integrate conrod so that it feels this transparent. I can foresee there being some limitations though:
- conrod uses [f64; 2] for Point types
- conrod has its own geometry types (Range, Rect, etc)
- conrod has its own graphics primitives (Rectangle, Triangles, etc)
- conrod has its own color module (nowhere near as comprehensive as the palette crate which we will probs use).

If we decide to roll our own, we could still likely take a lot of logic directly from conrod to make the process easier (e.g. text layout, scroll logic, widget graph, etc).
It could be worth experimenting with a design that allowed for both Retained and Immediate style GUI too. I can imagine a retained-mode GUI providing an optional immediate-mode layer on top, where the immediate layer acted as a cache for retained widgets (this is pretty much how conrod works internally anyways, despite only providing an immediate API).
If we're to take this route, it would be worth fleshing out the graphics API first so that this can be built on top.
const fixed size arrays
Indices array
Variants

draw.mesh() to support:
- .colored_tris(tris)
- .textured_tris(tex, tris)
- .colored_points(pts) (calls colored_tris internally).
- .textured_points(pts) (calls textured_tris internally).
- .colored_indexed(verts, indices)
- .textured_indexed(tex, verts, indices)