
arximboldi / lager


C++ library for value-oriented design using the unidirectional data-flow architecture — Redux for C++

Home Page: https://sinusoid.es/lager/

License: MIT License

C++ 50.58% CMake 3.35% Nix 0.83% Elm 1.29% HTML 0.82% Makefile 0.03% Python 0.12% JavaScript 40.99% Emacs Lisp 0.02% QML 1.31% Starlark 0.11% SCSS 0.55%
Topics: library, redux, interactive, functional-programming, value-semantics, cpp17

lager's Introduction



lager is a C++ library to assist value-oriented design by implementing the unidirectional data-flow architecture. It is heavily inspired by Elm and Redux, and enables composable designs by promoting the use of simple value types and testable application logic via pure functions. And you get time-travel for free!

This project is part of a long-term vision of making interactive and concurrent C++ programs easier to write. Help this project's long-term sustainability by becoming a patron or buying a sponsorship package: [email protected]

Examples

For a guided introductory tour with code samples, please read the architecture overview section. Other examples:

  • Counter, a minimalistic example with multiple UIs (link).
  • Autopong, a basic game using SDL2 (link).
  • Ewig, a terminal text editor with undo, asynchronous loading, and more (link).

Why?

Most interactive software of the last few decades has been written using an object-oriented interpretation of the Model View Controller design. This architecture provides nice separation of concerns, allowing the core application logic to be separate from the UI, and a good sense of modularity. However, its reliance on stateful object graphs makes the software hard to test or parallelize, and its reliance on fine-grained callbacks makes composition hard, resulting in subtle problems that are hard to debug.

Value-based unidirectional data-flow tackles a few of these problems:

  • Thanks to immutability and value-types, it is very easy to add concurrency as threads can operate on their local copies of the data without mutexes or other flaky synchronization mechanisms. Instead, worker threads communicate their results back by dispatching actions to the main thread.
  • The application logic is made of pure functions that can be easily tested and are fully reproducible. They interact with the world via special side-effect procedures that are loosely coupled to the services they need via dependency injection (see the sketch after this list).
  • This also means that data and call-graphs are always trees or DAGs (instead of cyclical graphs), with explicit composition that is easy to trace and debug. You can also always snapshot the state, making undo and time-travel easy peasy!
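As a rough illustration of what these points mean in code, here is a minimal sketch (with made-up names, not taken from lager's examples): the whole application state is one value type, actions are plain values, and the update step is a pure function from model and action to a new model.

#include <type_traits>
#include <variant>

struct model { int value = 0; };             // the entire application state is one value

struct increment_action {};                  // actions are plain values too
struct reset_action { int new_value = 0; };
using action = std::variant<increment_action, reset_action>;

// Pure and easily testable: no I/O, no hidden state, fully reproducible.
model update(model m, action a)
{
    return std::visit(
        [&](auto act) {
            using T = std::decay_t<decltype(act)>;
            if constexpr (std::is_same_v<T, increment_action>)
                m.value += 1;
            else
                m.value = act.new_value;
            return m;
        },
        a);
}

Everything else (rendering, side effects, the event loop) is layered around this function, which is what makes snapshotting, undo and time-travel cheap to add.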

Dependencies

This library is written in C++17, so a compliant compiler and standard library are necessary. It is continuously tested with GCC 7, but it might work with other compilers and versions.

It also depends on Zug and Boost Hana. Some optional extensions and modules may have other dependencies documented in their respective sections.

Usage

This is a header-only library: you can just copy the lager subfolder somewhere in your include path.

Some components, like the time-travelling debugger, also require the installation of extra files.

You can use CMake to install the library in your system once you have manually cloned the repository:

mkdir -p build && cd build
cmake .. && sudo make install
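
Once the headers are visible to your compiler, nothing else is needed at build time beyond C++17. As a rough sketch (the compiler and paths here are assumptions about your setup, not a prescribed command), compiling a translation unit against a default install prefix could look like:

g++ -std=c++17 -I/usr/local/include my_app.cpp -o my_app

Remember that the Zug and Boost Hana headers mentioned above must also be on the include path.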

Development

In order to develop the library, you will need to compile and run the examples, tests and benchmarks. These require some additional tools. The easiest way to install them is by using the Nix package manager. At the root of the repository just type:

nix-shell

This will download all required dependencies and create an isolated environment in which you can use these dependencies, without polluting your system.

Then you can proceed to generate a development project using CMake:

mkdir build && cd build
cmake ..

From then on, one may build and run all tests by doing:

make check

License

Boost logo

This software is licensed under the MIT license.

The full text of the license can be accessed via this link and is also included in the LICENSE file of this software package.

lager's People

Contributors

altschuler, arximboldi, asa, ashtum, carsten-grimm-at-ipolog, cjbussey, colugomusic, craigscott-crascit, cyrilromain, dgehri, dimula73, gracicot, jarekglobus, jarzec, jcelerier, kamibo, kfish, khmiller-globus, ltjax, luismerino, mjcaisse, nyanpasu64, pinotree, siapran, thecoconutchef, timpatt, tusooa, vincent-hui, waybeforenow, xzorit


lager's Issues

Maybe not inject event loop

I don't think you should inject an event loop into the store. As I said at the conference, I think one defining aspect of the UI feedback loop is that it is synchronous. If you want to do async, you should do that in a thunk or another "advanced action-generation step".
Either way, it's kind of orthogonal anyways, and you can put the dispatch call into an event, if you really wanted - without having the event loop as part of the store.

Integrating with a QML application

I am trying to build a QML application using lager. Looking at the different sources (lager documentation, talks on lager/immer/ewig), there are multiple ways to do this. What would be the most idiomatic way to incorporate the lager architecture into a QML application? It might be a good idea to put this into the documentation, which I am willing to do once I know the answer to my question.

Inconsistency between "Genericity" in doc and API

The section https://sinusoid.es/lager/modularity.html#dealing-with-underlying-effects has code:

       [&] (DocumentAction a) {
           auto doc = static_cast<DocumentModel>(m);
           auto eff = lager::effect<DocumentAction, DocumentDeps>{lager::noop};
           lager::invoke_reducer<DocumentDeps>(r, doc, a, [&](auto e) {
               eff = e;
           });
           m.position = m.history.size();
           m.history = m.history.take(m.position).push_back(doc);
           return {m, eff};
       },

It calls invoke_reducer as lager::invoke_reducer<DocumentDeps>(r, doc, a, <lambda>); and discards its return value.
However, the API specifies that invoke_reducer "returns the resulting model." These are inconsistent.

After peeking at the source code, it turns out that the API documentation is consistent with the implementation. So this call to invoke_reducer will not update the underlying model as expected.

From my point of view, a possible change would be to turn this function into:

       [&] (DocumentAction a) {
           auto eff = lager::effect<DocumentAction, DocumentDeps>{lager::noop};
           auto doc = lager::invoke_reducer<DocumentDeps>(r, static_cast<DocumentModel>(m), a, [&](auto e) {
               eff = e;
           });
           m.position = m.history.size();
           m.history = m.history.take(m.position).push_back(doc);
           return {m, eff};
       },

Composing modules does not compile

We have defined the following structs for our main module:

using Action = std::variant<ViewportAction>;

struct AppState {
    Viewport viewport;
};

using XournalppResult = std::pair<AppState, lager::effect<Action>>;
using XournalppStore = lager::store<Action, XournalppResult>;

auto update(AppState model, Action action) -> XournalppResult;

The viewport module contains the following:

struct Viewport {
    int width;
    int height;
    double x;
    double y;
    double rawScale;
};

// Actions
struct Scroll {
    enum Direction { HORIZONTAL, VERTICAL };
    Direction direction;
    double difference;
};

struct Scale {
    double rawScale;
};

struct Resize {
    int width;
    int height;
};

using ViewportAction = std::variant<Scroll, Resize, Scale>;
using ViewportResult = std::pair<Viewport, lager::effect<ViewportAction>>;

auto viewportUpdate(Viewport model, ViewportAction action) -> ViewportResult;

When trying to compile the following, it fails:

auto state = lager::make_store<Action>(AppState{}, update, with_gtk_event_loop{});
auto window = MainWindow{std::move(state)};

MainWindow has a pass-by-value constructor and internally std::moves the store into a local variable.

We get the following error message:

/usr/local/include/lager/watch.hpp:50:11: error: no matching constructor for initialization of 'lager::watchable_base<lager::detail::store_node_base<std::variant<std::variant<Scroll, Resize, Scale> >, std::pair<AppState, lager::effect<std::variant<std::variant<Scroll, Resize, Scale> >, lager::deps<> > > > >::node_ptr_t' (aka 'shared_ptr<lager::detail::store_node_base<std::variant<std::variant<Scroll, Resize, Scale> >, std::pair<AppState, lager::effect<std::variant<std::variant<Scroll, Resize, Scale> >, lager::deps<> > > > >') [clang-diagnostic-error]
        : node_(std::move(x.node_))

Add post dispatch effects (dispatch futures)

The idea is that dispatch() may return a future that is realized when the action is actually executed. This would allow us to do stuff like:

store.dispatch(some_action{})
    .then([](auto ctx) {
         // more effects depending on the action having been delivered
    });

no matching constructor

Hello, thank you for the library!

I am using the clang++ compiler on macOS:

clang version 11.0.0
Target: x86_64-apple-darwin20.2.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin

I just got started using this library, and I want to create a modular store like in this link.

The counter::model and todo::model are the models that I got from your examples.

struct model {
  counter::model counter;
  todo::model    todo;
};

using action = std::variant<counter::action, todo::action>;

model init();

model update(model app, action act);

using boost::fusion::operators::operator==;
using boost::fusion::operators::operator!=;

LAGER_CEREAL_STRUCT(modules::app::model, (counter)(todo));
LAGER_CEREAL_STRUCT(modules::app::action);

I created the store as a class attribute like this:

class store_impl {
public:
  using type = lager::store<modules::app::action, modules::app::model>;

  store_impl(logger_t* logger, io_t* io)
      : store_(lager::make_store<modules::app::action>(
          modules::app::init(),
          modules::app::update,
          lager::with_asio_event_loop{io->get_executor()},
          zug::comp(
#ifdef DEBUG
 			  lager::with_debugger(debugger_),
#endif
              lager::identity))),
#ifdef DEBUG
        debugger_(logger),
#endif
        logger_{logger} {
    logger_->debug("Creating store instance");
  }

  /**
   * Raw Lager store (const)
   *
   * @return raw Lager store (const)
   */
  virtual const type& raw() const override { return store_; }

  /**
   * Raw Lager store
   *
   * @return raw Lager store
   */
  virtual type& raw() override { return store_; }

  /**
   * Get store reader
   *
   * @return store reader
   */
  virtual decltype(std::declval<type>().get()) get()
      const override {
    return store_.get();
  }

  /**
   * Dispatch
   */
  virtual void dispatch(type::action_t action) override {
    store_.dispatch(action);
  }

private:
  /**
   * Lager store
   */
  store_t::type store_;

#ifdef DEBUG
  /**
   * Debugger
   */
  debug_logger debugger_;
#endif
  /**
   * Logger
   */
  logger_t* logger_;
};

It is fine without DEBUG defined.
However, when I tried to activate DEBUG and use lager::with_debugger, I got this error:

../third-parties/lager/lager/watch.hpp:49:11: error: no matching constructor for initialization of 'lager::watchable_base<lager::detail::store_node_base<std::__1::variant<std::__1::variant<modules::counter::increment_action, modules::counter::decrement_action, modules::counter::reset_action>, std::__1::variant<modules::todo::add_todo_action, std::__1::pair<unsigned long, std::__1::variant<modules::todo::toggle_item_action, modules::todo::remove_item_action>>>>, modules::app::model>>::node_ptr_t' (aka 'shared_ptr<lager::detail::store_node_base<std::__1::variant<std::__1::variant<modules::counter::increment_action, modules::counter::decrement_action, modules::counter::reset_action>, std::__1::variant<modules::todo::add_todo_action, std::__1::pair<unsigned long, std::__1::variant<modules::todo::toggle_item_action, modules::todo::remove_item_action>>>>, modules::app::model>>')
        : node_(std::move(x.node_))

Is there something that I missed?
Thank you!

Fix edge case in lenses::bind_opt

https://github.com/arximboldi/lager/blob/master/lager/lenses/optional.hpp#L37

Consider the following:

struct foo {
	optional<int> value;
};

using model = vector<foo>;

lens<model, optional<int>> first_value = at(0) | bind_opt(attr(&foo::value));

model clear_first_value(model m) {
	return set(first_value, m, nullopt);
}

The attr lens will be bypassed, because we are writing nullopt back into the lifted lens. This should not be the case, because the original lens can handle nullopt. I need to find a solution that lets me keep most of the logic from opt_impl factored.

Batch updates and readers

Consider the following program:

#include <lager/store.hpp>
#include <lager/reader.hpp>
#include <lager/event_loop/boost_asio.hpp>

#include <thread>
#include <boost/asio.hpp>

#include <iostream>

using Model = int;
using Action = int;
Model update(Model orig, Action add)
{
    return orig + add;
}

int main()
{
    boost::asio::io_context io;
    auto store = lager::make_store<Action>(
        Model{},
        &update,
        lager::with_boost_asio_event_loop{io.get_executor()});

    auto reader = lager::reader<Model>(store);

    lager::watch(reader,
                 [](Model m) {
                     std::cerr << "model is now: " << m << std::endl;
                 });

    auto guard = boost::asio::executor_work_guard(io.get_executor());

    std::thread([&] { io.run(); }).detach();

    for (int i = 0; i < 100; ++i) {
        store.dispatch(1);
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

One may expect that upon running this, one will get 100 lines of output counting from 1 to 100. But this is actually not the case: it only starts outputting at around 50 and will sometimes skip values.

Is this expected? From my understanding, dispatch() is just a call to the event loop's post(), which should run the whole push_down() process, including those notify()s, which basically call the connected callback function, in the io_context's thread (which we detach()ed earlier). If this is correct, there should not be any data races and no output should be skipped... but that is not the case.

The thing behind this is that I am trying to use store as an event listener (https://gitlab.com/kazv/libkazv/-/blob/servant/src/eventemitter/lagerstoreeventemitter.hpp). Or does this idea just not work?

Template instantiation error: lager/detail/signal.hpp

I encounter the following errors when attempting to compile this simple example from the doc.
https://gist.github.com/odecaux/17cc74b005dbbdf76948ccaf153b93b7

I'm using the latest commit: d0359a3

I have the exact same output on both clang-cl 10.0.0 and clang 10.0.1 on my Linux machine.

In file included from F:\Sources\Lager\First_Example\main.cpp:6:
In file included from F:\Sources\vcpkg\installed\x86-windows\include\lager/store.hpp:17:
In file included from F:\Sources\vcpkg\installed\x86-windows\include\lager/state.hpp:16:
In file included from F:\Sources\vcpkg\installed\x86-windows\include\lager/cursor.hpp:16:
In file included from F:\Sources\vcpkg\installed\x86-windows\include\lager/detail/nodes.hpp:41:
F:\Sources\vcpkg\installed\x86-windows\include\lager/detail/signal.hpp(49,58): error: too few arguments to function call, expected 2, have 1
        void operator()(Args... args) final { fn_(args...); }
                                              ~~~        ^
F:\Sources\vcpkg\installed\x86-windows\include\lager/detail/signal.hpp(46,9): note: in instantiation of member function 'lager::detail::signal<const model &>::slot<void (*)(model, model)>::operator()' requested here
        slot(Fn fn)
        ^
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29333\include\memory(2833,32): note: in instantiation of member function 'lager::detail::signal<const model &>::slot<void (*)(model, model)>::slot' requested here
    return unique_ptr<_Ty>(new _Ty(_STD forward<_Types>(_Args)...));
                               ^
F:\Sources\vcpkg\installed\x86-windows\include\lager/detail/signal.hpp(66,29): note: in instantiation of function template specialization 'std::make_unique<lager::detail::signal<const model &>::slot<void (*)(model, model)>, void (&)(model, model), 0>' requested here
        auto s       = std::make_unique<slot_t>(std::forward<Fn>(fn));
                            ^
F:\Sources\vcpkg\installed\x86-windows\include\lager/watch.hpp(89,34): note: in instantiation of function template specialization 'lager::detail::signal<const model &>::connect<void (&)(model, model)>' requested here
        conns_.push_back(base_t::connect(std::forward<CallbackT>(callback)));
                                 ^
F:\Sources\vcpkg\installed\x86-windows\include\lager/watch.hpp(99,18): note: in instantiation of function template specialization 'lager::watchable_base<lager::detail::store_node_base<std::variant<increment_action, decrement_action, reset_action>, model> >::watch<void (&)(model, model)>' requested here
    return value.watch(std::forward<CallbackT>(callback));
                 ^
1 error generated.

GCC 9.3.0 (Linux machine) gives me:

In file included from /home/odecaux/libraries/lager/lager/detail/nodes.hpp:41,
                 from /home/odecaux/libraries/lager/lager/cursor.hpp:16,
                 from /home/odecaux/libraries/lager/lager/state.hpp:16,
                 from /home/odecaux/libraries/lager/lager/store.hpp:17,
                 from main.cpp:6:
/home/odecaux/libraries/lager/lager/detail/signal.hpp: In instantiation of ‘void lager::detail::signal<Args>::slot<Fn>::operator()(Args ...) [with Fn = void (*)(model, model); Args = {const model&}]’:
/home/odecaux/libraries/lager/lager/detail/signal.hpp:49:14:   required from here
/home/odecaux/libraries/lager/lager/detail/signal.hpp:49:47: error: too few arguments to function
   49 |         void operator()(Args... args) final { fn_(args...); }
        |                                               ^~~

license

Hey juanpe! The license seems to be a bit unclear right now: the top-level file says MIT, while the sources say GPL3. That's not intentional, is it?

Brainstorm: operators for `xform` and `zoom` (IDEAS/OPINIONS WANTED)

At the moment, there are three ways you can derive a cursor:

  • operator[x]: you can pass an index or a lens
  • .zoom(lens): you can pass a lens
  • .xform(xf [, wfx]): you can pass one or two transducers

This is alright, but I am not a super fan of the .xform() part in particular; it leads to code like:

data.xform(zug::map([](auto&& x) {
  // three levels of parens/braces until you get to the meat
}))

One idea would be to use an operator for that, for example operator^:

data ^ zug::map([] (auto&& x) { ... })
     ^ zug::filter([] (auto&& x) { ... });

It would then be nice to use the same syntax for indexes and lenses; if we used operator/ instead, we could write:

data / &foo::bar / 0 / or_default 
     / zug::filter([](auto&& x) { return x > 0; })
     / zug::map([](auto&& x) { 
           return std::to_string(x);
       })

Sadly, we can't SFINAE on transducers or lenses, because we use deduced return types all over the place when defining them. So we need a different operator for each, but we can combine the two:

data / &foo::bar / 0 / or_default 
  ^ zug::filter([](auto&& x) { return x > 0; })
  ^ zug::map([](auto&& x) { 
        return std::to_string(x);
    })

This works because of operator precedence, but if we put the transducers first then we would need parentheses:

(data
  ^ zug::filter([](auto&& x) { return x > 0; })
  ^ zug::map([](auto&& x) { 
        return std::to_string(x);
    }))
  / &foo::bar / 0 / or_default 

That, on the other hand, would be more obvious if we just forget about using a symmetric operator for lenses and stick to operator[]:

(data
  ^ zug::filter([](auto&& x) { return x > 0; })
  ^ zug::map([](auto&& x) { 
        return std::to_string(x);
    }))
  [&foo::bar][0][or_default]

Also, instead of operator^ or /, there are other candidates, for example >> or / for transducers?

Improve Qt MOC compilation time

The Qt MOC compiler is notoriously slow when preprocessing.

In particular, each time code includes lager/extra/qt.hpp, the MOC compiler spends about four seconds on precompiling it, even though the Qt-relevant code is less than 100 lines.

Simply excluding the non-Qt-relevant code in lager/extra/qt.hpp as follows would reduce MOC processing time to a fraction of a second:

#ifndef Q_MOC_RUN
#include <lager/cursor.hpp>
#include <lager/reader.hpp>
#include <lager/watch.hpp>
#endif

Clang 10 and 11 cannot generate optimized code for house example

If Clang 10 or 11 is chosen to compile the house example https://godbolt.org/z/cyeawy,
we can see that the generated code is not as optimized as the GCC counterpart.
Is it possible to change the implementation of lager so that Clang 11 can generate optimized code?

Thanks

The generated code is:

main: # @main
  mov qword ptr [rsp - 32], offset .L.str
  mov byte ptr [rsp - 24], 0
  mov qword ptr [rsp - 16], offset .L.str.1
  mov byte ptr [rsp - 8], 0
  movups xmm0, xmmword ptr [rsp - 16]
  movaps xmmword ptr [rsp - 72], xmm0
  mov rax, qword ptr [rsp - 72]
  movaps xmmword ptr [rsp - 56], xmm0
  mov rcx, qword ptr [rsp - 32]
  mov dl, byte ptr [rsp - 24]
  mov esi, dword ptr [rsp - 23]
  movzx edi, word ptr [rsp - 19]
  mov r8b, byte ptr [rsp - 17]
  mov qword ptr [rsp - 56], rax
  mov byte ptr [rsp - 48], 1
  mov qword ptr [rsp - 104], rcx
  mov byte ptr [rsp - 96], dl
  mov dword ptr [rsp - 95], esi
  mov word ptr [rsp - 91], di
  mov byte ptr [rsp - 89], r8b
  mov rax, qword ptr [rsp - 56]
  mov qword ptr [rsp - 88], rax
  mov al, byte ptr [rsp - 48]
  mov byte ptr [rsp - 80], al
  mov eax, dword ptr [rsp - 47]
  mov dword ptr [rsp - 79], eax
  movzx eax, word ptr [rsp - 43]
  mov word ptr [rsp - 75], ax
  mov al, byte ptr [rsp - 41]
  mov byte ptr [rsp - 73], al
  movzx eax, byte ptr [rsp - 80]
  ret
.L.str:
  .asciz "kitchen"

.L.str.1:
  .asciz "bedroom"

Processing long lists

In the QML example, we see that a count is used for the model of a list view: https://github.com/arximboldi/lager/blob/master/example/todo/qml/main.qml#L123.

This is viable in most cases, but if we would like to transform the list, we are in trouble. immer operations are generally efficient, but not so if we have to create new containers out of old ones.

This could include transforming individual items:

auto list = someCursor.make(); // some lager::reader<immer::flex_vector<T>>
auto transform = zug::map([](auto item) { return ...; });
auto transformed = list.map([=](auto container) { return intoImmer(..., transform, container); }).make(); // O(len(list)) on every update

This may be worked around by putting the transformations into the delegate of the list view, but also:

Filtering:

auto filter = zug::filter([](auto item) { return wantThisP(item); });
auto transformed = list.map([=](auto container) { return intoImmer(..., filter, container); }).make(); // O(len(list)) on every update

Still, this might also be worked around by using the visible property on the delegate, but if there are too many non-visible items (as in a search operation) this is definitely not ideal. And then there is sorting:

auto transformed = list.map([=](auto container) { return zug::sorted(container.transient()).persistent(); }); // O(nlogn) on every update where n = len(list)

Maintaining a secondary index (as in https://lily.kazv.moe/kazv/libkazv/-/blob/ba5bbfca67b8b1ae9cf34244dbbccc769b19872b/src/client/clientutil.hpp#L200 combined with https://lily.kazv.moe/kazv/libkazv/-/blob/ba5bbfca67b8b1ae9cf34244dbbccc769b19872b/src/client/room/room-model.cpp#L67, where r.timeline is a list containing all message ids sorted by their timestamp, while r.messages is a map from id to message) might be useful, but it is still inconvenient.

Do you have any ideas about this? Thank you very much.

Implement lenses for std::tuple, std::variant and std::optional

It would be nice to access the Nth element of a tuple (std::get), the Nth type of a variant (also std::get), and the only element of a std::optional.

It would also be nice to have cursors return an optional (or throw?) if the underlying part of the view is not present (out of bounds for collections and tuples, bad variant access, nothing in optional).
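
For the tuple case, here is a sketch (untested, and not part of lager) of what such a lens could look like. It assumes a getter/setter lens constructor along the lines of lager::lenses::getset and the lager/lenses.hpp header; if your version spells these differently, substitute the equivalent helper:

#include <lager/lenses.hpp>   // assumption: provides lager::lenses::getset
#include <cstddef>
#include <tuple>
#include <utility>

// Hypothetical helper: a lens focusing on the Nth element of a std::tuple.
template <std::size_t N>
auto tuple_element_lens()
{
    return lager::lenses::getset(
        // getter: whole -> part
        [](auto const& whole) { return std::get<N>(whole); },
        // setter: (whole, part) -> new whole
        [](auto whole, auto part) {
            std::get<N>(whole) = std::move(part);
            return whole;
        });
}

The variant and optional cases would probably need the partial-lens treatment mentioned above, since the focused alternative may not be present.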

Allow forward-declared reference deps

Now if you do:

lager::context<..., lager::deps<foo&>>

it requires a full definition of foo. It would be nice if it worked with just a forward declaration.

How to implement aggregation?

Consider that we have a component called foo, and it has:

struct foo_model;
using foo_action = std::variant<...>;
auto update_foo(foo_model, foo_action) -> std::pair<foo_model, lager::effect<foo_action, ...>>;

Now we would like to have a component that holds a list of foos:

using id_t = ...;
struct aggregate_model
{
    immer::map<id_t, foo_model> foos;
};

Intuitively, we would like to at least add, remove and update the items in the list.

The question comes in -- how to implement its actions and reducer?

Adding and removing seems simple:

struct add_foo { id_t id; foo_model foo; };
struct remove_foo { id_t id; };

using aggregate_action = std::variant<add_foo, remove_foo>;

auto update_aggregate(aggregate_model m, aggregate_action a)
    -> std::pair<aggregate_model, lager::effect<aggregate_action>>
{
    std::visit(lager::visitor{...}, a);
}

For updating, it seems we have to uniquely identify the individual item first.

One solution might be to wrap the foo_action into another class that contains
the id of the foo one wants to modify, but it now becomes unclear what one
should do with effects that update_foo returns:

struct change_foo { id_t id; foo_action action; };

using aggregate_action = std::variant<add_foo, remove_foo, change_foo>;

auto update_aggregate(aggregate_model m, aggregate_action a)
    -> std::pair<aggregate_model, lager::effect<aggregate_action>>
{
    std::visit(lager::visitor{
        ...,
        [=] (change_foo a) {
            auto foo_to_update = m.foos.at(a.id);
            auto [new_foo, eff] = update_foo(foo_to_update, a.action);
            return {
                aggregate_model{m.foos.set(a.id, new_foo)},
                // What should I do with eff?
            };
        }
    }, a);
}

We cannot return eff directly, because it is of type lager::effect<foo_action>,
but we want lager::effect<aggregate_action>, and aggregate_action does not have
an alternative of type foo_action -- it only has change_foo, because we need to
uniquely identify the foo with an id.

Alternatively, we could change the foo_actions to let them record their id. However,
as the aggregation level increases, we will end up changing the most fundamental models
every time and having to record more and more ids. This is also far from satisfactory,
because this makes the item model (foo) heavily depend on the aggregation model (aggregate):

// Assume foo_action records its id
using aggregate_action = std::variant<add_foo, remove_foo, foo_action>;

auto update_aggregate(aggregate_model m, aggregate_action a)
    -> std::pair<aggregate_model, lager::effect<aggregate_action>>
{
    std::visit(lager::visitor{
        ...,
        [=] (foo_action a) {
            auto foo_to_update = m.foos.at(a.id);
            auto [new_foo, eff] = update_foo(foo_to_update, a);
            return {
                aggregate_model{m.foos.set(a.id, new_foo)},
                eff
            };
        }
    }, a);
}
// What if we want to get an aggregation of aggregation?
struct whole_model { immer::map<id_t, aggregate_model> aggregates; };
using whole_action = std::variant<add_aggregate, remove_aggregate, aggregate_action>;

auto update_whole(whole_model m, whole_action a)
    -> std::pair<whole_model, lager::effect<whole_action>>
{
    std::visit(lager::visitor{
        ...,
        [=] (aggregate_action a) {
            auto [new_aggregate, eff] = m.aggregates.set(a.aggregate_id, update_aggregate(m.aggregates.at(a.id), a));
            return {
                aggregate_model{new_aggregate},
                eff,
            };
        }
    }, a);
}
// Now foo_action must have `foo_id` and `aggregate_id` in order for the
// context to accept that effect.

Does this mean that aggregation does not work well with effects?
Or is there a better approach to this problem? Thank you very much for any help.

Lower standard to C++14

C++17 support is not widely available yet. As a compromise, use 17 only in the optional parts.

Missing <memory> include

I have to #include <memory> before including lager for it to work. The other way around doesn't work. Maybe you're missing an #include <memory> in context.hpp?

#include <iostream>
#include <lager/store.hpp>
#include <memory>

int main(int argc, char **argv) {
  std::cout << "Hello, world!" << std::endl;
  return 0;
}

Integrate a React type reconciliation algorithm

To use common UI libraries that are not value-based, you have to signal to them when and how to update.

Maybe it is possible to implement the React way of adapting these two by describing state as a tree and by implementing their diffing algorithm (https://reactjs.org/docs/reconciliation.html).

I noticed in the documentation that a kind of adapter between state and UI was planned anyway.

Specifically, I am trying to use this library in a GTK application, but I do not want to pass the state along the whole widget tree manually and check for changes before redrawing, as I think it would be possible to automate this.

How to use external (spaghetti code) libraries?

As we are currently implementing a new library (it is mostly finished already) to use in Xournalpp, and want to rewrite our main application to use Redux, we now have the following problems:

Prerequisites

  • the library is written as normal object oriented code with listeners, getters, setters and inheritance
  • the main application is using lager and value oriented state

Problems

  • how do we guarantee interoperability that is still clean? (I was thinking of using the library as a side effect)
  • where do we keep the references to our library objects? (this would be in effect dependencies then …)
  • what do we do if we get a simple signal from our library that indicates a view rerender without our internal model actually changing because the library state is not directly part of the application state (in value terms)?
  • we also want to exchange the renderer for our document dynamically. Rendering is part of the library as well as the document.

Allow mixing cursors and readers into a cursor node

Consider:

cursor<vector<T>> collection = ...;
reader<size_t> identifier = ...;

cursor<T> element = with(collection, identifier).xform(
	zug::map([](vector<T> vect, size_t index) -> T {
		return view(at(index), vect);
	}),
	lager::update([](tuple<vector<T>, size_t> whole, T part) -> vector<T> {
		return set(at(whole.second), whole.first, part);
	}));

Alternatively, converting a reader node into a cursor that discards any writes would be acceptable.

Trouble with lenses

I'm trying to get a lens (or a cursor, really) to an item in a tree:
code sample

I can't seem to compose lenses like I thought I'd be able to; am I missing anything?

How to use custom event loops?

I am a little bit confused about how event loops work in this library.

Specifically our application event flow would be something like this:

  • GtkEvents are dispatched on the gtk event loop
  • events are parsed into actions and dispatched
  • reducing and updating state can theoretically be asynchronous as long as event ordering from the gtk loop is preserved
  • rerendering just triggers an asynchronous rerender signal which leads to querying and redrawing on the gtk draw thread

As we will provide pen/touch input there shouldn't be much of a delay when translating pointer movements to state changes.

Dependency on Zug

Hi, real quick: I managed to install the package locally, but it won't run because of a dependency on Zug. After looking around in the code I found it was this package: https://github.com/arximboldi/zug. There is no documentation on that page though; should I install that package locally in the same way as well?

Cereal implementation for immer::map

Is there a cereal serializer (or load/save pair) available for the immer::map type? It seems like there is code for immer::vector, immer::flex_vector etc. in the lager/debug/cereal/ folder, but I cannot find one for map.

I'm asking because adapting the load/save functionality in the todos example is giving a compile error from cereal about missing implementation for the map type.

If it's not available, can someone provide a pointer to how one would go about implementing it? I assume I could adapt the vector/flex_vector code:

namespace cereal {

template <typename Archive,
          typename T,
          typename MP,
          std::uint32_t B,
          std::uint32_t BL>
void CEREAL_SAVE_FUNCTION_NAME(
    Archive& ar, const immer::flex_vector<T, MP, B, BL>& flex_vector)
{
    ar(make_size_tag(static_cast<size_type>(flex_vector.size())));
    for (auto&& v : flex_vector)
        ar(v);
}

template <typename Archive,
          typename T,
          typename MP,
          std::uint32_t B,
          std::uint32_t BL>
void CEREAL_LOAD_FUNCTION_NAME(Archive& ar,
                               immer::flex_vector<T, MP, B, BL>& flex_vector)
{
    size_type size;
    ar(make_size_tag(size));

    auto t = std::move(flex_vector).transient();
    for (auto i = size_type{}; i < size; ++i) {
        T x;
        ar(x);
        t.push_back(std::move(x));
    }
    flex_vector = std::move(t).persistent();

    assert(size == flex_vector.size());
}

} // namespace cereal

My C++ fu is weak so any advice is greatly appreciated! Thanks :)
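
Not an authoritative answer, but here is a sketch (untested) of how the flex_vector code above might be adapted for immer::map. The template parameter list mirrors immer::map's declaration and the use of cereal's make_map_item for the archive format is an assumption; adjust both to the immer and cereal versions you have installed:

namespace cereal {

// Sketch only: the parameters follow immer::map<K, T, Hash, Equal, MP, B>.
template <typename Archive,
          typename K,
          typename T,
          typename Hash,
          typename Equal,
          typename MP,
          std::uint32_t B>
void CEREAL_SAVE_FUNCTION_NAME(Archive& ar,
                               const immer::map<K, T, Hash, Equal, MP, B>& map)
{
    ar(make_size_tag(static_cast<size_type>(map.size())));
    for (auto&& item : map)
        ar(make_map_item(item.first, item.second));
}

template <typename Archive,
          typename K,
          typename T,
          typename Hash,
          typename Equal,
          typename MP,
          std::uint32_t B>
void CEREAL_LOAD_FUNCTION_NAME(Archive& ar,
                               immer::map<K, T, Hash, Equal, MP, B>& map)
{
    size_type size;
    ar(make_size_tag(size));

    // Build the new map through a transient for efficiency, as in the
    // flex_vector loader above.
    auto t = std::move(map).transient();
    for (auto i = size_type{}; i < size; ++i) {
        K k;
        T v;
        ar(make_map_item(k, v));
        t.set(std::move(k), std::move(v));
    }
    map = std::move(t).persistent();

    assert(size == map.size());
}

} // namespace cereal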

Create a release

Hi! Thank you a lot for the awesome library!

Can you please make a release of the library? It'll help a lot with packaging your library in Conan.

Thank you!

Combining cursors

I have something like this model:

struct thing {
    std::string id;
    std::string name;
};

struct model {
    immer::map<std::string, thing> things;
    std::optional<std::string> selected_id;
};

I'm trying to make a cursor that gives me the currently selected thing, based on the things and selected_id fields, because I want to watch without having to do the lookup logic.

So I've tried the following approach (_store is a lager::store with the model)

        auto selected_thing =
                lager::with(_store[&model::things], _store[&model::selected_id])
                        .xform(zug::map([](auto things, auto selected_id) {
                            std::cout << "in xform" << std::endl;
                            return selected_id.has_value() ? std::make_optional(things[selected_id.value()]) : std::nullopt;
                        }))
                        .make();

        lager::watch(selected_thing, [&](auto thing) {
            std::cout << "in watch" << std::endl;
        });

This prints "in xform" once, and never "in watch". Coming from the Rx world, I'm thinking of lager::with as something like the combine latest operator, but that might not be the case?

Is there something obvious wrong with the code above?

PS. I kind of wanted to use the at "operator", but I couldn't think of a way to use a reader as the lookup value. Also, I have a feeling some of this could be simplified; I'm all ears :)

Zoom available as a method for cursor.

In this slide, you introduce a zoom(lens) method on cursors, before replacing it with the operator[] syntax.
Currently, in Lager, zoom is only available as a free function, which is fine, until you reach this kind of situation:

LAGER_QT(imagePath){zoom(boxLens, data
                    [&VolumeTree::children][index]
                    [&VolumeVolumeRegistration::tree])
                    [&VolumeTree::volume][&Volume::name]
                .xf(stdToQtStrMapper)}

Not having an infix solution for zooming makes this kind of construct a bit cumbersome.

Example application using lager

Hi @arximboldi,

TLDR - I used your library and thought you might find this interesting - lager-hierarchy.

This isn't an issue so please feel free to close after you've read this.

I've been playing around with an incredibly simple library that represents an entity hierarchy. The base library is available here - hierarchy. It includes a simple macOS and Windows console application using the library. I made another example that uses Qt here - qt-hierarchy and have an ImGui version as part of this app (relevant file).

I saw the awesome presentation you gave at C++ on Sea 2020 about lager and have finally got around to giving it a try. This mini example again consumes the hierarchy library but uses lager to drive the UI - lager-hierarchy.

It's incredibly simple but I thought it might be interesting to share. There's still a lot more I'd like to add to it (one thing might be swapping out my use of std containers with immer in future).

I'd love to hear any feedback, suggestions or advice!

Thanks for putting together such a cool and interesting library and for the excellent documentation and guide!

Cheers,

Tom

Thread-safety of cursors

I have been thinking that cursors are thread-safe, but it seems that is not the case.

#include <lager/store.hpp>
#include <vector>
#include <boost/asio.hpp>
#include <lager/event_loop/boost_asio.hpp>
#include <chrono>   // for the std::chrono durations used below
#include <cstdlib>  // for std::rand
#include <random>
#include <thread>   // for std::thread and std::this_thread
#include <iostream>

using Model = std::vector<int>;
struct Action {};

const int len = 3000;

Model update(Model, Action)
{
    auto r = std::rand() %3;
    return Model(len, r);
}

void print(Model m)
{
    for (auto s : m) {
        std::cout << s;
    }
    std::cout << std::endl;
}

int main()
{
    boost::asio::io_context io;

    auto store = lager::make_store<Action>(
        Model(len, 0),
        lager::with_boost_asio_event_loop{io.get_executor()}
        );

    auto reader = lager::reader<Model>(store);

    store.dispatch(Action{});

    boost::asio::executor_work_guard g(io.get_executor());

    std::thread([&io] { io.run(); }).detach();

    std::thread([&store] {
                    std::this_thread::sleep_for(std::chrono::milliseconds(1));
                    for (auto i = 0; i < 500; ++i) {
                        std::this_thread::sleep_for(std::chrono::microseconds(39));
                        store.dispatch(Action{});
                    }
                }).detach();



    for (auto i = 0; i < 700; ++i) {
        std::this_thread::sleep_for(std::chrono::microseconds(40));
        print(reader.get());
    }
}

Using this code to execute:

./readertest | perl -ne '$len = 3000;chomp; if ($_ ne "0" x $len and $_ ne "1" x $len and $_ ne "2" x $len) { print;print "\n" }'

If cursors are thread-safe, readertest will always print out lines of 3000 identical digits (all 0, 1 or 2), so the pipe should print out nothing. But in fact it occasionally prints out lines like:

111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111122222222222222222222222222222222222222222222222222222222111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111

(the sequence is mostly 1 but contains some 2 in the middle) which means reader.get() gives us corrupted data.

Does this mean we should not use cursors outside the thread that contains the event loop? In particular, the following operations:

  1. Construct a cursor by deriving from a local cursor object (In the program, calling reader.map(...).make() in the main thread)
  2. Calling get() on a local cursor object (reader.get() in the main thread)
  3. Construct a cursor and use it only in another thread (std::thread([reader2=reader] { reader2.get(); }).detach();)

Which of these are safe, considering the event loop is not running on the main thread?

If any of these are not safe, how can we mitigate the thread-unsafety here?

Thank you very much for your help.

clang format

Using clang-format would be nice to get all formatting problems out of the way.

states/cursors vs store/readers/actions

If I understand correctly, the store only allows the use of actions/reducers and readers, which makes the use of cursors somewhat complicated.
It feels like you have to choose between using the store and the full unidirectional architecture, or using cursors to integrate easily with existing OO code.
Am I understanding this correctly, and if so, which approach would you recommend?

Problems with ordering of chains of actions/effects

This is something that in the beginning I thought was a modeling problem, but I'm starting to find situations that make me wonder.

Example: we have action1, finish_action1 and action2.

  • action1 returns an effect that does something (maybe something as innocent as reading a time-stamp) and dispatches finish_action1, eg:
[](auto&& ctx) { ctx.dispatch(finish_action1{}); }
  • action2 does something, that can be dependent on state touched by finish_action1.
  • The user does:
dispatch(action_1{});
dispatch(action_2{});

The execution order we get is, however, action1, action2, finish_action1, even though one could expect (or require for correctness): action1, finish_action1, action2.

One solution would be to ensure that the effects from an action, and all actions dispatched within it, get queued before any other actions in the queue. The implementation seems non-trivial and can add a small overhead even in cases where it may not be necessary, since it requires keeping a nested queue for actions dispatched within the effect.

Considerations are:

  • Should we put this aspect of queueing in the event_loop interface?
  • Should we add a method dispatch_nested() to consider this special scenario?

Avoid copying when deriving a node to simply access a member

Reader nodes do a significant amount of copying. This assumes that copying is light-weight. This is normally the case when using immer. However, there are situations where copying could be avoided. For example, a derived node could keep a reference to the current data, instead of a copy, when the lens or transducer is simply "accessing" a member.

Add map() and filter() aliases to cursors

Now you need to do:

// let x be a cursor
x.xform(zug::map(...));
x.xform(zug::filter(...));

This is so common that it makes sense to have aliases that do:

x.map(...).filter(...);

Maybe this could be implemented in Zug itself, via some kind of xformable mixin that you can use to bring most of the Zug API as methods.

Confusion about basic example (reducer not used?)

In the Architecture section of the docs, a simple example program is constructed using lager::with_manual_event_loop.

I'm confused because the update function that is created in that example (and in the std version of the counter program in /examples) does not seem to be used anywhere. The simple manual event loop, which takes character inputs from stdin, creates an action using the intent function and dispatches it in the store, but how would the store know that it should run the actions through the update function?

I suspect that the stdin-based event loop should call update and set the store's state using the new state, but I might just be missing something?

Thanks!

EDIT: When looking at 3a9733c I realized it was a namespace issue; it worked after I put the update function into the same namespace as the model.

I'm a C++ rookie, so this might be something obvious, but it had me very confused, so might be worth documenting :)

Asynchronous action feedback / chaining

Assume we have some action that performs some async tasks:

ModelResult update(Model m, SomeAction)
{
    return { m, [](auto &&ctx) { 
        lager::get<SomeAsyncHandler>(ctx).someAsyncRequest()
            .then([=](auto res) { ctx.dispatch(UpdateModelAccordingly{res}); });
    } };
}

And we want to perform some other action after UpdateModelAccordingly has run. Here, using a post-dispatch effect would not work, as we would be processing according to the model, which it does not provide. The flow is roughly like:

SomeAction --| async request |~~~> UpdateModelAccordingly --> (some processing + SomeOtherAction{some data from model})

In the traditional way it may look like:

model.someAction()
    .then([](auto m, auto res) { m.updateModelAccordingly(res); return m; })
    .then([](auto m) { someProcessing(); m.someOtherAction(some data from model); return m; });

It is possible to use wrapper actions for that, but it is cumbersome when the flow gets longer (3-4 requests).
Another solution may be to use lambdas as actions, but then they are no longer good value types.

Is there a better solution for this? Thank you.
