trevoke / dwarlixir-mistakes

The Big Elixir 2019 - "Dwarlixir: mistakes were made"

Home Page: http://blog.trevoke.net/dwarlixir-mistakes

Languages: CSS 27.87%, HTML 19.30%, JavaScript 52.84%
Topics: talk

dwarlixir-mistakes's People

Contributors: dependabot[bot], trevoke

dwarlixir-mistakes's Issues

unsorted feedback

The algorithm improvement section seemed promising but was hard to follow. Partly the model of the graph was a little confusing (maybe more info about Hunt the Wumpus, or, if the details of the graph don't matter for the algorithm, cut the image and details and just leave it at graph traversal). Going from Enum.flat_map to List.foldl is sort of interesting, but foldl is basically the same as Enum.reduce, right? I know basically everything in Enum is mostly built on reduce, and that reduce tends to be more performant than some of the other functions as a result. You could also play with or mention for comprehensions, which I think are a macro that compiles down to reduce and are generally a more performant option than flat_map or other Enum functions, though a) I'm not sure that applies here and b) it may be worth confirming. Either way, it might make it clearer that this isn't entirely about List vs. Enum but about implementation details of particular Enum functions; though I could be wrong, and maybe the protocol overhead of Enum plays a role too? Point being, if this graph happened to be a recursive map with performance problems using flat_map, I think reduce would still help a lot; something like the sketch below.
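
E.g. something like this; the adjacency map is made up, not the talk's graph, just to show that foldl and reduce have the same shape here and what the flat_map version collapses into:

```elixir
# Made-up adjacency map; collect the neighbors of a set of rooms.
graph = %{1 => [2, 3], 2 => [3], 3 => [1], 4 => []}
rooms = [1, 2, 4]

# Builds one list per room and concatenates them all.
Enum.flat_map(rooms, fn room -> Map.get(graph, room, []) end)
# => [2, 3, 3]

# Same thing as a single accumulator pass; for lists, List.foldl/3 and
# Enum.reduce/3 are interchangeable here.
Enum.reduce(rooms, [], fn room, acc -> Map.get(graph, room, []) ++ acc end)
# => [3, 2, 3]  (order differs; reverse or sort if it matters)

List.foldl(rooms, [], fn room, acc -> Map.get(graph, room, []) ++ acc end)
# => [3, 2, 3]
```

Whether the reduce/foldl version actually wins over flat_map probably depends on what the mapper returns and how big the lists are, so it seems worth benchmarking rather than asserting.
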
Also, it's a little odd that a later slide adds a few lines of context that aren't in the earlier slide; I think they should probably be in both (or neither, perhaps, but both seems better).
On extreme local state, and maybe in a few other places, it may be worth throwing in a few extra slides for things mentioned but not shown, as bullet/reference points, like the dining philosophers problem. I thought of it on the deadlocks slide, and you talked about it a minute or two later, but there's no slide referencing it, and someone might want it on screen to copy down, look into later, or check that they heard the name correctly.
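
For reference, this is the shape of deadlock I mean; purely illustrative (the module and names are made up, not code from the talk): two GenServers that synchronously call each other, so neither handle_call can finish and both inner calls blow up with a timeout after the default 5 seconds.

```elixir
defmodule Philosopher do
  use GenServer

  # Plain start (not start_link) so the inevitable crash doesn't take the shell down too.
  def start(name), do: GenServer.start(__MODULE__, name, name: name)

  @impl true
  def init(name), do: {:ok, name}

  @impl true
  def handle_call({:borrow_fork_from, other}, _from, name) do
    # Blocks waiting on a peer that is blocked waiting on us: deadlock.
    reply = GenServer.call(other, {:borrow_fork_from, name})
    {:reply, reply, name}
  end
end

{:ok, _} = Philosopher.start(:aristotle)
{:ok, _} = Philosopher.start(:plato)

# Fire both calls concurrently so each philosopher gets stuck mid-handle_call.
Task.start(fn -> GenServer.call(:aristotle, {:borrow_fork_from, :plato}) end)
Task.start(fn -> GenServer.call(:plato, {:borrow_fork_from, :aristotle}) end)
```
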

In distributed state, I don't remember if you talked about or explicitly labeled your realization that you'd created caches by having more local copies of state, but that's a natural place to remind people how hard cache invalidation is, maybe in a slide, maybe with the "two hard things" reference, whatever. It's a useful reference point because a) distributed state, if it's happening, should be something like sharding, not caching, and b) anything that is effectively a cache needs to explicitly and carefully address the hard problem of invalidation.
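
The shape I mean, as a tiny made-up sketch (the Source agent is hypothetical, not from the codebase): the second copy is a cache whether you call it one or not, and nothing ever invalidates it.

```elixir
defmodule Source do
  use Agent

  def start_link(value), do: Agent.start_link(fn -> value end, name: __MODULE__)
  def get, do: Agent.get(__MODULE__, & &1)
  def put(value), do: Agent.update(__MODULE__, fn _ -> value end)
end

{:ok, _} = Source.start_link(%{gold: 10})

local_copy = Source.get()   # a second copy of the state now exists
Source.put(%{gold: 0})      # ...and the source of truth moves on without it

Source.get()   # => %{gold: 0}
local_copy     # => %{gold: 10}, stale until something explicitly refreshes it
```
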
The heartbeat thing is hilarious with the CPU usage. Maybe worth showing a visualization/screenshot of the actual CPU activity it created, from Activity Monitor or whatever?

Also, it's not totally clear from reviewing the slides how separate the scheduler/tick issue and the flooding-processes-with-messages issue are. They seem like potentially one issue that could be merged, but maybe I'm forgetting different causes from the spoken parts of the talk.
The misconception text is very similar, though: "There won't be a sizable impact to sending lots of processes a message at the same time" vs. "It's hard to send a process too many messages". If they really are distinct, maybe they could be differentiated better in the slides/text.
I guess one is about a single process and the other is about collective messaging to the system, but the second one feels like a detail/extreme case for humor or punctuation at the end of the first.
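
For the single-process version, a rough sketch (numbers picked arbitrarily) of why "it's hard to send a process too many messages" is a misconception: nothing pushes back on the sender, so the mailbox just grows.

```elixir
# A slow consumer: pretend each message takes ~1ms of work.
slow =
  spawn(fn ->
    loop = fn loop ->
      receive do
        :tick ->
          Process.sleep(1)
          loop.(loop)
      end
    end

    loop.(loop)
  end)

# A fast producer: these sends all return immediately.
for _ <- 1..100_000, do: send(slow, :tick)

# The queue only drains at the consumer's pace, so right after the loop this
# still shows a message_queue_len close to 100_000, plus the memory holding it.
Process.info(slow, [:message_queue_len, :memory])
```
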

Maybe more details about what the OOM killer behavior looked like? The BEAM runs one scheduler thread per core, right? So was it killing a single scheduler at a time while the others were OK? How did you diagnose it, and how would you save someone else time diagnosing this if they were seeing something similar and wanted to be reminded of it or rule it out?
Also curious: you run Linux, right? Do you know whether this would look very different on BSD or macOS, or inside a Docker container vs. outside?
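
Not from the talk, but here are the first things I'd check from a live shell while memory climbs (as far as I know the Linux OOM killer kills the whole BEAM OS process rather than individual schedulers, so afterwards the evidence is mostly in the kernel log via dmesg):

```elixir
# Where the memory is going, by category (bytes).
:erlang.memory()
# => [total: ..., processes: ..., ets: ..., binary: ..., ...]

# Top five processes by memory.
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid, :memory)} end)
|> Enum.reject(fn {_pid, info} -> is_nil(info) end)
|> Enum.sort_by(fn {_pid, {:memory, bytes}} -> bytes end, :desc)
|> Enum.take(5)
```
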

reductions

So - I figured out why they're called "reductions". It's shorthand for "goal reductions" (I mean, they're even shortened to "reds" now ...), which is a leftover from when Erlang was written in Prolog. A goal reduction in Prolog is essentially a constituent operation of a larger program. That does literally mean processes have a set number of operations they're allowed to run; the BEAM VM, though, as we saw last night, now uses this as an abstraction to help absorb operational spikes.
http://erlang.org/pipermail/erlang-questions/2017-March/091832.html

processes sending messages to a process with a long message queue
are penalized by increasing the number of reductions it costs to send the
message. This is an attempt by the runtime system to allow processes with
long message queues to catch up by slowing down the processes sending the
messages in the first place. The latter bottleneck often manifests itself
in a reduction of the overall throughput of the system.

https://github.com/happi/theBeamBook/blob/master/chapters/scheduling.asciidoc#reductions
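
Both numbers the quote talks about are easy to watch from a live shell, which might be a nice concrete hook for this slide:

```elixir
# Reductions consumed so far by a process, plus its current mailbox length.
Process.info(self(), [:reductions, :message_queue_len])
# => [reductions: ..., message_queue_len: 0]

# Whole-VM view: {total_reductions_since_start, reductions_since_last_call},
# handy for before/after comparisons around a spike.
:erlang.statistics(:reductions)
```
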
