
Comments (11)

jhoffner avatar jhoffner commented on July 28, 2024

I just commented on the related #7. If we used a queue we could use the same container and just pull jobs; however, we are trying to stick with an architecture that allows fresh containers on each run. It costs us a bit to spin up the containers, but it provides a clean slate for users and the ability to do more sophisticated things later on.

Another possibility is to instead use pre-warmed instances. As soon as one instance is done, we bring up a new one and pre-warm a Drip server before serving the next request. We could also play around with doing this pre-warming while another container is running, although we would want to make sure it doesn't affect the running container's performance too much. The big change here would be in how we communicate with the containers, since we wouldn't just be doing a basic run anymore.

from codewars-runner-cli.

jhoffner avatar jhoffner commented on July 28, 2024

@xcthulhu not sure if you saw this comment yet. We can discuss the slow runners here.

from codewars-runner-cli.

xcthulhu avatar xcthulhu commented on July 28, 2024

So here’s where my mind is at for this problem:

  1. Start a JVM process in a Docker container. The main function just listens to stdin until EOF.
  2. Parse stdin into JSON. The following schema seems reasonable:
    {language: 'clojure', solution: '(ns blah) (println "leeerooy!")', test: '(deftest jenkins (is 1 1))', setup: '(ns wow) (defn doge [] :much-lame)'}
  3. Write to files. If it were up to me, I'd make directories with UUIDs for names in '/tmp' and check for collisions and so on.
  4. Compile based on language; actually, in the case of clojure you can just add a classpath containing your clj files to your execution thread and it will compile on the fly.
  5. Execute some (static) function hook in a predictable class/namespace; the function should print to stdout and then the listener should exit.

That way, if you want more JVM listeners, you spin up more containers and just communicate with them via stdin. The containers will never need networking or anything dangerous.

Now, I could write this thing in clojure and be done with clojure and java by this afternoon, but I am not sure that this would be maintainable on your end.
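
For concreteness, a minimal sketch of those five steps (hypothetical names throughout, assuming cheshire is on the classpath, and cheating slightly by using load-file instead of real classpath manipulation):

    ;; Hypothetical listener namespace.
    (ns runner.listener
      (:require [cheshire.core :as json]
                [clojure.java.io :as io]))

    (defn run-payload [{:strs [setup solution test]}]
      ;; 3. Write each piece to its own file under a UUID-named directory in /tmp.
      (let [dir (doto (io/file "/tmp" (str (java.util.UUID/randomUUID))) (.mkdirs))]
        (doseq [[fname code] [["setup.clj" setup] ["solution.clj" solution] ["test.clj" test]]
                :when code]
          (spit (io/file dir fname) code)
          ;; 4. "Compiling" is just loading the file; clojure compiles it on the fly.
          (load-file (str (io/file dir fname))))
        ;; 5. Call a well-known hook if one was defined (hypothetical name), then exit.
        (when-let [hook (resolve 'runner.hook/-main)]
          (hook))))

    (defn -main [& _]
      ;; 1-2. Read stdin until EOF and parse the whole thing as one JSON object.
      (run-payload (json/parse-stream (io/reader *in*)))
      (flush))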


from codewars-runner-cli.

jhoffner avatar jhoffner commented on July 28, 2024

Interesting. So would there be just one container that gets reused or would this use pre-warmed single serving instances?

Ideally we keep the execution logic all within node, just to keep the architecture simple. We are not a Java/Clojure shop, so the lack of experience with those languages is also a factor. If Drip would do the trick for us then we could just use that and node to communicate.

Our original project that was designed to reuse containers did communicate via stdin. We ended up scrapping it to go with something that was easier to manage and that didn't re-use containers, but the stdin code could probably be extracted.

from codewars-runner-cli.

xcthulhu avatar xcthulhu commented on July 28, 2024

> Interesting. So would there be just one container that gets reused or would this use pre-warmed single serving instances?

That's how I'd do it, yeah.

> Ideally we keep the execution logic all within node, just to keep the architecture simple. We are not a Java/Clojure shop, so the lack of experience with those languages is also a factor. If Drip would do the trick for us then we could just use that and node to communicate.

Drip needs to prewarm, and the only way I can think of to do this with Docker is to have some CMD at the end. I do not know of a way to have a Docker container wrap a service and also expose a CLI.

> Our original project that was designed to reuse containers did communicate via stdin. We ended up scrapping it to go with something that was easier to manage and that didn't re-use containers, but the stdin code could probably be extracted.

Well… it's pretty trivial in clojure if you use cheshire for JSON marshaling https://github.com/dakrone/cheshire

       (cheshire.core/parse-stream *in*)

Which is why I figured I could do it quickly…
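
For example, pulling the fields of the schema above out of stdin is only a couple of lines (a hypothetical snippet; passing true as the second argument makes cheshire keywordize the keys):

    (require 'cheshire.core 'clojure.java.io)

    ;; Hypothetical: destructure the payload fields; true keywordizes the keys.
    (let [{:keys [language solution test setup]}
          (cheshire.core/parse-stream (clojure.java.io/reader *in*) true)]
      (println "got a" language "submission"))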

from codewars-runner-cli.

jhoffner avatar jhoffner commented on July 28, 2024

That's how you would do it, meaning reuse containers, or that's how you would do it, meaning pre-warm? :)

We want to stay away from re-using containers for now. Maybe it's just something that I've been holding onto, but I really want to keep each container fresh.

The pre-warm should be ok, because we would be pre-warming the images anyway. The CLI could just be exposed and it could handle starting the Drip service itself. That would be ideal, because it could track when the Drip server is ready, just in case it receives a request before it is.

I'm also realizing that I am not convinced that communicating with the container via a port is unsafe. It seems as though the Docker container already has an IP address that the host can communicate through. We could just have another node server set up to handle requests, but this server would be designed to run within Docker and would simply be a wrapper around the CLI. On startup, the server would run Drip.

from codewars-runner-cli.

xcthulhu avatar xcthulhu commented on July 28, 2024


> That's how you would do it, meaning reuse containers, or that's how you would do it, meaning pre-warm? :)

Pre-warm.

> We want to stay away from re-using containers for now. Maybe it's just something that I've been holding onto, but I really want to keep each container fresh.

I'm of the same mind. There is some really crazy stuff you can do to the JVM. It's safer to have every instance be fresh.

> The pre-warm should be ok, because we would be pre-warming the images anyway. The CLI could just be exposed and it could handle starting the Drip service itself. That would be ideal, because it could track when the Drip server is ready, just in case it receives a request before it is.

Yeah, I just don't know how to do this, which is why I suggested listening to stdin...

> I'm also realizing that I am not convinced that communicating with the container via a port is unsafe. It seems as though the Docker container already has an IP address that the host can communicate through. We could just have another node server set up to handle requests, but this server would be designed to run within Docker and would simply be a wrapper around the CLI. On startup, the server would run Drip.

You're right; exposing a port for an express server is about as insecure as interfacing through stdin/stdout.

Bear in mind that even warmed up, drip has a pretty big overhead (~0.20 seconds on my box). I'd have to do some instrumentation but I suspect listening to stdin will be faster.



from codewars-runner-cli.

jhoffner avatar jhoffner commented on July 28, 2024

> Bear in mind that even warmed up, drip has a pretty big overhead (~0.20 seconds on my box). I'd have to do some instrumentation but I suspect listening to stdin will be faster.

That makes sense. So we could have something like this:

  • A node server that is listening on a port
  • That server spawns a separate JVM-based CLI on startup
  • The server receives a request
  • If the request is for a JVM-based language, it sends the data on to the CLI via stdin
  • The JVM CLI hot-patches its instance somehow with the new code, so no boot-up time is needed
  • The node server receives the final stdout and passes it out of Docker (see the sketch below)
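
A hypothetical sketch of the JVM-CLI half of that pipeline (the node half that listens on the port, spawns this process, and pipes data through is omitted, and the "hot patch" step is reduced to a plain load-string):

    ;; Hypothetical namespace for the long-running JVM side.
    (ns runner.loop
      (:require [cheshire.core :as json]))

    (defn handle [{:strs [solution]}]
      ;; "Hot patch": evaluate the submitted code in this already-warm JVM
      ;; and capture whatever it prints.
      {"stdout" (with-out-str (load-string solution))})

    (defn -main [& _]
      ;; One JSON request per line on stdin, one JSON response per line on stdout.
      (doseq [line (line-seq (java.io.BufferedReader. *in*))]
        (println (json/generate-string (handle (json/parse-string line))))
        (flush)))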

from codewars-runner-cli.

xcthulhu avatar xcthulhu commented on July 28, 2024

That's pretty much what I always had in mind. I'll start the JVM hot-patch thing today, after I finish fixing Julia.

from codewars-runner-cli.

xcthulhu avatar xcthulhu commented on July 28, 2024

Now that this is done, we just need to figure out how an SOA will work for different kinds of runners.

from codewars-runner-cli.

