ottomatica / slim

Build and run tiny VMs from Dockerfiles. Small and sleek.

License: Apache License 2.0

JavaScript 52.88% Dockerfile 13.88% Shell 33.24%
docker micro-vm virtualization

slim's People

Contributors

chrisparnin, dependabot-preview[bot], dependabot[bot], gjabell, mprey, paralax, saraghds, ssmirr, stephengroat, unixfox


slim's Issues

SSH key copy should use absolute paths

fs.copyFileSync(path.resolve('baker_rsa'), path.join(slimdir, 'baker_rsa'));

The slim command fails unless you run it from the script/keys directory. I'm not sure how best to resolve the path in Node; presumably it should be resolved relative to the project directory?

Error building image

After running slim build images/alpine3.8-simple I get:

building docker image
Step 1/28 : FROM alpine:3.8.4 AS openrc
 ---> c2b4b73a5fef
Step 2/28 : RUN mkdir -p /lib/apk/db /run
 ---> Using cache
 ---> bd1006585f67
Step 3/28 : RUN apk add --no-cache --initdb openrc
 ---> Using cache
 ---> 5e63b6e9e35a
Step 4/28 : FROM alpine:3.8.4 AS kernel
 ---> c2b4b73a5fef
Step 5/28 : RUN mkdir -p /lib/apk/db /run
 ---> Using cache
 ---> bd1006585f67
Step 6/28 : RUN apk add --no-cache --initdb linux-virt virtualbox-guest-modules-virt
 ---> Running in 44c71b3e49cf
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/aarch64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/aarch64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
  linux-virt (missing):
    required by: world[linux-virt]
  virtualbox-guest-modules-virt (missing):
    required by:
                 world[virtualbox-guest-modules-virt]
exporting docker filesystem
Error: (HTTP code 404) no such container - No such image: slim-vm:latest

I'm running on an M1 mac... (note the aarch64 APKINDEX URLs in the log above; Alpine 3.8 doesn't ship linux-virt or virtualbox-guest-modules-virt for aarch64).

Broken `slim push` logic

If an asset already exists, slim push will fail to push to a GitHub release asset. Current workaround is to manually delete asset and then push.

cjparnin at MacBookPro in ~/classes/519/dungeons/ubuntu-node-dungeon
$ GH_TOKEN="xxxx" slim push ubuntu-node-dungeon CSC-DevOps/Images#Spring2020
{ HttpError: request to https://uploads.github.com/repos/CSC-DevOps/Images/releases/22449587/assets?name=ubuntu-node-dungeon-slim.iso& failed, reason: write EPIPE
    at fetch.then.then.catch.error (/Users/cjparnin/projects/slim/node_modules/@octokit/request/dist-node/index.js:107:11)
    at process._tickCallback (internal/process/next_tick.js:68:7)
  name: 'HttpError',
  status: 500,
  headers: {},
  request:
   { method: 'POST',
     url:
      'https://uploads.github.com/repos/CSC-DevOps/Images/releases/22449587/assets?name=ubuntu-node-dungeon-slim.iso&',
     headers:
      { accept: 'application/vnd.github.v3+json',
        'user-agent': 'octokit.js/16.36.0 Node.js/10.5.0 (macOS Catalina; x64)',
        'content-type': 'application/octet-stream',
        'content-length': 202522624,
        authorization: 'token [REDACTED]' },
     body:
      ReadStream {
        _readableState: [ReadableState],
        readable: true,
        _events: [Object],
        _eventsCount: 2,
        _maxListeners: undefined,
        path:
         '/Users/cjparnin/.slim/registry/ubuntu-node-dungeon/slim.iso',
        fd: 15,
        flags: 'r',
        mode: 438,
        start: undefined,
        end: Infinity,
        autoClose: true,
        pos: undefined,
        bytesRead: 1572864,
        closed: false },
     request:
      { hook: [Function: bound bound register], validate: [Object] } } }
Pushed asset: /Users/cjparnin/.slim/registry/ubuntu-node-dungeon/slim.iso
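The manual workaround could be automated: look up an asset with the same name on the release and delete it before re-uploading. A hypothetical sketch; the octokit method names follow the GitHub REST API endpoints and may differ between client versions:

```javascript
// Find an already-uploaded asset with the same filename, if any.
function findExistingAsset(assets, name) {
  return assets.find(a => a.name === name);
}

// Delete-then-upload so a re-push of the same asset name succeeds.
async function replaceAsset(octokit, { owner, repo, release_id, name, file }) {
  const { data: assets } = await octokit.repos.listAssetsForRelease({ owner, repo, release_id });
  const existing = findExistingAsset(assets, name);
  if (existing) {
    await octokit.repos.deleteReleaseAsset({ owner, repo, asset_id: existing.id });
  }
  return octokit.repos.uploadReleaseAsset({ owner, repo, release_id, name, file });
}
```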

Error: ENOENT: no such file or directory ~/alpine3.8-runc-ansible/info.yml

So I'm testing out slim and I'm not sure about an error I'm getting. I can get the build to work correctly -

-rw-r--r--  1 pulberg  RJF\Domain Users  24856576 Jun 18 13:04 /Users/pulberg/.slim/registry/alpine3.8-simple/slim.iso
success!

If I then try slim images I get this -

slim images
Error: ENOENT: no such file or directory, open '/Users/pulberg/.slim/registry/alpine3.8-runc-ansible/info.yml'
    at Object.openSync (fs.js:447:3)
    at Object.readFileSync (fs.js:349:35)
    at Images.info (/Users/pulberg/slim/lib/images.js:41:39)
    at /Users/pulberg/slim/lib/images.js:16:35
    at Array.map (<anonymous>)
    at Images.list (/Users/pulberg/slim/lib/images.js:15:59)
    at Object.exports.handler (/Users/pulberg/slim/lib/commands/images.js:11:30)
    at Object.runCommand (/Users/pulberg/slim/node_modules/yargs/lib/command.js:242:26)
    at Object.parseArgs [as _parseArgs] (/Users/pulberg/slim/node_modules/yargs/yargs.js:1078:30)
    at Object.get [as argv] (/Users/pulberg/slim/node_modules/yargs/yargs.js:1012:21) {
  errno: -2,
  syscall: 'open',
  code: 'ENOENT',
  path: '/Users/pulberg/.slim/registry/alpine3.8-runc-ansible/info.yml'
}

Even if I add the info.yml file, I then get this -

slim images
Error: ENOENT: no such file or directory, stat '/Users/pulberg/.slim/registry/alpine3.8-runc-ansible/slim.iso'
    at Object.statSync (fs.js:856:3)
    at Object.statSync (/Users/pulberg/slim/node_modules/graceful-fs/polyfills.js:295:24)
    at VirtualBox.size (/Users/pulberg/slim/lib/providers/virtualbox.js:124:19)
    at /Users/pulberg/slim/lib/images.js:21:47
    at processTicksAndRejections (internal/process/task_queues.js:89:5)
    at async Promise.all (index 0)
    at async Images.list (/Users/pulberg/slim/lib/images.js:15:16)
    at async Object.exports.handler (/Users/pulberg/slim/lib/commands/images.js:11:17) {
  errno: -2,
  syscall: 'stat',
  code: 'ENOENT',
  path: '/Users/pulberg/.slim/registry/alpine3.8-runc-ansible/slim.iso'
}

And, sure enough, no .iso in that path -

pulberg ~ alpine3.8-runc-ansible
 ➜ ls
.rw-r--r--  pulberg  RJF\Domain Users  92 B  Tue Jun 18 13:06:58 2019    info.yml

So I'm not sure what needs to be done here or why it's asking for something that doesn't exist.

Thanks,

Phillip

Attach command

Hello,

I am a student at UT Austin in a virtualization class, and I was looking to contribute to this project. After using slim for a bit, having to copy and paste the ssh command every time to connect to the micro-vm seemed redundant. I was wondering if I could create a slim attach command that would automatically try to connect to the most recently provisioned micro-vm. Alternatively, there could be an additional flag on slim build that would auto-attach to the VM after creation.

What do you think?

EDIT: Adding health checking with this ticket too. Details in PR #73

Providing acpid support for KVM

Currently, it seems only VirtualBox provides any remote support for acpid actions. Hyperkit only responds to killing its PID, but KVM has some potential here. The current KVM VMs completely ignore any reboot or shutdown request and only respond to destroy. This makes commands such as slim start and slim stop practically useless except on VirtualBox.

What do you think about adding acpid support to KVM's VMs? My thought would be adding acpid installation through the XML template, but this would add to the build's size and startup time.

I mention this because, as I'm looking into slim start, the only real added support for this command would be in VirtualBox, unless an unexpected shutdown occurs with KVM.

slim on AWS EC2

Hello,

I'm trying to boot an image from slim on AWS EC2 but I'm unable to make it work.

Here what I did to convert an image from slim to an AMI:

  1. slim build images/alpine3.8-simple -f qcow2 -p kvm
  2. cd ~/.slim/registry/alpine3.8-simple
  3. qemu-img convert slim.qcow2 slim.raw
  4. aws s3 cp slim.raw s3://slimiso
  5. aws ec2 import-snapshot --description "slim image" --disk-container file://container.json
  6. Create an AMI from the imported snapshot.
  7. Launch an EC2 instance from the AMI created previously.

Can someone help me get an image from slim to boot properly on AWS EC2?

Add shell completion for images

Fish/zsh/bash shell completion for images, so that hitting tab with slim run <name> will list and auto-complete available images

VBoxManage: error: RawFile#0 failed to create the raw output file /home/tlittle/.baker/micro1.log (VERR_FILE_NOT_FOUND)

Just tried building and running under VirtualBox the alpine3.9-simple sample and I get the following when I try to run the image:
$ slim run micro1 alpine3.9-simple
/home/tlittle/.slim/registry/alpine3.9-simple/slim.iso
Searching between ports 2002 and 2999 for ssh on localhost for this vm.
Excluding the following ports already used by VirtualBox VMS:
Port 2002 is available for ssh on localhost!
Executing VBoxManage createvm --name "micro1" --register
Executing VBoxManage modifyvm "micro1" --memory 1024 --cpus 1
Executing VBoxManage storagectl "micro1" --name IDE --add ide
Executing VBoxManage storageattach micro1 --storagectl IDE --port 0 --device 0 --type dvddrive --medium "/home/tlittle/.slim/registry/alpine3.9-simple/slim.iso"
Executing VBoxManage modifyvm micro1 --uart1 0x3f8 4 --uartmode1 file /home/tlittle/.baker/micro1.log
Executing VBoxManage modifyvm micro1 --nic1 nat
Executing VBoxManage modifyvm micro1 --nictype1 virtio
Executing VBoxManage modifyvm micro1 --natpf1 "guestssh,tcp,,2002,,22"
Executing VBoxManage sharedfolder add micro1 --name "vbox-share-0" --hostpath "/home/tlittle/slim/images/alpine3.9-simple"
Executing VBoxManage sharedfolder add micro1 --name "vbox-share-1" --hostpath "/"
Executing VBoxManage startvm micro1 --type emergencystop
Executing VBoxManage startvm micro1 --type headless
=> exec error: { Error: Command failed: VBoxManage startvm micro1 --type headless
VBoxManage: error: RawFile#0 failed to create the raw output file /home/tlittle/.baker/micro1.log (VERR_FILE_NOT_FOUND)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole

at ChildProcess.exithandler (child_process.js:281:12)
at emitTwo (events.js:126:13)
at ChildProcess.emit (events.js:214:7)
at maybeClose (internal/child_process.js:915:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)

killed: false,
code: 1,
signal: null,
cmd: 'VBoxManage startvm micro1 --type headless' }
ssh -i /home/tlittle/.slim/baker_rsa [email protected] -p 2002 -o StrictHostKeyChecking=no

Do I have to create that log file manually?

-tl

How to stop a VM?

Currently, the documentation covers how to build a VM, run a VM, and delete a VM, but it's not obvious to me how to stop a VM.

Error: ENOENT: no such file or directory

Here is the Dockerfile I'm using:

FROM darribas/gds_py:4.1

And the info.yml I'm including:

description: A containerised (Python) platform for Geographic Data Science

I'm getting the following error:

dani@homepod:~/code/gds_env/virtualbox$ slim build ./
building docker image
Step 1/1 : FROM darribas/gds_py:4.1
 ---> 654de8c9ff7a
[Warning] One or more build-args [SSHPUBKEY PKGS] were not consumed
Successfully built 654de8c9ff7a
Successfully tagged slim-vm:latest
exporting docker filesystem
creating initrd
Error: ENOENT: no such file or directory, stat '/home/dani/.slim/slim-vm/vmlinuz'
dani@homepod:~/code/gds_env/virtualbox$ 

Am I doing something wrong? (complete newbie here)

slim run gets stuck at "Waiting for IP address"

I'm running macOS 10.15.5, and fiddling with writing a Nix package for slim. When I invoke slim run ...:

  • I get a password prompt
  • When I satisfy it, it prints Running hyperkit
  • I get another password prompt
  • When I satisfy this, it prints Waiting for IP address, but nothing else happens (or appears to happen).

If I run the hyperkit command directly, it connects to the VM just fine:

  • look up the hyperkit process and extract the arguments from ps
  • cancel the execution
  • kill the hyperkit process
  • remove the pidfile
  • invoke the hyperkit command

Here's an edited copy of my shell session:

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ result/bin/slim build -p hyperkit images/alpine3.12-simple
    Sat Aug 15 2020 16:28:50 --> 
    building docker image
    Step 1/27 : FROM alpine:3.12 AS openrc
     ---> a24bb4013296
    Step 2/27 : RUN mkdir -p /lib/apk/db /run
     ---> Using cache
     ---> 13b2a5f4bba4
    Step 3/27 : RUN apk add --no-cache --initdb openrc
     ---> Using cache
     ---> 575e6c7a7090
    Step 4/27 : FROM alpine:3.12 AS kernel
     ---> a24bb4013296
    Step 5/27 : RUN mkdir -p /lib/apk/db /run
     ---> Using cache
     ---> 13b2a5f4bba4
    Step 6/27 : RUN apk add --no-cache --initdb linux-virt virtualbox-guest-modules-virt
     ---> Using cache
     ---> 225579c66729
    Step 7/27 : FROM alpine:3.12 AS install
     ---> a24bb4013296
    Step 8/27 : USER root
     ---> Using cache
     ---> d01e80e9223b
    Step 9/27 : ARG SSHPUBKEY
     ---> Using cache
     ---> 764de71d25ac
    Step 10/27 : ARG PKGS
     ---> Using cache
     ---> c4825a1461c6
    Step 11/27 : COPY --from=openrc /lib/ /lib/
     ---> Using cache
     ---> 06d868c9c36f
    Step 12/27 : COPY --from=openrc /bin /bin
     ---> Using cache
     ---> bfbfa1bb1f27
    Step 13/27 : COPY --from=openrc /sbin /sbin
     ---> Using cache
     ---> 393e0fab9eca
    Step 14/27 : COPY --from=openrc /etc/ /etc/
     ---> Using cache
     ---> ac47829fb19b
    Step 15/27 : COPY --from=kernel /lib/modules /lib/modules
     ---> Using cache
     ---> 3c055c092658
    Step 16/27 : COPY --from=kernel /boot/vmlinuz-virt /vmlinuz
     ---> Using cache
     ---> ce779d67ff36
    Step 17/27 : RUN mkdir -p /lib/apk/db /run
     ---> Using cache
     ---> 7f22bc6788ad
    Step 18/27 : RUN apk add --update --no-cache --initdb alpine-baselayout apk-tools busybox ca-certificates util-linux     openssh openssh-client rng-tools dhcpcd virtualbox-guest-additions
     ---> Using cache
     ---> 4f91d3bf1b27
    Step 19/27 : RUN [ ! -z "$PKGS" ] && apk add --no-cache $PKGS || echo "No optional pkgs provided."
     ---> Using cache
     ---> d7e03ac93527
    Step 20/27 : RUN rm -rf /var/cache/apk/*
     ---> Using cache
     ---> 977c75f1fa8c
    Step 21/27 : COPY files/init /init
     ---> Using cache
     ---> 9b906630a38a
    Step 22/27 : RUN echo "Welcome to slim!" > /etc/motd
     ---> Using cache
     ---> 81f5a530df68
    Step 23/27 : RUN echo "tty0::respawn:/sbin/agetty -a root -L tty0 38400 vt100" > /etc/inittab
     ---> Using cache
     ---> 930c6e52534e
    Step 24/27 : RUN echo "ttyS0::respawn:/sbin/agetty -a root -L ttyS0 115200 vt100" >> /etc/inittab
     ---> Using cache
     ---> 0d7f2b84e6e4
    Step 25/27 : RUN mkdir -p /etc/ssh /root/.ssh && chmod 0700 /root/.ssh
     ---> Using cache
     ---> 8409b1dad4b0
    Step 26/27 : RUN echo $SSHPUBKEY > /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys
     ---> Using cache
     ---> 6579b30f1d65
    Step 27/27 : RUN sed -i 's/root:!/root:*/' /etc/shadow
     ---> Using cache
     ---> 9d880c138e02
    Successfully built 9d880c138e02
    Successfully tagged slim-vm:latest
    exporting docker filesystem
    creating initrd
    success!
    Sat Aug 15 2020 16:28:59 (9.34s)

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ result/bin/slim images
    Sat Aug 15 2020 16:32:39 --> 
    ┌───────────────────┬────────────┬───────────────────────────────────┬─────────────────┐
    │      (index)      │    size    │            description            │    providers    │
    ├───────────────────┼────────────┼───────────────────────────────────┼─────────────────┤
    │ alpine3.12-simple │ '30.58 MB' │ 'A basic alpine server with ssh.' │ 'kvm, hyperkit' │
    └───────────────────┴────────────┴───────────────────────────────────┴─────────────────┘
    Sat Aug 15 2020 16:32:39 (475ms)

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ result/bin/slim run test1 alpine3.12-simple --sync=no --attach
    Sat Aug 15 2020 16:36:41 --> 
    Running hyperkit
    Waiting for IP address
    ^C
    Sat Aug 15 2020 16:37:20 (39.5s)

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ sudo rm ~/.slim/hyperkit/test1/hyperkit.pid
    Sat Aug 15 2020 16:38:09 --> 
    Sat Aug 15 2020 16:38:09 (55.7ms)

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ sudo kill -9 23506
    Sat Aug 15 2020 16:39:06 --> 
    Sat Aug 15 2020 16:39:06 (53.3ms)

    ╓─[ delete_me ] master ~/work/slim
    ╚═>> abathur in nix-shell on <snip> $ sudo hyperkit -m 1024 -c 1 -s 0:0,hostbridge -s 31,lpc -s 2:0,virtio-net -l com1,stdio -F /Users/abathur/.slim/hyperkit/test1/hyperkit.pid -U 37368c73-5414-4a83-b8ea-90e4ed48ae2e -f kexec,/Users/abathur/.slim/registry/alpine3.12-simple/vmlinuz,/Users/abathur/.slim/registry/alpine3.12-simple/initrd,modules=virtio_net console=ttyS0
    Sat Aug 15 2020 16:39:09 --> 
    Using fd 5 for I/O notifications
    rdmsr to register 0x140 on vcpu 0
                                     rdmsr to register 0x64e on vcpu 0
                                                                      rdmsr to register 0x34 on vcpu 0

    Welcome to Alpine Linux 3.12
    Kernel 5.4.43-1-virt on an x86_64 (ttyS0)

    nanobox login: root (automatic login)

    Welcome to slim!
    nanobox:~# 

Missing license

Would love to be able to use this project. Could you please add a license to it?

slim: Unable to uncompress kernel

Greetings,
I'm trying to build qcow2 images from the ubuntu-20.04-cloud-init image, but am running into an issue.

Uncompressing compressed kernel
events.js:291
throw er; // Unhandled 'error' event
^

Error: incorrect header check
at Zlib.zlibOnError [as onerror] (zlib.js:182:17)
Emitted 'error' event on Gunzip instance at:
at errorOrDestroy (internal/streams/destroy.js:108:12)
at Gunzip.onerror (_stream_readable.js:754:7)
at Gunzip.emit (events.js:314:20)
at Zlib.zlibOnError [as onerror] (zlib.js:185:8) {
errno: -3,
code: 'Z_DATA_ERROR'
}

From what I can see, the vmlinuz file is not compressed and the library is raising an error.

Are we certain that the docker kernel is compressed at all?

Regards,
Daniel

Not able to run VM on MacOS

I built an image based on alpine successfully, but when I try to run the VM it results in the following error.

$ sudo slim run micro1 alpine3.8-simple -p hyperkit
Password:
TypeError [ERR_INVALID_ARG_TYPE]: The "data" argument must be of type string or an instance of Buffer, TypedArray, or DataView. Received type number (37620)

Sharing some information below that might help in identifying the issue.

$ node --version
v15.7.0

$ hyperkit -v
hyperkit: v0.20200224-44-gb54460

Homepage: https://github.com/docker/hyperkit
License: BS

$ git rev-parse HEAD
73eb95d02bc09075e814b31e992c7f36e19c4c1c

Building behind a proxy

I can't find a way to build images when operating behind a proxy. I can manually build a Docker image if I pass in the --build-arg http_proxy=http://www.foo.bar:80/ to the Docker build command. How can I perform slim builds behind a proxy?

-tl
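One approach: forward the usual proxy environment variables as Docker build args, the same way the manual --build-arg workaround does. How slim wires buildargs into its Docker build call is an assumption; the option name matches the Docker Engine API's buildargs parameter:

```javascript
// Collect proxy settings from the environment into a buildargs object.
function proxyBuildArgs(env = process.env) {
  const args = {};
  for (const key of ['http_proxy', 'https_proxy', 'no_proxy']) {
    if (env[key]) args[key] = env[key];
  }
  return args;
}

// e.g. docker.buildImage(context, { t: 'slim-vm', buildargs: proxyBuildArgs() });
```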

Add `slim start [vm-name]` command

Add a command to start an existing vm.

Suggestion:

Implementation could use this for virtualbox provider:

await vbox({start: true, vmname: name, syncs: [], verbose: true});

This could probably be a modified version of the run command, with the difference of not accepting any modifications to the VM.
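The suggestion above could be sketched as a small command handler; `vbox` is the provider call shown in the snippet, and keeping the options in one place preserves the "no modifications" contract:

```javascript
// Fixed option set for starting an existing VM: no syncs, no reconfiguration.
function startOptions(name) {
  return { start: true, vmname: name, syncs: [], verbose: true };
}

// Hypothetical `slim start [vm-name]` handler delegating to the provider.
async function startHandler(argv, vbox) {
  await vbox(startOptions(argv.name));
}
```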

Store provider info in info.yml

Store the provider an image was built with (e.g. virtualbox, kvm, hyperkit) in the info.yml file in the registry, and add handling to prevent the image from being run with a different provider.
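A sketch of the run-time check itself; reading and writing info.yml (e.g. via js-yaml) is omitted, and the comma-separated `providers` field mirrors what the `slim images` table already displays:

```javascript
// Throw if the requested provider isn't one the image was built for.
function checkProvider(info, requested) {
  const built = String(info.providers || '')
    .split(',').map(s => s.trim()).filter(Boolean);
  if (!built.includes(requested)) {
    throw new Error(`image was built for [${built.join(', ')}], cannot run with ${requested}`);
  }
}
```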

Adding QEMU provider

Slim currently supports KVM, creating qcow2 images on Linux (slim build [image] -p kvm -f qcow2). It would be good to also support QEMU on other platforms: QEMU runs on Windows/Linux/macOS, with different acceleration types on each platform.

Supported -accel=[ hax | whpx | kvm | hvf ] options:

  • Windows: HAXM or WHPX (Hyper-V)
  • Linux: KVM (already implemented)
  • Mac: HVF (Hypervisor.framework, similar to what hyperkit uses. This option is new and seemed to work nicely when I tried it)

The execution commands will probably be almost the same on all platforms, except for the -accel option. We should also be able to use slim's qcow2 and iso images for QEMU.

Example QEMU command:

qemu-system-x86_64 \
  -m 2048 \
  -vga virtio \
  -show-cursor \
  -usb \
  -device usb-tablet \
  -enable-kvm \
  -drive file=ubuntu-mini.qcow2,if=virtio \
  -accel hvf \
  -cpu host \
  -net user,hostfwd=tcp::2222-:22 -net nic
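The per-platform part of the command could be isolated as a small helper; the mapping below mirrors the bullet list above, and preferring WHPX on Windows is an assumption (HAXM would also work there):

```javascript
// Choose QEMU's -accel value based on the host platform.
function accelFlag(platform = process.platform) {
  switch (platform) {
    case 'linux': return 'kvm';
    case 'darwin': return 'hvf';
    case 'win32': return 'whpx';
    default: return 'tcg'; // QEMU's software-emulation fallback
  }
}
```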

Support concurrent builds

It might be nice to be able to slim build multiple images at a time. This should be possible by naming support folders / docker images per image, so builds don't overwrite each other's files while in progress.

How to mount filesystems

Are there any guides on usage, especially regarding mounts? I'm interested in setting up a development environment using slim and hyperkit, but I need to mount my code in a performant manner. Any other tips or writeups out there? This may be the wrong place to ask; feel free to point me in a different direction.
