elektrobit / flake-pilot

Registration/Control utility for applications launched through a runtime-engine, e.g containers

License: MIT License

Rust 90.78% Makefile 1.77% Shell 3.87% Python 1.60% RobotFramework 1.98%
apps containers vms

flake-pilot's People

Contributors

aymenott, ichmed, isbm, m-kat, nader138, schaefi, wintron04

flake-pilot's Issues

Allow to specify user identity for running the app

Description

As of today the user id to run the engine is specified in the flake setup as follows:

container:
  runtime:
    runas: root

There is no option to specify the user on the commandline

Acceptance Criteria

  • User ID can be specified at registration time for the supported engines

Allow to specify resume mode on the commandline

Description

As of today the resume mode can be configured in the flake file directly via

runtime:
  resume: true

There is no way to do this on the command line via oci-ctl register ...

Acceptance Criteria

  • A commandline option to activate the resume mode during app registration exists

FireCracker: network handling

Description

To handle the network between guest and host, firecracker offers the following documentation:

As of today firecracker supports networking through tap devices only. The virtio-net model supported by QEMU is currently not supported. This comes with the issue that the host network must be prepared for firecracker VMs to work. Firecracker will create/delete a tap device if the config lists:

"network-interfaces": [
  {
    "iface_id": "eth0",
    "guest_mac": "AA:FC:00:00:00:01",
    "host_dev_name": "tap0"
  }
],

but it does not take care of the actual connection and routing of data on the host. This means that address assignment as well as any NAT or bridge configuration remains the responsibility of the host owner. For our use case this might be an issue because we aim for a transparent process which does not require the user to manage the host network before networking is available in the guest. For the container based approach, podman and CNI automatically take care of the network capabilities inside of the container. That level of comfort currently does not exist with firecracker.

I personally think it should not become a responsibility of the flake-pilot project to prepare the host for firecracker networking. So we should at least mention it in the documentation / man pages and adapt as firecracker evolves.

This is mostly for discussion and agreement on how far we go in terms of networking.

FireCracker: Implement flake-ctl firecracker register

Description

Along with the effort to add support for the firecracker engine, an app registration for the upcoming firecracker-pilot is required.
The app registration should look similar to flake-ctl podman register, like the following call:

flake-ctl firecracker register --vm NAME --app APPNAME ... [--overlay-storage ... --mem-size ... --cache-type ... --cpu-count ... ]
  • The required rootfs image, kernel image and optional initrd are taken from the pulled VM image data, which needs to exist below /var/lib/firecracker/images/NAME
  • The registered flake will live below /usr/share/flakes/APPNAME.yaml

Acceptance Criteria

  • flake-ctl firecracker register exists
  • An app registration contains all elements such that the firecracker.json file to run the VM can be constructed

Fix runtime options handling

Specifying runtime options is done like this:

runtime:
  podman:
    --volume: /var/tmp:/var/tmp
    --volume: /usr/share/kiwi/nautilos:/usr/share/kiwi/nautilos
    --privileged:
    --rm:
    -ti:

However this concept does not work if options of the same name can be specified multiple times, as is the case for podman's --volume option. The reason is that in the above concept option names are handled as hash keys and would overwrite each other if specified multiple times.

Thus this should be fixed such that runtime options can be set as a list as follows:

runtime:
  podman:
     - --volume /var/tmp:/var/tmp
     - --volume /usr/share/kiwi/nautilos:/usr/share/kiwi/nautilos
     - --privileged
     - --rm
     - -ti
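
Once the options are plain list entries, the pilot only has to split each entry into its tokens when assembling the engine call. A minimal sketch, assuming a hypothetical RuntimeSection struct (not the actual flake-pilot type):

// Sketch only: pass list-form runtime options through to podman.
use std::process::Command;

struct RuntimeSection {
    // Each entry is one option line from the flake file, e.g. "--volume /var/tmp:/var/tmp"
    podman: Vec<String>,
}

fn build_podman_command(runtime: &RuntimeSection, container: &str) -> Command {
    let mut cmd = Command::new("podman");
    cmd.arg("run");
    for option in &runtime.podman {
        // An entry may carry a value ("--volume /var/tmp:/var/tmp") or stand
        // alone ("--rm"), so split on whitespace and pass every token through.
        for token in option.split_whitespace() {
            cmd.arg(token);
        }
    }
    cmd.arg(container);
    cmd
}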

Add oci-registry utility

Description

At the moment the oci-registry tool exists as part of an example image description. The tool allows to prepare the read-write part of a podman (OCI) registry below /var/lib/containers/storage. The tool also allows to switch between different read-write storage areas and is handy in combination with flake-pilot. Thus instead of maintaining the tool as part of an image description it would fit into this project and could be delivered as a package

Acceptance Criteria

  • /usr/bin/oci-registry is provided as part of a package

Fix build-deb container packaging

As of now the file name passed to oci-ctl build-deb --oci ... must use the .docker.tar extension.
This is actually pretty misleading because we are dealing with OCI containers only.

  • Thus the extension for the input file should be .oci.tar
  • Also the extension name should be checked by oci-ctl build-deb

Support user assignment for flakes

If a Flake requires root access but is called without it, it will fail with something like:

Error: short-name "elektrobit/amd64_sdk" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"

Calling this with sudo .... solves the issue. Probably we have to do the following:

  1. If a Flake requires sudo, it should call it on its own with sudo. This could be set as an option in a corresponding Flake YAML config.
  2. If sudo failed (user denied), then gracefully tell the user "sorry, you need to have sudo access, ask your admin"
  3. I would not pre-check if sudo rights are there, but just go and call stuff with sudo and just "crash", gracefully wrapping the output, if no access.

Update oci-pilot code reading symlinks

To allow different paths for the application on the host and inside of the container the symlink structure in oci-register
has been changed. This change must be adapted in oci-pilot once oci-register has merged the commit implementing the registration

Add support for image composition

Allow to create OCI layers on top of an existing OCI KIWI based image description

Layers can be used in two ways:

  • natively via the docker/podman build and a Dockerfile. As users can do this already there is nothing we need to add
  • via kiwi from so called derived images. This feature already exists in kiwi and can be utilized via the oci-build implementation #18

Delete socat dependency on sci

Description

sci is a statically compiled binary to be used with any rootfs. When using an app registration for firecracker in resume mode, sci calls socat in VSOCK-CONNECT mode to establish a connection. That call creates a dependency on socat in the guest. We could prevent that by adding a native implementation which runs the command in the same way socat does.
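
A minimal sketch of such a native replacement, using the raw AF_VSOCK socket API from the libc crate; CID, port and error handling are placeholders:

// Sketch only: connect to the host over AF_VSOCK without spawning socat.
use std::io;
use std::mem;
use std::os::unix::io::RawFd;

fn vsock_connect(cid: u32, port: u32) -> io::Result<RawFd> {
    // Stream socket in the VSOCK address family
    let fd = unsafe { libc::socket(libc::AF_VSOCK, libc::SOCK_STREAM, 0) };
    if fd < 0 {
        return Err(io::Error::last_os_error());
    }
    let mut addr: libc::sockaddr_vm = unsafe { mem::zeroed() };
    addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
    addr.svm_cid = cid;
    addr.svm_port = port;
    let ret = unsafe {
        libc::connect(
            fd,
            &addr as *const libc::sockaddr_vm as *const libc::sockaddr,
            mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
        )
    };
    if ret < 0 {
        let err = io::Error::last_os_error();
        unsafe { libc::close(fd) };
        return Err(err);
    }
    // The returned fd can be wrapped into a stream and used to exchange
    // command data with the host, replacing the socat call.
    Ok(fd)
}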

Acceptance Criteria

  • guestvmtools sci no longer requires socat

Add support for oci-ctl build

Allow to build a flake as a Debian package, either from an existing OCI tar image or via kiwi from an image description.

Allow to specify the attach setting on the commandline

Description

As of today the setting to attach to the app, if it is already running, can be configured in the flake file directly via

runtime:
  attach: true

There is no way to do this on the command line via oci-ctl register ...

Acceptance Criteria

  • A commandline option to set the attach option during app registration exists

FireCracker: Implement firecracker-pilot

Description

Along with the effort to support the firecracker engine we need to implement the pilot launcher to actually run the firecracker VM and the app matching the app registration. The following features need to be implemented in the launcher; not all of them are required for the acceptance criteria:

  • If overlay_storage is requested, create an ext2 based overlay_storage file on demand and only once, using a dedicated storage area such as /var/lib/firecracker/storage (see the sketch after this list)
  • Create the firecracker.json file from registration data and caller arguments on each call
  • Run firecracker through the firecracker-service tool
  • Implement provisioning of overlay and layers into the overlay_storage file prior to launching
  • Use the VM files (rootfs, kernel, initrd) from /var/lib/firecracker/images/
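
A minimal sketch of the overlay storage point, assuming the file is created lazily below a dedicated storage directory; path handling and size selection are placeholders:

// Sketch only: create a sparse file and format it as ext2 if it does not exist yet.
use std::fs::OpenOptions;
use std::io;
use std::path::Path;
use std::process::Command;

fn ensure_overlay_storage(path: &Path, size_bytes: u64) -> io::Result<()> {
    if path.exists() {
        // Created on a previous run; reuse it
        return Ok(());
    }
    // Create a sparse file of the requested size
    let file = OpenOptions::new().create(true).write(true).open(path)?;
    file.set_len(size_bytes)?;
    // Format it with ext2; mkfs.ext2 must be available on the host
    let status = Command::new("mkfs.ext2").arg("-F").arg(path).status()?;
    if !status.success() {
        return Err(io::Error::new(io::ErrorKind::Other, "mkfs.ext2 failed"));
    }
    Ok(())
}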

Acceptance Criteria

  • firecracker-pilot binary exists
  • #84 is a requirement that needs to be implemented first
  • A registered simple example app calls firecracker-service correctly
  • The features checked off in the list above are implemented

FireCracker: host vs. guest sharing

Description

It seems firecracker has no sharing concept at the moment. We should at least add this to the documentation unless there is a better solution available.

Allow to specify podman runtime options on the commandline

Description

As of today the runtime options passed to podman can be added in the flake file as follows:

runtime:
  podman:
    - setting
    - setting
    - setting

There is no way to specify runtime settings on the commandline during oci-ctl register ...

Acceptance Criteria

  • A commandline switch to add a runtime setting during app registration exists
  • The commandline setting can be added multiple times

FireCracker: Command output intermix with kernel log

Description

As we plan to use firecracker as the launcher for applications, the process to run commands involves the startup of the VM through firecracker and, next to it, the startup of the application. As of today the output of the command is captured through the console setting of the VM (console=ttyS0), which transfers all output to the caller. This works but also includes other messages not related to the actual command call. To keep the bootup of the VM as silent as possible we currently use the kernel options loglevel=0 quiet, which eliminates most of the unwanted messages but surely not all of them; it also depends on the used kernel image whether there are more unrelated messages.

This opens the question of how we can separate the command output from the VM startup messages in a generic way.
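
One possible (and so far only sketched) approach: let the guest side wrap the command output in sentinel markers and have the pilot forward only the lines between them. The marker strings below are made up for illustration:

// Sketch only: filter the serial console stream so that only the lines between
// hypothetical start/end markers emitted inside the guest reach the caller.
use std::io::{self, BufRead, Write};

const START_MARKER: &str = "### FLAKE-CMD-BEGIN ###"; // hypothetical
const END_MARKER: &str = "### FLAKE-CMD-END ###";     // hypothetical

fn forward_command_output<R: BufRead, W: Write>(console: R, mut out: W) -> io::Result<()> {
    let mut inside = false;
    for line in console.lines() {
        let line = line?;
        if line.contains(START_MARKER) {
            inside = true;
            continue;
        }
        if line.contains(END_MARKER) {
            break;
        }
        if inside {
            // Everything between the markers is real command output
            writeln!(out, "{}", line)?;
        }
    }
    Ok(())
}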

man page missing for sci

Description

There is currently no man page for /usr/sbin/sci

Acceptance Criteria

man page exists

Refactor resume behavior

Description

In a flake setup that uses the resume feature the container ID for this instance is looked up according to the exact command line of the application. Example:

oci-ctl register --container foo --app /usr/bin/aws --resume true
aws --help

The call of the app is connected to the container ID by its .cid file

/tmp/flakes/aws--help.cid

Resuming the container instance currently only happens if the exact same command is called again.
However, if the command line changes, e.g. aws ec2 help, a new instance is created.

It might be better to only store the base app name and resume this instance for any call of this application.
The .cid file name would then change to:

/tmp/flakes/aws.cid
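
A minimal sketch of that naming scheme, including the @magic argument mentioned below; the helper and the suffix handling are illustrative only:

// Sketch only: derive the .cid path from the base program name, optionally
// extended by an @magic argument (e.g. "aws ec2 help @mine" -> /tmp/flakes/aws@mine.cid).
use std::path::PathBuf;

fn cid_file(program: &str, args: &[String]) -> PathBuf {
    // A trailing @magic argument names a dedicated extra instance
    let magic = args.iter().rev().find(|arg| arg.starts_with('@'));
    let name = match magic {
        Some(tag) => format!("{}{}", program, tag),
        None => program.to_string(),
    };
    PathBuf::from(format!("/tmp/flakes/{}.cid", name))
}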

Acceptance Criteria

  • In resume mode only one instance for the app exists
  • Multiple instances of the same app can still be generated using the @magic argument, e.g aws ec2 help @mine

FireCracker: random port number calculation should be more robust

The following code from firecracker-pilot/src/firecracker.rs can be improved as described in the FIXME note:

pub fn get_exec_port() -> u32 {
    /*!
    Create random execution port
    !*/
    let mut random = rand::thread_rng();
    // FIXME: A more stable version
    // should check for already running socket connections
    // and if the same number is used for an already running one
    // another rand should be called
    let exec_port = random.gen_range(49200..60000);
    exec_port
}
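
A minimal sketch of the suggested fix: re-draw the port while it collides with one that is already in use. How the set of ports in use is collected (e.g. from the metadata of running flake instances) is left open here:

// Sketch only: avoid handing out a port that is already used by a running connection.
use rand::Rng;
use std::collections::HashSet;

pub fn get_exec_port(ports_in_use: &HashSet<u32>) -> u32 {
    let mut random = rand::thread_rng();
    loop {
        let candidate = random.gen_range(49200..60000);
        if !ports_in_use.contains(&candidate) {
            return candidate;
        }
    }
}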

Host cid metadata files on a persistent storage

Description

At the moment the metadata files that store information about running app instances are kept
in /tmp/flakes. However, this location is a tmpfs and gets lost on reboot, power-cycle, etc. For
non-resume instances this is not a problem, but for resume instances it prevents them from being
correctly resumed. I suggest moving the location to the registry storage area, e.g.

/var/lib/containers/storage/tmp

Acceptance Criteria

flake metadata is persistently stored

FireCracker: serial console data might be lost due to too early reboot

Description

A command writes output and the data is transferred through the serial console and the VM layer to the caller.
However, once the call in the VM has finished, the reboot sequence kicks in. The VM could be
exited before all data has been transferred through the serial console.

  • Is there a way to wait until /dev/console has no pending data prior to reboot?
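
On the guest side, one hedged option is to block until the console's output queue is drained before triggering the reboot, e.g. via tcdrain(3); note that this only covers the guest kernel's queue, buffering in the VM layer may still need separate handling:

// Sketch only: wait until all pending output on /dev/console has been
// transmitted before the guest initiates its reboot sequence.
use std::fs::OpenOptions;
use std::io;
use std::os::unix::io::AsRawFd;

fn drain_console() -> io::Result<()> {
    let console = OpenOptions::new().write(true).open("/dev/console")?;
    // tcdrain blocks until the output queue of the terminal is empty
    let ret = unsafe { libc::tcdrain(console.as_raw_fd()) };
    if ret != 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}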

Implement oci-ctl remove

Code is prepared for implementation of the actual remove sub-command.

This is divided into two operation modes

  1. remove an app registration
  2. purge a container including all its app registrations

Add support for the FireCracker engine

Description

FireCracker is a project which aims for running KVM based virtual machines in a fast way. We would like to add support for a firecracker-pilot to allow registration and running of applications inside of FireCracker based virtual machines

Acceptance Criteria

  • Implementation of firecracker-pilot exists and is able to start up a FireCracker compatible image, run an application inside, e.g. via init="/the/app", redirect input/output channels to the caller, e.g. via serial console, and gracefully handle the shutdown of the VM such that calling the app feels like calling a native application on the system
  • Implementation of app registration exists as part of flake-ctl

Origin of "update_changelog.py" tool?

@schaefi the helper/update_changelog.py seems copied from here:
https://github.com/OSInside/kiwi/commits/master/helper/update_changelog.py however, the history of that file also doesn't look original to that project, so it is not clear whether the updater is GPL v3 or not. 😉 Nevertheless, since it comes from kiwi, which is under a different license, this tool's license needs to be properly mentioned.

Could you please fix that?

Alternatively, since it is a tool on its own, you could change its license to a permissive one and allow more freedom (i.e. MIT or BSD), so it would be useful to a bigger audience (as you wish).

Add 3 way dependency resolution

Currently oci-ctl can resolve dependencies between 2 containers/layers. For example, if I have a Python application that depends on a specific interpreter, say 3.10, I can build a delta container that has only my bits and then use the interpreter from python-3-10-basesystem; oci-ctl handles the layers and lets me register "my-cmd" and makes it appear to the user as if "my-cmd" is installed on the host system, i.e. the containers are transparent.

There is another use case where my application may also depend on something I know to be part of the host system; the package manager is a good example. As such it would be great if oci-ctl could also layer the proper parts of the host OS into the view of the file system for "my-cmd". In the flakes file this could be handled with additional directives such as

hostsystem:
    - zypper

oci-ctl could inspect the package database to determine which bits and pieces need to be visible to "my-cmd" to make this work.
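
A minimal sketch of that inspection for an RPM based host, assuming the hostsystem entries are package names; the query simply shells out to rpm:

// Sketch only: resolve a hostsystem package name (e.g. "zypper") to the list
// of host files that would have to be made visible to the container view.
use std::io;
use std::process::Command;

fn host_package_files(package: &str) -> io::Result<Vec<String>> {
    let output = Command::new("rpm").arg("-ql").arg(package).output()?;
    if !output.status.success() {
        return Err(io::Error::new(
            io::ErrorKind::NotFound,
            format!("package {} is not installed on the host", package),
        ));
    }
    Ok(String::from_utf8_lossy(&output.stdout)
        .lines()
        .map(|line| line.to_string())
        .collect())
}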

The approach can be thought of in the way that open linking works: I can link a C application in a way that the linker doesn't complain at link time when a specific library is not found; the resolution is deferred to run time.

Allow further layers for provisioning

Description

As of today delta containers can be provisioned against one base container and optional host dependencies. However, when creating deltas of deltas, this introduces additional layers that need to be taken into account for provisioning the container. A possible example could look like this:

oci-ctl register --container aws-cli --app /usr/bin/aws --base basesystem --layer basepython --layer ...

Acceptance Criteria

Multiple additional container layers can be used for provisioning a container instance

FireCracker: Implement flake-ctl firecracker pull

Description

As part of the effort to support the firecracker engine we need to add support for pulling firecracker-ready images onto the target system. This should be done via:

flake-ctl firecracker pull --rootfs-image ... --kernel-image ... [--initrd-image ...]
flake-ctl firecracker pull --kis-image ...

The data for a firecracker image should be organized below /var/lib/firecracker/images

Acceptance Criteria

Add support for NorthStar engine

Description

NorthStar is, like podman, a container engine. It uses a different format for the containers and also implements different mechanics for starting and managing containers. In order to support users who prefer NorthStar over podman we would like to add support for this engine.

Acceptance Criteria

  • Implementation of northstar-pilot exists and is able to start up a NorthStar compatible container, run an application inside, and work comparably to the existing podman engine implementation
  • Implementation of app registration exists as part of flake-ctl

Runtime Cleanup

The Problem

Unlike read-only Canonical Snaps (squashfs), Flakes have two modes:

  • Read-only
    Just like snaps. Data may be stored outside somewhere.

  • Read-write
    Data may be also stored outside, but the Flake's state itself can be modified independently.

When a Flake has finished its runtime, the logical choice is to always remove it. This prevents pollution by useless instance images. This is ideal for software which runs in a container but looks like native software. So far, so good: same behaviour as with the Snaps.

However, what should happen if a Flake is internally modified and the user wants that state to persist for a certain period of time? An example: a development environment for a different hardware architecture with a Git checkout inside, or a deployed system.

The Expectations

Read-write Flakes are the exception rather than the mainstream, because the data can still be stored outside with no harm to the flow and immutable is still better. However, some cases might require a read-write Flake, like a confined package build which ensures nothing wrong is linked in the process. Such a Flake can be used for further debugging of a failing package, for example.

The expectation is that a user who is using Flakes (not podman directly!) is automatically helped to clean up the mess left behind by dangling container instances.

Of course, manual cleanup is possible. But this path should not be required, because the main purpose of Flakes is to hide the OCI engine in the first place.

The Solution?

  1. Limit read-write use and make read-only the default behaviour
    Currently engine.yml is all "up to the user". That is OK as long as a packager knows what they are doing and is 100% sure they are not doing anything wrong by not adding --rm to the options (remove after quit). Probably we need engine wrappers which always add cleanup by default, unless it is explicitly turned off in the same engine.yml configuration, like:

# Explicitly say the Flake is mutable. If omitted, "immutable" by default.
type: mutable

# The usual runtime config
runtime:
  podman:
    ...

  2. Force mutable Flakes to be always reused
    Since making a mutable Flake and then cleaning it all up makes no sense, in case a Flake is mutable the --rm option should always be removed for the podman engine. If found in a config, the flake should quit with a mis-packaging error. Example behaviour of a Flake-delivered Python (mock-up):

$ python
Packaging error: Flake is mutable, but you ask to remove its instance on finish.
Either define it immutable or do not explicitly remove it.

  3. Allow instance cleanup (see the Open Issues, mainly)

Read-write instances bring the problem of "always at least one instance left" (for reuse). So the problem is not just a user launching one new read-write Flake 1000 times, but also a user launching 1000 different read-write Flakes at least once each.

A possible solution to prevent bloating the system is to force the user to look after their Flakes. For example, a config of the Pilot itself, e.g. in /etc/oci-pilot.conf, could define a quota limit, something like:

# Limit instances per user
instances-per-user: 100

# Purge dangling instances that were not accessed for more than 60 days
instances-timeout: 60 d

In case the user wants the 101st, oci-pilot should abort the flake and suggest cleaning up the instances (either with podman directly or in other ways). Practically none of the read-write Flakes would be launch-performance critical. Therefore, every time any read-write (!) Flake is launched, it should also check for outdated instances and pre-purge them first, and only then launch the target Flake. From time to time this will surely impact performance on a very messy system, but that's OK.

Read-only Flakes, which are the equivalents to Snaps, should not do this check, to save performance.
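
A minimal sketch of that quota handling, assuming the /etc/oci-pilot.conf keys proposed above; how instances are actually listed and timestamped is left abstract:

// Sketch only: enforce the proposed per-user instance quota before launching a
// read-write Flake. The Instance type is a placeholder for however instances
// end up being tracked.
struct PilotConfig {
    instances_per_user: usize,   // "instances-per-user" from /etc/oci-pilot.conf
    instances_timeout_days: u64, // "instances-timeout"
}

struct Instance {
    last_access_days: u64,
}

fn check_quota(config: &PilotConfig, instances: &mut Vec<Instance>) -> Result<(), String> {
    // Pre-purge outdated instances first, as described above
    instances.retain(|i| i.last_access_days <= config.instances_timeout_days);
    if instances.len() >= config.instances_per_user {
        return Err(format!(
            "instance quota of {} reached, please clean up old instances",
            config.instances_per_user
        ));
    }
    Ok(())
}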

The Open Issues

Handling multiple instances of one mutable Flake

This is sort of podman ps already... The use-case of this would be very limited. Possible solutions are:

  • Do not support this. 😉
  • Add flake option that will operate instances

In case of the flake option, each flake will have it wrapped before any other options, e.g.:

$ aarch64-dev-env flake --help
  new=[NAME]     Launch a new instance. Optionally specify a name.
  resume=<NAME>  Resume an existing instance explicitly to a precise name.
  args ....      Add arguments, those are native to an application

Calling aarch64-dev-env alone will just create one instance of a read-write Flake. Quitting it and calling it again will:

  • Resume a container
  • Attach back to its ID in background

In case the user wants another instance of this, then something like:

$ aarch64-dev-env flake new

However, this leaves the question of which flake one should re-attach to on the next run. One way is to specify the Podman name (or ID):

$ aarch64-dev-env
Multiple Flake instances detected. Which one to re-attach?
(1) Stinky_Sniffer
(2) Bombastic_Bummer

Enter a number: _

Or this way:

$ aarch64-dev-env flake resume=Stinky_Sniffer

This can be used in scripts, where new can have a name, like:

$ aarch64-dev-env flake new=Junky_Jack

...and then resume=Junky_Jack later in the script.

A limitation of this approach is that when explicitly using the flake argument, a more complex arguments interface is needed, where e.g. args introduces the arguments native to the application. I.e. python -c 'print("Hi")' would be done like:

$ python flake new=Pimpy_Punk args -c 'print("Hi")'

The above would call a read-writable Python Flake with the usual arguments in a new instance Pimpy_Punk, also visible under podman ps.

Provide --info flag for podman app registration

Description

If the container provides a default app registration, allow to read it with

flake-ctl podman register --container foo --info

This will look up foo.yaml inside of the foo container and allows providing a default registration for this container.
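
A hedged sketch of how the lookup could work with stock podman: create a stopped container from the image, copy the registration file out, and remove the container again. The in-container location of foo.yaml is an assumption:

// Sketch only: read a default app registration (<name>.yaml) out of a container
// image using `podman create` and `podman cp`. The path of the yaml file inside
// the container is an assumption.
use std::io;
use std::process::Command;

fn read_default_registration(container: &str) -> io::Result<String> {
    // Create a (non-running) container so files can be copied out of it
    let create = Command::new("podman").args(["create", container]).output()?;
    let cid = String::from_utf8_lossy(&create.stdout).trim().to_string();
    // Copy the assumed registration file out of the container
    let src = format!("{}:/{}.yaml", cid, container);
    let dst = format!("/tmp/{}.yaml", container);
    let copied = Command::new("podman").args(["cp", &src, &dst]).status()?.success();
    // Always clean up the temporary container again
    Command::new("podman").args(["rm", &cid]).status()?;
    if !copied {
        return Err(io::Error::new(io::ErrorKind::NotFound, "no default registration found"));
    }
    Ok(dst)
}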

increase Max retries for VM connection

Description

On slower hardware the number of max retries in firecracker resume mode is not enough and should be increased.

Acceptance Criteria

resume instance testing on low-performance machines (e.g. a Raspberry Pi) succeeds

Accumulate all removed file data

Description

Currently only the /removed file information from the last container (the app container) in the chain is taken into account.
But it is required to take into account all /removed files that may exist in any of the containers used to provision the instance.

Acceptance Criteria

  • no /removed gets ignored
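
A minimal sketch of the accumulation, assuming each provisioning step exposes the /removed file of its container somewhere on the host; the paths are placeholders:

// Sketch only: collect the entries of every /removed file from all containers
// that take part in provisioning, instead of only the last one.
use std::collections::BTreeSet;
use std::fs;
use std::io;
use std::path::Path;

fn accumulate_removed(removed_files: &[&Path]) -> io::Result<BTreeSet<String>> {
    let mut removed = BTreeSet::new();
    for file in removed_files {
        if !file.exists() {
            // A container without a /removed file simply contributes nothing
            continue;
        }
        for line in fs::read_to_string(file)?.lines() {
            if !line.trim().is_empty() {
                removed.insert(line.trim().to_string());
            }
        }
    }
    Ok(removed)
}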

Replace yaml-rust with serde_yaml

Right now the config in both pilots is being parsed into a generic Yaml node that is evaluated on the fly. This may cause destructive actions to be performed before terminating the program due to a faulty config.

By deserializing the config into a fixed struct, the pilot can quit on startup if the config is invalid. This will also prevent errors due to typos and make the config more discoverable for new contributors.
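
A minimal sketch of the serde based approach; the struct only mirrors a few of the flake keys shown elsewhere in this issue list and is not the complete schema:

// Sketch only: deserialize the flake config into a fixed struct so an invalid
// file is rejected at startup, before anything destructive happens.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct FlakeConfig {
    container: ContainerSection,
}

#[derive(Debug, Deserialize)]
struct ContainerSection {
    name: Option<String>,
    runtime: Option<RuntimeSection>,
}

#[derive(Debug, Deserialize)]
struct RuntimeSection {
    runas: Option<String>,
    resume: Option<bool>,
    attach: Option<bool>,
    podman: Option<Vec<String>>,
}

fn load_config(yaml: &str) -> Result<FlakeConfig, serde_yaml::Error> {
    serde_yaml::from_str(yaml)
}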

Refactor packaging

Description

At the moment one package provides all pilots and the ctrl tool. This should be changed to have dedicated pilot packages (flake-pilot-podman, flake-pilot-firecracker) and a base package flake-pilot

Acceptance Criteria

  • pilots can be installed individually

Refactor writing of flake yaml file

Description

The implementation of the yaml writer in oci-ctl has room for improvement. The initial version done by myself could be implemented way more elegantly. With regard to more elements/sub-elements being written according to the information provided at app registration time, the yaml writer code should be refactored.

Acceptance Criteria

  • yaml writer is based on an instance of rust::Yaml
  • manual writing of yaml format data will be deleted

Naming matters 😉

As we add more and more engines, we have to finalise the structure.
Since this is "Flakes", let's regroup to two categories:

  1. Packaging helpers
  2. Running pilots for different runtimes

That is, rename all helpers oci-ctl/-reg/-fufu into flake-ctl/-reg/-fufu,
and rename pilots into <engine>-pilot that would "hardlink" itself to a runtime.

For example, oci-pilot is implemented to run Podman, so it would make sense
to have it actually as podman-pilot, explicitly showing what engine is behind it.
In the future, it would be northstar-pilot, firecracker-pilot and so on.

Names would be longer, but:

  • these are not meant to be typed by users anyway, so they do not have to be convenient to type (developers can stand that 😉)
  • as Flakes scale and grow, it will be much easier to quickly spot what you are dealing with just by looking at a random Flake of the day, without explicitly navigating to its YAML config.

FireCracker: Add support to fetch VM as components

Description

As of today one can call

flake-ctl firecracker pull --kis-image URI

But a kis image is a kiwi specific image type that provides a tarball with initrd, kernel and rootfs. It would be good to also allow:

flake-ctl firecracker pull --rootfs URI --kernel URI [--initrd URI]

This offers additional flexibility and we could also fetch and test the kernels provided by firecracker.

Acceptance Criteria

  • flake-ctl firecracker pull can fetch VM components

Prepare CLI for supporting multiple engines

Description

At the moment the podman engine is the only runtime we support. Assuming we want to support more engines, e.g. firecracker, in the future, the commandline interface needs to be extended. One example:

oci-ctl register --container joe --app /usr/bin/joe --base basesystem

Should also have an engine specific caller variant:

oci-ctl podman register --container joe --app /usr/bin/joe --base basesystem

The existing caller semantics must not change for compatibility reasons and continue to default to the podman engine.

Acceptance Criteria

  • support for new engines with different caller parameters can be added in the future
  • existing caller semantic does not change and defaults to podman

Add spinner to newly registered deltas

On the first run, the delta app is always auto-provisioned. That may often cause an unexpected delay and panic the user into vigorously hitting ^C^C^C^C..., which will mess up everything.

Solution would be the following:

  1. Guard the ^C keyboard interrupt and spit out a message like "Please wait to finish!" (or similar) when the user hits ^C
  2. Add a generic message for the first time while provisioning, something like "Setting up $APPNAME... [/]" (with a -|/ spinner), as sketched below
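
A hedged sketch of both points, using the ctrlc crate for the interrupt guard and a background thread for the spinner; the texts and timings are placeholders:

// Sketch only: ignore ^C with a friendly message while the first-time
// provisioning runs, and show a simple -\|/ spinner in the meantime.
use std::io::{self, Write};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn provision_with_spinner(app_name: &str, provision: impl FnOnce()) {
    // Guard keyboard interrupts instead of letting ^C abort the provisioning
    ctrlc::set_handler(|| {
        eprintln!("Please wait, provisioning needs to finish!");
    })
    .expect("cannot install ^C handler");

    let done = Arc::new(AtomicBool::new(false));
    let done_spinner = Arc::clone(&done);
    let name = app_name.to_string();
    let spinner = thread::spawn(move || {
        let frames = ['-', '\\', '|', '/'];
        let mut i = 0;
        while !done_spinner.load(Ordering::Relaxed) {
            print!("\rSetting up {}... [{}]", name, frames[i % frames.len()]);
            io::stdout().flush().ok();
            i += 1;
            thread::sleep(Duration::from_millis(150));
        }
        println!("\rSetting up {}... done", name);
    });

    provision();
    done.store(true, Ordering::Relaxed);
    spinner.join().ok();
}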

Add support for include section in flake file

Description

Allow to include files as part of the container provisioning, e.g.

include:
  tar:
    - tar-archive-to-include
  file:
    - file/to/include
  ...and_so_on

Acceptance Criteria

  • An include statement in the flake file gets evaluated by oci-pilot
  • At least one include format is implemented, starting with tar
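
A minimal sketch of evaluating such an include section, starting with the tar format; the serde structure and the instance root path handling are assumptions:

// Sketch only: evaluate an `include:` section and unpack the listed tar
// archives into the provisioned instance root. Only the tar variant is shown.
use serde::Deserialize;
use std::io;
use std::path::Path;
use std::process::Command;

#[derive(Debug, Deserialize, Default)]
struct IncludeSection {
    #[serde(default)]
    tar: Vec<String>,
    #[serde(default)]
    file: Vec<String>,
}

fn apply_includes(include: &IncludeSection, instance_root: &Path) -> io::Result<()> {
    for archive in &include.tar {
        // Unpack each archive into the instance root of the container
        let status = Command::new("tar")
            .arg("-C")
            .arg(instance_root)
            .arg("-xf")
            .arg(archive)
            .status()?;
        if !status.success() {
            return Err(io::Error::new(
                io::ErrorKind::Other,
                format!("failed to unpack {}", archive),
            ));
        }
    }
    // The file: variant would be a plain copy into the instance root (not shown)
    let _ = &include.file;
    Ok(())
}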

FireCracker: Add support for provisioning delta / includes and layers

Description

For the podman engine, podman-pilot supports provisioning of a container with data from a delta container, arbitrary include files, or additional layer containers. The same concept can be applied to the VM based firecracker engine. The provisioning will require overlay_size to be configured. If this is the case, all provision data can be synced into the overlay, which would implement for VMs the same concept that we use for containers.

I also suggest allowing containers to be used for provisioning the VM. This has the advantage that the same delta or layer containers can be used for running a container or a VM.

Acceptance Criteria

  • provision code for include data is implemented to firecracker-pilot
  • provision code for delta and layer OCI containers into the VM overlay is implemented to firecracker-pilot
  • flake-ctl firecracker register allows to specify --include-tar
  • flake-ctl firecracker register allows to specify --base-container
  • flake-ctl firecracker register allows to specify --layer-container

add --force option for registration

A registration attempt of the form

oci-ctl register --app  /usr/bin/python --container foo

will fail with an error message if /usr/bin/python already exists on the system. This is a good behavior but we should allow an option that force deletes the existing file and allows registration of a containerized version

FireCracker: Implement guestvm tool to run commands through a vsock

Description

Along with the firecracker support, and to allow --resume in the flake-ctl firecracker register ... registration, we need a service running inside of the VM which accepts commands from the host and calls them inside of the VM. We named this command sci (service command execution).

Acceptance Criteria

  • sci support for reading and executing commands through vsock is implemented
  • flake-ctl firecracker register supports the --resume option
  • firecracker-pilot supports command handling through a vsock connection
