
kubernetes / kubernetes


Production-Grade Container Scheduling and Management

Home Page: https://kubernetes.io

License: Apache License 2.0

Languages: Go 96.97%, Shell 2.63%, PowerShell 0.20%, Makefile 0.09%, Dockerfile 0.07%, Python 0.03%, C 0.01%, HTML 0.01%, sed 0.01%, Batchfile 0.01%
Topics: kubernetes, go, cncf, containers

kubernetes's Introduction

Kubernetes (K8s)

Kubernetes, also known as K8s, is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for the deployment, maintenance, and scaling of applications.

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If your company wants to help shape the evolution of technologies that are container-packaged, dynamically scheduled, and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.


To start using K8s

See our documentation on kubernetes.io.

Take a free course on Scalable Microservices with Kubernetes.

To use Kubernetes code as a library in other applications, see the list of published components. Use of the k8s.io/kubernetes module or k8s.io/kubernetes/... packages as libraries is not supported.

To start developing K8s

The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.

If you want to build Kubernetes right away, there are two options:

You have a working Go environment.

    git clone https://github.com/kubernetes/kubernetes
    cd kubernetes
    make

You have a working Docker environment.

    git clone https://github.com/kubernetes/kubernetes
    cd kubernetes
    make quick-release

For the full story, head over to the developer's documentation.

Support

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

That said, if you have questions, reach out to us one way or another.

Community Meetings

The Calendar has the list of all the meetings in the Kubernetes community in a single location.

Adopters

The User Case Studies website has real-world use cases of organizations across industries that are deploying/migrating to Kubernetes.

Governance

The Kubernetes project is governed by a framework of principles, values, policies, and processes to help our community and constituents towards our shared goals.

The Kubernetes Community is the launching point for learning about how we organize ourselves.

The Kubernetes Steering community repo is used by the Kubernetes Steering Committee, which oversees governance of the Kubernetes project.

Roadmap

The Kubernetes Enhancements repo provides information about Kubernetes releases, as well as feature tracking and backlogs.

kubernetes's People

Contributors

a-robinson, bgrant0607, brendandburns, dchen1107, deads2k, derekwaynecarr, dims, erictune, feiskyer, gmarek, ixdy, j3ffml, jbeda, jsafrane, justinsb, k8s-ci-robot, lavalamp, liggitt, mikedanese, nikhiljindal, pohly, saad-ali, sataqiu, smarterclayton, sttts, thockin, vmarmol, wojtek-t, yujuhong, zmerlynn


kubernetes's Issues

Integration test flaky in Travis

We've seen a bunch of failures that look like this:

2014/06/14 16:55:36 Creating etcd client pointing to [http://localhost:4001]
2014/06/14 16:55:36 API Server started on http://127.0.0.1:48122
2014/06/14 16:55:36 Synchronization error &etcd.EtcdError{ErrorCode:100, Message:"Key not found", Cause:"/registry", Index:0x2}
2014/06/14 16:55:46 POST /api/v1beta1/replicationControllers
2014/06/14 16:55:46 Synchronization error &etcd.EtcdError{ErrorCode:100, Message:"Key not found", Cause:"/registry", Index:0x2}
2014/06/14 16:55:56 Synchronization error &etcd.EtcdError{ErrorCode:100, Message:"Key not found", Cause:"/registry", Index:0x2}
2014/06/14 16:55:58 GET /api/v1beta1/pods
2014/06/14 16:55:58 FAILED

I'm not sure, but I think it may only happen with the latest Go and not the released Go 1.2.

"Running a container (simple version)" -- spin up, then ???, stop/delete

Apologies for the noob question; I'll ask it humbly, for others like me who are keen to try Kubernetes and Docker as a way of learning more about them. The instructions were clear and worked without problems until the "Running a container (simple version)" section: I spun up the containers but was then unsure how to access the Nginx web server. I understand it may be on port 8080, but how do I determine the IP?

"cloudcfg create" prints extraneous log output

While running through the guestbook sample, I noticed that this command prints log output to the console in addition to the command output.

$ cluster/cloudcfg.sh -c examples/guestbook/redis-master-service.json create /services
2014/06/15 15:48:01 Parsed config file successfully; sending:
{"id":"redismaster","port":10000,"labels":{"name":"redis-master"}} 
Name                Label Query         Port
----------          ----------          ----------
redismaster         name=redis-master   10000

Kubelet should accept list of pods/manifests everywhere

The Kubelet will launch and manage a set of pods. The pod is defined in a container manifest. When the Kubelet reads from etcd, it gets a list of manifests/pods to run.

However, when configuring via other methods (http reach out, http push in, read from filesystem) it will only accept a single manifest. It should accept a list.

Options/ideas:

  • Identify a new type called manifest set that can be submitted via these methods. This would be a simple list of manifests.
  • Make the manifest itself evolve (v1beta2?) so that it lists a set of pods.
  • When reading from a 'file', instead allow the kubelet to monitor a directory (/etc/kubelet.d) and merge all of the files defined there together (see the sketch below).
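
A minimal sketch of that directory option, assuming a hypothetical manifestsFromDir helper inside the kubelet; parsing and merging of the returned manifests would happen elsewhere:

    package kubelet // hypothetical placement; a sketch, not the planned implementation

    import (
        "os"
        "path/filepath"
    )

    // manifestsFromDir reads every regular file in dir (e.g. /etc/kubelet.d)
    // and returns their raw contents, which the kubelet would then parse and
    // merge into a single list of pods to run.
    func manifestsFromDir(dir string) ([][]byte, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var manifests [][]byte
        for _, entry := range entries {
            if entry.IsDir() {
                continue
            }
            data, err := os.ReadFile(filepath.Join(dir, entry.Name()))
            if err != nil {
                return nil, err
            }
            manifests = append(manifests, data)
        }
        return manifests, nil
    }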

Run locally

It would be great to have an easy way to run a kubernetes master/minion locally. Currently, it's not that easy to set up. We should have a convenient script to make it work.

Provide restart operation in Kubernetes API

Right now to restart a container, the user has to query to find the host and then ssh sudo docker restart. Providing a restart operation would be convenient and would eliminate this need for ssh/sudo.

I suggest making it idempotent to prevent multiple disruptive, unclean restarts. This can be done by passing a timestamp and ensuring that the container started more recently than the timestamp, for any reason.
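
A minimal sketch of that timestamp check (the function and its arguments are illustrative, not the proposed API):

    package kubelet // hypothetical placement; illustrative only

    import "time"

    // restartIfOlder makes the restart operation idempotent: the container is
    // only restarted if it has not already been (re)started, for any reason,
    // since the client-supplied timestamp.
    func restartIfOlder(startedAt, requestedAt time.Time, restart func() error) error {
        if startedAt.After(requestedAt) {
            // Already started more recently than the request; treat as done.
            return nil
        }
        return restart()
    }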

We should support the grace period, too.

Once we have a way to report termination reasons, this operation should accept a client-provided reason, since it is often used as a building block for higher-level orchestration:
#137

IP affinity is per-container rather than per-pod

Per the Kubernetes design, IPs should be shared between containers within a pod. However, to accomplish this, Docker must be instructed to use container networking when starting the containers (reference). This configuration appears to be missing from the kubelet code:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubelet/kubelet.go#L307

As a result, it would appear that IPs will be assigned per container, based on the external bridge specified in the Docker daemon arguments.

Need real cluster integration tests

We should have something that:

  • Brings up a cluster
  • Makes sure that it works by starting containers, deleting them, etc.
  • Makes sure networking works
  • On error, collects a bunch of logs from all over the place
  • Shuts down the cluster

We should then be able to run this continuously overnight to make sure that cluster bring up/down is reliable.

kubelet should know which containers it is managing

Right now the kubelet will assume that it is in charge of all containers on a machine. That means you can't mix/match kubelet managed containers and non-kubelet managed containers.

Instead, we should have kubelet mark those containers it is managing (perhaps either storing them locally or putting more stuff into the container name or, eventually, using docker supported container labels) and only kill/curate those.

@lavalamp says:

A simple hack to accomplish this is only pay attention to containers for which strings.Split(container_name, "--")[2] parses as an int (w/ appropriate error checking, of course). The names we generate are sufficiently obscure that this should be good enough for the moment.
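
That hack, roughly, in Go (a sketch under the naming scheme described above, not the actual kubelet code):

    package kubelet // hypothetical placement

    import (
        "strconv"
        "strings"
    )

    // isKubeletManaged reports whether a Docker container name looks like one
    // the kubelet generated: the third "--"-separated field must parse as an int.
    func isKubeletManaged(containerName string) bool {
        parts := strings.Split(containerName, "--")
        if len(parts) < 3 {
            return false
        }
        _, err := strconv.Atoi(parts[2])
        return err == nil
    }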

GC old dead containers and unused images

As far as I can tell, we never call DELETE/RemoveContainer nor RemoveImage.

We should keep containers around for some amount of time for inspection, copying files out, etc., but should eventually clean them up.

We could treat images as a cache, evicting on LRU when we reach some predetermined space limit -- ideally before the disk just fills up.
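
One way the LRU eviction could look, as a hedged sketch with made-up types (a real GC would also need to skip images still in use):

    package kubelet // hypothetical sketch

    import "sort"

    type imageRecord struct {
        ID        string
        SizeBytes int64
        LastUsed  int64 // unix seconds; when a container last referenced the image
    }

    // imagesToEvict returns the least-recently-used image IDs to remove so that
    // the total image footprint drops back under limitBytes.
    func imagesToEvict(images []imageRecord, limitBytes int64) []string {
        var total int64
        for _, img := range images {
            total += img.SizeBytes
        }
        sort.Slice(images, func(i, j int) bool {
            return images[i].LastUsed < images[j].LastUsed
        })
        var evict []string
        for _, img := range images {
            if total <= limitBytes {
                break
            }
            evict = append(evict, img.ID)
            total -= img.SizeBytes
        }
        return evict
    }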

DNS

Provide DNS resolution for pod addresses.

Add generic documentation

Hi,

I'm happy to contribute the missing documentation, but I've now spent quite some time trying to get Kubernetes working on a plain Docker host, with no success so far.
I've read the design doc, shell scripts, and Salt states, and this is how I deployed it:

The kubelet log shows (I've created an empty kubelet.conf to stop it from throwing errors; not sure whether that's necessary):

2014/06/17 13:01:12 Desired:[]api.ContainerManifest{}
2014/06/17 13:01:12 Existing:
[]string{} Desired: map[string]bool{}

apiserver prints nothing at all.

controller-manager logs:

etcd 2014/06/17 13:13:17 DEBUG: get [/registry/controllers http://10.0.1.115:4001] [%!s(MISSING)]
etcd 2014/06/17 13:13:17 DEBUG: [Connecting to etcd: attempt 1 for keys/registry/controllers?consistent=true&recursive=false&sorted=false]
etcd 2014/06/17 13:13:17 DEBUG: [send.request.to  http://10.0.1.115:4001/v2/keys/registry/controllers?consistent=true&recursive=false&sorted=false  | method  GET]
etcd 2014/06/17 13:13:17 DEBUG: watch [/registry/controllers http://10.0.1.115:4001] [%!s(MISSING)]
etcd 2014/06/17 13:13:17 DEBUG: get [/registry/controllers http://10.0.1.115:4001] [%!s(MISSING)]
etcd 2014/06/17 13:13:17 DEBUG: [Connecting to etcd: attempt 1 for keys/registry/controllers?consistent=true&recursive=true&wait=true]
etcd 2014/06/17 13:13:17 DEBUG: [send.request.to  http://10.0.1.115:4001/v2/keys/registry/controllers?consistent=true&recursive=true&wait=true  | method  GET]
etcd 2014/06/17 13:13:17 DEBUG: [recv.response.from http://10.0.1.115:4001/v2/keys/registry/controllers?consistent=true&recursive=false&sorted=false]
etcd 2014/06/17 13:13:17 DEBUG: [recv.success. http://10.0.1.115:4001/v2/keys/registry/controllers?consistent=true&recursive=false&sorted=false]
2014/06/17 13:13:17 Synchronization error &etcd.EtcdError{ErrorCode:100, Message:"Key not found", Cause:"/registry", Index:0x2}

Now running cloudcfg fails:

./cmd/cloudcfg/cloudcfg -h http://localhost:10250 -p 8080:80 run dockerfile/nginx 2 myNginx
2014/06/17 15:16:03 Error: &errors.errorString{s:"request [POST http://localhost:10250/api/v1beta1/replicationControllers] failed (404) 404 Not Found"}

If I try to create a pod I get:

./cmd/cloudcfg/cloudcfg -h http://localhost:10250 -p 8080:80 -c api/examples/pod.json create /pods
2014/06/17 15:17:14 Failed to print: &json.SyntaxError{msg:"invalid character 'N' looking for beginning of value", Offset:1}

Configurable restart behavior

Right now we assume that all containers run forever. We should support configurable restart behavior for the following modes:

  1. run forever (e.g., for services)
  2. run until successful termination (e.g., for batch workloads)
  3. run once (e.g., for tests)

The main tricky issues are:

  1. what to do for multi-container pods
  2. what to do for replicationController

We should also think about how to facilitate implementation of custom policies outside the system. See also:
googlearchive/container-agent#9
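
A rough sketch of how the three modes above might be expressed as a per-pod policy; the names and shape are illustrative, not a settled API:

    package api // hypothetical sketch

    type RestartPolicy string

    const (
        RestartAlways    RestartPolicy = "Always"    // 1. run forever (services)
        RestartOnFailure RestartPolicy = "OnFailure" // 2. run until successful termination (batch)
        RestartNever     RestartPolicy = "Never"     // 3. run once (tests)
    )

    // shouldRestart decides whether a terminated container is started again.
    func shouldRestart(policy RestartPolicy, exitCode int) bool {
        switch policy {
        case RestartAlways:
            return true
        case RestartOnFailure:
            return exitCode != 0
        default:
            return false
        }
    }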

Remove need for the apiserver to contact kubelet for current container state

While the kubelet certainly is the source of truth for what is running on a particular host, it might be nice to have it push that info to the apiserver on a regular basis (and on state changes) rather than force the apiserver to ask.

Reasons:

  • In some deployment scenarios, the apiserver might not have direct contact to individual kubelets.
  • It'd give the apiserver (and its clients) access to a reasonably current state of the world without needing to poll each kubelet. This would be handy for improved replication/placement/auto-scaling algorithms.
  • The update could include other info like statistics for both the host and containers
  • Open the door for auto-registering hosts

The kubelet could publish this to etcd, but I think it'd be good to aim for fewer dependencies on etcd rather than more. Thoughts on that?
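
For illustration, a very rough sketch of the push side; the endpoint path and payload are made up, and a real design would need authentication and error handling:

    package kubelet // hypothetical sketch

    import (
        "bytes"
        "encoding/json"
        "net/http"
        "time"
    )

    type hostStatus struct {
        Hostname   string   `json:"hostname"`
        Containers []string `json:"containers"`
    }

    // pushStatus periodically reports what this kubelet is running directly to
    // the apiserver, instead of waiting for the apiserver to poll each kubelet.
    func pushStatus(apiserver, hostname string, running func() []string) {
        for range time.Tick(10 * time.Second) {
            body, err := json.Marshal(hostStatus{Hostname: hostname, Containers: running()})
            if err != nil {
                continue
            }
            // "/api/v1beta1/hostStatus" is an invented endpoint, for illustration only.
            resp, err := http.Post(apiserver+"/api/v1beta1/hostStatus", "application/json", bytes.NewReader(body))
            if err != nil {
                continue
            }
            resp.Body.Close()
        }
    }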

Container and pod resource limits

Before we implement QoS tiers (#147), we need to support basic resource limits for containers and pods. All resource values should be integers.

For inspiration, see lmctfy:
https://github.com/google/lmctfy/blob/master/include/lmctfy.proto

Arguably we should start with pods first, to at least provide isolation between pods. However, that would require the ability to start Docker containers within cgroups. The support we need for individual containers already exists.

We should allow both minimum and maximum resource values to be provided, as lmctfy does. But let's not reuse lmctfy's limit and max_limit terminology. I like "requested" (amount scheduler will use for placement) and "limit" (hard limit beyond which the pod/container is throttled or killed).

Even without limit enforcement, the scheduler could use resource information for placement decisions.
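
As a sketch of the "requested"/"limit" split (field names and units are assumptions, not the eventual API):

    package api // hypothetical sketch; integer values only, as proposed

    type Resources struct {
        CPURequestedMillis   int64 // amount the scheduler uses for placement
        CPULimitMillis       int64 // hard limit; throttle beyond this
        MemoryRequestedBytes int64
        MemoryLimitBytes     int64 // hard limit; kill beyond this
    }

    // fits is the kind of placement check a scheduler could already do using
    // only the requested values, even before limits are enforced.
    func fits(req Resources, freeCPUMillis, freeMemoryBytes int64) bool {
        return req.CPURequestedMillis <= freeCPUMillis &&
            req.MemoryRequestedBytes <= freeMemoryBytes
    }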

Support multiple different types of volumes

Currently, the only type of volume is a directory on the host machine. We'd like to support different types of volumes. For example, mounting a PD into a container. Or a git repo at a specific revision.

The first step is to add a 'type' to the volume API, and then to add support to the kubelet too.
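
Adding the 'type' could look roughly like this (a sketch; field names are illustrative):

    package api // hypothetical sketch of adding a 'type' to the volume API

    type Volume struct {
        Name string
        // Type selects the volume implementation. Only a host directory exists
        // today; "persistentDisk" or "gitRepo" are examples of future types.
        Type string
        // Type-specific source fields; only the host-directory case is shown.
        HostPath string
    }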

More comprehensive reporting of termination reasons

Docker reports State.Running, State.Paused, State.ExitCode (negative for signals IIRC), and State.FinishedAt. It would be useful to collect even more termination reason information, such as:

  • stop/restart operations, with client-provided reason, such as "cancel", "reload", "resize", "move", "host_update"
  • liveness probe failures (see #66)
  • other identifiable failure reasons: OOM, container creation error, docker crash, machine crash, reboot, lost/ghost

Minion instances should automatically register with GCE load balancer

It would be nice to set up a replicated set of nginx servers and have the minion nodes that are running nginx containers automatically add themselves to a load balancer traffic pool.

Traffic would flow through the GCE load balancer to the minion, through kube-proxy, and finally to the docker container.

I'm of the opinion that it should be up to the developer to manage the creation of the load balancer instances, or at least provide a static IP in the service definition if an ephemeral IP is not acceptable on the load balancer that is created.

Create a single kubernetes binary for all cmds

As we start building a bunch of binaries, the size of our output goes up. Because of Go's static linking, we end up with 8-10MB for each binary. Currently we have 7 of these, for a total of 58M (find output/go -type f -perm +111 | xargs du -ch).

If we created a single binary with a switch for which functionality the user wants, it would adapt as necessary and probably be ~10MB.

I'd imagine we'd call this thing kube, and by default it'd be the client binary. If you want to run a server component, you would do something like kube --daemon=api-server.

Thoughts?
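
A busybox-style dispatch on the proposed --daemon flag might look like this (the component names and stubs are placeholders):

    package main // hypothetical sketch of a single "kube" binary

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        daemon := flag.String("daemon", "", "server component to run (api-server, kubelet, proxy); empty means client mode")
        flag.Parse()

        switch *daemon {
        case "":
            runClient(flag.Args())
        case "api-server":
            runAPIServer()
        case "kubelet":
            runKubelet()
        case "proxy":
            runProxy()
        default:
            fmt.Fprintf(os.Stderr, "unknown daemon %q\n", *daemon)
            os.Exit(1)
        }
    }

    // Placeholder stubs standing in for the real components.
    func runClient(args []string) { fmt.Println("client mode:", args) }
    func runAPIServer()           { fmt.Println("api-server") }
    func runKubelet()             { fmt.Println("kubelet") }
    func runProxy()               { fmt.Println("proxy") }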

readability: package comment

Package comments should start with "Package <package_name> ...".

Some packages have incomplete package comments, e.g. pkg/cloudcfg/cloudcfg.go.

Also, don't let the copyright comment double as the package comment, e.g. pkg/client/container_info.go. The expected file layout is:

copyright

[package comment]
package statement

Some files also seem to be missing the copyright header, e.g. pkg/cloudpkg/resource_printer.go.
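
For example (a hypothetical file; the package description is illustrative):

    // Copyright 2014 ...
    // (license header; the blank line below keeps it from being picked up as
    // the package comment)

    // Package cloudcfg implements the command-line client for the Kubernetes
    // API server. Note that the comment starts with "Package cloudcfg ...".
    package cloudcfg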

build-go.sh fails

hack/build-go.sh fails with this trace:

+++ Building proxy
# github.com/coreos/go-etcd/etcd
../../third_party/src/github.com/coreos/go-etcd/etcd/client.go:174: method c.dial is not an expression, must be called
../../third_party/src/github.com/coreos/go-etcd/etcd/client.go:200: method c.dial is not an expression, must be called
../../third_party/src/github.com/coreos/go-etcd/etcd/client.go:352: tcpConn.SetKeepAlivePeriod undefined (type *net.TCPConn has no field or method SetKeepAlivePeriod)
../../third_party/src/github.com/coreos/go-etcd/etcd/requests.go:166: c.httpClient.Transport.(*http.Transport).CancelRequest undefined (type *http.Transport has no field or method CancelRequest)

Please help.

Publish Dockerfile for example images

Please publish the Dockerfiles and source for the Docker images used in kubernetes/examples/guestbook.

Specifically:

  • brendanburns/php-redis
  • brendanburns/redis-slave

Pretty up cluster scripts

Right now the cluster scripts just call 'gcloud compute' and pass on the yaml output of those commands verbatim. This is really verbose and ugly.

Instead, we should probably do something like saving that output to /tmp and deleting it if there are no errors. We want the output of those operations around in case there are errors, but there is no need to spam the console.

Idempotent creation

Provide a mechanism to ensure idempotent creation of all resources and/or update kubernetes.raml if it is already possible.

Integration with cAdvisor

cAdvisor already has some basic statistical information about the containers running on a machine. We currently hold the most recent stats in memory, but we have a framework to dump stats into some backend storage. google/cadvisor#39 has a discussion about supporting InfluxDB, and I think @erikh is working on that now.

I think the information collected by cAdvisor may be useful for Kubernetes. I'm currently considering adding some code to the kubelet so that it can pull information from cAdvisor periodically.

Before getting started, I would like to discuss the approach we should take and other issues.

Currently, cAdvisor collects stats and stores them in memory. It will only remember recent stats and provide resource usage percentiles (currently only CPU and memory). I think the resource usage percentiles would be useful for the Kubernetes master to do scheduling.

There are two possible ways to retrieve such information from cAdvisor:

Solution 1: the kubelet pulls information from cAdvisor through its REST API and exposes another REST API for the master. The master will periodically check containers' stats through the kubelet's REST API. In this case, all information is sent over REST. Currently, the kubelet communicates with the master only through etcd, so this approach will add one more communication channel between the kubelet and the master.

Solution 2: the percentile information (e.g. the 90th-percentile memory usage of a container) is not too big and may be small enough to fit into etcd. We could let cAdvisor update containers' information in etcd and let the master retrieve it whenever it wants, or we could let the kubelet pull such information from cAdvisor and update the corresponding etcd key. In both cases, the communication goes through etcd. The disadvantage of this approach is that the information the master needs may grow into a message too big to put into etcd.

I would like to see other approaches or discussions of proposed ones.
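
A very rough sketch of the kubelet side of Solution 1; the cAdvisor address, endpoint path, and JSON shape here are assumptions for illustration only:

    package kubelet // hypothetical sketch

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type containerStats struct {
        CPUPercentiles    map[string]float64 `json:"cpu_percentiles"`
        MemoryPercentiles map[string]float64 `json:"memory_percentiles"`
    }

    // statsFor pulls recent usage percentiles for one container from a local
    // cAdvisor, so the kubelet can re-expose them to the master over its own
    // REST API rather than going through etcd.
    func statsFor(containerID string) (*containerStats, error) {
        // Both the port and the path are placeholders, not cAdvisor's real API.
        resp, err := http.Get(fmt.Sprintf("http://localhost:8080/api/containers/%s", containerID))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var stats containerStats
        if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
            return nil, err
        }
        return &stats, nil
    }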

Separate the pod template from replicationController

We should separate the pod template from replicationController, to make it possible to create pods from template without replicationController (e.g., for cron jobs, for deferred execution using hooks). This would also make updates cleaner.

Port forwarding should be through iptables

Since we've disabled Docker's management of iptables, we no longer get it setting up explicit port-forwarding rules. Instead we are falling back on the user-mode proxying built into Docker to do this port forwarding.

We should either build our networking mode into Docker proper or have the kubelet set up and maintain the iptables forwarding rules we need.
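
If the kubelet maintained the rules itself, the simplest form is a DNAT rule per mapped port, shelled out to iptables (a sketch, not the planned implementation):

    package kubelet // hypothetical sketch

    import (
        "fmt"
        "os/exec"
    )

    // addPortForward installs a DNAT rule so traffic arriving on hostPort is
    // forwarded straight to the container's IP and port, replacing Docker's
    // userland proxy for that mapping.
    func addPortForward(hostPort int, containerIP string, containerPort int) error {
        return exec.Command("iptables",
            "-t", "nat", "-A", "PREROUTING",
            "-p", "tcp", "--dport", fmt.Sprint(hostPort),
            "-j", "DNAT", "--to-destination",
            fmt.Sprintf("%s:%d", containerIP, containerPort),
        ).Run()
    }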

Tools/Master should show true external IP

When you start a container and map a port, figuring out which public IP that container/port is exposed on is non-trivial. We should smooth that experience. When we happen to be running on GCE, we should know how to map the internal IP of the host VM to its external IP.

Related to #122.

PreStart and PostStop event hooks

Many systems support event hooks for extensions. A few examples:
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/LifecycleManager
http://developer.android.com/guide/components/activities.html
http://upstart.ubuntu.com/cookbook/#event
https://coreos.com/docs/launching-containers/launching/getting-started-with-systemd/
http://elasticbox.com/documentation/configuring-and-managing-boxes/start-stop-and-upgrade-boxes/
http://git-scm.com/docs/githooks.html

docker stop and restart currently send SIGTERM followed by SIGKILL, similar to many other systems (e.g., Heroku: https://devcenter.heroku.com/articles/dynos#graceful-shutdown-with-sigterm), which provides an opportunity for applications to cleanly shut down, but lacks the ability to communicate the grace period duration or termination reason and doesn't directly provide support for notifying other processes or services.

As described in the [liveness probe issue](https://github.com//issues/66), it would be useful to support multiple types of hook execution/notification mechanisms. It would also be useful to pass arguments from clients, such as "reason" (e.g., "cancel", "restart", "reload", "resize", "reboot", "move", "host_update", "probe_failure"). Another way "reason" could be handled is with user-defined events.

In addition to pre-termination notification, we should define other lifecycle hook points, probably at least pre- and post- start and terminate.

It would be useful for post-terminate to be passed the [termination reason](https://github.com//issues/137), which could either be successful completion, a client-provided stop reason (see above), or a detailed failure reason (exit, signal, OOM, container creation error, docker crash, machine crash, lost/ghost).

If the application generated an assertion failure or exception message, a post-termination hook could copy it to [/run/status.txt](https://github.com//issues/139).

It would also be useful to be able to control [restart behavior](https://github.com//issues/127) from a hook. We'd need a convenient way to carry over state from a previous execution. The simplest starting point would be for the user to keep it in a [volume](https://github.com//issues/97).

Use Docker to host most server components

Right now we use salt to distribute and start most of the server components for the cluster.

Instead, we should do the following:

  • Only support building on a Linux machine with Docker installed. Perhaps support local development on a Mac with a local Linux VM.
  • Package each server component up as a Docker image, built with a Dockerfile
    • Support uploading these Docker images to either the public index or a GCS backed index with google/docker-registry. Or GCR?
    • Minimize docker image size by switching to single layer static golang binary docker images.
    • Support docker save to generate tars of the docker image(s) for dev/private development.
  • Use the kubelet to run/health check the components. This means the kubelet will manage a set of static tasks on each machine (including the master) and a set of dynamic tasks.
  • The only task that shouldn't run under Docker is the kubelet itself. We may have to hack in something like network mode = host for the proxy.

Document local startup scripts

Our README.md should document the local startup scripts so that we have options for kicking the tires that don't require GCE.

Cloudcfg should parse/convert request body

Cloudcfg is currently more of an API debugger than a real client. For instance, when creating a pod, the JSON file is sent to the server verbatim. It would be great if this were parsed client-side to catch syntax errors early. If we did that, we could also support YAML input and make those files much easier to author.
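
One possible shape for that, assuming a YAML-to-JSON helper such as github.com/ghodss/yaml (the dependency choice and the toValidatedJSON helper are assumptions):

    package cloudcfg // hypothetical sketch

    import (
        "encoding/json"

        "github.com/ghodss/yaml"
    )

    // toValidatedJSON accepts YAML or JSON input, converts it to JSON, and
    // round-trips it through encoding/json so syntax errors surface on the
    // client instead of as an opaque server-side failure.
    func toValidatedJSON(in []byte) ([]byte, error) {
        jsonData, err := yaml.YAMLToJSON(in)
        if err != nil {
            return nil, err
        }
        var probe interface{}
        if err := json.Unmarshal(jsonData, &probe); err != nil {
            return nil, err
        }
        return jsonData, nil
    }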

Need a DESIGN.md document

We should have a good overarching technical overview of the design of Kubernetes, for new developers and for those who want to understand a system deeply before using it.

This should lay out the major moving parts and how they work together.

@lavalamp says:

We should add this information to package comments, too, now that I've split it into multiple packages.

Capture application termination messages/output

When applications terminate, they may write out important information about the reason, such as assertion failure messages, uncaught exception messages, stack traces, etc. We should establish an interface for capturing such information in a first-class way for termination reporting, in addition to whatever is logged.

I suggest we pull the deathrattle message from /dev/final-log or something similar.
