porter-dev / porter

Kubernetes powered PaaS that runs in your own cloud.

Home Page: https://porter.run

License: Other

Go 47.92% Shell 0.18% Dockerfile 0.11% HTML 0.14% TypeScript 51.04% JavaScript 0.10% Makefile 0.01% Open Policy Agent 0.21% CSS 0.25% Starlark 0.05%
helm kubernetes paas aws gcp digitalocean golang react devops heroku

porter's Introduction

Porter


Porter is a Kubernetes-powered PaaS that runs in your own cloud provider. Porter brings the Heroku experience to your own AWS/GCP account, while upgrading your infrastructure to Kubernetes. Get started on Porter without the overhead of DevOps and customize your infrastructure later when you need to.


Community and Updates

To keep updated on our progress, please watch the repo for new releases (Watch > Custom > Releases) and follow us on Twitter!

Why Porter?

A PaaS that grows with your applications

A traditional PaaS like Heroku is great for minimizing unnecessary DevOps work but doesn't offer enough flexibility as your applications grow. Custom network rules, resource constraints, and cost are common reasons developers move their applications off Heroku beyond a certain scale.

Porter brings the simplicity of a traditional PaaS to your own cloud provider while preserving the configurability of Kubernetes. Porter is built on top of Helm, a popular Kubernetes package manager, and is compatible with standard Kubernetes management tools like kubectl, preparing your infra for mature DevOps work from day one.


Features

Basics

  • One-click provisioning of a Kubernetes cluster in your own cloud console
    • ✅ AWS
    • ✅ GCP
    • ✅ Azure
  • Simple deploy of any public or private Docker image
  • Auto CI/CD with buildpacks for non-Dockerized apps
  • Heroku-like GUI to monitor application status, logs, and history
  • Application rollback to previously deployed versions
  • Zero-downtime deploy and health checks
  • Monitor CPU, RAM, and Network usage per deployment
  • Marketplace for one-click add-ons (e.g. MongoDB, Redis, PostgreSQL)

DevOps Mode

For those who are familiar with Kubernetes and Helm:

  • Connect to existing Kubernetes clusters that are not provisioned by Porter
  • Visualize, deploy, and configure Helm charts via the GUI
  • User-generated form overlays for managing values.yaml
  • In-depth view of releases, including revision histories and component graphs
  • Rollback/update of existing releases, including editing of raw values.yaml


Docs

Below are instructions for a quickstart. For full documentation, please visit our official Docs.

Getting Started

  1. Sign up and log into Porter Dashboard.

  2. Create a Project and put in your cloud provider credentials. Porter will automatically provision a Kubernetes cluster in your own cloud. It is also possible to link up an existing Kubernetes cluster.

  3. 🚀 Deploy your applications from a git repository or Docker image registry.

porter's People

Contributors

abelanger5, anukul, d-g-town, ferozemohideen, ianedwards, igalakhov, ishankhare07, jimcru21, jnfrati, jogly, jose-fully-ported, josephch405, joshuaharry, jusrhee, jyash97, mauaraujo, meehawk, mnafees, oshtman, porter-deployment-app[bot], porter-internal[bot], portersupport, rootusrsystem32, rudimk, sdess09, seenry, stefanmcshane, sunguroku, xetera, yosefmih


porter's Issues

Revision history for Helm

The client should be able to retrieve the revision history for a chart of a specific name via GET /api/charts/{name}/history.
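
A minimal sketch of how this handler might look using Helm's Go SDK (helm.sh/helm/v3/pkg/action) and chi for routing. The ChartHandler type and its getActionConfig helper are illustrative assumptions, not part of the current codebase.

package handlers

import (
	"encoding/json"
	"net/http"

	"github.com/go-chi/chi"
	"helm.sh/helm/v3/pkg/action"
)

// ChartHandler bundles whatever is needed to build a Helm action.Configuration
// per request; getActionConfig is a hypothetical helper.
type ChartHandler struct {
	getActionConfig func(r *http.Request) (*action.Configuration, error)
}

// GetHistory handles GET /api/charts/{name}/history.
func (h *ChartHandler) GetHistory(w http.ResponseWriter, r *http.Request) {
	name := chi.URLParam(r, "name")

	cfg, err := h.getActionConfig(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	hist := action.NewHistory(cfg)
	releases, err := hist.Run(name) // every stored revision of the release
	if err != nil {
		http.Error(w, err.Error(), http.StatusNotFound)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(releases)
}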

Authentication with Cookies & Sessions

High level

The implementation for the MVP will not implement any admin privileges, but the authentication flow will eventually involve a Rancher-style RBAC in the hosted version.

There are two views: Login and Register. Upon installation of Porter, the user is first directed to the Register view. The first user to register after installing Porter is automatically given admin privileges; this initial registration that automatically grants admin rights is only available during cluster initialization. From then on, every new user registers through an invite link that the initial (admin) user generates on the dashboard.

For the MVP, the goal is to implement an auth flow that's backward-compatible with the future change as described above.

Implementation

Session/cookie management uses github.com/gorilla/sessions. The backend session store will be a Postgres implementation of the default gorilla/sessions store, written from scratch (in reference to github.com/antonlindstrom/pgstore) so that it stays consistent with GORM.
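
A rough sketch of what the GORM side of such a store could look like; the model fields mirror the Session table below, and only the row lookup is shown (New/Save from the gorilla/sessions Store interface would encode and persist the session data). All names here are illustrative.

package sessionstore

import (
	"time"

	"github.com/gorilla/sessions"
	"gorm.io/gorm"
)

// Session is the GORM model backing gorilla/sessions.
type Session struct {
	gorm.Model
	Key       string `gorm:"uniqueIndex"`
	Data      []byte // encoded session values
	ExpiresAt time.Time
}

// PGStore is the postgres-backed store; New/Save would be implemented on top
// of selectSession and GORM writes.
type PGStore struct {
	DB      *gorm.DB
	Options *sessions.Options
}

// selectSession fetches a stored session row by its key.
func (s *PGStore) selectSession(key string) (*Session, error) {
	var sess Session
	if err := s.DB.Where("key = ?", key).First(&sess).Error; err != nil {
		return nil, err
	}
	return &sess, nil
}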

Routes

Method | Endpoint | Header/Params | Description
POST | /api/login | session-id, email, password | Writes the cookie and sets session.Values["authenticated"] to true
POST | /api/register | email, password | Creates a new User object in the db
GET | /api/logout | session-id | Sets session.Values["authenticated"] to false

Database

User

Field | Type | Null | Key | Default | Note
Id | int64 | NO | YES | Randomly generated |
Email | string | NO | NO | None |
Password | string | NO | NO | None | Hashed with golang.org/x/crypto/bcrypt
Admin | bool | NO | NO | true |

Session

Field | Type | Null | Key | Default | Note
Id | int64 | NO | YES | Randomly generated |
Authenticated | bool | NO | NO | false |
Role | string | NO | NO | admin |
CreatedOn | time.Time | NO | NO | |
ModifiedOn | time.Time | YES | NO | |
ExpiresOn | time.Time | NO | NO | |

Middleware

A simple middleware in /internal/auth will authenticate users. Chi accepts middleware in the standard net/http form, as sketched below.
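
A minimal sketch of such a middleware, assuming a gorilla/sessions store and the cookie name "porter" (both assumptions):

package auth

import (
	"net/http"

	"github.com/gorilla/sessions"
)

// Auth holds the session store used to check authentication.
type Auth struct {
	store sessions.Store
}

// BasicAuthenticate is standard net/http middleware, so it can be passed
// directly to chi's Router.Use.
func (a *Auth) BasicAuthenticate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		session, err := a.store.Get(r, "porter") // cookie name is an assumption
		if err != nil {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}

		if auth, ok := session.Values["authenticated"].(bool); !ok || !auth {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}

		next.ServeHTTP(w, r)
	})
}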

Password hashing and helper functions

Passwords should be hashed before being written to the database, and endpoints will require the following helper function from the repository:

func (u *UserRepository) CheckPasswords(id int, testPassword string) (bool, error)
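
A possible sketch of this helper plus the corresponding hashing step, using golang.org/x/crypto/bcrypt as noted in the schema; the GORM-backed UserRepository and User model here are illustrative.

package repository

import (
	"golang.org/x/crypto/bcrypt"
	"gorm.io/gorm"
)

// User mirrors a subset of the model in the Database section.
type User struct {
	ID       int64
	Email    string
	Password string // bcrypt hash
}

// UserRepository is a minimal GORM-backed repository for illustration.
type UserRepository struct {
	db *gorm.DB
}

// HashPassword hashes a plaintext password before it is written to the database.
func HashPassword(pw string) (string, error) {
	hashed, err := bcrypt.GenerateFromPassword([]byte(pw), bcrypt.DefaultCost)
	if err != nil {
		return "", err
	}
	return string(hashed), nil
}

// CheckPasswords compares a candidate password against the stored hash.
func (u *UserRepository) CheckPasswords(id int, testPassword string) (bool, error) {
	var user User
	if err := u.db.First(&user, id).Error; err != nil {
		return false, err
	}

	if err := bcrypt.CompareHashAndPassword([]byte(user.Password), []byte(testPassword)); err != nil {
		return false, err
	}

	return true, nil
}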

Create Typescript API client

As a prerequisite to issue #9, we should formalize the API and create a TypeScript API client that can be used as a separate package for integration testing.

Extensive testing of kubeconfig input

The kubeconfig implementation should be redone using the clientcmd package from the official Go client. This allows us to control and construct configs based on the ClusterConfigs that are stored in the database. This also ensures that malformed inputs are parsed and caught in the internal decoding of the k8s package.

Specifically, we should:

  • Use clientcmd.NewClientConfigFromBytes to parse the client config, and then transform that client config to a set of ClusterConfigs using the exported clientcmd Cluster type.
  • Pass detailed errors to the client for kubeconfig parsing issues
  • Include a method which generates the client config and sets the current context based on the allowedClusters field. This has the following signature:
func GetRestrictedClientConfigFromBytes(
	bytes []byte,
	contextName string,
	allowedClusters []string,
) (clientcmd.ClientConfig, error)
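
One possible sketch of this method using clientcmd.Load (the NewClientConfigFromBytes variant would expose the raw config via RawConfig()); matching the allowed list against the context's cluster name is an assumption about the restriction semantics.

package kubernetes

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func GetRestrictedClientConfigFromBytes(
	bytes []byte,
	contextName string,
	allowedClusters []string,
) (clientcmd.ClientConfig, error) {
	// parse and validate the raw kubeconfig; malformed input is caught here
	rawConf, err := clientcmd.Load(bytes)
	if err != nil {
		return nil, err
	}

	ctx, ok := rawConf.Contexts[contextName]
	if !ok {
		return nil, fmt.Errorf("context %q not found in kubeconfig", contextName)
	}

	// only allow contexts whose cluster appears in the allowed list
	allowed := false
	for _, c := range allowedClusters {
		if c == ctx.Cluster {
			allowed = true
			break
		}
	}

	if !allowed {
		return nil, fmt.Errorf("cluster %q is not allowed", ctx.Cluster)
	}

	// build a client config with the current context overridden
	return clientcmd.NewDefaultClientConfig(*rawConf, &clientcmd.ConfigOverrides{
		CurrentContext: contextName,
	}), nil
}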

[beta.2] service-account should be generated for kubectl gcp plugins

The Porter server needs to connect with an arbitrary number of GKE clusters using service account credentials. This sort of arbitrary service account switching is difficult -- there was a PR here that attempted to add this functionality using service account keys. As discussed in the issue, it doesn't make sense to implement this functionality using kubeconfig-based auth, and also doesn't make sense to download service account keys for each cluster and link the key files (for one, this would lead to a bunch of key files written in the container, which seems unsafe).

We'll need support for more idiomatic ways of connecting to clusters, and we'll likely have to drop the []byte storage of the kubeconfig in favor of non-kubeconfig based auth. The solution here is based on the following sources: [1], [2], [3]:

  1. Attempt to infer the GCP project_id automatically. Query the user if the project_id is correct -- if it is not, or it is not possible to find a project_id, ask the user to input a project_id.

  2. Create a service-account in that project using the iam admin package -- equivalent gcloud command:

gcloud iam service-accounts create porter-dashboard
  3. Add a policy binding to the service account using the iam package (the roles will have to be configured differently depending on the provisioner/connector type) -- equivalent gcloud command:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:porter-dashboard@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/container.developer

  4. Get the service account credentials and store them in the DB.

  5. Do something like the following to generate the client config:

func ClientFromSAKeyFile(ctx context.Context, filename string, scopes ...string) (*rest.RESTClient, error) {
	b, err := ioutil.ReadFile(filename)
	if err != nil {
		return nil, err
	}
	creds, err := google.CredentialsFromJSON(ctx, b, scopes...)
	if err != nil {
		return nil, err
	}
	return rest.RESTClientFor(&rest.Config{
		Transport: &oauth2.Transport{
			Source: creds.TokenSource,
		},
	})
}

Create test framework for API

API functions for the /api/users, /api/releases, and /api/k8s endpoints should be unit tested with fake clients. We should move integration testing fixtures into a /tests or /fixtures folder, which will be tracked in a separate issue.

[beta.2] service-account should be generated for kubectl aws plugins

To connect with an EKS cluster in an out-of-cluster configuration, we will have to generate a bearer token that can be passed as part of the request to the EKS instance. The solution here is based on the following sources: [1], [2].

  1. Create an AWS IAM user named porter-dashboard. Get the credentials for this IAM user and create a new Porter service account that will use these credentials.

  2. Query the aws-auth ConfigMap in the Amazon EKS cluster that provides the mappings between IAM principals (roles/users) and Kubernetes subjects (Users/Groups). If the ConfigMap does not contain a mapping between the IAM user porter-dashboard and the correct Kubernetes subject, update the ConfigMap.

Note: this will grant the Porter service account the same level of access as the admin user, which we have enforced in other kubectl auth plugin implementations.

  3. During runtime, query the user model to retrieve a token, if one was previously generated. If this token is expired, perform steps 4-6. Otherwise, go to step 7.

  4. During runtime, configure the AWS Golang SDK to use a custom Config and create a session using this config.

  5. During runtime, use the NewGenerator function exposed by the aws-iam-authenticator to create an object that can generate the token.

  6. Use the generator's GetWithOptions method to generate a token, and save this token.

  7. Use the given token as a bearer token (a fuller sketch follows the snippet below):

restConfigs := &rest.Config{
	Host:        aws.StringValue(cluster.Endpoint),
	BearerToken: tok.Token,
	TLSClientConfig: rest.TLSClientConfig{
		CAData: ca,
	},
}
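
A fuller sketch of steps 4-7 using the AWS SDK and the aws-iam-authenticator token package; the credential wiring and parameters are illustrative assumptions.

package kubernetes

import (
	"encoding/base64"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

// EKSRestConfig generates a rest.Config for an EKS cluster using a bearer token
// derived from the porter-dashboard IAM user credentials.
func EKSRestConfig(clusterName, endpoint, caB64, accessKey, secretKey, region string) (*rest.Config, error) {
	// step 4: configure the SDK with a custom config and create a session
	sess, err := session.NewSession(&aws.Config{
		Region:      aws.String(region),
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
	})
	if err != nil {
		return nil, err
	}

	// step 5: create a token generator
	gen, err := token.NewGenerator(false, false)
	if err != nil {
		return nil, err
	}

	// step 6: generate (and, in practice, cache) the token for this cluster
	tok, err := gen.GetWithOptions(&token.GetTokenOptions{
		ClusterID: clusterName,
		Session:   sess,
	})
	if err != nil {
		return nil, err
	}

	ca, err := base64.StdEncoding.DecodeString(caB64)
	if err != nil {
		return nil, err
	}

	// step 7: use the token as a bearer token, as in the snippet above
	return &rest.Config{
		Host:            endpoint,
		BearerToken:     tok.Token,
		TLSClientConfig: rest.TLSClientConfig{CAData: ca},
	}, nil
}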

SessionStore implementation should write new Cookie when session isn't found, and FE should allow registration

When a cookie already exists in the browser, but the sessions table has been wiped or a fresh DB instance is being used, the user always gets redirected to /login, which continues to throw an error but prevents redirects to /register.

If the session isn't found, the user should be redirected to /login, which should have a working link to register as well. This requires some minor frontend changes and fixes to the auth package.

Steps to recreate: if you have previously made an account by linking to a postgresql instance in docker-compose, change the QUICK_START env variable to true, which will use the sqlite database instead. The sessionstore will fail to query the cookie-based session, and the server throws a 500 error caused by:

porter_1    | 2020/10/07 19:48:00 /porter/internal/repository/gorm/session.go:48 record not found
porter_1    | [4.730ms] [rows:0] SELECT * FROM `sessions` WHERE Key = "KSV52WKAIR3223CTCZ6HKX7BDRRGH6OXS35SVQ5D4BAOMGWEUWQQ" ORDER BY `sessions`.`id` LIMIT 1
nginx       | 172.18.0.1 - - [07/Oct/2020:19:48:00 +0000] "GET /api/auth/check HTTP/1.1" 500 22 "http://localhost:8080/login" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36"

[beta.1-fix] porter start command on Windows

Running porter start throws an error after prompting for email:

$ ./porter.exe start
Please register your admin account with an email and password:
Email: [email protected]
Error running start: The handle is invalid.
Shutting down...
Stopping containers...

Potentially due to different line endings on Windows.

[beta.3] run server natively on Windows/Mac/Linux, instead of in docker

For testing purposes, we only run the Porter server using the Docker engine. We will continue to run inside of Docker by default.

But there are cases that it would make sense to run the server natively -- for example, connecting to minikube via 127.0.0.1 instead of using host.docker.internal. There are likely other configurations (proxies, VPNs) that wouldn't work with the Docker driver.

Rollback endpoint and deployment status for Helm

The API should expose rollbacks to a specific revision of a Helm chart: POST /api/charts/{name} with request body

{
  "revision": Number,
}

We should also figure out how to stream the deployment/rollback status to the FE.
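
A sketch of the rollback handler using Helm's rollback action, reusing the hypothetical ChartHandler/getActionConfig helpers from the revision-history sketch above (streaming status to the FE is not covered here):

// imports as in the history sketch, plus "encoding/json" and "net/http"
func (h *ChartHandler) Rollback(w http.ResponseWriter, r *http.Request) {
	name := chi.URLParam(r, "name")

	var body struct {
		Revision int `json:"revision"`
	}
	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
		http.Error(w, "malformed request body", http.StatusBadRequest)
		return
	}

	cfg, err := h.getActionConfig(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	rollback := action.NewRollback(cfg)
	rollback.Version = body.Revision // 0 means "previous release" in Helm
	if err := rollback.Run(name); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.WriteHeader(http.StatusOK)
}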

Kubernetes agent should implement RESTClientGetter for both in and out of cluster configurations

The kubernetes package should generate the following Agent:

type Agent struct {
	RESTClientGetter genericclioptions.RESTClientGetter
	Clientset        *kubernetes.Clientset
}

This agent can be generated from an in-cluster and out-of-cluster configuration -- in-cluster using the service account provided by kubernetes for the pod, out-of-cluster using a kubeconfig file with a specific context (along with the allowedContexts). Thus, the kubernetes package should also expose the following functions:

func AgentFromInClusterConfig() (*Agent, error)

type OutOfClusterConfig struct {
	KubeConfig      []byte
	AllowedContexts []string
	Context         string `json:"context" form:"required"`
}

func AgentFromOutOfClusterConfig(conf *OutOfClusterConfig) (*Agent, error)

Both of these functions may require a custom implementation of genericclioptions.RESTClientGetter -- we can perhaps look at third-party kube clients for some best practices here (for example, the klient implementation).
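
A possible sketch of the in-cluster constructor; using genericclioptions.NewConfigFlags as the RESTClientGetter is one option (its loading rules fall back to the in-cluster config when no kubeconfig is present), and a custom implementation is another.

package kubernetes

import (
	"k8s.io/cli-runtime/pkg/genericclioptions"
	k8s "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func AgentFromInClusterConfig() (*Agent, error) {
	// use the pod's service account credentials
	conf, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}

	clientset, err := k8s.NewForConfig(conf)
	if err != nil {
		return nil, err
	}

	return &Agent{
		RESTClientGetter: genericclioptions.NewConfigFlags(false),
		Clientset:        clientset,
	}, nil
}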

PostgreSQL database setup with GORM

Postgres database should be integrated using gorm and the User model should be stored in Postgres.

  • Database connection with some built-in resilience to dropped connections (the server should attempt retries, and if the database is down, a descriptive error message should appear) -- see the sketch below
  • Specify a simple user model and integrate it with gorm
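
A minimal sketch of the retrying connection described above, using gorm.io/gorm with the postgres driver; the retry count, delay, and model fields are illustrative.

package database

import (
	"fmt"
	"time"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// User is the simple user model referenced above (illustrative subset).
type User struct {
	gorm.Model
	Email    string `gorm:"unique"`
	Password string
}

// Connect opens a Postgres connection, retrying a few times before returning
// a descriptive error.
func Connect(dsn string, retries int) (*gorm.DB, error) {
	var lastErr error

	for i := 0; i < retries; i++ {
		db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
		if err == nil {
			// create/update the users table for the User model
			if err := db.AutoMigrate(&User{}); err != nil {
				return nil, err
			}
			return db, nil
		}

		lastErr = err
		time.Sleep(2 * time.Second)
	}

	return nil, fmt.Errorf("could not connect to postgres after %d attempts: %w", retries, lastErr)
}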

/api/users endpoint implementation

This is not a formal specification, but rather a rough outline for implementation. The full specification for all endpoints will be generated after an MVP is complete.

User {
  // ID is a string that uniquely identifies a user 
  ID string
  // see ClusterConfig below
  Clusters []*ClusterConfig
  // kubeconfig is the raw kubeconfig
  Kubeconfig []byte
}

ClusterConfig {
  // Name is the name of the cluster
  Name string
  // Server is the endpoint of the kube apiserver for a cluster
  Server string
  // Context is the name of the context
  Context string
  // User is the name of the user for a cluster
  User string
}

The following endpoints are exposed for the User object:

Method | Endpoint | Params | Description
POST | /api/users | kubeconfig | User must be authorized on the cluster
GET | /api/users/{user_id} | | User must be authorized on the cluster, request must come from User identified with user_id
PUT | /api/users/{user_id} | kubeconfig | User must be authorized on the cluster, request must come from User identified with user_id
DELETE | /api/users/{user_id} | | User must be authorized on the cluster, request must come from User identified with user_id
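
A sketch of how these routes might be registered with chi; the UserHandler type and its methods are placeholders with standard http.HandlerFunc signatures.

// imports: "net/http" and "github.com/go-chi/chi"
// NewUserRouter wires the endpoints in the table above to a hypothetical
// UserHandler; authMiddleware could be the /internal/auth middleware.
func NewUserRouter(h *UserHandler, authMiddleware func(http.Handler) http.Handler) chi.Router {
	r := chi.NewRouter()
	r.Use(authMiddleware)

	r.Post("/api/users", h.Create)
	r.Get("/api/users/{user_id}", h.Get)
	r.Put("/api/users/{user_id}", h.Update)
	r.Delete("/api/users/{user_id}", h.Delete)

	return r
}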

porter start command proposal

This is an implementation proposal for part 1 of the onboarding flow (#50).

porter start \
  --insecure \
  --skip-kubeconfig \
  --kubeconfig=(path/to/kubeconfig) \
  --contexts=(list,of,contexts) \
  --image-tag=(latest) \
  --db=(sqlite|memory|postgres)

Authentication and Kubeconfig:

  • If the server is not started with --insecure, look for an admin account in the local keychain/pass/wincred. If the admin account does not exist, prompt the user for username/password/confirm password.

  • If a --skip-kubeconfig flag has not been passed, generate a new config and place it in $HOME/.porter/porter.kubeconfig using the kubeconfig based on either the default location or the --kubeconfig flag.

  • Pass an admin user/password to the Docker container together with a path to the parsed kubeconfig. If the admin user/password already exists but a kubeconfig is passed in, update the user with the new kubeconfig.

Server Startup and Termination:

  • Pull the image and image tag specified by --image-tag.

  • If the db is specified as postgres, create a volume for postgres if it does not exist, create the postgres container if it does not exist, and create a shared network for postgres and porter. If the db is specified as sqlite, create a volume for the sqlite filesystem storage if it does not exist.

  • On SIGINT or SIGTERM, stop all running containers.

  • Create the container if it does not exist, or update the container if the container does not contain the required volumes/mounts. Updating the container must:

    • Leave the sqlite volume mounted as porter.db in the container, and create it if it doesn't exist.
    • Omit the kubeconfig bind mount if --skip-kubeconfig is passed.

[beta.2] in-tree kubectl auth plugins proposal

Right now, the primary supported auth mechanism is an x509 certificate. Most in-tree auth plugins are unlikely to work at the moment as they invoke CLI commands unavailable in the container (only oidc auth-providers that don't use idp-certificate-authority as a host file location will work). In this issue, we'll track our plan for the supported in-tree auth mechanisms in the beta.2 release.

What is meant by "first-class support"? It means it is possible to invoke a single CLI command (porter start or porter connect) to generate credentials for a given cluster in a process that does not run natively on the host machine. This is rather essential in order to orchestrate multiple clusters in an agent-less fashion ("agent-less" = without a process running on the cluster). In other words, we enforce that there is a single process running on a single host that can orchestrate multiple clusters reliably, using authentication mechanisms provided either natively by k8s or supported by in-tree providers.

beta.2 first-class support:

OIDC

This should be pretty well-supported for modern oidc kubeconfigs that use a subset of idp-issuer-url, client-id, client-secret, id-token, and refresh-token. We should use the same mechanism for populating idp-certificate-authority and injecting the idp-certificate-authority-data field.

GKE Cluster

The dashboard should be able to authenticate to a GKE cluster using a GKE service account (documented here).

EKS

aws mechanisms (as of this writing, exec running aws-iam-authenticator or exec running aws eks get-token) should be detected, and an iam role that links to a k8s service account should be generated. We should investigate the mechanism used by eksctl for generating a token.

/api/charts basic endpoints implementation

Note: This is not a formal specification, but rather a rough outline for implementation. The full specification for all endpoints will be generated after an MVP is complete.

This is just listing the charts:

Chart {
  namespace string
  name string
  version string
  timeCreated time.Time?
  status FullStatus 
}

FullStatus {
  status Status
  msg string
}

Status int 

const (
  HEALTHY Status = iota // (0)
  WARNING // (1)
  ERROR // (2)
) 

The following endpoints are exposed for the Chart object:

Method | Endpoint | Params | Description
GET | /api/charts | namespace (optional) | User must be authorized on the cluster with permissions to view charts in a namespace
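
A sketch of the list handler using Helm's list action, again reusing the hypothetical ChartHandler/getActionConfig helpers; mapping a release onto the Chart/FullStatus types above is left as a comment.

func (h *ChartHandler) ListCharts(w http.ResponseWriter, r *http.Request) {
	// the optional namespace query param would be applied when building the
	// action configuration for this request
	cfg, err := h.getActionConfig(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	list := action.NewList(cfg)
	releases, err := list.Run()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// each *release.Release carries Name, Namespace, chart metadata, and
	// Info.Status, which would be mapped onto Chart/FullStatus above
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(releases)
}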

Missing ~/.porter/porter.yaml file in clean installation

After a clean installation of minikube + Porter on Linux (Ubuntu 20.04):

Trying to start the server with porter server start, I get:

open $HOME/.porter/porter.yaml: no such file or directory

Trying to connect to a kubeconfig with porter connect kubeconfig, I get:

Error: Get /api/auth/check: unsupported protocol scheme ""

Manually creating an empty file with mkdir ~/.porter && touch ~/.porter/porter.yaml solved the problem.

[beta.3] onboarding flow proposal

Overview: here's the proposal for onboarding, considering in-cluster configurations, out-of-cluster configurations, and the hosted offering. The onboarding flow is primarily defined by the actions start, which starts a Porter instance; auth register/auth login, which allow new users to log in or create an account on an existing instance; project create, which creates a new project within the Porter instance; and connect, which allows the project to link or provision clusters. All actions will have equivalent CLI commands/dashboard views except for start.

Starting the server (porter start)

The server should be started as a Docker container running either in-cluster or on a machine with the Docker engine running.

(1) Quick start: Run on host machine or via Docker engine

A user will be able to start the dashboard with a single command, porter server start, which will either use a local driver or detect whether a compatible Docker engine is running and, if it is, start the dashboard as a Docker container bound to a specific port on the host machine. The driver is selected via the --driver flag. This command has the following signature:

Starts a Porter server instance on the host

Usage:
  porter server start [flags]

Flags:
      --db string          the db to use, one of sqlite or postgres (default "sqlite")
      --driver string      the driver to use, one of "local" or "docker" (default "local")
  -h, --help               help for start
      --image-tag string   the Porter image tag to use (if using docker driver) (default "latest")
  -p, --port int           the host port to run the server on (default 8080)

Global Flags:
      --host string   host url of Porter instance

(2) Quick start: Helm chart

For in-cluster configurations, the only auth mechanism enabled by default is token-based authentication. The admin user will be identified via the service-account-token associated with the deployed chart. Once the admin user logs in, they will be asked to name the project, and upon project creation they will be prompted to name the cluster that will be connected by default. The admin can add other users by generating in-cluster secrets (selected by default) or using Porter's JWT implementation. The following features will be disabled by default: creation of new projects, connecting to other clusters, cluster provisioning, basic authentication (email/password), oauth/oidc authentication.

Logging in (porter auth login)

A user can log in as a local or an external user, partially modeled off of Rancher's external vs local authentication. A local user is identified via email, while an external user is identified via a token. While we may eventually support SSO, we will likely restrict external users to Github/Gitlab accounts. Note that external users are not supported by default in self-hosted configurations: setting up external users will require an advanced configuration.

It should be possible to prompt this login flow from the CLI, which will open the default browser and ask the user to log in. If the user specifies the --no-launch-browser option, the user will be prompted for an email/password directly from the CLI. If the user specifies the --token option, they can input the token given to them by the admin. In the case of Github/Gitlab login, they will have to authenticate via the browser.

(1) Email-based login:

A user should be able to create an account via an email/password. To make the onboarding flow easier, an admin user will be able to share a link to members that contains a token which identifies an email address and a project id. If the user is logged in, they will automatically join the project; if not, they will be prompted to create a password. In the hosted platform, we will send this email to the specified email address; in self-hosted versions without email configuration, the admin will have to share the tokens directly.

(2) External login:

A user should be able to log in using an external identity provider. To make the onboarding flow easier, an admin user will be able to grant access to members in the same Github/Gitlab organization. If the user is already logged in via an email/password, the user will be prompted to link their Github/Gitlab account to that account. If the user is not logged in, they will be automatically prompted to log in using their Github/Gitlab account.

(3) Token-based login

For certain in-cluster configurations, a user will be able to log in via a service account token that exists in the same namespace as the Porter pod.

Creating a project (porter project create)

This is rather self-explanatory, but a user should be able to create a new project from the dashboard or the CLI. The user that creates the project will be an admin user by default, which allows them to add new members to the project.

Creating/linking a service account (porter connect)

A service account is Porter's method for accessing and provisioning a cluster. Service accounts are identified via a (kind, auth-mechanism, project_id) tuple. A kind is one of provisioner or connector: a provisioner can only provision a new cluster and update cluster infrastructure, while a connector can read cluster objects.

Note: a connector service account is the equivalent of a native serviceAccount object within Kubernetes, while a provisioner service account is the equivalent of a native serviceAccount in GCP/AWS.

[beta.2] url-based rendering

Pages should have url-based rendering, specifically:

  • /{cluster_name} -- the view of releases within a cluster. This view should populate the release filter options as query params as well, so for example /{cluster_name}?namespace=default should automatically populate the default namespace filter.
  • /{cluster_name}/{release_name} -- the view of a release for a cluster

local kubeconfig from a docker volume

For users that would like to just test the dashboard quickly with a single Docker command, we should provide two flags/env variables that make this possible:

  • Read the local kubeconfig by attaching a volume from $HOME/.kube/config to /porter/.kubeconfig, and implement a method ReadLocalKubeConfig in the kubernetes package to automatically search this location and parse it for a config (see the sketch after this list).
  • Allow users to set the environment variable ENABLE_AUTH, which is true by default; if false:
    • A repo implementation is used that generates the correct models during runtime by reading the local kubeconfig
    • The auth middleware accepts all connections
    • All clusters are added by default
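
One possible sketch of ReadLocalKubeConfig, reading the bind-mounted file at the path proposed above and parsing it with clientcmd:

package kubernetes

import (
	"io/ioutil"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

const localKubeConfigPath = "/porter/.kubeconfig"

// ReadLocalKubeConfig reads the kubeconfig attached as a Docker volume and
// returns the parsed config, from which clusters/contexts can be listed.
func ReadLocalKubeConfig() (*api.Config, error) {
	b, err := ioutil.ReadFile(localKubeConfigPath)
	if err != nil {
		return nil, err
	}

	return clientcmd.Load(b)
}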

Upgrade w/ new values.yaml for Helm

We should expose an endpoint for upgrading a release with new config, passed as values.yaml. Implemented as POST /api/charts/{name}/upgrade.
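
A sketch of the upgrade handler: re-use the currently deployed chart and apply the new values.yaml from the request body, with the same hypothetical ChartHandler/getActionConfig helpers as in the earlier sketches. Values are parsed with sigs.k8s.io/yaml so nested maps are JSON-compatible.

// extra imports: "io/ioutil" and "sigs.k8s.io/yaml"
func (h *ChartHandler) UpgradeChart(w http.ResponseWriter, r *http.Request) {
	name := chi.URLParam(r, "name")

	rawValues, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "could not read values.yaml", http.StatusBadRequest)
		return
	}

	values := map[string]interface{}{}
	if err := yaml.Unmarshal(rawValues, &values); err != nil {
		http.Error(w, "malformed values.yaml", http.StatusBadRequest)
		return
	}

	cfg, err := h.getActionConfig(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// fetch the currently deployed release so its chart can be re-used
	current, err := action.NewGet(cfg).Run(name)
	if err != nil {
		http.Error(w, err.Error(), http.StatusNotFound)
		return
	}

	upgrade := action.NewUpgrade(cfg)
	rel, err := upgrade.Run(name, current.Chart, values)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(rel)
}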
