kubermatic / kubermatic

Kubermatic Kubernetes Platform - the Central Kubernetes Management Platform For Any Infrastructure

Home Page: https://www.kubermatic.com

License: Other

Go 96.44% Makefile 0.19% Shell 3.10% Dockerfile 0.22% CSS 0.03% Smarty 0.01%
kubernetes cluster-api kubermatic-kubernetes-platform

kubermatic's Introduction


Overview / User Guides

Kubermatic Kubernetes Platform is an open source project to centrally manage the global automation of thousands of Kubernetes clusters across multicloud, on-prem and edge with unparalleled density and resilience.

All user documentation is available at the Kubermatic Kubernetes Platform docs website.

Editions

There are two editions of Kubermatic Kubernetes Platform:

  • Kubermatic Kubernetes Platform Community Edition (CE) is available freely under the Apache License, Version 2.0.
  • Kubermatic Kubernetes Platform Enterprise Edition (EE) includes premium features that are most useful for organizations with large-scale Kubernetes installations of more than 50 clusters. To access the Enterprise Edition and get official support, please become a subscriber.

Licensing

See the LICENSE file for licensing information as it pertains to files in this repository.

Installation

We strongly recommend that you use an official release of Kubermatic Kubernetes Platform. Follow the instructions under the Installation section of our documentation to get started.

The code and sample YAML files in the main branch of the kubermatic repository are under active development and are not guaranteed to be stable. Use them at your own risk!

More information

The documentation provides a getting started guide, plus information about building from source, architecture, extending kubermatic, and more.

Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of kubermatic.

Troubleshooting

If you encounter issues, file an issue or talk to us on the #kubermatic channel on the Kubermatic Community Slack (click here to join).

Contributing

Thanks for taking the time to join our community and start contributing!

Before you start

  • Please familiarize yourself with the Code of Conduct before contributing.
  • See CONTRIBUTING.md for instructions on the developer certificate of origin that we require.

Repository layout

├── addons    # Default Kubernetes addons
├── charts    # The Helm charts we use to deploy
├── cmd       # Various Kubermatic binaries for the controller-managers, operator etc.
├── codegen   # Helper programs to generate Go code and Helm charts
├── docs      # Some basic developer-oriented documentation
├── hack      # Scripts for development and CI
└── pkg       # Most of the actual codebase

Development environment

git clone git@github.com:kubermatic/kubermatic.git
cd kubermatic

There are a couple of scripts in the hack directory to aid in running the components locally for testing purposes.

Running components locally

user-cluster-controller-manager

In order to instrument the seed-controller to allow for a local user-cluster-controller-manager, you need to add a worker-name label with your local machine's name as its value. Additionally, you need to scale down the already running deployment.

# Using a kubeconfig, which points to the seed-cluster
export cluster_id="<id-of-your-user-cluster>"
kubectl label cluster ${cluster_id} worker-name=$(uname -n)
kubectl scale deployment -n cluster-${cluster_id} usercluster-controller --replicas=0

Afterwards, you can start your local user-cluster-controller-manager.

# Using a kubeconfig, which points to the seed-cluster
./hack/run-user-cluster-controller-manager.sh
seed-controller-manager

./hack/run-seed-controller-manager.sh

master-controller-manager

./hack/run-master-controller-manager.sh

Run linters

Before every push, make sure you run:

make lint

Run tests

make test

Update code generation

The Kubernetes code-generator tool does not work outside of GOPATH (upstream issue), so the script below will automatically run the code generation in a Docker container.

hack/update-codegen.sh

Pull requests

  • We welcome pull requests. Feel free to dig through the issues and jump in.

Changelog

See the list of releases to find out about feature changes.

kubermatic's People

Contributors

ahmedwaleedmalik, alvaroaleman, embik, imharshita, irozzo-1a, jiachengxu, kdomanski, kgroschoff, kron4eg, lbb, lsviben, mate4st, metalmatze, moadqassem, moelsayed, mrincompetent, nikhita, p0lyn0mial, pkprzekwas, pratikdeoghare, rastislavs, realfake, s-urbaniak, scheeles, sttts, vgramer, wurbanski, xmudrii, xrstf, zreigz


kubermatic's Issues

Setup basic functional tests for the API

  • Create a test file for every API handler.
  • Create one test case for every endpoint; each test should only cover the successful request.
  • If possible, mock the Kubernetes client, or make sure the tests have a proper setup and tear-down.
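
A minimal sketch of what such a test could look like, using Go's httptest package; the handler here is a hypothetical stand-in, not an actual Kubermatic handler:

package handler_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// TestListClusters exercises only the successful request path, as
// proposed above. The handler is a stand-in; in practice the
// Kubernetes client behind it would be mocked or set up and torn
// down per test.
func TestListClusters(t *testing.T) {
	listClustersHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte(`[]`))
	})

	req := httptest.NewRequest(http.MethodGet, "/api/v1/clusters", nil)
	rec := httptest.NewRecorder()

	listClustersHandler.ServeHTTP(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d", rec.Code)
	}
}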

Incorporate digitalocean into API endpoint /ext/${dc}/keys

Proposal:
The value for dc is one of:

  • aws
  • digitalocean

The request payload contains JSON-formatted data. In the case of:

  • aws: {"username": "..." , "password": "..."}
  • digitalocean: {"token": "..."}

The response should return a list of the following JSON objects:
{"name": "...", "fingerprint": "..."}

This API endpoint is necessary for kubermatic/dashboard#5
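
A sketch of the request and response types this proposal implies; the JSON field names come from the examples above, everything else is illustrative:

package ext

// AWSCredentials matches the proposed aws payload.
type AWSCredentials struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// DigitaloceanCredentials matches the proposed digitalocean payload.
type DigitaloceanCredentials struct {
	Token string `json:"token"`
}

// Key is one element of the proposed response list.
type Key struct {
	Name        string `json:"name"`
	Fingerprint string `json:"fingerprint"`
}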

Create service to watch for changes in cluster plugins

To make sure the customer does not break their cluster by changing parameters in plugins (the ones we deployed), we need to create a "Watcher" which monitors changes in plugin files.
When a change is detected, we revert it.

Technical part: TBD
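
One possible shape for such a watcher is a reconcile loop that compares the live plugin manifest against the desired one we deployed and reverts on drift. A schematic sketch, not tied to a concrete client library:

package watcher

import "reflect"

// PluginStore abstracts reading and writing plugin manifests; a real
// implementation would be backed by the Kubernetes API.
type PluginStore interface {
	Get(name string) (map[string]string, error)
	Update(name string, manifest map[string]string) error
}

// Reconcile reverts a plugin to its desired manifest if it has drifted.
func Reconcile(store PluginStore, name string, desired map[string]string) error {
	live, err := store.Get(name)
	if err != nil {
		return err
	}
	if !reflect.DeepEqual(live, desired) {
		// A change was detected: revert it.
		return store.Update(name, desired)
	}
	return nil
}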

Error when creating >1 node

Error message: Do: InvalidParameterValue: User data is limited to 16384 bytes status code: 400, request id:

Subscription and Billing

We want to capture the usage of the nodes connected to the customer cluster and use this information for billing. For this we need two parts:

NodeUpTime:

NodeUpTime will run as a cron job every minute on the seed cluster, in the namespace of each customer cluster. It will connect to the customer api-server and execute something similar to “kubectl get nodes”. The job will grab some information about each node and save it in a ThirdPartyResource kubermatic.io/node in each namespace. Additionally, nodeUpTime will calculate the new upTime based on the existing and new information. This happens only while the kubelet has the status “ready”.

Information stored in the ThirdPartyResource kubermatic.io/node:

 name: gke-dev-kubermatic-default-pool-3a37543c-84zw
 creationTimestamp: 2016-11-23T08:52:44Z 
 lastHeartbeatTime: 2016-12-14T18:00:32Z
 lastTransitionTime: 2016-11-23T08:53:14Z
 upTime: 126 sec
 message: kubelet is posting ready status. 
 reason: KubeletReady
 status: "True"
 type: Ready
 nodeInfo:
    architecture: amd64
    bootID: a68486b0-d4be-4bcb-86e8-db5ac6b701d2
    containerRuntimeVersion: docker://1.11.2
    kernelVersion: 4.4.21+
    kubeProxyVersion: v1.4.6
    kubeletVersion: v1.4.6
    machineID: 01a630835c568051c126195458355858
    operatingSystem: linux
    osImage: Google Container-VM Image
    systemUUID: FC52A2D3-D8FB-EBDC-0C1E-57DDD95FFC00
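
The upTime accumulation described above could look roughly like this; a minimal sketch assuming the cron period supplies the current time and the kubelet readiness flag:

package uptime

import "time"

// NodeRecord mirrors the fields of the kubermatic.io/node resource
// that matter for billing.
type NodeRecord struct {
	LastHeartbeatTime time.Time
	UpTime            time.Duration
}

// Accumulate adds the time since the last heartbeat to the node's
// upTime, but only while the kubelet reports status "Ready".
func Accumulate(rec NodeRecord, now time.Time, ready bool) NodeRecord {
	if ready {
		rec.UpTime += now.Sub(rec.LastHeartbeatTime)
	}
	rec.LastHeartbeatTime = now
	return rec
}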

SubscriptionJob

The subscription job will forward the upTime from each ThirdPartyResource kubermatic.io/node to Stripe: https://stripe.com/subscriptions.

The exact spec for this is TBD.

Integration of AWS

  • Investigate the integration of AWS
  • How do we handle regions/zones? (DC currently only supports Datacenter)
  • Check whether we should create our own AMI with all dependencies

Reduce duplication in config files

Following the DRY principle, variables should be isolated to a single source, and those variables should be propagated out where required without user intervention. The current per-directory kubeconfig, makefiles, etc... violate the DRY principle.

Cluster Controller hits NPE

The cluster controller on k.loodse.com has died several times due to hitting an NPE. We should investigate the cause and resolve.

Add-on controller

Deploy k8s add-on on the customer clusters e.g. kube-dns, monitoring, logging

For this we need to extend the API server: it should create a third-party resource.

The add-on controller then deploys the application based on that third-party resource.

Update customer master

Update order:

  • update etcd
  • update apiserver
  • update scheduler + controller manager
  • update nodes

Proposed extension of the phase automaton:

  • Current phases: Unknown, Pending, Launching, Failed, Running, Paused, Deleting
  • New phase: Updating

When Updating, we add UpdatePhases:

  • Start, UpdateEtcd, UpdatedEtcd, UpdateApiserver, UpdatedApiserver, UpdateController (including scheduler), UpdatedController, Done

An UpdateController will watch cluster with Phase Updating and do:

  • Start: switch to UpdateEtcd
  • UpdateEtcd: if version differs, change etcd deployment -> UpdatedEtcd
  • UpdatedEtcd: wait until etcd is healthy -> UpdateApiserver
  • UpdateApiserver: if version differs, change apiserver deployment -> UpdatedApiserver
  • UpdatedApiserver: wait until apiserver is healthy -> UpdateController
  • UpdateController: if version differs, change scheduler+controller-manager deployments -> UpdatedController
  • UpdatedController: wait until scheduler+controller-manager are healthy -> Done

The ClusterController will watch clusters with Phase Updating and UpdatePhase "Done" and switch it to Running.
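
A compact sketch of that transition table; the phase names come from the list above, the "version differs" checks are omitted, and the health checks are reduced to a boolean:

package update

// UpdatePhase is one step of the proposed update automaton.
type UpdatePhase string

const (
	Start             UpdatePhase = "Start"
	UpdateEtcd        UpdatePhase = "UpdateEtcd"
	UpdatedEtcd       UpdatePhase = "UpdatedEtcd"
	UpdateApiserver   UpdatePhase = "UpdateApiserver"
	UpdatedApiserver  UpdatePhase = "UpdatedApiserver"
	UpdateController  UpdatePhase = "UpdateController"
	UpdatedController UpdatePhase = "UpdatedController"
	Done              UpdatePhase = "Done"
)

// Next returns the phase to transition to; healthy reports whether
// the component updated in the previous phase is healthy again.
func Next(p UpdatePhase, healthy bool) UpdatePhase {
	switch p {
	case Start:
		return UpdateEtcd
	case UpdateEtcd:
		return UpdatedEtcd
	case UpdatedEtcd:
		if healthy {
			return UpdateApiserver
		}
	case UpdateApiserver:
		return UpdatedApiserver
	case UpdatedApiserver:
		if healthy {
			return UpdateController
		}
	case UpdateController:
		return UpdatedController
	case UpdatedController:
		if healthy {
			return Done
		}
	}
	return p // stay in the current phase
}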

Create a separate endpoint for nodes and get rid of CORS

Currently we are using CORS to access the customer cluster to get the details of the nodes, e.g.:

https://l308esocxc.gke.k.loodse.com/api/v1/nodes

Create a nodes endpoint on the api server and proxy the traffic to the customer cluster api server

api/v1/dc/{dc}/cluster/{cluster}/nodes
api/v1/dc/gke/cluster/l308esocxc/nodes
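
A sketch of such a proxy using Go's standard library; the target URL is a placeholder, and a real implementation would look up the customer apiserver per {dc}/{cluster} and rewrite the path:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder address; in practice this would be resolved from
	// the {dc}/{cluster} segments of the request path.
	target, err := url.Parse("https://customer-apiserver.example.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(target)

	// Forward node requests to the customer cluster's apiserver, so
	// the dashboard no longer needs CORS. Path rewriting from
	// /api/v1/dc/{dc}/cluster/{cluster}/nodes to /api/v1/nodes is
	// omitted here.
	http.Handle("/api/v1/dc/", proxy)
	log.Fatal(http.ListenAndServe(":8080", nil))
}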

Include commit hash as version

The API & Controller should print the commit hash from which they were built.
It should be printed on application start.

Info: http://stackoverflow.com/questions/11354518/golang-application-auto-build-versioning

So we should do the following:

  • Add an uninitialized variable to the api (/api/cmd/kubermatic-api/main.go) & controller (/api/cmd/kubermatic-cluster-controller/main.go)
  • Set the commit hash into that variable at build time (wercker.yaml & http://devcenter.wercker.com/docs/environment-variables/available-env-vars)
  • Print the variable in main()
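
A minimal sketch of that pattern, following the linked Stack Overflow approach; the variable name is illustrative:

package main

import "fmt"

// commitHash defaults to "unknown" and is overridden at build time, e.g.:
//
//	go build -ldflags "-X main.commitHash=$(git rev-parse HEAD)" ./cmd/kubermatic-api
var commitHash = "unknown"

func main() {
	// Print the commit hash on application start, as proposed above.
	fmt.Printf("starting (commit %s)\n", commitHash)
}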

Change the node info path

The current path is /api/v1/dc/{dc}/cluster/{cluster}/k8s/node/{node}. The correct pattern would be /api/v1/dc/{dc}/cluster/{cluster}/k8s/nodes/{node}.

Remove configuration from code

Allow dynamic changes in the application without modifying the logic; specifically, source information about namespaces/containers/deployments/etc. from a configuration file instead of using magic strings in the application logic.
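
A sketch of sourcing such names from a configuration file instead of magic strings; the file layout and field names are illustrative:

package config

import (
	"encoding/json"
	"os"
)

// Config collects names that were previously hard-coded in the logic.
type Config struct {
	Namespace  string `json:"namespace"`
	Deployment string `json:"deployment"`
	Container  string `json:"container"`
}

// Load reads the configuration from a JSON file on disk.
func Load(path string) (*Config, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c Config
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	return &c, nil
}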

Reload configuration without restarting API server

The API server needs to be restarted to pick up changes in the config file, which results in some downtime. We can mitigate that by reloading the config, either via a watch on the files or by adding a reload endpoint.
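
A sketch of the reload-endpoint variant: the active config lives in an atomic.Value, so in-flight requests keep a consistent view while a reload swaps it out. The path and config format are illustrative:

package main

import (
	"log"
	"net/http"
	"os"
	"sync/atomic"
)

// current holds the active configuration; readers load it atomically,
// so a reload never tears an in-flight request.
var current atomic.Value

// loadConfig is a stand-in for real config parsing.
func loadConfig(path string) ([]byte, error) {
	return os.ReadFile(path)
}

func main() {
	cfg, err := loadConfig("config.json")
	if err != nil {
		log.Fatal(err)
	}
	current.Store(cfg)

	// Hitting /reload re-reads the file without restarting the server.
	http.HandleFunc("/reload", func(w http.ResponseWriter, r *http.Request) {
		cfg, err := loadConfig("config.json")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		current.Store(cfg)
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}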

Document code

Add some more documentation. This should make it easier for new people to get into the code.

Metalinter update (ineffassign warning)

The Go metalinter was updated and generates new warnings:

handler/kubeconfig.go:93:8: warning: ineffectual assignment to err (ineffassign)
controller/cluster/pending.go:31:2: warning: ineffectual assignment to changedC (ineffassign)
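
For context, ineffassign flags assignments whose value is overwritten before it is ever used. An illustrative snippet (not the actual code at those locations):

package main

import (
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("a.txt") // ineffassign warns here: err (and f) ...
	f, err = os.Open("b.txt")  // ...are overwritten before being checked
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
}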

Vendoring: migrate to Glide

I want to discuss the pros and cons of migrating from godep to Glide.

In general, the update process with godep and k8s is quite tricky, and the problem we had with log_dir (#63) was due to the dependencies not being resolved correctly.

Even once we got this fixed, new issues came up, e.g. install.go was missing.
You can compare our repo
https://github.com/kubermatic/api/tree/sch-upgrade-godeps/vendor/k8s.io/kubernetes/pkg/apis/authentication.k8s.io
with the k8s repo
https://github.com/kubernetes/kubernetes/tree/v1.3.5/pkg/apis/authentication.k8s.io

I migrated to Glide and the dependency issues were resolved:
https://github.com/kubermatic/api/pull/65/commits/141414246c5d0ba9631dac7c41a372ab729caf1e

Pros:

  • easier dependencies updates
  • direct download of dependencies in vendor folder (no bloating of src)
  • vendor folder could be removed from git

Cons:

  • currently Glide downloads more files than godep (when we switch to the k8s client lib, this will reduce the file size dramatically)

Let me know what you think, or if you see another alternative.

Failure in status=pending if partial creation failed

I0922 12:15:59.735642 5 controller.go:318] Syncing cluster "cluster-xdma4f2966"
I0922 12:15:59.736046 5 controller.go:291] Launch timeout for cluster "xdma4f2966" after 23m8.736040036s
I0922 12:15:59.742208 5 controller.go:320] Finished syncing namespace "cluster-xdma4f2966" (6.566312ms)
E0922 12:15:59.742377 5 controller.go:420] Error syncing cluster with key cluster-xdma4f2966: error in phase "Failed" sync function of cluster "xdma4f2966": secrets "apiserver-tls" already exists

Kubermatic CLI

As a user, I want to interact with Kubermatic via a CLI.
The CLI needs to accept static access credentials, as it must be possible to use it in a completely automated workflow.

Panic: flag redefined: log_dir after upgrading k8s to 1.3.5

After upgrading the k8s dependencies to 1.3.5, I got the following dump when trying to run the api-server:

/private/var/folders/2s/dnrtgx8j4wz4w50pj31d57fc0000gn/T/API-Servergo flag redefined: log_dir
panic: /private/var/folders/2s/dnrtgx8j4wz4w50pj31d57fc0000gn/T/API-Servergo flag redefined: log_dir

goroutine 1 [running]:
panic(0x59140a0, 0xc82010e190)
    /usr/local/opt/go/libexec/src/runtime/panic.go:481 +0x3e6
flag.(*FlagSet).Var(0xc820072060, 0x8f54708, 0xc82010e140, 0x60404e8, 0x7, 0x61e30a0, 0x2f)
    /usr/local/opt/go/libexec/src/flag/flag.go:776 +0x454
flag.(*FlagSet).StringVar(0xc820072060, 0xc82010e140, 0x60404e8, 0x7, 0x0, 0x0, 0x61e30a0, 0x2f)
    /usr/local/opt/go/libexec/src/flag/flag.go:679 +0xc7
flag.(*FlagSet).String(0xc820072060, 0x60404e8, 0x7, 0x0, 0x0, 0x61e30a0, 0x2f, 0xc82010e130)
    /usr/local/opt/go/libexec/src/flag/flag.go:692 +0x83
flag.String(0x60404e8, 0x7, 0x0, 0x0, 0x61e30a0, 0x2f, 0x463c1e2)
    /usr/local/opt/go/libexec/src/flag/flag.go:699 +0x5f
github.com/kubermatic/api/vendor/github.com/golang/glog.init()
    /Users/Sebastian/git/kubermatic/src/github.com/kubermatic/api/vendor/github.com/golang/glog/glog_file.go:41 +0x13c
github.com/kubermatic/api/handler.init()
    /Users/Sebastian/git/kubermatic/src/github.com/kubermatic/api/handler/user.go:60 +0x6c
main.init()
    /Users/Sebastian/git/kubermatic/src/github.com/kubermatic/api/cmd/kubermatic-api/main.go:56 +0x5e

Process finished with exit code 2

This is the line that causes the trouble: https://github.com/kubernetes/kubernetes/blob/master/vendor/github.com/golang/glog/glog_file.go#L41

LoadTest for Beta

Please write a script which creates ~500 clusters. Those clusters should be distributed across all available seed clusters.

If all clusters are up, create 3-5 nodes per cluster. For simplicity use just one provider.

All this should happen via script!
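
A sketch of the kind of loop such a script could run; the endpoint and payload below are hypothetical placeholders, not the real Kubermatic API:

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	const clusters = 500
	for i := 0; i < clusters; i++ {
		// Hypothetical cluster-creation endpoint and payload; the real
		// API call, credentials and seed-cluster distribution would go here.
		body := bytes.NewBufferString(fmt.Sprintf(`{"name":"loadtest-%d"}`, i))
		resp, err := http.Post("https://kubermatic.example.com/api/v1/cluster", "application/json", body)
		if err != nil {
			log.Printf("cluster %d: %v", i, err)
			continue
		}
		resp.Body.Close()
	}
}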

Adopt docker libmachine

We can use github.com/docker/machine/drivers as our providers for the cloud providers.
This way we can have several datacenters abstracted and implemented.
This should be implemented as a service, using Swagger for auto-generated, self-documenting code.
