rancher / rancher

Complete container management platform

Home Page: http://rancher.com

License: Apache License 2.0

Shell 1.27% Python 13.49% Makefile 0.01% Go 81.96% Dockerfile 0.28% PowerShell 0.31% Batchfile 0.01% Groovy 2.10% JavaScript 0.01% HCL 0.32% Mustache 0.02% Jinja 0.24%
rancher docker kubernetes orchestration cattle containers

rancher's Introduction

Rancher

This file is auto-generated from README-template.md, please make any changes there.


Rancher is an open source container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

Latest Release

  • v2.8
    • Latest - v2.8.3 - rancher/rancher:v2.8.3 / rancher/rancher:latest - Read the full release notes.
    • Stable - v2.8.3 - rancher/rancher:v2.8.3 / rancher/rancher:stable - Read the full release notes.
  • v2.7
    • Latest - v2.7.10 - rancher/rancher:v2.7.10 - Read the full release notes.
    • Stable - v2.7.10 - rancher/rancher:v2.7.10 - Read the full release notes.
  • v2.6
    • Latest - v2.6.14 - rancher/rancher:v2.6.14 - Read the full release notes.
    • Stable - v2.6.14 - rancher/rancher:v2.6.14 - Read the full release notes.

To get automated notifications of our latest release, you can watch the announcements category in our forums, or subscribe to the RSS feed https://forums.rancher.com/c/announcements.rss.

Quick Start

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher

Open your browser to https://localhost
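
On recent Rancher versions (v2.6 and later) the UI asks for a bootstrap password on first login. A common way to retrieve it from the container logs is sketched below; filtering by image name is just one way to find the container ID, so adjust as needed.

# Find the ID of the rancher/rancher container, then pull the generated
# bootstrap password out of its logs (applies to Rancher v2.6+).
CONTAINER_ID=$(sudo docker ps --filter ancestor=rancher/rancher --format '{{.ID}}' | head -n 1)
sudo docker logs "$CONTAINER_ID" 2>&1 | grep "Bootstrap Password:"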

Installation

See Installing/Upgrading Rancher for all installation options.

Minimum Requirements

  • Operating Systems
    • Please see Support Matrix for specific OS versions for each Rancher version. Note that the link will default to the support matrix for the latest version of Rancher. Use the left navigation menu to select a different Rancher version.
  • Hardware & Software

Using Rancher

To learn more about using Rancher, please refer to our Rancher Documentation.

Source Code

This repo is a meta-repo used for packaging and contains the majority of the Rancher codebase. For other Rancher projects and modules, see go.mod for the full list.

Rancher also includes other open source libraries and projects, see go.mod for the full list.

Build configuration

Refer to the build docs on how to customize the building and packaging of Rancher.

Support, Discussion, and Community

If you need any help with Rancher, please join us at either our Rancher forums or Slack, where most of our team hangs out.

Please submit any Rancher bugs, issues, and feature requests to rancher/rancher.

For security issues, please first check our security policy and email [email protected] instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.

License

Copyright (c) 2014-2024 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

rancher's People

Contributors

aiwantaozi, aiyengar2, bmdepesa, cbron, cmurphy, daxmc99, dramich, gitlawr, harrisonwaffel, ibuildthecloud, igomez06, jakefhyde, jiaqiluo, kevinjoiner, kinarashah, luthermonson, maxsokolovsky, mbolotsuse, moelsayed, mrajashree, nathan-jenan-rancher, nickgerace, oats87, orangedeng, pennyscissors, prachidamle, rmweir, sowmyav27, strongmonkey, superseb


rancher's Issues

Load Balancer Service

Overview

A load balancer is a key requirement for developers and ops who want to deploy large-scale applications. Load balancing support is notoriously uneven across different clouds. By offering a uniform, simple, and powerful load balancer solution, Rancher makes it possible for Docker developers to port applications easily across multiple clouds, or to use different cloud providers as independent resource pools to host highly available applications.

Description

The basic flow of creating and using an HTTPS load balancer in Rancher is as follows:

  1. User uploads SSL certificates into Rancher
  2. User creates a load balancer using the uploaded SSL certificate
  3. User adds Docker containers into the load balancer
  4. The system automatically monitors the amount of traffic going across the load balancer and can elastically boost its capacity as the traffic grows.

Rancher creates containers running HAProxy on Linux servers. The container listens on ports 80, 443, or other ports enabled by the user, and maps these ports to ports on the Linux server. The Linux server's IP becomes the load balancer VIP. Rancher monitors the CPU, memory, and networking resources consumed by the HAProxy containers. Rancher monitors the load on HAProxy containers and automatically creates new HAProxy containers on additional Linux servers on demand.

Rancher load balancer service uses DNS-based load balancing to distribute traffic across multiple availability zones and across multiple HAProxy instances in the same availability zone. HAProxy instances are added to the user's DNS resolver. Rancher also automatically terminates HAProxy containers when load decreases. Once an HAProxy container dies, it is removed from the DNS resolver. DNS-based load balancing has the following advantages:

  1. It is much simpler compared with maintaining the same VIP across multiple hosts and multiple availability zones.
  2. It offers consistent failover behavior for global availability zone and local HAProxy container failures.

The advantages of DNS-based load balancing and the complexities of traditional approaches have led many web companies to adopt DNS-based load balancing on their own. (https://www.loggly.com/blog/why-aws-route-53-over-elastic-load-balancing/)
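
As a concrete illustration of the DNS-based approach (hostname and addresses below are hypothetical), resolving a load balancer name simply returns one A record per healthy HAProxy instance, and unhealthy instances drop out of the answer:

# Hypothetical lookup: each healthy HAProxy host shows up as an A record.
$ dig +short lb-1234.user-42.lb.rancher.io
203.0.113.10
203.0.113.11
198.51.100.7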

Background

It is useful to review the following documents which cover the basic concepts of load balancing and DNS failover:

  1. AWS Route 53 Developer Guide
  2. AWS Elastic Load Balancing Developer Guide
  3. HAProxy Configuration Manual

Operations

CreateLoadBalancer

CreateLoadBalancer takes as input the name of the load balancer, a list of listeners, and a list of Linux servers used to host HAProxy containers, and returns a DNS name of the form load_balancer_id.user_id.lb.rancher.io.

Rancher internally uses the AWS Route 53 service to host the DNS records for *.lb.rancher.io.

Because each Linux server has only one IP, each Linux server can only be used to create one load balancer.

The initial implementation may decide to create HAProxy on each Linux server and register them in DNS regardless of the load. The overhead of running a Docker container is small. This approach saves the effort of dynamically scaling up/down HAProxy containers.
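
Purely as a sketch of the inputs and outputs described above (the endpoint path and field names are assumptions, not a committed API), a CreateLoadBalancer call might look like this:

# Hypothetical request; endpoint and field names are illustrative only.
curl -X POST "http://<rancher-server>:8080/v1/loadbalancers" \
  -H 'Content-Type: application/json' \
  -d '{"name": "web-lb",
       "hosts": ["host-1", "host-2"],
       "listeners": [{"loadBalancerPort": 443, "backendPort": 8080,
                      "protocol": "https", "backendProtocol": "http",
                      "sslCertificateId": "cert-1"}]}'
# Hypothetical response:
# {"dnsName": "lb-1.user-42.lb.rancher.io"}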

CreateLoadBalancerListener

CreateLoadBalancerListener takes as input the following arguments:

  • LoadBalancerPort
  • BackendPort
  • Protocol
  • BackendProtocol
  • SSLCertificateID

ConfigureHealthCheck

ConfigureHealthCheck must work both at DNS and HAProxy level. Refer to AWS ELB health check syntax for a possible design: http://docs.aws.amazon.com/cli/latest/reference/elb/configure-health-check.html
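
For reference, the AWS ELB syntax pointed to above looks like the following (values are illustrative); a Rancher equivalent would need to express the same target, interval, and threshold ideas at both the DNS and HAProxy layers:

# AWS CLI example of the ELB health-check parameters referenced above.
aws elb configure-health-check \
  --load-balancer-name my-load-balancer \
  --health-check Target=HTTP:80/ping,Interval=30,Timeout=3,UnhealthyThreshold=2,HealthyThreshold=2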

Multi-container (aka "pods", "services") upload/deploy from file

I know everything is in early stages, but I figured I'd submit something anyway so that there's a place to discuss this particular topic.

I tend to have apps/stacks that spread out over many containers. I'll have a couple of volume containers, one container for my configuration and dynamic run-time content that also acts as a hub for all my volume containers, and then any number of application containers that --volumes-from all the others via my configuration container to share sockets among other things. All together they form a "stack" or a "service". Some can be 6-10 containers in number, with webs of shared volumes and links between them. Fig has been my normal method of keeping track of everything so far. It's been very nice to see the entire stack in one file and be able to edit its deployment from the same location. So far this has been single host, but I feel like you've solved most of the multihost networking issues in Rancher already, and thus being able to deploy services in various host configurations is possible.

Instead of entering single containers one by one, I think it would be nice to use either YAML or JSON files to specify a suite or "pod" of containers and be able to upload it from the web UI all at once. Better still, if you could incorporate fleet-like logic within that file to have certain containers "follow" others or be "global" etc., that would be ideal so that I could do things like have my webapp be on the same host as my database because I like to use unix sockets between them, or have them NOT be on the same host for fault-tolerance etc. Is such a feature planned for Rancher?
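
A minimal sketch of the kind of file being asked for, loosely modeled on Fig's format (entirely hypothetical; Rancher had not defined any such format at this point), including the sort of placement hints mentioned above:

# Hypothetical stack definition, Fig-style; the "affinity" key is made up.
config:
  image: myorg/config
db:
  image: postgres
  volumes_from: [config]
web:
  image: myorg/webapp
  links: [db]
  volumes_from: [config]
  affinity: same_host_as:db   # keep web next to db for unix sockets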

My hold up on many docker-based web interfaces (actually, every single one I've tried) is that they require manual entry one-by-one, which just does not work for complex container situations. I like the strategy you're taking with Rancher all around, and I think if you can make suites of containers the core element as services, rather than individual containers, you will really get a leg up on the web-UI docker world (which is really elementary ATM).

Thanks for your work here, I'm enjoying using rancher!

Historical container resource utilization

Rancher needs to support real-time and historical container resource utilization for:

  • CPU
    • Usage vs Capacity
  • Memory
    • Usage vs Capacity
  • Disk
    • Usage vs Capacity
    • IOPS
    • R/W throughput
  • Network

How far back we want to keep history is TBD.

Load Balancer API[Merge] and HAProxy Lifecycle

  • Load Balancer API, schema, and process logic (not health check policy) [Merge]
  • HAProxy lifecycle management [WIP]
  • Need to also refactor some cattle code to accommodate the life cycle mgmt of HAProxy.

Stack trace: State [removing] is not valid

I am not sure how to reproduce, because I experienced different problems at the same time, but the stack trace below seems to indicate a bug.

Running on GCE Ubuntu 14.04.

2014-12-16 16:13:05,489 ERROR [e5917261-2d79-40c9-8a3a-5d35b6a1179c:61] [instance:5] [instance.start->(InstanceStart)] [] [cutorService-14] [i.c.p.process.instance.InstanceStart] Failed to create compute for instance [5] 
2014-12-16 16:13:05,513 ERROR [:] [] [] [] [cutorService-14] [i.c.p.e.e.i.ProcessEventListenerImpl] Unknown exception running process [instance.start:61] on [5] io.cattle.platform.engine.process.impl.ProcessCancelException: State [removing] is not valid
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.preRunStateCheck(DefaultProcessInstanceImpl.java:275) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.assertState(DefaultProcessInstanceImpl.java:513) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.runWithProcessLock(DefaultProcessInstanceImpl.java:336) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl$2.doWithLockNoResult(DefaultProcessInstanceImpl.java:259) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:7) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:3) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl$3.doWithLock(AbstractLockManagerImpl.java:40) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.LockManagerImpl.doLock(LockManagerImpl.java:33) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:13) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:37) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.acquireLockAndRun(DefaultProcessInstanceImpl.java:256) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.runDelegateLoop(DefaultProcessInstanceImpl.java:187) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.executeWithProcessInstanceLock(DefaultProcessInstanceImpl.java:160) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl$1.doWithLock(DefaultProcessInstanceImpl.java:109) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl$1.doWithLock(DefaultProcessInstanceImpl.java:106) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl$3.doWithLock(AbstractLockManagerImpl.java:40) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.LockManagerImpl.doLock(LockManagerImpl.java:33) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:13) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:37) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.process.impl.DefaultProcessInstanceImpl.execute(DefaultProcessInstanceImpl.java:106) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.engine.eventing.impl.ProcessEventListenerImpl.processExecute(ProcessEventListenerImpl.java:51) ~[cattle-framework-engine-0.5.0-SNAPSHOT.jar:na]
        at sun.reflect.GeneratedMethodAccessor595.invoke(Unknown Source) ~[na:na]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_65]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_65]
        at io.cattle.platform.eventing.annotation.MethodInvokingListener$1.doWithLockNoResult(MethodInvokingListener.java:69) [cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:7) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.LockCallbackNoReturn.doWithLock(LockCallbackNoReturn.java:3) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl$3.doWithLock(AbstractLockManagerImpl.java:40) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.LockManagerImpl.doLock(LockManagerImpl.java:33) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:13) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.lock.impl.AbstractLockManagerImpl.lock(AbstractLockManagerImpl.java:37) [cattle-framework-lock-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.eventing.annotation.MethodInvokingListener.onEvent(MethodInvokingListener.java:65) [cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.eventing.impl.AbstractThreadPoolingEventService$2.doRun(AbstractThreadPoolingEventService.java:136) [cattle-framework-eventing-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext(NoExceptionRunnable.java:13) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:104) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]

A docker image name starting with space ' ' will lead to stacktrace and no UI response

rancher/server image id 26345bd126dd;

Seen behavior: if I use image name ' google/cadvisor:latest' (I copied two spaces) when adding an image, clicking "Deploy" doesn't do anything. The rancher/server logs showed a stacktrace. Problem resolved by removing spaces.
Expected behavior: resilience (strip()), or a notification of the error.

2014-12-17 16:59:50,420 ERROR [:] [] [] [] [qtp430484574-54] [i.g.i.g.r.handler.ExceptionHandler  ] Exception in API for request [io.github.ibuildthecloud.gdapi.request.ApiRequest@39e2bcc3] java.lang.IllegalArgumentException: Illegal character in path at index 40: https://index.docker.io/v1/repositories/  google/cadvisor/images
        at java.net.URI.create(URI.java:859) ~[na:1.7.0_65]
        at org.apache.http.client.methods.HttpGet.<init>(HttpGet.java:69) ~[httpclient-4.3.1.jar:4.3.1]
        at org.apache.http.client.fluent.Request.Get(Request.java:80) ~[fluent-hc-4.3.1.jar:4.3.1]
        at io.cattle.platform.docker.client.DockerClient.lookup(DockerClient.java:43) ~[cattle-docker-storage-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.docker.storage.DockerStoragePoolDriver.populateExtenalImageInternal(DockerStoragePoolDriver.java:46) ~[cattle-docker-storage-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.storage.pool.AbstractKindBasedStoragePoolDriver.populateExtenalImage(AbstractKindBasedStoragePoolDriver.java:31) ~[cattle-iaas-storage-service-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.storage.service.impl.StorageServiceImpl.populateNewRecord(StorageServiceImpl.java:51) ~[cattle-iaas-storage-service-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.storage.service.impl.StorageServiceImpl.registerRemoteImage(StorageServiceImpl.java:38) ~[cattle-iaas-storage-service-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.storage.api.filter.ExternalTemplateInstanceFilter.validateImageUuid(ExternalTemplateInstanceFilter.java:56) ~[cattle-iaas-storage-service-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.storage.api.filter.ExternalTemplateInstanceFilter.create(ExternalTemplateInstanceFilter.java:42) ~[cattle-iaas-storage-service-0.5.0-SNAPSHOT.jar:na]
        at io.github.ibuildthecloud.gdapi.request.resource.impl.FilteredResourceManager.create(FilteredResourceManager.java:56) ~[gdapi-java-server-0.4.2.jar:na]
        at io.github.ibuildthecloud.gdapi.request.handler.ResourceManagerRequestHandler.generate(ResourceManagerRequestHandler.java:39) ~[gdapi-java-server-0.4.2.jar:na]
        at io.github.ibuildthecloud.gdapi.request.handler.AbstractResponseGenerator.handle(AbstractResponseGenerator.java:14) ~[gdapi-java-server-0.4.2.jar:na]
        at io.github.ibuildthecloud.gdapi.request.handler.write.DefaultReadWriteApiDelegate.handle(DefaultReadWriteApiDelegate.java:27) ~[gdapi-java-server-0.4.2.jar:na]
        at io.github.ibuildthecloud.gdapi.request.handler.write.DefaultReadWriteApiDelegate.write(DefaultReadWriteApiDelegate.java:22) ~[gdapi-java-server-0.4.2.jar:na]
        at sun.reflect.GeneratedMethodAccessor570.invoke(Unknown Source) ~[na:na]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_65]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_65]
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) ~[spring-tx-3.2.4.RELEASE.jar:3.2.4.RELEASE]
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) ~[spring-tx-3.2.4.RELEASE.jar:3.2.4.RELEASE]
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) ~[spring-tx-3.2.4.RELEASE.jar:3.2.4.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) ~[spring-aop-3.2.5.RELEASE.jar:3.2.5.RELEASE]
        at com.sun.proxy.$Proxy35.write(Unknown Source) ~[na:na]
        at io.github.ibuildthecloud.gdapi.request.handler.write.ReadWriteApiHandler.handle(ReadWriteApiHandler.java:19) ~[gdapi-java-server-0.4.2.jar:na]
        at io.github.ibuildthecloud.gdapi.servlet.ApiRequestFilterDelegate.doFilter(ApiRequestFilterDelegate.java:88) ~[gdapi-java-server-0.4.2.jar:na]
        at io.cattle.platform.api.servlet.ApiRequestFilter$1.runInContext(ApiRequestFilter.java:61) [cattle-framework-api-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:104) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
        at io.cattle.platform.api.servlet.ApiRequestFilter.doFilter(ApiRequestFilter.java:54) [cattle-framework-api-0.5.0-SNAPSHOT.jar:na]
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82) [jetty-servlets-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:256) [jetty-servlets-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) [jetty-security-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.Server.handle(Server.java:370) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865) [jetty-http-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240) [jetty-http-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) [jetty-io-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) [jetty-io-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
Caused by: java.net.URISyntaxException: Illegal character in path at index 40: https://index.docker.io/v1/repositories/  google/cadvisor/images
        at java.net.URI$Parser.fail(URI.java:2829) ~[na:1.7.0_65]
        at java.net.URI$Parser.checkChars(URI.java:3002) ~[na:1.7.0_65]
        at java.net.URI$Parser.parseHierarchical(URI.java:3086) ~[na:1.7.0_65]
        at java.net.URI$Parser.parse(URI.java:3034) ~[na:1.7.0_65]
        at java.net.URI.<init>(URI.java:595) ~[na:1.7.0_65]
        at java.net.URI.create(URI.java:857) ~[na:1.7.0_65]
        ... 63 common frames omitted

Support Cloud Providers

Although Rancher is agnostic to which cloud a host exists in, or whether it is a bare-metal machine, it does support various cloud-specific features like adding a host (through Machine) or programming Route 53 on AWS.

Rancher needs to support a way to add a "Cloud Provider" so that these providers can be re-used by other features within Rancher. Initial supported cloud providers should be AWS, Digital Ocean, and GCE. Each cloud provider should minimally contain the credentials to access that cloud in order to execute cloud specific commands.
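
As a sketch only (provider names and fields invented for illustration), a stored cloud provider entry might hold little more than a type, credentials, and a default region:

# Hypothetical cloud provider records; field names are illustrative.
- name: aws-prod
  type: amazonec2
  accessKey: "<access-key>"
  secretKey: "<secret-key>"
  defaultRegion: us-east-1
- name: do-dev
  type: digitalocean
  accessToken: "<token>"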

Feature: Add a new container Logs API

Overview

Logs are always useful for figuring out what's going on in a distributed system, especially when something goes wrong. Docker currently supports getting logs from a container that logs to stdout/stderr. Everything that the process running in the container writes to stdout or stderr, Docker converts to JSON and stores in a file on the host machine's disk, which you can then retrieve with the docker logs command. This feature aims to provide a way to monitor logs from the Rancher UI.

What Docker already supports

The current way of fetching the logs of a container is to run the docker logs command, with 5 possible options in the Docker remote API as of version 1.16 (https://docs.docker.com/reference/api/docker_remote_api_v1.16/#get-container-logs):

Query Parameters:

follow – 1/True/true or 0/False/false, return stream. Default false
stdout – 1/True/true or 0/False/false, show stdout log. Default false
stderr – 1/True/true or 0/False/false, show stderr log. Default false
timestamps – 1/True/true or 0/False/false, print timestamps for every log line. Default false
tail – Output specified number of lines at the end of logs: all or <number>. Default all
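
For reference, the underlying Docker remote API call (as documented for API v1.16) can be exercised against the local socket roughly like this; the container ID is a placeholder, and curl's --unix-socket option needs curl 7.40 or newer:

# Docker remote API v1.16 container-logs call over the local unix socket.
curl --unix-socket /var/run/docker.sock \
  "http://localhost/containers/<container-id>/logs?stdout=1&stderr=1&timestamps=1&tail=10"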

Need for this feature

Currently, there is no way from the Rancher UI to monitor docker container logs. Hence, this feature aims to obtain docker logs from each container, then collect and ship them using this new API to the Rancher UI so that the user can always monitor them through the console on the Rancher UI. Thus, this feature provides an alternative for the end user to monitor the logs, as against the existing ways of either using the Docker Remote API or the Docker CLI.

Tentative Design

The result of this new action (API) should be a URL to a websocket on the host and also a JWT token. The websocket should connect to the host-api go process which runs on the host. (I'll update this section as I progress further)

Implementation Details

The implementation of this new API would probably be based on how the stats action and the exec action on instance have been implemented in Rancher. Currently, the host-api go process creates a secure websocket that proxies information. Rancher currently uses it to grab information from "cadvisor". In this new action API, I plan to call the docker logs command with the appropriate input parameters required for fetching the logs of a container.

Query Parameters Format:

  • "follow" - boolean
  • "lines" - integer (corresponds to tail in Docker's remote API); default value: 100
  • "stdOut" - boolean
  • "stdErr" - boolean
  • "timestamps" - boolean

Example API usage -

http://<cattle-server-ip>:8080/v1/containers/<container-id>/?action=logs&lines=10&stdOut=true&follow=true
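
A sketch of how the action call and the websocket handoff described in the tentative design might fit together (URL shapes and response fields are assumptions, not a finished API):

# Hypothetical: invoke the logs action on a container resource.
curl -s -X POST \
  "http://<cattle-server-ip>:8080/v1/containers/<container-id>/?action=logs" \
  -H 'Content-Type: application/json' \
  -d '{"follow": true, "lines": 100, "stdOut": true, "stdErr": true, "timestamps": false}'
# Hypothetical response: a websocket URL on the host plus a short-lived JWT.
# {"url": "ws://<host-ip>:<port>/v1/logs/", "token": "<jwt>"}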

Use Cases Needed to be Explored/Supported

  1. Tailing the logs - This functionality will be supported, wherein the user can input the number of lines of logs to retrieve. The default value will be set to 100 lines.
  2. Downloading the logs from the UI - YET TO BE DECIDED?
  3. Scrolling through the logs on the UI - Once the logs are being viewed on the UI, the user should be able to scroll through them.
  4. Searching through the logs on the UI ?

Feature: User and Project Management

Feature

Add user and project management capabilities using GitHub OAuth

Description

We currently default to no authentication: when you set up Rancher, you get an admin account with full access to everything. This is not entirely desirable if you set it up on a public IP open to the world. There should be an easy way to optionally require authentication via GitHub and provide multi-tenancy with an account per user plus accounts for GitHub teams.

Definitions

  1. User - any person that uses the system, including admin
  2. Admin - any person that has access to all resources managed by the system

Enhancements

  1. admin auth setup
    • admin can set up a GitHub application and set the clientId and clientSecret in Rancher
    • admin can enable or disable auth in Rancher
    • admin can whitelist GitHub orgs
    • admin can whitelist GitHub users
  2. user (unauthenticated)
    • if auth is off
      • user essentially acts as admin
    • if auth is on
      • user can login to Rancher using Github OAuth
      • user can register with Rancher using Github OAuth
  3. user (authenticated)
    • user can create projects
      • user can create a project and limit it in scope
        - organization scope : everyone within the GitHub organization can view the resources managed in this project
        - team scope : everyone within the GitHub team can view the resources managed in this project
        - user scope : only the creator of the project can view the resources managed in this project
      • user can set project name
      • user can set project description
    • user can delete projects
    • user can update projects
    • user can view allowed projects
      • projects that belong to teams that he/she is a part of (team scope), and
      • projects that belong to organizations that he/she is a part of (organization scope), and
      • projects created by this user
Tasks
UI (authentication disabled)
  • Detect that authentication is not enabled and show a warning in the header that you might in fact like to have auth maybe.
  • Provide instructions on how to go to GitHub and setup an application & organization
UI (authentication enabled)
  • Detect that authentication is enabled and show a login screen
  • Provide a mechanism to update the authentication configuration, or disable auth.
API
  • Add a mechanism to create, update, and remove the authentication configuration (client_id, client_secret, whitelisted user and organization IDs)
  • Enable/disable the unauthenticated admin user according to the config.
  • Add an endpoint to accept POST /v1/token {code: ''} and return a JSON web token (see the token-exchange sketch after this list).
  • Take the code and send it to the GitHub API to get back an access_token. Then use the access_token to request info about the user, e.g. their team memberships.
  • If the github user is not a member of the configured users/organization(s), return 403.
  • JWT should be valid for a configurable amount of time (default: 1hr?)
  • JWT body should contain any info you need to authorize an API request for this user. e.g. ID of the account, ID of team accounts the user is associated with.
  • JWT body should not contain the github access_token, if we can avoid it. Try to get everything we need up-front.
  • Validate API requests against the token and account ID passed in the Authorization and X-API-Project-Id headers. If expired (exp) or not valid yet (iat), return 401. If the signature doesn't match, the user is not a member of the account, etc, return 403.
  • GET /v1/accounts should return only accounts that the user (identified by token) has access to.
  • List accounts ("projects") that the user has access to and provide a mechanism to switch between them
  • Send X-API-Project-Id: header to identify the account each request is for.
  • Create a team account in Rancher for each team the user has access to, if one does not already exist.
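
A sketch of the token exchange and header usage described in the list above (the endpoint and headers come from the tasks; the response shape is an assumption):

# Hypothetical: exchange the GitHub OAuth code for a Rancher JWT.
curl -s -X POST "http://<rancher-server>:8080/v1/token" \
  -H 'Content-Type: application/json' \
  -d '{"code": "<code-from-github-oauth-redirect>"}'
# -> {"jwt": "<signed-token-valid-for-about-an-hour>"}

# Hypothetical: present the JWT and project on subsequent API requests.
curl -s "http://<rancher-server>:8080/v1/accounts" \
  -H 'Authorization: Bearer <jwt>' \
  -H 'X-API-Project-Id: <account-id>'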

UI doesn't show metrics (cpu, storage) for node (agent) on fedora 20 - it just shows "processing"

docker version - 1.3

Running rancher/server:latest is successful and it is an active running container. (it shows on docker ps)
However running the second command to bring up the rancher/agent fails with:
"Invalid URL [http://:9090] or not authorized
Will retry in another second"

Notes - 9090 was set in rancher/server also because 8080 is occupied on that machine.
it was set as 9090:9090

Both containers are run as "root".
firewall is off.

Container fails to start due to net-util.sh

When you launch a container, sometimes it fails with an obscure error involving net-util.sh. This error is actually masking the real issue. The container fails to start for some reason, and then net-util.sh cannot run because the container doesn't exist. The typical reason the container doesn't start is that -it is not currently (v0.2.0) supported from the UI. If you try to run an image like ubuntu it will fail because it launches bash and assumes you have a tty or stdin open.

Two things need to be fixed

  1. UI needs to support -it
  2. net-util.sh needs to not mask the real error

Debian on GCE fails

For some reason the ARP proxying that Rancher uses does not seem to function on the default Debian images on GCE. More investigation is needed here.

[Install] [Agent] Error response from daemon: No such container:

getting error when trying to register docker nodes:

"Error response from daemon: No such container:"

It appears to be different than the following issues - #26 #33

see below for details of docker version, docker info, and docker images (from rancher/server)

docker version - NODE:
Client version: 1.3.3
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f/1.3.3
OS/Arch (client): linux/amd64
Server version: 1.3.3
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): c78088f/1.3.3

docker info - NODE:
Client version: 1.3.3
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f/1.3.3
OS/Arch (client): linux/amd64
Server version: 1.3.3
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): c78088f/1.3.3

nmap from NODE to RANCHER SERVER:
Starting Nmap 5.51 ( http://nmap.org ) at 2015-01-28 20:39 UTC
Nmap scan report for 10.0.0.143
Host is up (0.0026s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy

docker images - RANCHER SERVER:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
rancher/server latest fc50e07ee16a 8 days ago 338.6 MB
rancher/server v0.4.3 fc50e07ee16a 8 days ago 338.6 MB
rancher/server v0.4.3-rc1 fc50e07ee16a 8 days ago 338.6 MB

docker ps - RANCHER SERVER:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d21b12f162f rancher/server:latest "/usr/share/cattle/c 32 minutes ago Up 32 minutes 0.0.0.0:8080->8080/tcp rancher

This is all within a VPC in our dev AWS environment using the new amazon-ecs ami. If you need anything else just let me know.

Best Regards,
Bryce Kottke

Validation tests periodically fail

On a fresh build of a new cluster, the validation tests will experience errors/failures. The ping neighbors test will time out while waiting for containers to provision.

Linking test sometimes fails, though infrequently.

Leaving the bill-043-fedora cluster online. This cluster had one error and one failure during the test run. On the second pass all tests were successful.

Needs more investigation.

Docker Swarm API/Schema [Merge] and Swarm integration Phase I PoC

  • Docker Swarm API/Schema/Process logic [Merge]
  • Start a proof of concept for the following:
    • Creating a new cluster from Rancher results in the registering of the swarm cluster using Docker's Hosted Discovery service. We can integrate with other services later.
    • Launch a container per cluster with the swarm-server daemon. We may need to create a Docker image of this that we host on Docker Hub.
    • Adding a host via Rancher would result in registering the host with the Hosted Discovery Service so the swarm-server daemon can understand what hosts to eventually launch containers to.

Native Docker Interface

Although Rancher does provide a set of APIs to manipulate Docker containers, it also needs the ability to support the following:

  • Poll mechanism for Rancher agents to detect existing containers running on a host and import them into Rancher.
  • Event listening mechanism for Rancher agents to detect new changes from Docker API execution (e.g. a user executing a docker run command locally on a host, adding new ports, etc.). A sketch of the underlying Docker commands follows this list.
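
A sketch of the stock Docker commands an agent could build both mechanisms on (nothing Rancher-specific is assumed here):

# Poll: enumerate containers already present on the host so they can be imported.
docker ps --all --no-trunc --quiet

# Listen: stream lifecycle events (create, start, die, ...) as they happen,
# for example when someone runs `docker run` locally on the host.
docker events --since '2015-01-01'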

UI Failed to load from CDN

On a new rancher server there was a failure to get the hosts from the CDN.

The rancher server is the latest version.

2014-12-09 06:26:29,621 INFO    [main] [ConsoleStatus] [98/98] [16711ms] [1ms] Starting storage-simulator
2014-12-09 06:26:29,850 ERROR [:] [] [] [] [main           ] [i.c.p.i.a.s.filter.UIPathFilter     ] Failed to load UI from [http://cdn.rancher.io/ui/0.6.5/static/index.html] java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:196) ~[na:1.7.0_65]
    at java.net.SocketInputStream.read(SocketInputStream.java:122) ~[na:1.7.0_65]
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235) ~[na:1.7.0_65]
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:275) ~[na:1.7.0_65]
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) ~[na:1.7.0_65]
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687) ~[na:1.7.0_65]
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633) ~[na:1.7.0_65]
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769) ~[na:1.7.0_65]
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633) ~[na:1.7.0_65]
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) ~[na:1.7.0_65]
    at io.cattle.platform.iaas.api.servlet.filter.UIPathFilter.reloadIndex(UIPathFilter.java:57) [cattle-iaas-api-logic-0.5.0-SNAPSHOT.jar:na]
    at io.cattle.platform.iaas.api.servlet.filter.UIPathFilter.init(UIPathFilter.java:35) [cattle-iaas-api-logic-0.5.0-SNAPSHOT.jar:na]
    at org.eclipse.jetty.servlet.FilterHolder.doStart(FilterHolder.java:119) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:719) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:265) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1252) [jetty-webapp-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:710) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494) [jetty-webapp-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.Server.doStart(Server.java:282) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
    at io.cattle.platform.launcher.jetty.Main.main(Main.java:127) [0.5.0-SNAPSHOT-b47824695babf24ea2a5bc025f43e9d7e894a4ea-023998f1-d599-4c58-b9a1-f4ca962bd60f/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_65]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_65]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_65]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_65]
    at io.cattle.platform.launcher.Main.run(Main.java:186) [0.5.0-SNAPSHOT-b47824695babf24ea2a5bc025f43e9d7e894a4ea-023998f1-d599-4c58-b9a1-f4ca962bd60f/:na]
    at io.cattle.platform.launcher.Main.main(Main.java:249) [0.5.0-SNAPSHOT-b47824695babf24ea2a5bc025f43e9d7e894a4ea-023998f1-d599-4c58-b9a1-f4ca962bd60f/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_65]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_65]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_65]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_65]
    at io.cattle.platform.packaging.Bootstrap.run(Bootstrap.java:386) [cattle.jar:0.5.0-SNAPSHOT]
    at io.cattle.platform.packaging.Bootstrap.main(Bootstrap.java:426) [cattle.jar:0.5.0-SNAPSHOT]

06:26:29.880 [main] INFO  ConsoleStatus - [DONE ] [22970ms] Startup Succeeded, Listening on port 8080
2014-12-09 06:26:56,675 ERROR [:] [] [] [] [tp1379460953-50] [i.c.p.i.a.s.filter.UIPathFilter     ] Failed to load UI from [http://cdn.rancher.io/ui/0.6.5/static/index.html] java.io.IOException: Server returned HTTP response code: 422 for URL: http://cdn.rancher.io/ui/0.6.5/static/index.html
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1626) ~[na:1.7.0_65]
    at io.cattle.platform.iaas.api.servlet.filter.UIPathFilter.reloadIndex(UIPathFilter.java:57) [cattle-iaas-api-logic-0.5.0-SNAPSHOT.jar:na]
    at io.cattle.platform.iaas.api.servlet.filter.UIPathFilter.doFilter(UIPathFilter.java:75) [cattle-iaas-api-logic-0.5.0-SNAPSHOT.jar:na]
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82) [jetty-servlets-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294) [jetty-servlets-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557) [jetty-security-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) [jetty-servlet-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.Server.handle(Server.java:370) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644) [jetty-http-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) [jetty-http-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) [jetty-server-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668) [jetty-io-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52) [jetty-io-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) [jetty-util-8.1.11.v20130520.jar:8.1.11.v20130520]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]

Allow binding to a specified IP

Hey, so far using rancher was a great experience, but it's missing a quite vital feature. In docker, you can bind to a specified IP by passing -p 10.0.0.1:8080:8080. Sending 10.0.0.1:8080 in host port field in Rancher's UI does not work. Could it get added?

Error: client and server don't have same version

After cloning and running vagrant up I get the following error:

vagrant up
Bringing machine 'rancher' up with 'virtualbox' provider...
==> rancher: Importing base box 'coreos-alpha'...
==> rancher: Matching MAC address for NAT networking...
==> rancher: Checking if box 'coreos-alpha' is up to date...
==> rancher: A newer version of the box 'coreos-alpha' is available! You currently
==> rancher: have version '353.0.0'. The latest is version '550.0.0'. Run
==> rancher: `vagrant box update` to update.
==> rancher: Setting the name of the VM: rancher_rancher_1420547237181_66568
==> rancher: Clearing any previously set network interfaces...
==> rancher: Preparing network interfaces based on configuration...
    rancher: Adapter 1: nat
    rancher: Adapter 2: hostonly
==> rancher: Forwarding ports...
    rancher: 8080 => 8080 (adapter 1)
    rancher: 22 => 2222 (adapter 1)
==> rancher: Running 'pre-boot' VM customizations...
==> rancher: Booting VM...
==> rancher: Waiting for machine to boot. This may take a few minutes...
    rancher: SSH address: 127.0.0.1:2222
    rancher: SSH username: core
    rancher: SSH auth method: private key
    rancher: Warning: Connection timeout. Retrying...
==> rancher: Machine booted and ready!
==> rancher: Setting hostname...
==> rancher: Configuring and enabling network interfaces...
==> rancher: Running provisioner: shell...
    rancher: Running: inline script
==> rancher: Unable to find image 'rancher/server:latest' locally
==> rancher: Pulling repository rancher/server
==> rancher: 9ac2845c2bd20bd7978abd4c35353188e939a4d6138b005ad3654901707fff1d
==> rancher: Running provisioner: shell...
    rancher: Running: inline script
==> rancher: Unable to find image 'rancher/agent:latest' locally
==> rancher: Pulling repository rancher/agent
==> rancher: 2015/01/06 12:30:38 Error response from daemon: client and server don't have same version (client : 1.15, server: 1.12)
==> rancher: Please ensure Host Docker version is >= 1.3.2 and container has r/w permissions to docker.sock
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

chmod +x /tmp/vagrant-shell && /tmp/vagrant-shell

Stdout from the command:



Stderr from the command:

Unable to find image 'rancher/agent:latest' locally
Pulling repository rancher/agent
















2015/01/06 12:30:38 Error response from daemon: client and server don't have same version (client : 1.15, server: 1.12)
Please ensure Host Docker version is >= 1.3.2 and container has r/w permissions to docker.sock


Can't run any containers

I'm running rancher agents directly on my CoreOS cluster nodes. When trying to run anything via the web UI, I get the same error:

this was a busybox test container:

: 5d4dd585-1653-4746-a543-b92621bb1ae3 : Command '['/var/lib/cattle/pyagent/cattle/plugins/docker/net-util.sh', '-p', '6844', '-i', '10.42.121.21/16', '-m', u'02:ca:f9:6d:54:71', '-d', 'eth0']' returned non-zero exit status 43

then directly after, crosbymichael/dockerui:

: cefe9c9e-21d1-4fb2-9cdd-8910565fae0e : Command '['/var/lib/cattle/pyagent/cattle/plugins/docker/net-util.sh', '-p', '6845', '-i', '10.42.242.207/16', '-m', u'02:ca:f9:1d:b8:0e', '-d', 'eth0']' returned non-zero exit status 43

Those were both on my node01, I tried crosbymichael/dockerui on node02:

: 6b621368-9faa-4bc2-9f0d-07ba40f1b1b6 : Command '['/var/lib/cattle/pyagent/cattle/plugins/docker/net-util.sh', '-p', '5306', '-i', '10.42.120.197/16', '-m', u'02:ca:f9:9a:fa:aa', '-d', 'eth0']' returned non-zero exit status 43

The network agent downloads and runs just fine on each node however.

adding images from private repos

Is it possible to create a container using an image that is kept in a private docker repository? At present when I try to add one I am told that the image format is invalid, or something to that effect.

Feature Request: Authentication

Nothing fancy, no accounts or preferences, just:

  • UI - username/password.
  • API - tokens

idk, maybe store these in a JSON file on the host? So something like: $ docker run -v /opt/rancher/auth.json:/auth.json rancher/server

Just thinking it might be nice to have the option of exposing externally rather than only available through VPNs and the like. Auth might be nice over VPNs anyways to keep pesky coworkers / family members / developers out.
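
A sketch of what the suggested auth.json could contain (entirely hypothetical; no such file or format exists in Rancher):

{
  "ui": { "username": "admin", "password": "<bcrypt-hash>" },
  "api": { "tokens": ["<random-token-1>", "<random-token-2>"] }
}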

GitHub Authentication Phase II

  • GitHub Phase I [Merged]
  • Support token based authentication of rancher-agent to rancher-server [Merged]
  • Support for Project [PR]

How to add a second Vagrant instance?

Hey there,

Rancher is extremely cool. Nice work.

It would be really useful to have explicit instructions on how to spin up two VMs in Vagrant with Rancher. Maybe you could provide an example Vagrantfile for two or more hosts?

If I wanted to do this manually, what would I need to do to tell Rancher hosts about each other? Does it figure out group membership automatically based on the hosts in the CoreOS cluster?

Thanks!
Luke

[Install] [Server] FATA[0004] Error response from daemon: Cannot start container : no such file or directory

Server container start error

liang@liang-ubuntu:$ sudo docker run -d -p 8000:8000 rancher/server
046ed8e413da7127c657a732d1f5901481436efa4ea9111440d224fce4481e3d
FATA[0004] Error response from daemon: Cannot start container 046ed8e413da7127c657a732d1f5901481436efa4ea9111440d224fce4481e3d: no such file or directory

liang@liang-ubuntu:$ sudo docker ps -a
CONTAINER ID   IMAGE                   COMMAND                CREATED          STATUS   PORTS   NAMES
046ed8e413da   rancher/server:latest   "/usr/share/cattle/c   20 seconds ago                    fervent_turing

liang@liang-ubuntu:$ sudo docker start 04
Error response from daemon: Cannot start container 04: no such file or directory
FATA[0000] Error: failed to start one or more containers

liang@liang-ubuntu:$ sudo docker logs 04

Allow binding to a specific interface

On CoreOS v550.0.0, Rancher binds to the docker0 interface, which has a 172.17.x.y address.
I want to make Rancher bind to ens192, which has a 192.168.x.y DHCP-leased address and is reachable inside my LAN.
Since I cannot reach the 172.17.x.y range directly, all containers bound to that network on the host are unreachable.
Is binding to the ens192 interface an option somehow, or is this a CoreOS-specific issue?
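Short of a Rancher-level setting, plain Docker already lets you publish a container port on a specific host address, which may be enough here. A sketch, assuming ens192 holds 192.168.1.10 and the server container listens on 8080:

$ sudo docker run -d -p 192.168.1.10:8080:8080 rancher/server

# The published port is then reachable only via the LAN-facing address,
# not via docker0's 172.17.x.y range.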

[Containers] Containers/Network Agent fail to start

I downloaded the images and ran them on a single host, but I have been unable to get Rancher to launch a container on the host. I have seen one container start, but it wasn't clear in the UI that it had started and it seemed to have failed before it actually succeeded. I keep getting this error:
[screenshot from 2015-01-14 showing the error in the UI]

I'm not sure if there's an issue with running on a single host, but I can't see why there would be. I haven't had time to look at the code to help diagnose the problem, so it could be something I'm just missing. When I used the default settings, I originally received errors from trying to run the container that may have been related to #13. I thought that issue might be resolved to some extent, since there is now an option for running with -it, but that doesn't seem to be the case. I then switched to running nginx with no additional options, and that started successfully once.

I think you guys should update the default settings to something that works without any changes. You could have your own nginx image that has some cool website saying how awesome we are for being able to start everything. With something like "Imagine how we'd react if we saw you open the peanut butter by yourself. Yeah, it's that easy." I look forward to seeing what you guys produce here.

Load Balancer API/Schema/Process Logic

  • Implement API, schema, and initial process logic (no actual backend HAProxy support) [PR]
  • Does not include any Health Policy support
  • Does not include GLB support

Docker Swarm Support

Feature

Support Docker Swarm integration to allow for provisioning Docker containers across clusters of hosts.

Description

Docker just recently announced Swarm, which allows deploying Docker containers across multiple hosts through a single "virtual" host. Just like Machine, Swarm currently has limitations in both its scheduling algorithm and the manual steps required to set up a swarm cluster. At a high level, this feature will allow Rancher users to set up multiple swarm clusters via the UI/API and have swarm-agents automatically installed on the appropriate hosts.

Enhancements

  • To use Swarm, the swarm-server must be installed on a machine where the Docker daemon is running. A swarm-agent must then be installed on each Docker-enabled host and "joined" with the swarm-server. After that, all docker commands must specify the swarm-server IP and port to allow deployment on the swarm cluster. Rancher needs to support creating a swarm cluster and selecting the hosts that join it (the manual workflow Rancher would automate is sketched after this list). High level user scenarios:
    1. User selects an existing host and either creates a new swarm cluster (a single step that creates a swarm-server and swarm-agent on the selected host) or joins an existing swarm cluster (just installing the swarm-agent on the host and joining it with the selected swarm-server). We should support the ability for a host to join multiple clusters.
    2. When a user wants to start a container, they can select either a host or a swarm cluster.
    3. User removes a host from the swarm cluster. This should result in the shutdown of the swarm-agent.
      • shutting down an agent will still cause a lag before the swarm-server actually realizes it is no longer registered
      • should we have an option to stop all containers through the API/UI?
    4. User deactivates a host from the swarm cluster. Deactivating a host should result in the host no longer being considered for container deployment even though it is still running. No containers should be shut down.
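For reference, the manual workflow Rancher would be automating looks roughly like this with the standalone swarm image; the token and addresses are placeholders, and flags may differ between swarm releases:

$ docker run --rm swarm create
# -> prints a cluster token, e.g. 6856663cdefdec325839a4b7e1de38e8

# On every host that should join the cluster:
$ docker run -d swarm join --addr=<host-ip>:2375 token://<cluster-token>

# Somewhere reachable by clients, the manager (the "swarm-server"):
$ docker run -d -p 3375:2375 swarm manage token://<cluster-token>

# Ordinary docker commands are then pointed at the manager:
$ docker -H tcp://<manager-ip>:3375 run -d nginx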

Additional scenarios:

  1. swarm-server goes down. We need to be able to automatically detect this and recreate it.
  2. swarm-agent goes down. We need to be able to automatically detect this and recreate it.
  3. Import an existing swarm cluster into Rancher. Do we have enough information on the swarm-agents to do this?
  4. Support for manually adding a swarm-agent outside of Rancher and importing it into Rancher. The only reason this needs thought is that, if we cannot detect a swarm-agent added outside of Rancher, containers deployed through Rancher could end up on a host Rancher does not recognize. However, if our first iteration bypasses Swarm for container deployments executed via Rancher, this may not really be an issue anymore.

Task List

  • Backend : TBD
  • UI : TBD

[Install] [Agent] When starting rancher-agent: FATA[0002] Error response from daemon: Cannot destroy container

When you start the rancher-agent container, you see the following error:

FATA[0002] Error response from daemon: Cannot destroy container 446f0351cf1f378a6fc8d18fb220b453cb1e27812637e59bc337268ce4dfec05: Driver aufs failed to remove root filesystem 446f0351cf1f378a6fc8d18fb220b453cb1e27812637e59bc337268ce4dfec05: rename /mnt/sda1/var/lib/docker/aufs/mnt/446f0351cf1f378a6fc8d18fb220b453cb1e27812637e59bc337268ce4dfec05 /mnt/sda1/var/lib/docker/aufs/mnt/446f0351cf1f378a6fc8d18fb220b453cb1e27812637e59bc337268ce4dfec05-removing: device or resource busy

The directory names differ each time, but you always get that error.
The error is misleading, because the container ultimately starts fine.

Note that this is different from #26, but I suspect the two may be related.


$ docker info
Containers: 1
Images: 47
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 61
Execution Driver: native-0.2
Kernel Version: 3.16.4-tinycore64
Operating System: Boot2Docker 1.2.0 (TCL 5.4); master : 8a06c1f - Fri Nov 28 17:03:52 UTC 2014
CPUs: 8
Total Memory: 999.6 MiB
Name: boot2docker
ID: 3WJU:FUAI:G4QO:GPOQ:BHBV:NAII:MRO2:NMPG:MNWI:L5B2:SWXX:RVVM
Debug mode (server): true
Debug mode (client): false
Fds: 18
Goroutines: 18
EventsListeners: 0
Init Path: /usr/local/bin/docker
Username: redacted
Registry: [https://index.docker.io/v1/]


$ docker version
Client version: 1.3.1-dev
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 831d09f
OS/Arch (client): darwin/amd64
Server version: 1.3.1-dev
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 831d09f


$ docker images rancher/agent
REPOSITORY      TAG      IMAGE ID       CREATED       VIRTUAL SIZE
rancher/agent   latest   61678f70cabd   2 weeks ago   271.2 MB

Snapshot and Backup Service

Rancher should support local storage volumes (i.e., storage volumes built on local disks). Rancher storage volumes provide the following capabilities:

  1. Snapshot. A crash-consistent snapshot of the volume.
  2. Backup. Store a copy of the snapshot on secondary storage. Only changed blocks should be copied.
  3. Restore. Reconstruct a new volume from a backup stored in secondary storage.
  4. Rollback. Stop container, and restart the container with an older version of the volume.

We will rely on thin pools in Linux device mapper to implement local storage volumes for Docker. Note that in Rancher storage volumes are not mirrored. Losing a Linux host will result in data loss. The user can restore from a snapshot backup, but they will lose the latest changes that have not been backed up.

Rancher will format newly created storage volumes as ext4 file systems before handing them over to Docker containers.
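As an illustration of the underlying mechanism only (not the eventual Rancher implementation), a thin-provisioned volume with a crash-consistent snapshot can be built with LVM on top of device mapper; the volume group, names, and sizes below are arbitrary:

$ lvcreate -L 20G --thinpool tp0 vg0        # thin pool backing the volumes
$ lvcreate -V 10G --thin -n vol1 vg0/tp0    # thin volume for one container
$ mkfs.ext4 /dev/vg0/vol1                   # format before handing to Docker
$ lvcreate -s -n vol1-snap1 vg0/vol1        # crash-consistent thin snapshot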

Backup and snapshot functions are implemented as follows:

  1. Volume snapshots can be backed up to secondary storage, which can be object storage or NFS shares
  2. Backup is performed incrementally from local disk to secondary storage (only changed blocks are copied)
  3. A snapshot backup is stored as a directory of 2MB segments
  4. Each 2MB segment can be shared among multiple snapshot backups
  5. When a snapshot backup is deleted, unused 2MB segments can be GC’ed
  6. Volumes can be reconstituted from a snapshot backup
  7. To replicate a volume from one server to another, simply take a snapshot, back up the snapshot, and reconstitute it on the target server

The following figure illustrates how three snapshot backups are stored in secondary storage. The three snapshots share some 2MB segments. Both the directory and the 2MB segments are stored in secondary storage.

[figure: three snapshot backups sharing 2MB segments in secondary storage]

Host stats do not work with Vagrant

Vagrant tries to connect to 10.1.42.1 to get stats, but that address is not reachable. The agent should be using the 172.x IP that is set in the Vagrantfile.
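One possible workaround in the Vagrant provisioning would be to tell the agent which address to advertise when it registers; a sketch, assuming the agent honors a CATTLE_AGENT_IP environment variable (as later rancher/agent releases document) and placeholder registration details:

$ sudo docker run -d --privileged \
    -e CATTLE_AGENT_IP=<172.x-address-from-Vagrantfile> \
    -v /var/run/docker.sock:/var/run/docker.sock \
    rancher/agent http://<server-ip>:8080/v1/scripts/<registration-token>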

Feature : Docker Machine Support

Feature

Support Docker Machine integration to add new hosts via Rancher.

Description

Docker just recently announced Machine, which allows easier creation of Docker hosts on a local hypervisor or any supported cloud provider. At a high level, this feature should allow Rancher users to add any host supported by Docker Machine via the UI/API and have it automatically registered with rancher-server.

Enhancements

  • Docker Machine only supports running commands on the machine where it is installed; currently, there is no remote API access. Rancher needs to create an equivalent of a rancher-machine-agent that will bridge between rancher-server and Docker Machine and facilitate all communication between the two.
  • Rancher currently only allows users to add pre-existing Docker-enabled hosts by manually installing the Rancher agent on each host. This feature will provide a single-step process for adding a host via the UI/API (a plain docker-machine equivalent is sketched after this list). High level steps:
    1. User adds a host through the Rancher UI/API and selects a cloud provider currently supported by Docker Machine. DigitalOcean should be the first integration if possible, as AWS and GCE are still a work in progress.
    2. User adds all necessary inputs and sends the command to rancher-server.
    3. If no rancher-machine-agent exists, launch one.
      • or is this launched when rancher-server is launched?
      • is there a single rancher-machine-agent per account or per rancher-server? There doesn't seem to be a need for multiple rancher-machine-agents, given we can send the required machine info down on a per-call basis.
    4. All Docker Machine state information is stored in rancher-server.
      • we don't need to store credentials per cloud, but if the rancher-machine-agent dies, we will need to prompt the user to re-submit them
    5. Launch the host via docker machine call.
    6. Once host is up, store any host information into Rancher.
    7. rancher-server should automatically install the rancher-agent so it can be registered properly.
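For comparison, the plain docker-machine flow that Rancher would wrap looks roughly like this (binary name and flags as in later docker-machine releases; the very first "machine" preview differed slightly, and the access token is a placeholder):

$ docker-machine create --driver digitalocean \
    --digitalocean-access-token <do-api-token> \
    rancher-node-01

$ docker-machine ls                                       # lists the new host and its endpoint
$ docker $(docker-machine config rancher-node-01) info    # talk to its Docker daemon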

Task List

  • Backend cattle : rancherio/cattle#65
  • UI : TBD

Support RHEL/CentOS 7

Because RHEL and its derivatives often lag behind on the version of Docker bundled, special consideration must be given to these distributions to support them.

Rancher events and alerts

Rancher needs to generate events and alerts for integration with notification systems such as SMTP, SNMP, etc.:

Events (more TBD):

  • Container modify
  • Container start
  • Container stop
  • Container delete
  • Host add
  • Host remove
  • Cluster create
  • Cluster delete
  • Cluster add host
  • Cluster remove host
  • Project create
  • Project add member/team/org
  • Project remove member/team/org
  • LB * actions
  • Snapshot * actions

Alerts (more TBD):

  • Container memory capacity threshold
  • Container disk capacity threshold
  • Container instance unexpected shutdown
  • Host unexpected shutdown
  • Network agent unexpected shutdown
  • Containers added outside of Rancher (i.e. Docker run outside of Rancher API)
  • Any Docker creation of resources outside of Rancher that is later imported

We also need to allow users to see a history of webhook deliveries: when a webhook was triggered, for which event/alert, the HTTP request and response, and whether it succeeded or failed.
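To make that history concrete, a single recorded delivery might look like the hypothetical request below; the endpoint, payload fields, and values are illustrative only:

$ curl -X POST https://hooks.example.com/rancher \
    -H 'Content-Type: application/json' \
    -d '{"event": "container.stop", "resourceId": "1i42", "host": "node01", "timestamp": "2015-01-20T10:15:00Z"}'

# The history entry would then record: the event/alert type, the request body above,
# the HTTP response status (e.g. 200), and whether the delivery succeeded or failed.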
