uselagoon / lagoon
Lagoon, the developer-focused application delivery platform

Home Page: https://docs.lagoon.sh/

License: Apache License 2.0


lagoon's Introduction

The Lagoon logo is a blue hexagon split in two pieces with an L-shaped cut

Lagoon - the developer-focused application delivery platform for Kubernetes

Table of Contents

  1. Project Description
  2. Usage
  3. Architecture
  4. Testing
  5. Other Lagoon Components
  6. Contribution
  7. History
  8. Connect

Project Description

Lagoon solves what developers have been dreaming about: a system that allows developers to locally develop their code and their services with Docker, and run the exact same system in production. The same container images, the same service configurations and the same code.

Lagoon is an application delivery platform. Its primary focus is as a cloud-native tool for the deployment, management, security and operation of many applications. Lagoon greatly reduces the requirement on developers of those applications to have cloud-native experience or knowledge.

Lagoon has been designed to handle workloads that have been traditionally more complex to make cloud-native (such as CMS, LMS, and other multi-container applications), and to do so with minimal retraining or reworking needed for the developers of those applications.

Lagoon is fully open-source, built on open-source tools, built collaboratively with our users.

Usage

Installation

Note that it is not necessary to install Lagoon on your local machine if you are looking to maintain websites hosted on Lagoon.

Lagoon can be installed in several configurations; see the documentation links below.

For more information on developing or contributing to Lagoon, head to https://docs.lagoon.sh/contributing-to-lagoon

For more information on installing and administering Lagoon, head to https://docs.lagoon.sh/administering-lagoon

Architecture

Lagoon comprises two main components: Lagoon Core and Lagoon Remote. It's also built on several other third-party services, Operators and Controllers. In a full production setting, we recommend installing Lagoon Core and Remote into different Kubernetes Clusters. A single Lagoon Core installation is capable of serving multiple Remotes, but they can also be installed into the same cluster if preferred.

To enhance security, Lagoon Core does not need administrator-level access to the Kubernetes clusters that are running Lagoon Remote. All inter-cluster communication happens only via RabbitMQ. This is hosted in Lagoon Core, and consumed (and published back to) by Lagoon Remote. This allows Lagoon Remotes to be managed by different teams, in different locations - even behind firewalls or inaccessible from the internet.

Lagoon services are mostly built in Node.js. More recent development occurs in Go, and most of the automation and scripting components are in Bash.

Lagoon Core

All the services that handle the API, authentication and external communication are installed here. Installation is via a Helm Chart: https://github.com/uselagoon/lagoon-charts/tree/main/charts/lagoon-core

  • API
    • api (the GraphQL API that powers Lagoon)
    • api-db (the MariaDB storage for the API)
    • api-redis (the cache layer for API queries)
  • Authentication
    • keycloak (the main authentication application)
    • keycloak-db (the MariaDB storage for Keycloak)
    • auth-server (generates authentication tokens for Lagoon services)
    • ssh (provides developers with ssh access to the sites hosted on Lagoon)
  • Messaging
    • broker (the RabbitMQ message service used to communicate with Lagoon Remote)
    • webhooks2tasks (the service that converts incoming webhooks to API updates)
    • actions-handler (the service that manages bulk action processing for builds and tasks)
  • Webhooks
    • webhook-handler (the external service that Git Repositories and Registries communicate with)
    • backup-handler (the service used to collect and collate information on backups)
  • Notifications
    • logs2notifications (the service that pushes build notifications to the configured notification types)
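As a rough sketch of how a Core installation comes together with the chart mentioned above (the values file below is purely illustrative; its keys are placeholders, not the chart's actual schema — consult the chart's values.yaml for the real options):

```shell
#!/bin/sh
# Illustrative sketch of installing Lagoon Core via its Helm chart.
# The values below are placeholders, NOT the chart's real required keys.
cat > /tmp/lagoon-core-values.yaml <<'EOF'
# hypothetical endpoint configuration
apiURL: https://api.lagoon.example.com/graphql
keycloakURL: https://keycloak.lagoon.example.com
EOF

if command -v helm >/dev/null 2>&1; then
  # Needs network access and a reachable Kubernetes cluster.
  helm repo add lagoon https://uselagoon.github.io/lagoon-charts/ || true
  helm upgrade --install lagoon-core lagoon/lagoon-core \
    --namespace lagoon-core --create-namespace \
    -f /tmp/lagoon-core-values.yaml || true
else
  echo "helm not installed; wrote /tmp/lagoon-core-values.yaml only"
fi
```

`helm upgrade --install` keeps the command idempotent, so re-running it upgrades an existing release instead of failing.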

Lagoon Remote

All the services used to provision, deploy and maintain sites hosted by Lagoon on Kubernetes live here. These services are mostly third-party tools, developed externally to Lagoon itself. Installation is via a Helm Chart.

  • Docker Host (the service that stores and caches upstream docker images for use in builds)
  • Storage Calculator (an optional service to collect the size of storage and databases)
  • Remote Controller (the controllers that handle building and deploying sites onto Lagoon)
  • Build Deploy Tool (the service that computes which services, configuration and settings to provision for Kubernetes)
  • Aergia (an optional controller that can idle non-production sites not currently in use to conserve resources)
  • Dioscuri (an optional operator that provides Active/Standby functionality to Lagoon)
  • DBaaS Operator (an optional operator that provisions databases from an underlying managed database)

Lagoon UI

  • ui (the main user interface and dashboard for Lagoon, usually installed in lagoon-core, but can also be installed anywhere as a Lagoon project)

Lagoon Tools

  • lagoon-cli (the command-line interface for managing sites on Lagoon)
  • lagoon-sync (a command-line interface for syncing databases or file assets between Lagoon environments)
  • drush-alias (provides Drupal developers with an automated alias service for Drush)

Additional Services

These services are usually installed alongside either Lagoon Core or Lagoon Remote to provide additional functionality to Lagoon.

Testing

Lagoon has a comprehensive test suite, designed to cover most end-user scenarios. The testing is automated in Ansible, and runs in Jenkins, but can also be run locally in a self-contained cluster. The testing provisions a standalone Lagoon cluster, running on Kind (Kubernetes in Docker). This cluster is made of Lagoon Core, Lagoon Remote, an image registry and a set of managed databases. It runs test deployments and scenarios for a range of Node.js, Drupal, Python and NGINX projects, all built using the latest Lagoon images.

Other Lagoon components

Here are a number of other repositories, tools and components used in Lagoon.

These images are used by developers to build web applications on, and come preconfigured for running on Lagoon as well as locally. There are PHP, NGINX, Node.js, Python (and more) variants. These images are regularly updated, and are not only used in hosted projects; they're used in Lagoon too!

To browse the full set of images, head to https://hub.docker.com/u/uselagoon

A meta-project that houses a wide range of example projects, ready-made for use on Lagoon. These projects also include test suites that are used in the testing of the images. Please request an example via that repository if you want to see a particular one, or even better, have a crack at making one!

Houses all the Helm charts used to deploy Lagoon; it comes with a built-in test suite.

To add the repository: helm repo add lagoon https://uselagoon.github.io/lagoon-charts/

amazee.io has developed a number of tools, charts and operators designed to work with Lagoon and other Kubernetes services.

To add the repository: helm repo add amazeeio https://amazeeio.github.io/charts/

Contribution

Do you want to contribute to Lagoon? Fabulous! See our Documentation on how to get started.

History

Lagoon was originally created and open sourced by the team at amazee.io in August 2017, and powers their global hosting platform.

Connect

Find more information about Lagoon:

At our website - https://lagoon.sh

In our documentation - https://docs.lagoon.sh

In our blog - https://dev.to/uselagoon

Via our socials - https://twitter.com/uselagoon

On Discord - https://discord.gg/te5hHe95JE

lagoon's People

Contributors

alannaburke, alexskrypnyk, bomoko, cdchris12, cgoodwin90, dasrecht, dependabot[bot], fubarhouse, fubhy, jaimed-amazee, johnalbin, justinlevi, karlhorky, markxtji, rocketeerbkw, rtprio, ryyppy, schnitzel, seanhamlin, shreddedbacon, simesy, smlx, spuky, steveworley, stooit, thom8, tobybellwood, twardnw, vincenzodnp, wintercreative


lagoon's Issues

Donate and Maintain Drupal Images to https://github.com/drupal-docker

During DrupalCon Vienna we had a BOF about Docker and Drupal, together with @zaporylie we discussed that it would make sense to combine efforts for preconfigured Drupal Images.

The idea would be that the Drupal Images that are built within Lagoon are donated to https://github.com/drupal-docker and maintained over there.

There are a couple of questions left:

  1. https://github.com/drupal-docker currently has Alpine and non-Alpine images. In Lagoon we plan to run everything on Alpine. Is it fine to deprecate the non-Alpine images?
  2. How many other structural changes are allowed? For example there is https://github.com/drupal-docker/php/blob/master/7.1/Dockerfile-cli which is a cli container with drush, composer, etc. installed. Inside Lagoon this would be called a builder image, can we rename them?

Implement drush role for api

For Drush we need to find a new role that allows READ access to graphql queries like this one:

{
  siteGroup:siteGroupByName(name: "amazee_io") {
    gitUrl
    slack {
      webhook
      channel
      informStart
      informChannel
    }
    sites {
      siteName
      siteBranch
      siteEnvironment
      siteHost
      serverInfrastructure
      serverIdentifier
      serverNames
      deployStrategy
      webRoot
      domains
      jumpHost
    }
  }
}

I'm perfectly fine with generating one token with a long lifetime which is then hardcoded here: https://github.com/amazeeio/lagoon/blob/master/helpers/drush-alias/web/aliases.drushrc.php.stub. We will slowly convert drush calls to real authenticated calls via the cli over time, so it's just a temporary solution.

UPDATE:

For now, our credential system is not capable of attribute-based read permissions... so we will limit the drush role to only access Site / SiteGroup information. Access Client information will be denied.

Add logs2mattermost

Title says it all: like we have logs2slack today, we would need logs2mattermost.
Maybe Mattermost is Slack API compatible; then we could actually use logs2slack for it.

How to start using it?

Is there a document describing the best way to start using it?

Let's say I want to set up a new drupal site to work on and have it deployed somewhere, say digital ocean, linode or whatever.

Thanks.. I am very curious to give this a try.

Auto Generate Nginx config based on ENV variables

For common tasks (redirects, basic auth) it would be nice to tell developers to define some ENV variables instead of writing full nginx configs.
An entrypoint script would read these ENV variables and auto-generate nginx configs based on them.
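A minimal entrypoint sketch of that idea. The variable names (NGINX_REDIRECT_FROM/NGINX_REDIRECT_TO, BASIC_AUTH_USERNAME/BASIC_AUTH_PASSWORD) and the output paths are hypothetical, not an existing Lagoon contract; demo values are hardcoded so the sketch runs standalone:

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: generate nginx snippets from ENV variables.
CONF_DIR="${CONF_DIR:-/tmp/nginx-generated}"
mkdir -p "$CONF_DIR"

# Demo values; in a real container these would come from the environment.
NGINX_REDIRECT_FROM="old.example.com"
NGINX_REDIRECT_TO="www.example.com"
BASIC_AUTH_USERNAME="preview"
BASIC_AUTH_PASSWORD="s3cret"

# Redirect: only generated when both variables are present.
if [ -n "$NGINX_REDIRECT_FROM" ] && [ -n "$NGINX_REDIRECT_TO" ]; then
  cat > "$CONF_DIR/redirect.conf" <<EOF
server {
    server_name $NGINX_REDIRECT_FROM;
    return 301 \$scheme://$NGINX_REDIRECT_TO\$request_uri;
}
EOF
fi

# Basic auth: an htpasswd entry plus the matching nginx directives.
if [ -n "$BASIC_AUTH_USERNAME" ] && [ -n "$BASIC_AUTH_PASSWORD" ]; then
  printf '%s:%s\n' "$BASIC_AUTH_USERNAME" \
    "$(openssl passwd -apr1 "$BASIC_AUTH_PASSWORD")" > "$CONF_DIR/.htpasswd"
  cat > "$CONF_DIR/basic-auth.conf" <<EOF
auth_basic "Restricted";
auth_basic_user_file $CONF_DIR/.htpasswd;
EOF
fi
```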

Refactor oc-build-deploy-dind in Go

The image oc-build-deploy-dind is mostly responsible for checking out git code, running Docker builds, creating OpenShift resources and monitoring deployments.

It is all implemented in Bash and we hit limits in terms of handling special cases, etc.

So the idea would be to reimplement it in Go, using Kompose as inspiration.

Implement Drush Site Aliases Connector for Lagoon

  • add project: to .lagoon.yml
  • Implement JWT Token getting via auth-ssh service
  • query the old and new API
  • Create Lagoon Site Aliases which work with #52
  • Load existing environments via #66

Needed for:

  • Drush8 (current implementation)
  • New Drush 9 Plugin Implementation

Rethinking the storage of API

I think we could rethink how the API stores its data. Currently we are required to store data in the Hiera YAML format, as our v3 infrastructure also uses this API to provision servers, etc.
The initial idea was that the v4 infrastructure uses the exact same API, and in the future, when everything is migrated, we remove the Hiera YAML format and move to another storage system; I would call this the parallel migration process.

I think though we should rethink this idea and have the following suggestion:

  • We leave the Client, Sitegroup, Sites Storage within Hiera YAML and the API like it is right now (Read Only, no mutations)
  • For the new v4 infra (openshift) we implement a new storage system (couch? neo4j? mongo? tbd), maybe even with new terminology
  • If needed I'm also happy to fork the current api into a new version which handles this new storage system, in order to not need to query two storage systems at once (as it will actually never happen that we need in one single request data from the new and old storage at the same time)
  • we would also need to point the cli to the new system only, the old system does not need a cli (clients would therefore need to migrate first to the v4 infra before they can use the cli, but that's okay)
  • If we like we could also come up with new terminology. We currently have the problem that our clients don't know what a "SiteGroup" really is, and rather describe it as a git repo or a "project". also "Site" is not really understood, it's more described as an "environment"

`siteHost` is `null` when using drush role

Looks like the drush role cannot access siteHost:

access with role drush

curl 'http://localhost:3000/graphql?' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc2hLZXkiOiJBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFDQVFDNWI5QXdiNml3V1J6MXorUUR3TGxTdnRweGcvN3BSN1JmK2tKVDNraFFHaUx1WDRRazJhRENydDFmS25kanlVdWo1NW9jUmM4T2xFeHJBOEZQZlpicG5mYU5YZlZoOVRseWJoaklXVUh0TGJjOWdpQ29EdWlWMlRFYjZaVy9lc0tsVnRXb216Zlkvbmp3UEF1WEVVMmkzajVZeFZRdVoweStBTjR5Y2E0VjF5N3kyQmxVT3ZwNXFTdnovOFBjQkplZ2FvZXNTU3VGcFFXQ1I3ODgvaTBzVVJKaWFHNit0Wk4rYkdWU25KZ3RiWFFKdmMwcjQ1YXUzUEw1anlNY0FwYkhPQ0RRVUlpV3lDNmlwVzJFSVlUVUJWTzgwUVZFanltbUJacFpFUWwvMERRUEV6QmRxQ2k0WDR3bXdob2FuN2hRSlE3ZE5kcW1STCtXNEw3NXlQTnluZ05nSlZOVGxJWE9IWGV2cE5za0gvL0hVbHdFcDRXUEh1VDc2QlpUM2NxMVJDYXRuNmlNc0Zpd3BFU2s5eHlEYkVWUGZ4S0FyNzdjUnBGSUh1SmQ3YW1EalRrNy9LUDBQVGxpTzJWaUN5akR1ZGlhandOaDdYbVkwWnFZcmhISTE2ZWJUU0VTRHNsaDBhMEpsWnZXbFlhNHhEQkt0S3dmSU5ScmNyWW11WVV6U0Y3d243RjFvNDVjSFoya3VmdGlvT1FoTE5neDBMcXp0Uk1uM1JCb2VHM1FIYnhtdjlXZk8vMGhIYTRGZGtRTEVyRWR6RU05SlI0V2V3Um5oMDRkY2plalgxZGFXS2JvMlg4bmRhQ2MwQnZkTkRML0hwU2dxZkNuSjJObHE0cE02UG9ta3ZkcDV3VFdwWndDakRsMDhVZ3R0QUZyUXdqWW9CeTlVcFVtRFE9PSIsImlzcyI6ImF1dGgtc2VydmVyLmRldiIsInJvbGUiOiJkcnVzaCIsImF1ZCI6ImFwaS5kZXYiLCJpYXQiOjE1MDQxNzYyMDh9.MabPlSIRC-heC35MRZrYXdRDvUsPuBCjOWNAufq1k1c' -H 'Origin: http://localhost:3000' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Referer: http://localhost:3000/graphql' -H 'Cookie: _ga=GA1.1.1402557004.1486995190; ACEGI_SECURITY_HASHED_REMEMBER_ME_COOKIE=YWRtaW46MTUwNTM2NDQ3MjU3Mjo4ZjY4OThkNDBiZDhjM2ZkYzkzNjgxNjg5MmYzNmFiYzgwMjlkODdkM2IzZTdhYjM3NjFjODNlYzg2OTU3Yjcz' -H 'Connection: keep-alive' --data-binary '{"query":"{\n  siteGroup:siteGroupByName(name: \"credentialtest\") {\n  gitUrl\n    slack {\n      webhook\n      channel\n      informStart\n      informChannel\n    }\n    client {\n      
clientName\n    }\n    sites {\n      siteName\n      siteBranch\n      siteEnvironment\n      siteHost\n      serverInfrastructure\n      serverIdentifier\n      serverNames\n      deployStrategy\n      webRoot\n      domains\n      jumpHost\n    }\n  }\n}","variables":null,"operationName":null}' --compressed

result:

{
  "data": {
    "siteGroup": {
      "gitUrl": "git@git:/git/credentialtest.git",
      "slack": {
        "webhook": "https://hooks.slack.com/services/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",
        "channel": "amazeeio-testing",
        "informStart": null,
        "informChannel": null
      },
      "client": null,
      "sites": [
        {
          "siteName": "credentialtest_branch2",
          "siteBranch": "branch2",
          "siteEnvironment": "development",
          "siteHost": null,
          "serverInfrastructure": "compact",
          "serverIdentifier": "credentialtest",
          "serverNames": [
            "credentialtest.compact"
          ],
          "deployStrategy": null,
          "webRoot": null,
          "domains": [
            "credentialtest"
          ],
          "jumpHost": null
        }
      ]
    }
  }
}

access with role admin

curl 'http://localhost:3000/graphql?' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc2hLZXkiOiJBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFDQVFDNWI5QXdiNml3V1J6MXorUUR3TGxTdnRweGcvN3BSN1JmK2tKVDNraFFHaUx1WDRRazJhRENydDFmS25kanlVdWo1NW9jUmM4T2xFeHJBOEZQZlpicG5mYU5YZlZoOVRseWJoaklXVUh0TGJjOWdpQ29EdWlWMlRFYjZaVy9lc0tsVnRXb216Zlkvbmp3UEF1WEVVMmkzajVZeFZRdVoweStBTjR5Y2E0VjF5N3kyQmxVT3ZwNXFTdnovOFBjQkplZ2FvZXNTU3VGcFFXQ1I3ODgvaTBzVVJKaWFHNit0Wk4rYkdWU25KZ3RiWFFKdmMwcjQ1YXUzUEw1anlNY0FwYkhPQ0RRVUlpV3lDNmlwVzJFSVlUVUJWTzgwUVZFanltbUJacFpFUWwvMERRUEV6QmRxQ2k0WDR3bXdob2FuN2hRSlE3ZE5kcW1STCtXNEw3NXlQTnluZ05nSlZOVGxJWE9IWGV2cE5za0gvL0hVbHdFcDRXUEh1VDc2QlpUM2NxMVJDYXRuNmlNc0Zpd3BFU2s5eHlEYkVWUGZ4S0FyNzdjUnBGSUh1SmQ3YW1EalRrNy9LUDBQVGxpTzJWaUN5akR1ZGlhandOaDdYbVkwWnFZcmhISTE2ZWJUU0VTRHNsaDBhMEpsWnZXbFlhNHhEQkt0S3dmSU5ScmNyWW11WVV6U0Y3d243RjFvNDVjSFoya3VmdGlvT1FoTE5neDBMcXp0Uk1uM1JCb2VHM1FIYnhtdjlXZk8vMGhIYTRGZGtRTEVyRWR6RU05SlI0V2V3Um5oMDRkY2plalgxZGFXS2JvMlg4bmRhQ2MwQnZkTkRML0hwU2dxZkNuSjJObHE0cE02UG9ta3ZkcDV3VFdwWndDakRsMDhVZ3R0QUZyUXdqWW9CeTlVcFVtRFE9PSIsInN1YiI6ImFuc2libGUtdGVzdCIsImlzcyI6ImF1dGgtc2VydmVyLmRldiIsInJvbGUiOiJhZG1pbiIsImF1ZCI6ImFwaS5kZXYiLCJpYXQiOjE1MDM1MDI3MDB9.PGr-w3Wicb3X1ggF71emPnPbps3Zyh0DgKsmNxUAVoc' -H 'Origin: http://localhost:3000' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Referer: http://localhost:3000/graphql' -H 'Cookie: _ga=GA1.1.1402557004.1486995190; ACEGI_SECURITY_HASHED_REMEMBER_ME_COOKIE=YWRtaW46MTUwNTM2NDQ3MjU3Mjo4ZjY4OThkNDBiZDhjM2ZkYzkzNjgxNjg5MmYzNmFiYzgwMjlkODdkM2IzZTdhYjM3NjFjODNlYzg2OTU3Yjcz' -H 'Connection: keep-alive' --data-binary '{"query":"{\n  siteGroup:siteGroupByName(name: \"credentialtest\") {\n  gitUrl\n    slack {\n      webhook\n      channel\n      informStart\n      informChannel\n  
  }\n    client {\n      clientName\n    }\n    sites {\n      siteName\n      siteBranch\n      siteEnvironment\n      siteHost\n      serverInfrastructure\n      serverIdentifier\n      serverNames\n      deployStrategy\n      webRoot\n      domains\n      jumpHost\n    }\n  }\n}","variables":null,"operationName":null}' --compressed

result

{
  "data": {
    "siteGroup": {
      "gitUrl": "git@git:/git/credentialtest.git",
      "slack": {
        "webhook": "https://hooks.slack.com/services/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",
        "channel": "amazeeio-testing",
        "informStart": null,
        "informChannel": null
      },
      "client": {
        "clientName": "credentialtestclient"
      },
      "sites": [
        {
          "siteName": "credentialtest_branch2",
          "siteBranch": "branch2",
          "siteEnvironment": "development",
          "siteHost": "credentialtest.compact",
          "serverInfrastructure": "compact",
          "serverIdentifier": "credentialtest",
          "serverNames": [
            "credentialtest.compact"
          ],
          "deployStrategy": null,
          "webRoot": null,
          "domains": [
            "credentialtest"
          ],
          "jumpHost": null
        }
      ]
    }
  }
}

expected result

"siteHost": "credentialtest.compact", also for role drush

add logs2rocketchat

Title says it all: like we have logs2slack today, we would need logs2rocketchat.
Maybe Rocket.Chat is Slack API compatible; then we could actually use logs2slack for it.

I think @twardnw did some research here already?

Building of master crashes on lagoon/auth-ssh with `OPENSSL_1.0.2' not found (required by ssh-keygen)

Running make build on macos 10.12.6 crashes with the error "ssh-keygen: /lib64/libcrypto.so.10: version `OPENSSL_1.0.2' not found (required by ssh-keygen)"

Steps to reproduce :

Clone the lagoon repo and run make build
Wait a while; when building auth-ssh the process stops

docker build --quiet --build-arg IMAGE_REPO=lagoon -t lagoon/auth-ssh -f services/auth-ssh/Dockerfile .
steps 12 in the process is the crash.

See attached console log

crash.log.txt

It looks like an upstream issue with the "${IMAGE_REPO:-amazeeiolagoon}/centos7" base image.

Trying to figure out how to fix it, because I do think it happened on a 'regular' CentOS 7 server I have in production too, but that was when installing Apache 2.4.27 on a clean install.

Imagebuild of centos7-mariadb10 stalls with error

When building the centos7-mariadb10 image, the build stalls with this error:

warning: /var/cache/yum/x86_64/7/mariadb/packages/MariaDB-10.2.8-centos7-x86_64-common.rpm: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
Public key for MariaDB-10.2.8-centos7-x86_64-common.rpm is not installed

I solved the build locally by adding

RUN rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB

Will push a MR to fix that issue in a bit

/bastian

X-Debug

  • install the xdebug PHP module by default into the php-fpm image, disabled by default, and enable it as soon as the XDEBUG_ENABLE env variable exists
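A sketch of how an entrypoint could implement that toggle; the conf-dir path and ini content are assumptions, not the actual image layout:

```shell
#!/bin/sh
# Sketch: xdebug ships pre-installed but disabled, and is only switched on
# when XDEBUG_ENABLE is set. Paths and ini content are illustrative.
PHP_CONF_DIR="${PHP_CONF_DIR:-/tmp/php-conf.d}"
mkdir -p "$PHP_CONF_DIR"

XDEBUG_ENABLE="true"   # demo value; normally injected via the environment

if [ -n "$XDEBUG_ENABLE" ]; then
  # Activate the module by dropping an ini file into the scanned conf dir.
  cat > "$PHP_CONF_DIR/xdebug.ini" <<'EOF'
zend_extension=xdebug.so
xdebug.mode=debug
EOF
else
  # Keep the module off when the variable is absent.
  rm -f "$PHP_CONF_DIR/xdebug.ini"
fi
```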

Implement API Storage for Environments

  • Add GraphQL API Object for Environments
  • CRUD for Environments API via GraphQL
  • Update OpenShiftBuildDeploy service to add environment via API if it has been created
  • Update OpenShiftRemove service to save deletion time via API if it has been deleted
  • Should be a new key on the API Project storage, which allows defining which environment (aka branch name) should be used as the production environment

Store secrets inside git projects

  • Provide clients possibility to save secrets encrypted inside their git repository

Current idea:

  • create for each Lagoon Project an OpenShift Project which has a TLS Private Key
  • Allow clients via the ssh wrapper and cli container to encrypt environments with that TLS private key
  • decrypt secrets during build time and inject them into the openshift project as regular secrets

Implement new Storage for API Objects

As discussed in #29 we want to rebuild the Storage of the API

At the same time we also create new Objects, in this hierarchy:

  • customer (the current client, stores information about the customer)
    • project (each project, like each git repository)
      • openshift (information about the openshift that the project is pushed too)

ssh keys can be referenced from customer and from project

customer

| key | type | description | example value |
| --- | --- | --- | --- |
| name | String | Unique. Name of the customer. | amazeeio |
| ssh_keys | reference to ssh_keys object | can reference multiple ssh keys; these ssh keys will have access to all projects of this customer | |
| comment | free text | some comment about the client | |
| created | date | time of creation of the customer. Date format tbd. | |
| private_key | SSH private key | ssh private key for this specific user, will be used during the deployment to access the git repositories that should be deployed | |

project

| key | type | description | example value |
| --- | --- | --- | --- |
| name | string | Unique. Name of the project | awesomewebsite |
| client | reference to client | reference to the client of this project | |
| ssh_keys | reference to ssh_keys object | can reference multiple ssh keys, used to allow specific ssh keys access to only a single project | |
| git_url | string | git url of the project, needs to be in ssh format | [email protected]:amazeeio/awesomewebsite.git |
| slack | reference to slack object | can be either existing (if slack enabled) or not (no slack notifications) | |
| active_systems_deploy | String | Name of the active system for deployment | lagoon_openshiftDeploy |
| active_systems_remove | String | Name of the active system for removals | lagoon_openshiftRemove |
| branches | String | Regex of branches to be deployed, default is all branches | `^(master\|staging)$` or `.*` |
| pullrequests | Bool | Enable or disable pull request builds, default: false | true, false |
| openshift | reference to OpenShift object | Used to define which OpenShift this project should be deployed to | |

openshift

| key | type | description | example value |
| --- | --- | --- | --- |
| name | string | unique, name of the openshift server | |
| console_url | URL | URL of the console to connect to | https://console.appuio.ch |
| registry | Domain String | Domain (not full URL) of the docker registry to push to | registry.appuio.ch |
| token | JSON Web Token | token of the service account used to create openshift resources | |
| username | String | Username of the OpenShift user that should be used to create openshift resources | foo |
| password | String | Password of the OpenShift user that should be used to create openshift resources | bar |
| router_pattern | String | router pattern used on that specific OpenShift server; has two substitutions, ${project} and ${environment}, that will be substituted automatically | ${project}.${environment}.appuio.amazee.io |
| project_user | String | OpenShift username that should also be given access when creating a new project | [email protected] |

SSH Key

| key | type | description | example value |
| --- | --- | --- | --- |
| name | String | Unique. Name of the ssh key, most probably an email address, but can be any string. | [email protected] |
| key | ssh key | the actual ssh key, with no type or email address at the end | AAAAC3NzaC1lZDI1NTE5AAAAICtH4WLYkj55uZ/cLtTjnb0QbutYX1xBJbUzpRhBXeq3 |
| type | ssh key type | the type of the ssh key, by default ssh-rsa | ssh-rsa, ssh-ed25519 |

slack

| key | type | description | example value |
| --- | --- | --- | --- |
| webhook | URL | URL of the Slack Incoming Webhook | https://hooks.slack.com/services/AAAAAAA/BBBBBBBB/CCCCCCCC |
| channel | String | Name of the Slack channel to send notifications to | mychannel |

Rename all Environment Variables

  • remove AMAZEEIO_ from all environment variables

  • keep LAGOON_AVAILABILITY_CLASS

  • keep LAGOON_LOCATION (add name of openshift)

  • AMAZEEIO_SITE_BRANCH -> LAGOON_GIT_BRANCH

  • AMAZEEIO_SITE_ENVIRONMENT -> LAGOON_ENVIRONMENT_TYPE

  • AMAZEEIO_SITE_GROUP -> LAGOON_PROJECT

  • remove AMAZEEIO_SITE_NAME

  • AMAZEEIO_SITE_URL remove for now (will be added later with LAGOON_ROUTES see #79)

  • AMAZEEIO_TMP_PATH replace with just /tmp

  • remove AMAZEEIO_WEBROOT

deamazeeiofy Lagoon

  • .amazeeio.yml -> .lagoon.yml
  • .amazeeio.env.$BRANCH -> .lagoon.env.$BRANCH
  • com.amazeeio -> lagoon
  • everything else

allow intra-service api communication via `JWTSECRET` directly

Currently requests to the API are authorized via JWT tokens. For intra-service communication we create an admin token that every service then uses to talk to the API.
Unfortunately this makes bootstrapping a new Lagoon very hard, as you need the Lagoon system running in order to create a new JWT token, but in the very first stages we don't have Lagoon running yet :)

So my idea would be to also allow communication to the API via JWTSECRET directly. As we distribute the JWTSECRET to all Services already, they could just use that to talk to the API.
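A sketch of what minting such a token straight from the shared secret could look like. The claim names (iss, role, aud, iat) mirror the tokens pasted elsewhere in these issues; the secret value is a demo placeholder:

```shell
#!/bin/sh
# Mint an HS256 JWT directly from the shared JWTSECRET (demo placeholder).
JWTSECRET="${JWTSECRET:-demo-only-secret}"

# base64url without padding, as the JWT spec requires
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"iss":"auth-server.dev","role":"admin","aud":"api.dev","iat":%s}' \
  "$(date +%s)" | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "$JWTSECRET" -binary | b64url)

token="$header.$payload.$signature"
echo "$token"
```

Any service holding JWTSECRET can produce a valid token this way, which is exactly the bootstrapping property the issue asks for.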

Write User Documentation

Documentation to Update

Find way to have User Documentation inside Lagoon Git Repo

Beyond Openshift

Your docs refer to Lagoon as being for "Openshift & Kubernetes". Is it possible to run Lagoon outside of OpenShift on another Kubernetes provider?

I ask because Azure provides $5000 of free credit p.a. for non-profits which is very tempting.

Implement SSH Server wrapper around `oc rsh`

We don't want our users to install oc on their systems. Instead we would like to have an SSH server that, when connected to, runs a forced command with oc rsh that connects to the desired container.

We already have a system that can dynamically look at incoming ssh keys and figure out to which sites an ssh key has access to: https://github.com/amazeeio/lagoon/blob/develop/services/auth-ssh/sshd_config#L15

So the idea of the flow is:

  1. user connects with [email protected] to the SSH server endpoint running at server.com (amazeeio is the sitegroup, prod is the site to connect to)
  2. the SSH server checks if the incoming SSH key has access to the sitegroup amazeeio and site prod
  3. if it has access, it creates a forced command oc rsh -n amazeeio-prod dc/cli which connects the user to the cli container of the openshift project amazeeio-prod
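The command-construction step of that flow can be sketched as a small shell function; the key lookup and sshd wiring are elided, and the project/container naming is taken from the flow above:

```shell
#!/bin/sh
# Sketch: split "<sitegroup>-<site>" from the SSH username and build the
# oc rsh command that the forced command would exec.
build_rsh_command() {
  user="$1"              # e.g. "amazeeio-prod"
  sitegroup="${user%-*}" # everything before the last dash
  site="${user##*-}"     # everything after the last dash
  printf 'oc rsh -n %s-%s dc/cli' "$sitegroup" "$site"
}

build_rsh_command "amazeeio-prod"
```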

Backups of Lagoon MySQL

Whenever we start a MySQL/MariaDB it also creates a cronjob that connects to the running mysql and dumps all databases into a persistent storage that is only mounted into the backup cronjob container.

Also check with VSHN how they do it in Appuio
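The cronjob payload could look roughly like this; the MYSQL_* env variable names and paths are assumptions, and the dump is skipped when no mysqldump binary is present:

```shell
#!/bin/sh
# Sketch of the backup cronjob: dump all databases into the mounted backup
# volume with a timestamped filename.
BACKUP_DIR="${BACKUP_DIR:-/tmp/mysql-backups}"
mkdir -p "$BACKUP_DIR"
backup_file="$BACKUP_DIR/all-databases-$(date +%Y%m%d-%H%M%S).sql.gz"

if command -v mysqldump >/dev/null 2>&1; then
  # Connects to the running MySQL/MariaDB; credentials come from the pod env.
  mysqldump --all-databases \
    -h "${MYSQL_HOST:-mysql}" -u "${MYSQL_USER:-root}" -p"${MYSQL_PASSWORD}" \
    | gzip > "$backup_file" || true
else
  echo "mysqldump not available; would write $backup_file"
fi
```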

Healthcheck & Monitoring of Lagoon

  • implement Kubernetes liveness & readiness checks for each Lagoon service
  • send error logs of each service via winston into the lagoon elasticsearch (how to alert based on that depends on the team behind)
  • run lagoon ansible tests every couple of minutes

Implement modular nginx config overrides

Instead of telling developers to overwrite the whole nginx config if they just need to change a small piece of the config (like blocking IPs, special redirects, etc.), the main nginx config should include some files in a known directory.
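A sketch of that layout, with the main config including every snippet in a known directory; paths and filenames are illustrative:

```shell
#!/bin/sh
# Sketch: the main config includes every file in a well-known directory, so
# developers drop in small snippets instead of replacing the whole config.
NGINX_ROOT="${NGINX_ROOT:-/tmp/nginx-demo}"
mkdir -p "$NGINX_ROOT/conf.d/overrides"

cat > "$NGINX_ROOT/server.conf" <<EOF
server {
    listen 8080;
    # every developer-provided snippet in this directory is picked up
    include $NGINX_ROOT/conf.d/overrides/*.conf;
}
EOF

# Example override: block an IP without touching the main config.
cat > "$NGINX_ROOT/conf.d/overrides/block-ip.conf" <<'EOF'
deny 203.0.113.42;
EOF
```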

api should also use yarn workspace system

The api service is currently not using the yarn workspace system; therefore, while building the api Docker image we download already-downloaded packages, which slows down the build.

In order to do that we probably need (just off the top of my head though)

Blackfire integration

  • install the blackfire PHP module by default into the php-fpm image, disabled by default, and enable it as soon as the BLACKFIRE_SERVER_ID and BLACKFIRE_SERVER_TOKEN env variables exist
  • documentation

error from openshiftdeploy when running initial deploy of a branch

While moving slackin over to lagoon, the first push of master w/ the lagoon changes failed to deploy, showing this error in the openshiftdeploy log

2017-08-31T22:23:29.633Z - silly: Error from server (Forbidden): User "system:serviceaccount:amze-amazeeio:jenkins" cannot list rolebindings in project "amze-amazeeio-slackin-master"

I saw similar behavior when pushing the develop branch for the first time as well. A subsequent push had the deployment succeed each time.

New Relic

Figure out how to run New Relic inside a fully dockerized environment (maybe run the NR Agent only once and let the php new relic modules talk to that agent?)
