
Autopilot Pattern WordPress

A robust and highly-scalable implementation of WordPress in Docker using the Autopilot Pattern



Containerized and easily scalable

This project uses the Autopilot Pattern to automate operations, including discovery and configuration, so it can scale easily to any size. All component containers use ContainerPilot and Consul to configure themselves. Each service can also be scaled independently to handle incoming traffic, and as more instances are added, the containers that consume those services reconfigure themselves accordingly.

Project architecture

A running cluster includes the following components:

  • ContainerPilot: included in our MySQL containers to orchestrate bootstrap behavior and coordinate replication using keys and checks stored in Consul in the preStart, health, and onChange handlers.
  • MySQL: we're using the Autopilot Pattern implementation of MySQL for automatic backups and self-clustering so that we can deploy and scale easily
  • HyperDB: an "advanced database class that replaces a few of the WordPress built-in database functions" to support the MySQL cluster that's necessary for scaling WordPress; everything is automatically configured so running a scalable WordPress site is no more complex than running without the scaling features
  • Memcached: improves performance by keeping frequently accessed data in memory so WordPress doesn't have to query the database for every request; the images include tollmanz's Memcached plugin pre-installed, and ContainerPilot automatically configures it as we scale
  • Nginx: the front-end load balancer for the WordPress environment; passes traffic from users to the WordPress containers on the back-end
  • NFS: stores user uploaded files so these files can be shared between many WordPress containers
  • Consul: used to coordinate replication and failover
  • Manta: the Joyent object store, for securely and durably storing our MySQL snapshots
  • Prometheus: an optional, open source monitoring tool that tracks the performance of each component and demonstrates ContainerPilot telemetry
  • WP-CLI: to make managing WordPress easier

How do I use this thing?

Pick the answer that fits:

  1. For the hello world experience: follow the directions below for configuration, then docker-compose up -d and you're done.
  2. For building your own WordPress in Docker: clone this repository and place the WordPress theme you want to use in the var/www/html/content/themes directory. Develop locally using the local-compose.yml file, then build your own Docker image and run it in the cloud with a docker-compose.yml file that specifies your custom image (see the sketch below).
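
For option 2, a minimal local-development sketch looks like this (the theme name and image tag are illustrative, not part of the project):

git clone https://github.com/autopilotpattern/wordpress.git
cd wordpress
cp -r ~/my-theme var/www/html/content/themes/my-theme

# develop locally against the local compose file
docker-compose -f local-compose.yml up -d

# when you're ready to deploy, build and push your own image, then reference
# it from your own docker-compose.yml
docker build -t example/my-wordpress .
docker push example/my-wordpress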

The instructions below will get you set up to run containers on Triton, or anywhere that supports the Autopilot Pattern.

Getting started on Triton

  1. Get a Joyent account and add your SSH key.
  2. Install the Docker Toolbox (including docker and docker-compose) on your laptop or other environment, as well as the Joyent Triton CLI.
  3. Configure Docker and Docker Compose for use with Joyent:
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
./sdc-docker-setup.sh -k us-east-1.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE>
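When it finishes, the setup script prints Docker environment variables to export; the exact values depend on your account and data center, but they look roughly like this:

export DOCKER_CERT_PATH=~/.sdc/docker/<ACCOUNT>
export DOCKER_HOST=tcp://us-east-1.docker.joyent.com:2376
export DOCKER_TLS_VERIFY=1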

Configure your environment

Check that everything is configured correctly by running ./setup.sh. You'll need an SSH key that has access to Manta, the object store where the MySQL backups are stored. Pass the path of that key as ./setup.sh ~/path/to/MANTA_SSH_KEY. The script will create an _env file containing the environment variables you need to run your WordPress environment.
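
For example (the key path is illustrative; use whichever key has Manta access):

./setup.sh ~/.ssh/id_rsa
cat _env   # review the generated values before starting anything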

Manta settings

The script will set defaults for almost every config variable, but the Manta config is required and must be set manually. The two most important variables there are:

MANTA_BUCKET=/<username>/stor/<bucketname>  # an existing Manta bucket
MANTA_USER=<username> # a user with access to that bucket

The MySQL container will take a backup during its preStart handler and periodically while running. These Manta settings control where that backup is stored: MANTA_USER identifies the account with access, and MANTA_BUCKET is the path that will hold the backups.
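
If the backup path doesn't exist yet, it can be created with the node-manta command-line tools, assuming they are installed and configured for your account:

mmkdir -p /<username>/stor/<bucketname>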

WordPress configuration

The setup script will set working defaults for the entire WordPress configuration. The defaults will work for a quick "hello world" experience, but you'll probably want to set your own values for many fields.

# Environment variables for the WordPress site
WORDPRESS_URL=http://my-site.example.org/
WORDPRESS_SITE_TITLE=My Blog
[email protected]
WORDPRESS_ADMIN_USER=admin
WORDPRESS_ADMIN_PASSWORD=<random string>
WORDPRESS_ACTIVE_THEME=twentysixteen
WORDPRESS_CACHE_KEY_SALT=<random string>
#WORDPRESS_TEST_DATA=true # uncomment to import a collection of test content on start

This block is the typical information you must provide when installing WordPress. The URL of the site, the site title, and the admin user information are all straightforward. WORDPRESS_ACTIVE_THEME is the theme that will be activated automatically when the container starts; this will typically be the theme you are developing in this repo, or one of the default themes. WORDPRESS_CACHE_KEY_SALT should be set to a unique string; the WordPress object cache uses this salt to build the cache keys for data it stores in Memcached.
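
If you'd rather supply your own random values than use the generated defaults, a quick sketch using openssl (assumed to be installed):

# generate replacement values, then paste them over the matching lines in _env
echo "WORDPRESS_ADMIN_PASSWORD=$(openssl rand -hex 16)"
echo "WORDPRESS_CACHE_KEY_SALT=$(openssl rand -hex 16)"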

If you are not bringing your own theme in this repo, you can choose from these default themes for the WORDPRESS_ACTIVE_THEME variable:

  • twentyfifteen
  • twentyfourteen
  • twentysixteen

For Triton users, the script will set a WORDPRESS_URL value using Triton Container Name Service (CNS), which makes it easy to test the containers without configuring any DNS. You can then CNAME your site's DNS to that name, making it easy to scale or replace the Nginx containers at the front of your site without ever updating the DNS configuration.
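
Once the CNAME is in place, a quick sanity check with dig confirms that your site name points at the CNS-generated name (both names below are illustrative):

dig +short blog.example.org CNAME
dig +short nginx.svc.<account-uuid>.us-east-1.triton.zone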

Setting WORDPRESS_TEST_DATA will download the manovotny/wptest content library when the WordPress container starts.

MySQL settings

The setup script will set default values for the MySQL configuration, including randomly generated passwords.

# Environment variables for MySQL service
# WordPress database/WPDB information
MYSQL_USER=wpdbuser
MYSQL_PASSWORD=<random string>
MYSQL_DATABASE=wp
# MySQL replication user, should be different from above
MYSQL_REPL_USER=repluser
MYSQL_REPL_PASSWORD=<random string>

These values will be automatically set in wp-config.php. The last two options are used by the Autopilot Pattern MySQL container to set up replication when scaled beyond a single container. You can keep repluser, but set a unique password for your environment.

WordPress unique salts

As with most of the other configuration blocks, the setup script will set reasonable defaults for these values.

# Wordpress security salts
# These must be unique for your install to ensure the security of the site
WORDPRESS_AUTH_KEY=<random string>
WORDPRESS_SECURE_AUTH_KEY=<random string>
WORDPRESS_LOGGED_IN_KEY=<random string>
WORDPRESS_NONCE_KEY=<random string>
WORDPRESS_AUTH_SALT=<random string>
WORDPRESS_SECURE_AUTH_SALT=<random string>
WORDPRESS_LOGGED_IN_SALT=<random string>
WORDPRESS_NONCE_SALT=<random string>

These variables are how WordPress secures your logins and other secret information, and they should be unique to your site. You can set your own values, or use the WordPress secret-key service (https://api.wordpress.org/secret-key/1.1/salt/) to generate a new set of random values.
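
That service outputs PHP define() statements, so its values need to be reformatted for _env. Alternatively, here is a small sketch that prints fresh values directly in the _env format (openssl assumed available):

for key in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY \
           AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT; do
    echo "WORDPRESS_${key}=$(openssl rand -hex 32)"
done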

Consul

Finally we need to configure an environment variable with the location of our Consul service. The setup script will pre-set this for Triton users.

CONSUL=<IP or DNS to Consul>

For local development, we use Docker links and simply set this to CONSUL=consul, but on Triton we use Container Name Service so that we can have a raft of Consul instances operating as a highly available service (see example).
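
A quick way to confirm that the Consul address is reachable, and to see which services have registered, is the catalog endpoint of Consul's HTTP API:

curl -s "http://<IP or DNS to Consul>:8500/v1/catalog/services"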

A note on Nginx

This project also builds its own Nginx container, based on the Autopilot Pattern Nginx implementation. We build a custom Nginx container to more easily inject our custom configuration. The configs in the /nginx directory should work well for most uses of this project, but they can be customized and baked into the Nginx image if the need arises.

Start the containers!

After configuring everything, you're ready to start the containers. Simply run docker-compose up -d to spin everything up on Triton, then open your browser to the WORDPRESS_URL and enjoy your new site!

For local development, use docker-compose -f local-compose.yml up -d.
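
In either case, docker-compose can confirm that everything came up (add -f local-compose.yml for the local case):

docker-compose ps            # all services should show as Up
docker-compose logs nginx    # check Nginx/ContainerPilot output if something looks off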

Going big

To scale, use docker-compose scale.... For example, the following will set the scale of the WordPress, Memcached, Nginx, and MySQL services to three instances each:

docker-compose scale wordpress=3 memcached=3 nginx=3 mysql=3

If there are fewer instances running for any of those services, more will be added to meet the specified count. As you scale, the application automatically reconfigures itself so that everything stays connected: all the Nginx instances connect to all the WordPress instances, which in turn connect to all the Memcached and MySQL instances. If an instance unexpectedly crashes, the remaining instances automatically reconfigure to route requests around the failure.
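
You can watch this happen from Consul's point of view; the health endpoint lists the passing instances of each service (the service names below assume the compose service names used above):

curl -s "http://<IP or DNS to Consul>:8500/v1/health/service/wordpress?passing"
curl -s "http://<IP or DNS to Consul>:8500/v1/health/service/mysql?passing"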

To scale back down, simply run docker-compose scale... and specify a smaller number of instances.

Compatibility

This project has been fully tested and documented to run in Docker in local development environments and on Joyent Triton; however, it has been demonstrated on, or is believed to be compatible with, container environments including:

Contributing

Sponsors

Initial development of this project was sponsored by Joyent and 10up.


wordpress's Issues

Can't get autopilot wordpress to work.

I deployed autopilot/wordpress. After all 7 instances were created, I noticed that the WordPress app was not completely configured, and thus not running, on the wordpress_wordpress_1 instance. I logged on to the Consul UI (http://165.225.149.83:8500/ui/#/dc1/services/mysql) and noticed that the entry "mysql-primary" was not shown. It looked like the mysql instance became live but didn't mark itself as the primary in Consul. I was able to manually log in to the database on mysql. I executed "docker logs wordpress_mysql_1" and saved a snippet of the log here: https://gist.github.com/khangngu/59a2af346bbb598d4a01cc0912d60d40. Can someone please help detect what went wrong with my deployment?
Thanks,
-khang-

Running local did not work

On my Mac, running the latest version of Docker against a fresh clone of the project, running locally did not work for me until I directly mapped MySQL to 3306:

ports:
    - 3306:3306

Add multi-data center config example

If you look closely at the sample HyperDB config, you spot some details related to usage across data centers. Specifically, note the following:

    // dc is not used in hyperdb. This produces the desired effect of
    // trying to connect to local servers before remote servers. Also
    // increases time allowed for TCP responsiveness check.
    if ( !empty($dc) && defined(DATACENTER) && $dc != DATACENTER ) {
        if ( $read )
            $read += 10000;
        if ( $write ) 
            $write += 10000;
        $timeout = 0.7;
    }

Here's the larger context:

/**
 * This is back-compatible with an older config style. It is for convenience.
 * lhost, part, and dc were removed from hyperdb because the read and write
 * parameters provide enough power to achieve the desired effects via config.
 *
 * @param string $dataset Datset: the name of the dataset. Just use "global" if you don't need horizontal partitioning.
 * @param int $part Partition: the vertical partition number (1, 2, 3, etc.). Use "0" if you don't need vertical partitioning.
 * @param string $dc Datacenter: where the database server is located. Airport codes are convenient. Use whatever.
 * @param int $read Read group: tries all servers in lowest number group before trying higher number group. Typical: 1 for slaves, 2 for master. This will cause reads to go to slaves unless all slaves are unreachable. Zero for no reads.
 * @param bool $write Write flag: is this server writable? Works the same as $read. Typical: 1 for master, 0 for slaves.
 * @param string $host Internet address: host:port of server on internet. 
 * @param string $lhost Local address: host:port of server for use when in same datacenter. Leave empty if no local address exists.
 * @param string $name Database name.
 * @param string $user Database user.
 * @param string $password Database password.
 */
function add_db_server($dataset, $part, $dc, $read, $write, $host, $lhost, $name, $user, $password, $timeout = 0.2 ) {
    global $wpdb;

    // dc is not used in hyperdb. This produces the desired effect of
    // trying to connect to local servers before remote servers. Also
    // increases time allowed for TCP responsiveness check.
    if ( !empty($dc) && defined(DATACENTER) && $dc != DATACENTER ) {
        if ( $read )
            $read += 10000;
        if ( $write ) 
            $write += 10000;
        $timeout = 0.7;
    }

    // You'll need a hyperdb::add_callback() callback function to use partitioning.
    // $wpdb->add_callback( 'my_func' );
    if ( $part )
        $dataset = $dataset . '_' . $part;

    $database = compact('dataset', 'read', 'write', 'host', 'name', 'user', 'password', 'timeout');

    $wpdb->add_database($database);

    // lhost is not used in hyperdb. This configures hyperdb with an
    // additional server to represent the local hostname so it tries to
    // connect over the private interface before the public one.
    if ( !empty( $lhost ) ) {
        if ( $read )
            $database['read'] = $read - 0.5;
        if ( $write )
            $database['write'] = $write - 0.5;
        $wpdb->add_database( $database );
    }
}

Implementing that will require a lot more work in setting up cross-data center networking, but it's worth noting and marking as a project.

Implement git2consul

git2consul would be a great alternative to the _env file currently used to inject configuration details into WordPress.

This would eliminate the risk that two users might mistakenly have different _env files and better allow versioning of the configuration data.

My initial attempt at git2consul integration included the following configuration files:

git2consul.json:

{
    "version": "0.0.1",
    "repos": [{
        "name": "triton-wordpress",
        "source_root": "path/in/git/repo",
        "mode" : "expand_keys",
        "url": "https://github.com/misterbisson/triton-wordpress.git",
        "branches": ["progress"],
        "include_branch_name" : false,
        "hooks": [{
            "type": "polling",
            "interval": "1"
        }]
    }]
}

wp-config.json:

{
    "config": [
        { "site_url": "http://my-site.example.com" },

        { "SAVEQUERIES":      true },
        { "WP_DEBUG":         true },
        { "WP_DEBUG_DISPLAY": true },

        { "AUTH_KEY":         "put your unique phrase here" },
        { "SECURE_AUTH_KEY":  "put your unique phrase here" },
        { "LOGGED_IN_KEY":    "put your unique phrase here" },
        { "NONCE_KEY":        "put your unique phrase here" },
        { "AUTH_SALT":        "put your unique phrase here" },
        { "SECURE_AUTH_SALT": "put your unique phrase here" },
        { "LOGGED_IN_SALT":   "put your unique phrase here" },
        { "NONCE_SALT":       "put your unique phrase here" },

        {}
    ]
}
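
Once git2consul has mirrored the repo into Consul's key/value store, individual values could be read back over the KV HTTP API; the key path below is purely illustrative and depends on how expand_keys lays out the keys:

curl -s "http://consul:8500/v1/kv/triton-wordpress/wp-config/config/site_url?raw"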

Support ACME feature of autopilotpattern/nginx

The wordpress blueprint overrides the Nginx config of autopilotpattern/nginx, which contains bits required for ACME / Let's Encrypt. We may want to consider adding functionality to inject configuration into the Nginx container without overriding the entire config, so the two don't have to be kept in sync manually as the Nginx blueprint adds features.

Too many redirects when following blog article with Lets Encrypt

Following https://www.joyent.com/blog/wordpress-on-autopilot-with-ssl#basic-setup, the basic site comes up on us-east-1 without issue, as expected. But when I follow the instructions to implement SSL, the browser tells me there are too many redirects.

I noticed that the article says to use setup.sh to get the CNS name, which spits out the cns.joyent.com name. Shouldn't this be triton.zone? When I don't use the triton.zone endpoint I get nothing, with or without https.

I'm using Cloudflare for DNS.
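
A quick way to see where the loop occurs is to walk the redirect chain with curl (the domain is illustrative):

curl -sIL --max-redirs 5 https://blog.example.org | grep -iE '^(HTTP|location)'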

Running multiple wordpress?

If I wanted to have several copies of this WordPress deployment running on Joyent at the same time, let's say three instances, what values should I change in instances 2 and 3 to accomplish this?
MANTA_BUCKET, I assume, is one.

Any guidance is appreciated.
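
One possible approach (a sketch, not a confirmed answer): keep a separate checkout, or at least a separate _env with its own MANTA_BUCKET, WORDPRESS_URL, and passwords, per site, and give each deployment its own Compose project name so the container names don't collide:

COMPOSE_PROJECT_NAME=blog2 docker-compose up -d
COMPOSE_PROJECT_NAME=blog3 docker-compose up -d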

How to use autopilotpattern for PHP Microservices

I am currently trying to use the Autopilot Pattern to build PHP microservices. I followed this WordPress implementation of the pattern, the hello-world example, and many articles on the Autopilot Pattern website. I also read a book titled PHP Microservices, written by Carlos Pérez Sánchez and Pablo Solar Vilariño. I understood and loved the approach of the Autopilot Pattern. However, the challenge I am facing is setting up a simple environment for several microservices to work together using the Autopilot Pattern. I am using PHP. I have been trying everything I could for two weeks now, but there is no light at the end of the tunnel and I am running out of time. I would really appreciate it if someone could point me in the right direction. Thank you.

Sample Nginx config

This config file sets up some strong caching, including overriding cache headers emitted by the upstream and forcing some requests to GET. A few specific directory paths have exceptions that allow uncached requests.

Rather than burying details in location, as is common for shared Nginx configs, I sprinkled details throughout the hierarchy as needed and convenient for the scope of each rule.

# Many details of this from http://www.djm.org.uk/wordpress-nginx-reverse-proxy-caching-setup/
# Though also referenced http://wiki.nginx.org/Pitfalls to validate some of it

user   www  www;
worker_processes  11;

events {
    # After increasing this value You probably should increase limit
    # of file descriptors (for example in start_precmd in startup script)
    worker_connections  8192;
}
worker_rlimit_nofile 16384; # see http://stackoverflow.com/questions/7325211/

http {
    include       /opt/local/etc/nginx/mime.types;
    default_type  application/octet-stream;

    # Defines the cache log format, cache log location
    # and the main access log location.
    log_format cache '***$time_local '
        '$upstream_cache_status '
        'Cache-Control: $upstream_http_cache_control '
        'Expires: $upstream_http_expires '
        '$host '
        '"$request" ($status) '
        '"$http_user_agent" '
        'Args: $args '
        'Wordpress Auth Cookie: $wordpress_auth '
        ;
    access_log /var/log/nginx/cache.log cache;
    access_log /var/log/nginx/access.log;

    # Hide server information for security
    server_tokens off;

    # caching path and config
    proxy_cache_path  /var/nginx/cache  levels=1:2   keys_zone=main:192m  max_size=2g;
    proxy_temp_path /var/nginx/temp;

    #Gzip config
    gzip on;
    gzip_comp_level 4;
    gzip_proxied any;
    # gzip_static on; # module not installed? http://wiki.nginx.org/HttpGzipStaticModule
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    # Some version of IE 6 don't handle compression well on some mime-types, so just disable for them
    gzip_disable "MSIE [1-6]\.";
    # Set a vary header so downstream proxies don't send cached gzipped content to IE6
    gzip_vary on;

    # activate sendfile for performance http://wiki.nginx.org/HttpCoreModule#sendfile
    # WP3.5 eliminated the ms-files.php proxying of user-uploaded files, so this is less useful now
    sendfile on;

    # a pool of servers to handle public requests
    upstream backendpublic {
        server 192.168.114.154 weight=3 max_fails=3 fail_timeout=30s; # app3
        server 192.168.114.148 weight=3 max_fails=3 fail_timeout=30s; # app2
        server 192.168.114.147 weight=2; #app1
        }

    # a pool of servers to handle admin requests
    upstream backendadmin {
        server 192.168.114.154 weight=2 max_fails=3 fail_timeout=30s; # app3
        server 192.168.114.148 weight=2 max_fails=3 fail_timeout=30s; # app2
        server 192.168.114.147 weight=3; #app1
    }

    # cache ssl sessions as suggested in http://nginx.org/en/docs/http/configuring_https_servers.html#optimization
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # map oauth connection get vars
    # not currently used
    map $args $oauth {
        default 0;
        "~(oauth|code|denied|error)" 1;
    }

    # increase the proxy buffer from the default 4k
    proxy_buffer_size 8k;

    server {
        listen *:80 default_server;
        listen *:443 ssl;
        keepalive_timeout 70;
        proxy_http_version 1.1;

        # ssl keys
        ssl_certificate /path/to/cert-bundle.crt;
        ssl_certificate_key /path/to/site.com.key;

        # Set proxy headers for the passthrough
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # redirect www. prefix, as in www.domain.org -> domain.org (301 - Permanent)
        if ($host ~* www\.(.*)) {
            set $host_without_www $1;
            return 301 https://$host_without_www$request_uri ;
        }

        # Max upload size: make sure this matches the php.ini in .htaccess
        client_max_body_size 50m;

        # Catch the wordpress cookies.
        # Must be set to blank first for when they don't exist.
        set $wordpress_auth "";
        set $proxy_cache_bypass 0;
        set $proxy_no_cache 0;
        set $requestmethod GET;
        if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
            set $wordpress_auth wordpress_logged_in_$1;
            set $proxy_cache_bypass 1;
            set $proxy_no_cache 1;
            set $requestmethod $request_method;
        }

        # Set the proxy cache key
        set $cache_key $scheme$host$uri$is_args$args;

        proxy_hide_header X-Powered-By;

        # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
        location ~ /\. {
            deny all;
        }

        # wp-admin and cron pages
        # always handle on the admin host
        location ~* ^/(wp-admin|wp-cron.php) {
            proxy_pass http://backendadmin;

            proxy_read_timeout 300s;
            proxy_connect_timeout 75s;
            proxy_send_timeout 120s;
            proxy_pass_header 'Set-Cookie';
        }

        # login/signup pages and comment submission
        # handle on public hosts, but do no caching
        # @TODO: maybe add throttling here
        location ~* ^/(wp-comments-post.php|wp-login.php|newsletters/|subscription/|members/|b/|connect/|do/|enterprise-onboarding/) {
            proxy_pass http://backendpublic;

            proxy_pass_header 'Set-Cookie';
        }

        # Deny access to user uploaded files with a .php extension
        # found in http://codex.wordpress.org/Nginx#Global_restrictions_file
        location ~* /(?:uploads|files)/.*\.php$ {
            deny all;
        }

        # user uploaded content
        # forces caching and GET for all requests
        location ~* ^/(uploads|files|wp-content) {
            proxy_pass http://backendpublic;

            # override the request method, force everything to GET
            proxy_method GET;

            # hide cache-related headers
            proxy_hide_header X-Powered-By;
            proxy_hide_header Vary;
            proxy_hide_header Pragma;
            proxy_hide_header Expires;
            proxy_hide_header Last-Modified;
            proxy_hide_header Cache-Control;
            proxy_hide_header Set-Cookie;

            # set headers to encourage caching
            expires 30d;
            add_header Vary "Accept-Encoding, Cookie";
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";

            # don't honor cache headers from the app server
            proxy_ignore_headers Set-Cookie Expires Cache-Control;

            # internal cache settings
            proxy_cache main;
            proxy_cache_key $cache_key;
            proxy_cache_valid 30d; # 200, 301 and 302 will be cached.
            # Fallback to stale cache on certain errors.
            #503 is deliberately missing
            proxy_cache_use_stale error timeout invalid_header http_404 http_500 http_502 http_504;
        }

        # front page is uncached because of oauth requests
        # also accepts POST requests because of enterprise onboarding
        # handle on public hosts, but maybe add throttling here
        location = / {
            proxy_pass http://backendpublic;

            proxy_pass_header Set-Cookie;
        }

        # default location
        # forces caching and GET, except for logged in users
        location / {
            proxy_pass http://backendpublic;

            # maybe override the request method, defaults to GET, see if conditionals above
            # proxy_method $requestmethod;
            # actually, this doesn't work, see http://trac.nginx.org/nginx/ticket/283

            # hide cache-related headers
            proxy_hide_header X-Powered-By;
            proxy_hide_header Vary;
            proxy_hide_header Pragma;
            proxy_hide_header Expires;
            proxy_hide_header Last-Modified;
            proxy_hide_header Cache-Control;
            proxy_hide_header Set-Cookie;

            # set headers to encourage caching
            expires 30m;
            add_header Vary "Accept-Encoding, Cookie";
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";

            # don't honor cache headers from the app server
            proxy_ignore_headers Set-Cookie Expires Cache-Control;

            # internal cache settings
            proxy_cache main;
            proxy_cache_key $cache_key;
            proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
            # Fallback to stale cache on certain errors.
            #503 is deliberately missing
            proxy_cache_use_stale error timeout invalid_header http_404 http_500 http_502 http_504;
            #2 rules to disabled the cache for logged in users.
            proxy_cache_bypass $proxy_cache_bypass; # Do not cache the response.
            proxy_no_cache $proxy_no_cache; # Do not serve response from cache.
        }

    } # End server

} # End http

In the context of Containerbuddy, I'd expect to change the backends and fetch the SSL keys, with everything else being static.

(Originally at autopilotpattern/nginx#2)

Importing sample content causes NFS Container to fail on local environment

When trying to import the sample content by uncommenting WORDPRESS_TEST_DATA=true in the _env file, in combination with docker-compose -f local-compose.yml up -d on both boot2docker and Docker for Mac, the NFS container fails with the following error:

fs.js:882
  throw new Error('Cannot parse time: ' + time);
        ^
Error: Cannot parse time: undefined
    at toUnixTimestamp (fs.js:882:9)
    at Object.fs.utimes (fs.js:892:18)
    at NfsServer.setattr_time (/opt/nfs/node_modules/sdc-nfs/lib/nfs/setattr.js:245:8)
    at next (/opt/nfs/node_modules/sdc-nfs/node_modules/oncrpc/lib/server.js:318:22)
    at f (/opt/nfs/node_modules/sdc-nfs/node_modules/once/once.js:16:15)
    at NfsServer.setattr_get_atime (/opt/nfs/node_modules/sdc-nfs/lib/nfs/setattr.js:208:9)
    at next (/opt/nfs/node_modules/sdc-nfs/node_modules/oncrpc/lib/server.js:318:22)
    at f (/opt/nfs/node_modules/sdc-nfs/node_modules/once/once.js:16:15)
    at NfsServer.setattr_get_mtime (/opt/nfs/node_modules/sdc-nfs/lib/nfs/setattr.js:153:9)
    at next (/opt/nfs/node_modules/sdc-nfs/node_modules/oncrpc/lib/server.js:318:22)
2016/05/23 14:43:07 exit status 8

nginx TLS certificate/private key being stored unsecured and publicly accessible in consul?

Please correct me if I'm wrong here, but...

When setting up SSL/TLS in the nginx container using Let's Encrypt, it stores its certificate, private key etc. in Consul's key/value store under nginx/acme/*, presumably so they can be replicated to multiple nginx instances once the certificate is obtained.

However, access to Consul isn't secured in any way, or at least I can't find mention of it, and it's accessible on the internet on :8500. Wouldn't this make it trivial to get the site's SSL/TLS certificate and private key if one knows the hostname or the IP address of the consul instance?

Or am I missing something here?

Specificity for Manta configuration

When first going through this setup, I was unsure of how to properly set up the Manta configuration. In the README file, it says:

MANTA_BUCKET= # an existing Manta bucket
MANTA_USER= # a user with access to that bucket

I think users would be better served with more specificity. Such as:

MANTA_BUCKET=/<username>/store/bucketname  # an existing Manta bucket
MANTA_USER=<username> # a user with access to that bucket

Multi-data center support

Data center awareness

WordPress + HyperDB supports running in multiple data centers. The HyperDB config includes comments on how to configure it for data center awareness:

/**
 * Network topology / Datacenter awareness
 *
 * When your databases are located in separate physical locations there is
 * typically an advantage to connecting to a nearby server instead of a more
 * distant one. The read and write parameters can be used to place servers into
 * logical groups of more or less preferred connections. Lower numbers indicate
 * greater preference.
 *
 * This configuration instructs HyperDB to try reading from one of the local
 * slaves at random. If that slave is unreachable or refuses the connection,
 * the other slave will be tried, followed by the master, and finally the
 * remote slaves in random order.
 * Local slave 1:   'write' => 0, 'read' => 1,
 * Local slave 2:   'write' => 0, 'read' => 1,
 * Local master:    'write' => 1, 'read' => 2,
 * Remote slave 1:  'write' => 0, 'read' => 3,
 * Remote slave 2:  'write' => 0, 'read' => 3,
 *
 * In the other datacenter, the master would be remote. We would take that into
 * account while deciding where to send reads. Writes would always be sent to
 * the master, regardless of proximity.
 * Local slave 1:   'write' => 0, 'read' => 1,
 * Local slave 2:   'write' => 0, 'read' => 1,
 * Remote slave 1:  'write' => 0, 'read' => 2,
 * Remote slave 2:  'write' => 0, 'read' => 2,
 * Remote master:   'write' => 1, 'read' => 3,
 *
 * There are many ways to achieve different configurations in different
 * locations. You can deploy different config files. You can write code to
 * discover the web server's location, such as by inspecting $_SERVER or
 * php_uname(), and compute the read/write parameters accordingly. An example
 * appears later in this file using the legacy function add_db_server().
 */

Though MySQL is not the only service that needs data center awareness:

  • Nginx connects to WordPress
    • Nginx should probably not bother connecting to WordPress instances in other data centers, but if there are no local WP instances...
  • WordPress connects to Memcached, MySQL, and NFS
    • Making requests to Memcached across the WAN is probably slower than requesting from a local MySQL replica, but creating separate Memcached pools in each data center creates consistency problems (Facebook's mcrouter support for replicated pools claims to solve that, but I've never used it personally)
    • Awareness of MySQL primary and replica topology is critical, but it's OK if the primary is in a remote data center and the replica is local (WP+HyperDB will read its writes, so replication delay is not a problem). It should probably prefer local replicas, but if there are none locally...
    • NFS over the WAN would be very slow; even if it were tolerable, it's not supported in RFD26 and probably not wise
  • Memcached does not connect to anything else
  • MySQL replicas connect to the MySQL primary
    • Replication over the WAN is pretty straightforward, though autopilotpattern/mysql#53 asks questions about failover scenarios
  • NFS does not connect to anything else
    • It could be backed up to an object store or replicated across multiple volumes (see https://syncthing.net for an example), but those introduce consistency questions if both sides are writing

Given the current implementation, it might be necessary to ignore performance issues with Memcached and NFS transactions over the WAN. However, a better implementation would:

  1. Resolve cross-data center Memcached questions. This could involve implementing Facebook's mcrouter and replicated pools or ditching Memcached for Couchbase, which provides a Memcached-compatible interface with cross-data center replication
  2. Resolve cross-data center NFS questions. Object storage could be used as an exclusive alternative to filesystem storage, eliminating the need for NFS. It's possible that https://syncthing.net could provide sufficiently fast replication and sufficiently good conflict resolution. It's also possible that Nginx could be configured to force all HTTP POST requests to WP instances in a primary data center, to substantially reduce the risk of conflicts due to slow replication across the WAN. That would require that Nginx instances in the non-primary data center be able to connect to WP instances in the primary DC.

Requirements for full active-active data center support

Story: The application will be deployed in data centers in two different regions connected by a WAN. Browsers may reach either data center with approximately equal frequency. Operators will specify one data center for the primary database instance, and the application will route requests internally to the correct primary instance in the correct DC.

  • A VPN between the data centers that connects their private networks
  • Routes on each host so they can connect to the other data centers
  • Data center awareness in how to reach upstreams (see discussion above)

Questions to answer:

  1. What happens if the replica DC is partitioned from the end user client?
  2. What happens if the replica DC is partitioned from the primary DC?
  3. What happens if the primary DC is partitioned from the end user client?
  4. What happens if the primary DC is partitioned from the replica DC?

Requirements for a standby data center

Story: We need a minimal footprint of the application running in a remote data center so that we can quickly recover if the primary data center fails. The replica data center does not handle any end-user requests under normal use, and there is no provision for automatic failover. This approach seeks to reduce challenges by eliminating activity in the replica data center that would cause frustration due to slow performance of requests over the WAN, or inconsistency due to writes in separate DCs (Memcached and NFS).

  • A VPN between the data centers that connects their private networks
  • Routes on each host so they can connect to the other data centers

