tarantool / ansible-cartridge

Ansible role for deploying tarantool cartridge-based applications

Home Page: https://galaxy.ansible.com/tarantool/cartridge

License: Other


ansible-cartridge's Introduction

DEPRECATION NOTICE: This role will be deprecated by January 1, 2024. All open issues and pull requests will be closed shortly. ansible-cartridge will remain available as a public archive as Tarantool moves on with version 3.0. Please follow the official community channels in Telegram: @tarantool (en), @tarantoolru (ru). Feel free to ask Ansible-related questions there.

Ansible Role: Tarantool Cartridge


An Ansible role to easily deploy Tarantool Cartridge applications.

This role can deploy and configure applications packed into RPM, DEB, and TGZ packages using Cartridge CLI.

Only RedHat and Debian OS families are supported.


Requirements

  • Tarantool Cartridge >= 2.0.0, < 3;
  • Ansible 2.8.4 or higher.

Note that running the role may require root access.

Installation

First, you need to install this role using ansible-galaxy:

$ ansible-galaxy install tarantool.cartridge,1.12.0
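Alternatively, the role can be pinned in a requirements.yml file and installed with ansible-galaxy install -r (a sketch using standard Ansible Galaxy usage; the file name is just a convention):

# requirements.yml
- src: tarantool.cartridge
  version: 1.12.0

$ ansible-galaxy install -r requirements.yml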

Quick start

Check out the Getting Started guide to learn how to use this role.

You can start two virtual machines using the example Vagrantfile.

Let's deploy an application with a simple topology.

First, pack your application into an RPM using the cartridge pack rpm command.

Then, describe the topology in a hosts.yml file:

hosts.yml:

---
all:
  vars:
    cartridge_app_name: myapp
    cartridge_package_path: ./myapp-1.0.0-0.rpm
    cartridge_cluster_cookie: secret-cookie

    # may be useful for vagrant
    ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key
    ansible_ssh_common_args: '-o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'

  hosts:
    storage-1:
      config:
        advertise_uri: '172.19.0.2:3301'
        http_port: 8181

    storage-1-replica:
      config:
        advertise_uri: '172.19.0.2:3302'
        http_port: 8182

  children:
    # group instances by machines
    machine_1:
      vars:
        # first machine address and connection opts
        ansible_host: 172.19.0.2
        ansible_user: vagrant

      hosts:  # instances to be started on this machine
        storage-1:
        storage-1-replica:

    # group instances by replicasets
    storage_1_replicaset:  # replicaset storage-1
      hosts:  # instances
        storage-1:
        storage-1-replica:
      vars:
        # replicaset configuration
        replicaset_alias: storage-1
        roles:
          - 'vshard-storage'
        failover_priority:
          - storage-1
          - storage-1-replica

Write a simple playbook that imports the role:

# playbook.yml
---
- name: Deploy my Tarantool Cartridge app
  hosts: all
  become: true
  become_user: root
  any_errors_fatal: true
  gather_facts: false
  roles:
    - tarantool.cartridge

Then run the playbook with the created inventory:

ansible-playbook -i hosts.yml playbook.yml

Now, visit http://localhost:8181


Using scenario

It's possible to perform different actions on instances or replicasets by combining the cartridge_scenario variable with Ansible limits.

For example, you can configure and start only some instances. To do this, define the cartridge_scenario variable like this:

cartridge_scenario:
  - configure_instances
  - start_instance
  - wait_instance_started

Then run the playbook with the --limit option:

ansible-playbook -i hosts.yml playbook.yml --limit instance_1,instance_2

You can also edit only a specific replicaset. To do this, define the cartridge_scenario variable like this:

cartridge_scenario:
  - edit_topology

Then run the playbook with the --limit option:

ansible-playbook -i hosts.yml playbook.yml --limit replicaset_1_group,replicaset_2_group

Moreover, the scenario mechanism allows you to describe custom steps for configuring the cluster. For more details about using scenarios and the available steps, see the scenario documentation.
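For example, a scenario can chain several of the steps shown above into a single run (a sketch that only uses step names already mentioned in this README; the full list of available steps is in the scenario documentation):

cartridge_scenario:
  - configure_instances
  - start_instance
  - wait_instance_started
  - edit_topology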

Documentation

Cookbook

Read the cookbook to learn how to use the tarantool.cartridge role for different purposes.

ansible-cartridge's People

Contributors

andreyermilov, dokshina, fizikroot, hackallcode, lenkis, mrrvz, no1seman, oleggator, opomuc, runsfor


ansible-cartridge's Issues

Place membership stage under `cartridge-replicasets` tag

Connecting instances to membership is a required step during Tarantool Cartridge configuration. It seems unwise to skip it under the cartridge-replicasets tag, whose only purpose is to configure Tarantool Cartridge replicasets.

Ability to deploy and operate tarballs without root access

In some (enterprise) environments, database/application administrators do not have root access to the machine.
However, it should be possible to deploy and operate a tarball (packed by cartridge) in the /home directory of the tarantool user, with only basic sudo permissions.

Proposal (open for discussion; a rough sketch follows the list below):

  • generate systemd file from playbook
  • deploy tarball to /home directory instead of system folders
  • run most operations under tarantool user
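A rough sketch of the first two proposal items ("generate systemd file from playbook" and "deploy tarball to /home directory"), assuming a hypothetical Jinja2 template myapp.service.j2 and an example artifact name, neither of which is part of the role:

- name: Render systemd unit for the app instance (one-time root operation)
  ansible.builtin.template:
    src: myapp.service.j2            # hypothetical template, not part of the role
    dest: /etc/systemd/system/myapp.service
    mode: '0644'
  become: true

- name: Unpack the tarball into the tarantool user's home (no root needed)
  ansible.builtin.unarchive:
    src: ./myapp-1.0.0-0.tar.gz      # example artifact name
    dest: /home/tarantool/apps
  become: false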

Limitations:

  • root access is still required to register the systemd service. However, since this is a one-time operation, it could be done manually (in the worst case).

A PoC for the proposal above is also available (#111).

Expected result: dba/dev/ops should be able to run all available operations (restart, upgrade app, change cluster topology, etc.) without having root access.

Custom workdir parameter

Good day.
Is it possible to specify a custom location for storing snapshots and transaction logs?
The workdir parameter is forbidden.
Thanks.

Edit Topology

Start using the cartridge.admin_edit_topology function to create new replicasets and edit existing ones.

Add ability to specify cluster topology in the playbook

I want to set my topology in my playbook because I have a generated inventory and can't change it.
I want something like this:

- hosts: app
  remote_user: ansible
  become: yes
  tasks:
  - name: Import Tarantool Cartridge role
    import_role:
      name: tarantool.cartridge
  vars:
    instances:
      app1:
        advertise_uri: 'localhost:3301'
        http_port: 8081
      app2:
        advertise_uri: 'localhost:3302'
        http_port: 8082

- hosts: storage
  remote_user: ansible
  become: yes
  tasks:
  - name: Import Tarantool Cartridge role
    import_role:
      name: tarantool.cartridge
  vars:
    instances:
      storage1:
        advertise_uri: 'localhost:3303'
        http_port: 8083
        replica_set: storage_01
        leader: true
      storage2:
        advertise_uri: 'localhost:3304'
        http_port: 8084
        replica_set: storage_01

Restructuring replicaset tasks

Summary

Pull the replicaset management tasks from main.yml into a separate file (replicasets.yml) and do not apply tags to this task list.

Motivation

In our case, we have custom roles for deploying and no systemd, and this role is needed only for topology management. Currently, to use the role only for managing replicasets, you must specify command-line tags. This makes it difficult to integrate this role into another playbook, since tags cannot be specified from the playbook, only from the command line.

Design Detail

Move the tasks with the cartridge-replicasets tag to a new file, replicasets.yml. In main.yml, include this task list and tag the include itself. Do not tag the tasks inside replicasets.yml with cartridge-replicasets.

- name: Cartridge cluster management
  include_tasks: replicasets.yml
  tags: cartridge_replicasets

In an external playbook, importing the replicaset tasks will look like this:

tasks:
  - name: Run ansible-cartridge role
    import_role:
      name: ansible-cartridge
      tasks_from: replicasets

(See the import_role tasks_from documentation page.)

Drawbacks

When tags overlap, some tasks will need to be duplicated from main.yml into replicasets.yml. Currently this is only the 'Select one not expelled instance' task.

Change memtx_memory without restart

If memtx_memory is increased, it is possible to change it without a Tarantool restart (which is a valid ops scenario).

Please detect whether the new value of memtx_memory is larger than the previous one; if it is, change the config files and set the new memtx_memory through the control socket (box.cfg{}).

[2pt] Add retry to `manage replicasets` stage

When there are a lot of instances, or they are under high load, the manage replicasets stage can fail, and this stops the whole pipeline.

BUT, at the same time the instances ARE running and everything is fine.

Take a look at an example:

TASK [ansible-cartridge : Set one control instance to join replicasets] ****************************************************************************************************************
ok: [storage-AE-0 -> xxx]

TASK [ansible-cartridge : Get replicasets] *********************************************************************************************************************************************
ok: [storage-AE-0]

TASK [ansible-cartridge : Manage replicasets] ******************************************************************************************************************************************
included: .../ansible/roles/ansible-cartridge/tasks/manage_replicaset.yml for storage-AE-0

TASK [ansible-cartridge : Manage storage-AE replicaset] ********************************************************************************************************************************
fatal: [storage-AE-0 -> xxx]: FAILED! => {"changed": false, "msg": "Failed to create \"storage-AE\" replicaset: \"xxx\": Connection timed out"}

NO MORE HOSTS LEFT *********************************************************************************************************************************************************************

PLAY RECAP *****************************************************************************************************************************************************************************
storage-AE-0               : ok=12   changed=0    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0   

I propose to at least add a retry to that stage.
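For reference, a generic Ansible retry pattern looks like this (a sketch with an illustrative probe task, not the role's actual module; the same until/retries/delay keywords could be applied to the failing stage):

- name: Wait until the cluster HTTP endpoint answers
  ansible.builtin.uri:
    url: http://localhost:8181
    status_code: 200
  register: probe
  retries: 5
  delay: 10
  until: probe is succeeded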

Create an example

We need an example for this repo. The example should contain steps to create a simple topology with a simple application (like the KV app, which we already have). A sample list of sections:

  • environment for start
  • description of example topology
  • description of application
  • steps to deploy (with explanations)
  • result with simple test

[4pt] Call `edit_topology` only once

The edit_topology request was designed to bundle all replicaset changes into one call.

For example:
We faced an issue when adding new storages to an existing cluster. When changing replicaset_weight from 0 to 1 on multiple replicasets, ansible-cartridge applied the changes one by one. This leads to N rebalancing processes instead of one. It is also guaranteed to fail, since during rebalancing the instances are in the ConfiguringRoles state, for some reason...

Extend instance readiness check with Cartridge state check

There are 2 main states we need to track:

  • an instance is Unconfigured when it has just been started and has not yet been connected to the cluster
  • an instance is in the RolesConfigured state when all roles have been applied and the instance is ready to accept requests

Where to configure the log file of tarantool

After I ran ansible-playbook playbook.yml, everything worked fine. Then I wanted to check the Tarantool log files (debug, info, error, etc.), but I found no such files in the /var/lib/tarantool/ directory. Where can I configure this?

Support stateful failover

This is the last part of tarantool/cartridge#714

As a reminder, the stateboard systemd unit (/etc/systemd/system/<appname>-stateboard.service) is delivered in the application package only if the application contains stateboard.init.lua in its root directory.

It reads its configuration from the <appname>-stateboard section in /etc/tarantool/conf.d files.
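A sketch of what enabling stateful failover from the inventory might look like, assuming the role exposes a cartridge_failover_params variable with the keys shown here (the exact variable and key names need to be checked against the role documentation):

all:
  vars:
    cartridge_failover_params:      # assumed variable name, verify against the role docs
      mode: stateful
      state_provider: stateboard
      stateboard_params:
        uri: 172.19.0.2:4401
        password: stateboard-secret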

Support multi-file config uploads

There is a section in the README about applying the application configuration, but it only supports inline (single-file) configs.

I want to be able to provide a directory with a multi-file config, so that the cartridge role archives it and uploads it into the Cartridge cluster for me.

Where should hosts.yml be placed?

Where should hosts.yml be placed, and how do I pass this file?
$ tree
.
├── hosts.yml
├── playbook.yml
└── roles
└── ansible-cartridge
├── CHANGELOG.md
├── create-packages.sh
...

$ cat playbook.yml

- name: Deploy my Tarantool Cartridge app
  hosts: all
  become: true
  become_user: root
  vars_files:
    - hosts.yml
  tasks:
    - name: Import Tarantool Cartridge role
      import_role:
        name: ansible-cartridge

$ ansible-playbook playbook.yml

PLAY [Deploy my Tarantool Cartridge app] *****************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************
ok: [host1]

TASK [ansible-cartridge : Fail for other distributions] **************************************************************************************************
skipping: [host1]

TASK [ansible-cartridge : Set cartridge_confdir and package_updated variables] ***************************************************************************
ok: [host1]

TASK [ansible-cartridge : Set remote_user for delegated tasks] *******************************************************************************************
ok: [host1]

TASK [ansible-cartridge : Validate config] ***************************************************************************************************************
fatal: [host1 -> localhost]: FAILED! => {"changed": false, "msg": "At least one of cartridge_app_name and cartridge_package_path must be specified
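For reference, in the Quick start above hosts.yml is an inventory file, not a vars file: it is passed with -i (ansible-playbook -i hosts.yml playbook.yml) rather than through vars_files, and the playbook itself stays minimal. A sketch, using the locally checked-out role name from the tree above:

# playbook.yml (hosts.yml is supplied separately via -i)
---
- name: Deploy my Tarantool Cartridge app
  hosts: all
  become: true
  become_user: root
  roles:
    - ansible-cartridge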

Vault secrets are not decrypted

Snippet from my inventory:

---
all:
  vars:
    cartridge_cluster_cookie: !vault |
      $ANSIBLE_VAULT;1.2;AES256;knazarov
      63393238306532306338376334393136343437626636393465313738643363623133663239386435
      6661303861613830616233343334343162653433316235620a343937313538303235626464386561
      66393730343965353932313732383736376135633639613737646434313261373862636430303364
      3137333463643730610a643163333463333138323161373563373038383839323338343839353266
      3630

If I print this variable like this, it is shown decrypted:

---
- name: Deploy my Tarantool Cartridge app
  hosts: all
  become: true
  become_user: root
  tasks:
  - name: debug password
    debug:
      msg: "{{ cartridge_cluster_cookie }}"

But if I try to use ansible-cartridge, it doesn't decrypt this variable when interpolating it into the configuration file. I guess that's because you do it with Python, and not via Jinja templates.

Speed up Gitlab CI tests

Build a test image on the docker:git base.
Install all the tools required for molecule tests once and use this image in tests.

[4pt] Additional tarantool runtime parameters

We need a more flexible Tarantool configuration. Currently only the memtx_memory parameter can be updated at runtime. I would like to update the rest of the parameters as well (e.g. box.cfg.readahead, box.cfg.net_msg_max, ...).

Play proceeds even when the "Copy RPM" task fails

I have one machine and multiple Tarantool instances in my hosts.yml file. The "Copy RPM" task is triggered only for the first instance. If it fails, the play proceeds for the other instances, but not for the one with the failed task.

storage-1                  : ok=5    changed=0    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0
storage-1-replica          : ok=29   changed=0    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
storage-2                  : ok=8    changed=0    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
storage-2-replica          : ok=8    changed=0    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0

Adding the any_errors_fatal option to the play helped.

Consider making this behaviour the default.

[1pt] Determine hosts uniqueness by host + port, not only by host

Currently, if I set up the cluster in such a way that multiple VMs are accessible through the same hostname but different SSH ports, ansible-cartridge is able to deliver the package only to the first VM, not to all of them.

That happens because it deduplicates hosts only by hostname, without considering the SSH port.

This case can occur when an nginx frontend proxy forwards TCP ports to other machines, or when a Vagrant configuration forwards local SSH ports to VMs.

Split the ansible-cartridge monolith into separate roles

We tried to use the tarantool-cartridge role in our deployment, and it didn't work out. A number of features are missing, as well as the ability to use only individual deployment stages.

Currently the tarantool-cartridge role uses tags to mark some global stages. This allows it to be used partially, embedding it into other processes, but it is fundamentally unacceptable. It is impossible to replace, for example, the artifact delivery stage. What if I want to fetch it over the network rather than locally?

The same applies to the failover/bootstrap/join and other stages. The monolithic role needs to be split into a set of separate roles that can be used independently, without being tied to the orchestrator or the package type.

As an option, the following stages could be extracted:

  • environment setup, facts
  • directory creation
  • artifact download
  • artifact installation
  • instance startup
  • waiting for the required instance state
  • connecting to membership
  • join replicasets
  • expel
  • application config
  • bootstrap
  • failover

Unable to set cartridge_package_path using lookup plugin

This setup is not working:

  - name: Import Tarantool Cartridge role
    import_role:
      name: tarantool.cartridge
    vars:
      cartridge_package_path: "{{ lookup('fileglob', '*.rpm') }}"

The debug print shows this:

TASK [tarantool.cartridge : Debug] *****************************************************************
ok: [frontend] => {
    "cartridge_package_path": []
}

TASK [tarantool.cartridge : Copy RPM] **************************************************************
fatal: [frontend]: FAILED! => {"changed": false, "msg": "src (or content) is required"}

Check that cluster_cookie doesn't contain symbols

I made a mistake by generating a cluster cookie with a password generator, which included special symbols in the password (which, of course, led to the cluster being unable to join together, because such passwords cannot be passed in a URL).

I believe we should check at the playbook level that the password doesn't contain special symbols, and cowardly refuse to deploy in that case.
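A sketch of such a playbook-level check with the standard assert module, assuming the cookie should be limited to URL-safe characters (letters, digits, underscore, dot, and dash):

- name: Refuse to deploy if the cluster cookie contains special symbols
  ansible.builtin.assert:
    that:
      - cartridge_cluster_cookie is match('^[a-zA-Z0-9_.-]+$')
    fail_msg: cartridge_cluster_cookie contains characters that would break the cluster join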

Make the README clear for beginners

We need a "Getting Started" section in the README that contains some information about:

  • what Ansible is
  • how to install Ansible
  • a link to an example (read more in #21)

Deploy with ansible fails with NoneType error

Cartridge CLI v2.2.0
Ansible v2.9.6
Ansible Role tarantool.cartridge, v1.6.0

When running the deploy with ansible-playbook, it fails with the following exception:

The full traceback is:
Traceback (most recent call last):
  File "/home/adcm/.ansible/tmp/ansible-tmp-1606219176.486087-16878776470125/AnsiballZ_cartridge_needs_restart.py", line 102, in <module>
    _ansiballz_main()
  File "/home/adcm/.ansible/tmp/ansible-tmp-1606219176.486087-16878776470125/AnsiballZ_cartridge_needs_restart.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/adcm/.ansible/tmp/ansible-tmp-1606219176.486087-16878776470125/AnsiballZ_cartridge_needs_restart.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cartridge_needs_restart', init_globals=None, run_name='main', alter_sys=True)
  File "/usr/lib64/python2.7/runpy.py", line 176, in run_module
    fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code
    mod_name, mod_fname, mod_loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/ansible_cartridge_needs_restart_payload_5Xopjw/ansible_cartridge_needs_restart_payload.zip/ansible/modules/cartridge_needs_restart.py", line 176, in <module>
  File "/tmp/ansible_cartridge_needs_restart_payload_5Xopjw/ansible_cartridge_needs_restart_payload.zip/ansible/modules/cartridge_needs_restart.py", line 165, in main
  File "/tmp/ansible_cartridge_needs_restart_payload_5Xopjw/ansible_cartridge_needs_restart_payload.zip/ansible/modules/cartridge_needs_restart.py", line 156, in needs_restart
TypeError: 'NoneType' object has no attribute '__getitem__'

As a workaround, the restarted parameter can be set to true.
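A minimal sketch of that workaround in the inventory, assuming restarted is read as a regular inventory variable and applies to every instance when set at the top level:

all:
  vars:
    restarted: true   # assumed variable name from the workaround above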

Print hostname besides instance name

When deploying to one or multiple Ansible hosts, I would like to see in the output on which host Ansible is executing the task.

Right now there is no such info in the Ansible output, only instance names:

TASK [tarantool.cartridge : Set one control instance to rule them all] **********************
ok: [storage-1 -> {{stress_host}}]

TASK [tarantool.cartridge : Cartridge auth] *************************************************
skipping: [storage-1]

TASK [tarantool.cartridge : Application config] *********************************************
skipping: [storage-1]

TASK [tarantool.cartridge : Bootstrap vshard] ***********************************************
changed: [storage-1 -> {{stress_host}}]

TASK [tarantool.cartridge : Manage failover] ************************************************
changed: [storage-1 -> {{stress_host}}]

TASK [Copy http mock server] ****************************************************************
ok: [storage-1]

TASK [Check if mock server is already running] **********************************************
changed: [storage-1]

TASK [Kill server server and remove pid file] ***********************************************
changed: [storage-1]

TASK [Remove pid file] **********************************************************************
changed: [storage-1]

TASK [Launch http mock server] **************************************************************
changed: [storage-1]

Deploy TGZ: divide task into root and non-root

https://github.com/tarantool/ansible-cartridge/tree/deploy-tgz-under-systemd
I have a suggestion about install_targz.yml: divide it into root and non-root tasks.
OS user creation, systemd unit configuration, etc., are one-shot operations.
Tarball distribution rollout is a continuous operation.
It's a common case: one department of a corporation configures the OS (with root privileges) and another department configures CI to roll out the tarball (without root privileges).

Support custom instance socket path

Summary

Add the ability to specify a custom path to the socket for every host.

Motivation

The directory /var/run/tarantool/ cannot always be reached. This makes it impossible to use the role in such cases.

Design Detail

Add an optional string parameter control_sock_path to the instance hostvars and change the function that resolves the control socket.

def get_instance_control_sock(instance_vars, stateboard=False):
    if instance_vars.get('control_sock_path'):
        return instance_vars['control_sock_path']

    return '/var/run/tarantool/{}.control'.format(
        get_instance_fullname(
            instance_vars['cartridge_app_name'],
            instance_vars['inventory_hostname'],
            stateboard))

Filter call will look like this:

join_instance_sock: '{{ hostvars[join_host] | get_instance_control_sock }}'

Drawbacks

When the ansible-cartridge role starts the instance, the default path /var/run/tarantool is passed to it from the systemd unit. That is, specifying a custom path will not create the socket file; it must already exist.
Therefore, this feature is only suitable for managing already created instances.

[2pt] Disable restart instance for hot reload

I would like to be able to deliver a new version of the application without restarting it.
Otherwise, the hot reload functionality does not make sense.
Something like:

    cartridge_app_name: getting-started-app
    cartridge_package_path: ./getting-started-app-2.0.0-0.rpm 
    cartridge_restart_instance: false
    cartridge_enable_tarantool_repo: false
