
Ansible Workshop v2

Dag Wieers and Jeroen Hoekx v0.4, May 2015

Introduction

Slide presentation.

Ad-Hoc Commands

Using an inventory

Ansible only talks to hosts you want it to talk to. Those hosts are defined in an inventory file.

Example: Modified file "hosts"
[ansible@ws01 workshop]$ cat hosts
[vms]
vm-master ansible_ssh_host=192.168.122.X
vm-web ansible_ssh_host=192.168.122.Y
vm-db ansible_ssh_host=192.168.122.Z

You can see the IP address distributed by your hypervisor-specific DHCP on the console of each VM. Update the IP addresses to correspond with the ones the systems booted with.

Testing connectivity

The most straightforward use of Ansible is to run commands on remote hosts. The ping module checks connectivity and a correct Python setup.

Let’s assume our VM is remote and ping it.

Example: Ping a system using Ansible ad-hoc commands
[root@vm-master workshop]# ansible vm-master -m ping -i hosts
vm-master | success >> {
    "changed": false,
    "ping": "pong"
}

The syntax is ansible <selector> <options>.

We only want to run it on the local host, so we choose vm-master as selector. The module is selected with the -m switch. The -i hosts switch tells Ansible which inventory file to use.

We can run this command on multiple hosts by using the group in the inventory:

Example: Ping your VMs using Ansible ad-hoc commands
[root@vm-master workshop]# ansible vms -m ping -i hosts
vm-master | success >> {
    "changed": false,
    "ping": "pong"
}

vm-web | success >> {
    "changed": false,
    "ping": "pong"
}

vm-db | success >> {
    "changed": false,
    "ping": "pong"
}

Using modules

The command module

The ping module is useful for testing things, but let’s run a generic command on all systems:

Example: Run a command on all your VMs
[root@vm-master workshop]# ansible vms -m command -a 'date' -i hosts
vm-master | success | rc=0 >>
Mon Feb  3 16:31:32 UTC 2014

vm-web | success | rc=0 >>
Mon Feb  3 16:31:33 UTC 2014

vm-db | success | rc=0 >>
Mon Feb  3 16:31:33 UTC 2014

That uses the command module and gives it arguments with -a.
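Note that the command module does not pass the command through a shell, so pipes and redirects are not interpreted. If you need shell features you can use the shell module instead; a minimal sketch (output omitted):

Example: Run a shell pipeline on all your VMs
[root@vm-master workshop]# ansible vms -m shell -a 'cat /etc/redhat-release | tr a-z A-Z' -i hosts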

Installing packages

The goal of this workshop is to deploy a web application. A useful thing to have in that situation is a web server. Let’s use Ansible to make sure Apache is installed and started. This introduces the yum and service modules.

Example: All your VMs are belong to us
[root@vm-master workshop]# ansible vm-web -m yum -a 'pkg=httpd state=installed' -i hosts
vm-web | success >> {
    "changed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "httpd-2.2.15-29.el6.centos.i686 providing httpd is already installed"
    ]
}

Great. Apache was already installed on the system. Notice the arguments to the yum module. The state parameter is used by virtually every Ansible module.
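Other values for state are possible. As a sketch, state=latest would also upgrade the package to the newest available version (only run this if you actually want to upgrade):

Example: Upgrade Apache to the latest available version
[root@vm-master workshop]# ansible vm-web -m yum -a 'pkg=httpd state=latest' -i hosts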

Starting System Services

Let’s start Apache.

Example: Enable services using the service module
[root@vm-master workshop]# ansible vm-web -m service -a 'name=httpd state=started enabled=yes' -i hosts
vm-web | success >> {
    "changed": false,
    "enabled": true,
    "name": "httpd",
    "state": "started"
}

The service module is a distribution-agnostic module (like the user module). Package managers, on the other hand, differ too much in their functionality to have a single baseline module.

Another thing to notice is the yes argument to the enabled parameter. Boolean parameters in Ansible are yes and no.

Module Parameters

There are more parameters to a module than you would expect. An easy way to learn about a module is to use the ansible-doc command:

Example: Find module documentation on your system
[root@vm-master workshop]# ansible-doc service
> SERVICE

  Controls services on remote hosts. Supported init systems include
  BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart.

Options (= is mandatory):

- arguments
        Additional arguments provided on the command line

- enabled
        Whether the service should start on boot. *At least one of
        state and enabled are required.* (Choices: yes, no)

= name
        Name of the service.
...

Inventory Groups

The application we want to deploy is multi-tiered. There is the web server part and the database part. It makes sense to be able to group hosts in functional groups.

The built-in inventory can do that:

Example: Modified file "hosts"
[vms]
vm-master ansible_ssh_host=192.168.122.X
vm-web ansible_ssh_host=192.168.122.Y
vm-db ansible_ssh_host=192.168.122.Z

[web-servers]
vm-web

[database-servers]
vm-db
Note
The simple inventory format is quite limited, especially when managing large inventories. If you plan on using Ansible for a large environment, you probably want to look into inventory scripts. Inventory scripts can access external sources (database, DNS, network inventories, Satellite, CMDB) and merge information from different sources. The sky is the limit…​
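The INI inventory format also supports nesting groups and setting variables per group. A minimal sketch building on the groups above (the all-servers group name and the variable are assumptions for illustration):

Example: Group nesting and group variables in file "hosts"
[all-servers:children]
web-servers
database-servers

[web-servers:vars]
webapp_dir=/var/www/html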

Playbooks

Running commands on potentially hundreds of systems is nice, but it does not scale as your infrastructure grows. We need a way to describe the system state. Ansible playbooks provide a declarative way to do just that.

Playbooks are written in YAML. One playbook consists of one or more plays. Each play has a number of tasks. Playbooks are run sequentially. This allows orchestration of actions on multiple system groups.

Note
YAML is a simple language that describes data, much like XML but aimed at humans. YAML 1.2 is a superset of JSON, so you can use JSON-style structures within a YAML document. YAML takes some time to get used to and, much like Python, indentation is important for structuring data. A very good summary of what YAML is about is available from the examples at Wikipedia: http://en.wikipedia.org/wiki/YAML
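As a small illustration of the syntax, the following YAML fragment describes a list of two dictionaries (the keys are made up for the example):

Example: A YAML list of dictionaries
- name: first item
  port: 80
- name: second item
  port: 443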

Creating your first playbook

Let’s show an example that ensures Apache is installed and running.

Example: Create new file "apache.yml"
- name: Configure Apache
  hosts: web-servers
  user: root

  tasks:

  - name: Install Apache
    yum: pkg=httpd state=installed

  - name: Start and Enable Apache
    service: name=httpd state=started enabled=yes

This playbook has one play. In that play are two tasks.

A play always indicates on which hosts it will run and as what user. Many more parameters are possible, but this is the minimum. Well, you could skip the user, but then it will run as your local user.

We’ve already updated the hosts file to include the web-servers group. Our vm-web system is in that group. Now let’s run this playbook to see what happens:

Example: apache.yml playbook output
[root@vm-master workshop]# ansible-playbook apache.yml -i hosts

PLAY [Configure Apache] *******************************************************

GATHERING FACTS ***************************************************************
ok: [vm-web]

TASK: [Install Apache] ********************************************************
ok: [vm-web]

TASK: [Start and Enable Apache] ***********************************************
ok: [vm-web]

PLAY RECAP ********************************************************************
vm-web                     : ok=3    changed=0    unreachable=0    failed=0

You will notice this playbook runs both tasks and has an additional step to gather facts. These facts are available as variables in the remainder of the play.

Gathering facts

Let’s see which facts are available. The setup module is used to gather them.

Example: facts output using the setup module
[root@ws01 workshop]# ansible vm-web -m setup -i hosts
vm-web | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.21"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::5054:ff:fe23:1614"
        ],
        "ansible_architecture": "i386",
        "ansible_bios_date": "01/01/2011",
        "ansible_bios_version": "Bochs",
--snip--
        "ansible_distribution": "CentOS",
        "ansible_distribution_release": "Final",
        "ansible_distribution_version": "6.5",
        "ansible_domain": "",
--snip--
        "ansible_form_factor": "Other",
        "ansible_fqdn": "ws01",
        "ansible_hostname": "ws01",
        "ansible_interfaces": [
            "lo",
            "eth0"
        ],
        "ansible_kernel": "2.6.32-431.el6.i686",
--snip--
        "ansible_machine": "i686",
        "ansible_memfree_mb": 375,
        "ansible_memtotal_mb": 498,
--snip--
        "ansible_os_family": "RedHat",
        "ansible_pkg_mgr": "yum",
        "ansible_processor": [
            "QEMU Virtual CPU version 1.7.0",
            "QEMU Virtual CPU version 1.7.0"
        ],
        "ansible_processor_cores": 1,
        "ansible_processor_count": 2,
        "ansible_processor_threads_per_core": 1,
        "ansible_processor_vcpus": 2,
        "ansible_product_name": "Bochs",
        "ansible_product_serial": "NA",
        "ansible_product_uuid": "D17F90A9-2961-401A-ABF2-67A4986DB05A",
        "ansible_product_version": "NA",
        "ansible_python_version": "2.6.6",
        "ansible_selinux": false,
--snip--
        "ansible_swapfree_mb": 511,
        "ansible_swaptotal_mb": 511,
        "ansible_system": "Linux",
        "ansible_system_vendor": "Bochs",
        "ansible_user_id": "root",
        "ansible_userspace_architecture": "i386",
        "ansible_userspace_bits": "32",
        "ansible_virtualization_role": "guest",
        "ansible_virtualization_type": "kvm"
    },
    "changed": false
}

If the fact you need is not in there, you can also define your own facts in various ways.
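One such way is the set_fact module, which defines a fact for the remainder of the play; a minimal sketch (the variable name is made up):

Example: Define your own fact with set_fact
  - name: Define a custom fact for later tasks
    set_fact: webapp_env=production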

Under the hood

Running the playbook with additional verbosity is possible with the -v flag. Adding one -v shows the output you would get when running a task with ansible.

Adding more -v's, up to three, increases the verbosity. It shows you exactly which SSH commands are run in the background.

Example: apache.yml playbook output
[root@vm-master workshop]# ansible-playbook apache.yml -i hosts -vvv

PLAY [Configure Apache] *******************************************************

GATHERING FACTS ***************************************************************
<192.168.122.44> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.122.44
<192.168.122.44> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1391536177.48-154519686540036 && echo $HOME/.ansible/tmp/ansible-1391536177.48-154519686540036'
<192.168.122.44> REMOTE_MODULE setup
<192.168.122.44> PUT /tmp/tmpbeGxa0 TO /root/.ansible/tmp/ansible-1391536177.48-154519686540036/setup
<192.168.122.44> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1391536177.48-154519686540036/setup; rm -rf /root/.ansible/tmp/ansible-1391536177.48-154519686540036/ >/dev/null 2>&1'
ok: [vm-web]

TASK: [Install Apache] ********************************************************
<192.168.122.44> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.122.44
<192.168.122.44> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1391536177.87-158335554738395 && echo $HOME/.ansible/tmp/ansible-1391536177.87-158335554738395'
<192.168.122.44> REMOTE_MODULE yum pkg=httpd state=installed
<192.168.122.44> PUT /tmp/tmpWFxgYv TO /root/.ansible/tmp/ansible-1391536177.87-158335554738395/yum
<192.168.122.44> EXEC /bin/sh -c '/usr/bin/python -tt /root/.ansible/tmp/ansible-1391536177.87-158335554738395/yum; rm -rf /root/.ansible/tmp/ansible-1391536177.87-158335554738395/ >/dev/null 2>&1'
ok: [vm-web] => {"changed": false, "msg": "", "rc": 0, "results": ["httpd-2.2.15-29.el6.centos.i686 providing httpd is already installed"]}

TASK: [Start and Enable Apache] ***********************************************
<192.168.122.44> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.122.44
<192.168.122.44> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1391536178.35-83838710451863 && echo $HOME/.ansible/tmp/ansible-1391536178.35-83838710451863'
<192.168.122.44> REMOTE_MODULE service name=httpd state=started enabled=yes
<192.168.122.44> PUT /tmp/tmpWOsgaK TO /root/.ansible/tmp/ansible-1391536178.35-83838710451863/service
<192.168.122.44> PUT /tmp/tmpmofo2w TO /root/.ansible/tmp/ansible-1391536178.35-83838710451863/arguments
<192.168.122.44> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-1391536178.35-83838710451863/service /root/.ansible/tmp/ansible-1391536178.35-83838710451863/arguments; rm -rf /root/.ansible/tmp/ansible-1391536178.35-83838710451863/ >/dev/null 2>&1'
ok: [vm-web] => {"changed": false, "enabled": true, "name": "httpd", "state": "started"}

PLAY RECAP ********************************************************************
vm-web                     : ok=3    changed=0    unreachable=0    failed=0

Configuring a default inventory

Up to now we’ve always used the -i switch to the ansible and ansible-playbook commands. We can configure a default.

Add a file ansible.cfg in the /root/workshop/ directory with this content:

Example: Content of "ansible.cfg"
[root@vm-master workshop]# cat ansible.cfg
[defaults]
hostfile       = hosts

A full list of options can be found in /etc/ansible/ansible.cfg.

Managing Configuration Files

When managing configuration files you often want to use certain variables in your file. That way one can reuse the same file template for multiple systems.

Let’s configure a virtual host on Apache for the web application we want to deploy.

Example: Modifications to file "apache.yml"
- name: Configure Apache
  hosts: web-servers
  user: root

  vars:
    webapp_dir: /var/www/html

  tasks:

  - name: Install Apache
    yum: pkg=httpd state=installed

  - name: Add webapp vhost
    template: src=templates/webapp.conf dest=/etc/httpd/conf.d/

  - name: Start and Enable Apache
    service: name=httpd state=started enabled=yes

This playbook shows the template module. This module allows templating a configuration file using the jinja2 templating language (also used in Flask).

The template is simple. It looks like this:

Example: Create new file "templates/webapp.conf"
<VirtualHost *:80>
    ServerName {{ inventory_hostname }}
    DocumentRoot {{ webapp_dir }}
</VirtualHost>

We use our newly defined webapp_dir variable and a magic inventory_hostname variable. The webapp_dir variable is defined in the vars section in the playbook. We’ll look into other ways to define them later.
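Variables can also be overridden on the command line with the -e (--extra-vars) option, which takes precedence over the vars section; a sketch with a made-up path:

Example: Override a variable on the command line
[root@vm-master workshop]# ansible-playbook apache.yml -e webapp_dir=/var/www/webapp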

When you run it, the output is:

Example: apache.yml playbook output
[root@vm-master workshop]# ansible-playbook apache.yml

PLAY [Configure Apache] *******************************************************

GATHERING FACTS ***************************************************************
ok: [vm-web]

TASK: [Install Apache] ********************************************************
ok: [vm-web]

TASK: [Add webapp vhost] ******************************************************
changed: [vm-web]

TASK: [Start and Enable Apache] ***********************************************
ok: [vm-web]

PLAY RECAP ********************************************************************
vm-web                     : ok=4    changed=1    unreachable=0    failed=0

Of course, we would have wanted a restart of Apache to apply the new config. This can be done with handlers and notify. A handler is a named task that runs when it is notified, which happens when a task listing it under notify returns changed.

Let’s update the template and add a comment using the magic ansible_managed variable to it:

Modified file "templates/webapp.conf"
# {{ ansible_managed }}
<VirtualHost *:80>
    ServerName {{ inventory_hostname }}
    DocumentRoot {{ webapp_dir }}
</VirtualHost>
Note
An easy way to test for changes is to use the --check option of ansible-playbook. This pretends to run the playbook without making any changes to the target systems. Additionally, you can use the --diff option to see which changes would be made on the target systems, so you can visually verify that what happens is what you expect (using diff output).
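For example, to preview the template change before applying it:

Example: Preview changes without applying them
[root@vm-master workshop]# ansible-playbook apache.yml --check --diff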

Add the handler to the playbook and use notify to register the handler.

Modified file "apache.yml"
- name: Configure Apache
  hosts: web-servers
  user: root

  vars:
    webapp_dir: /var/www/html

  handlers:

  - name: restart-apache
    service: name=httpd state=restarted

  tasks:

  - name: Install Apache
    yum: pkg=httpd state=installed

  - name: Add webapp vhost
    template: src=templates/webapp.conf dest=/etc/httpd/conf.d/
    notify:
    - restart-apache

  - name: Start and Enable Apache
    service: name=httpd state=started enabled=yes

When we run it:

Example: apache.yml playbook output
[root@vm-master workshop]# ansible-playbook apache.yml

PLAY [Configure Apache] *******************************************************

GATHERING FACTS ***************************************************************
ok: [vm-web]

TASK: [Install Apache] ********************************************************
ok: [vm-web]

TASK: [Add webapp vhost] ******************************************************
changed: [vm-web]

TASK: [Start and Enable Apache] ***********************************************
ok: [vm-web]

NOTIFIED: [restart-apache] ****************************************************
changed: [vm-web]

PLAY RECAP ********************************************************************
vm-web                     : ok=5    changed=2    unreachable=0    failed=0

You can see in the statistics that the handler was activated.

The templated file includes the comment:

Example: Templated file "/etc/httpd/conf.d/webapp.conf"
# Ansible managed: /root/workshop/templates/webapp.conf modified on 2014-02-04 17:59:32 by root on vm-master
<VirtualHost *:80>
    ServerName vm-web
    DocumentRoot /var/www/html
</VirtualHost>

Using with_items

Our web application uses MySQL. For managing MySQL databases with Ansible, the MySQL-Python module is needed. We could use two tasks to install MySQL and the python module, but there is some syntactic sugar that helps.

Example: Create new file "mysql.yml"
- name: Configure MySQL
  hosts: database-servers
  user: root

  tasks:

  - name: Install MySQL
    yum: pkg={{ item }} state=installed
    with_items:
    - mysql-server
    - MySQL-python

  - name: Start MySQL
    service: name=mysqld state=started enabled=yes

The item variable contains the current item.

Example: mysql.yml playbook output
[root@vm-master workshop]# ansible-playbook mysql.yml

PLAY [Configure MySQL] ********************************************************

GATHERING FACTS ***************************************************************
ok: [vm-db]

TASK: [Install MySQL] *********************************************************
ok: [vm-db] => (item=mysql-server,MySQL-python)

TASK: [Start MySQL] ***********************************************************
changed: [vm-db]

PLAY RECAP ********************************************************************
vm-db                      : ok=3    changed=1    unreachable=0    failed=0

We want to be able to talk to MySQL in the web application we’ll soon write in PHP. Use the with_items syntax to change the Apache installation playbook to install PHP and the MySQL extension.

Example: Additions to file "apache.yml"
  - name: Install Apache and PHP
    yum: pkg={{ item }} state=installed
    with_items:
    - httpd
    - php-mysql

Using results of previous tasks

With the web server and the database up and running, we can start deploying the application.

Let’s first consider the case where the system is fresh. In that case we want to create a database and populate it with initial data. In subsequent reruns, we don’t want to populate the database again. We want it to keep the production data.

Example: Create new file "deploy.yml"
- name: Prepare deployment
  hosts: database-servers
  user: root

  tasks:

  - name: Create a temporary directory for database migrations
    file: dest=/tmp/db state=directory

  - name: Create the database
    mysql_db: name=ws state=present
    register: db_create

  - name: Copy initial schema
    copy: src=app/db/initial.sql dest=/tmp/db/
    when: db_create|changed

  - name: Initialize database
    mysql_db: name=ws state=import target=/tmp/db/initial.sql
    when: db_create|changed

  - name: Create a ws user
    mysql_user: name=ws host={{ hostvars[item]["ansible_ssh_host"] }} password=''
                priv=ws.*:SELECT state=present
    with_items:
    - '{{ groups["web-servers"] }}'

This introduces a few new modules.

We are mostly interested in the first mysql_db task. That task uses the register keyword. This stores the results of that task in the variable db_create.
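To inspect exactly what ends up in a registered variable, you could temporarily add a debug task (illustrative only, not part of the deployment):

Example: Inspect a registered variable
  - name: Show the contents of db_create
    debug: var=db_create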

Conditionals

The next two tasks use the when keyword. A task will only run when the condition in when is true. In this case we use the built-in jinja2 filter changed to only run that task when the task that registered db_create did change the system.
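Conditions are not limited to registered results; facts work just as well. A sketch of a task that only runs on RedHat-family systems (the task itself is made up):

Example: A condition based on a fact
  - name: Only run on RedHat-family hosts
    command: echo "This is a RedHat-family system"
    when: ansible_os_family == "RedHat"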

Magic Variables

The last task introduces a few magic variables. The hostvars dictionary contains the inventory variables for every host. Notice how we can use the item variable as the key in the hostvars dictionary.

The argument to with_items is also templated. The groups variable is a dictionary that contains a list of hosts for every group.
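A debug task makes these magic variables more tangible; a sketch that prints the address of the first web server (illustrative only):

Example: Using hostvars and groups together
  - name: Show the address of the first web server
    debug: msg="{{ hostvars[groups['web-servers'][0]]['ansible_ssh_host'] }}"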

Globbing files

Now we can deploy the application. We will do it in a quick and dirty way, by just copying everything in a particular folder…​

Example: Additions to file "deploy.yml"
- name: Deploy the demo application
  hosts: web-servers
  user: root

  vars:
    webapp_dir: /var/www/html

  tasks:

  - name: Copy all php scripts to the web server
    copy: src={{ item }}
          dest={{ webapp_dir }}/
          owner=apache group=apache
    with_fileglob: app/web/*

This uses the with_fileglob lookup plugin, which provides exactly the functionality you’d expect it to have.
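Lookup plugins can also be used directly inside a template expression. This (made-up) task prints a file read from the machine running Ansible:

Example: The file lookup plugin
  - name: Show a file from the control machine
    debug: msg="{{ lookup('file', '/etc/redhat-release') }}"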

Note
Explain the various lookup_plugins that already exist, and the extensibility of the with_ keyword.

Including variables from files

That last play feels wrong. We’ve defined the webapp_dir variable here. We also defined the variable in the Apache playbook. When the webapp location changes we have to remember to change it in two places.

There is a better solution: including variables from files.

Example: Modifications in file "deploy.yml"
- name: Deploy the demo application
  hosts: web-servers
  user: root

  vars_files:
  - config.yml

  tasks:

  - name: Copy all php scripts to the web server
    template: src={{ item }}
              dest={{ webapp_dir }}/
              owner=apache group=apache
    with_fileglob: app/web/*

config.yml contains:

Example: Create new file "config.yml"
webapp_dir: /var/www/html

Working with database migrations

Application developers have the nasty habit of not being able to come up with a good data model. They require database schema changes with every deployment. This complicates the whole deployment process and makes rollbacks incredibly hard. We will do what every professional PaaS does (if not more) and allow only forward migrations.

This makes our complete deployment playbook:

Example: Final file "deploy.yml"
- name: Prepare deployment
  hosts: database-servers
  user: root

  vars:
    db_dir: /tmp/db

  tasks:

  - name: Create a temporary directory for database migrations
    file: dest={{ db_dir }} state=directory

  - name: Create the database
    mysql_db: name=ws state=present
    register: db_create

  - name: Create a ws user
    mysql_user: name=ws host={{ hostvars[item]["ansible_ssh_host"] }} password=''
                       priv=ws.*:SELECT state=present
    with_items:
    - '{{ groups["web-servers"] }}'

  - name: Copy initial schema
    copy: src=app/db/initial.sql dest={{ db_dir }}/
    when: db_create|changed

  - name: Initialize database
    mysql_db: name=ws state=import target={{ db_dir }}/initial.sql
    when: db_create|changed

  - name: Upload database migrations
    copy: src=app/db/migrations.sql dest={{ db_dir }}/

  - name: Run database migrations
    mysql_db: name=ws state=import target={{ db_dir }}/migrations.sql

- name: Deploy the demo application
  hosts: web-servers
  user: root

  vars_files:
  - config.yml

  tasks:

  - name: Copy all php scripts to the web server
    template: src={{ item }} dest={{ webapp_dir }}/ owner=apache group=apache
    with_fileglob: app/web/*

This playbook has two plays that talk to different host groups. They are run sequentially.

Example: deploy.yml playbook output
[root@vm-master workshop]# ansible-playbook deploy.yml

PLAY [Prepare deployment] *****************************************************

GATHERING FACTS ***************************************************************
ok: [vm-db]

TASK: [Create a temporary directory for database migrations] ******************
changed: [vm-db]

TASK: [Create the database] ***************************************************
changed: [vm-db]

TASK: [Create a ws user] ******************************************************
changed: [vm-db] => (item=vm-web)

TASK: [Copy initial schema] ***************************************************
changed: [vm-db]

TASK: [Initialize database] ***************************************************
changed: [vm-db]

TASK: [Upload database migrations] ********************************************
changed: [vm-db]

TASK: [Run database migrations] ***********************************************
changed: [vm-db]

PLAY [Deploy the demo application] ********************************************

GATHERING FACTS ***************************************************************
ok: [vm-web]

TASK: [Copy all php scripts to the web server] ********************************
changed: [vm-web] => (item=/root/workshop/app/web/index.php)

PLAY RECAP ********************************************************************
vm-db                      : ok=8    changed=7    unreachable=0    failed=0
vm-web                     : ok=2    changed=1    unreachable=0    failed=0

Ignoring errors

We can now deploy applications from our management system, which could be your CI system. A typical PaaS allows application deployment through git. That is just a few additional lines of code.

We will add a new directory under /root with a bare git repository (i.e. one without a working tree). A git post-receive hook deploys the application whenever you push to the repository.

Example: New file "auto-deploy.yml"
- name: Set up automatic application deployment repository
  hosts: vm-master
  user: root

  tasks:

  - name: Clone the workshop repository
    git: repo=/root/workshop
         dest=/root/deploy
         bare=yes
    ignore_errors: yes

  - name: Add post-receive hook to deploy
    copy: src=templates/post-receive
          dest=/root/deploy/hooks/
          mode=0755
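The templates/post-receive file is shipped with the workshop material. A minimal sketch of what such a hook could contain, assuming it checks out the pushed code into a scratch directory and runs the deployment playbook (this is an assumption, not the actual file):

Example: Sketch of a post-receive hook
#!/bin/sh
# The hook runs inside the bare repository with GIT_DIR set;
# unset it so the clone below behaves normally.
unset GIT_DIR
echo "Deploying Ansible Workshop Web Application"
rm -rf /tmp/ws-deploy
git clone /root/deploy /tmp/ws-deploy
cd /tmp/ws-deploy && ansible-playbook deploy.yml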

Run the new playbook:

Example: auto-deploy.yml playbook output
[root@vm-master workshop]# ansible-playbook auto-deploy.yml

PLAY [Set up automatic application deployment repository] *********************

GATHERING FACTS ***************************************************************
ok: [vm-master]

TASK: [Clone the workshop repository] *****************************************
changed: [vm-master]

TASK: [Add post-receive hook to deploy] ***************************************
changed: [vm-master]

PLAY RECAP ********************************************************************
vm-master                  : ok=3    changed=2    unreachable=0    failed=0

Try to run it again.

Example: auto-deploy.yml playbook output
TASK: [Clone the workshop repository] *****************************************
failed: [ws01] => {"cmd": "/usr/bin/git ls-remote origin -h refs/heads/master", "failed": true, "rc": 128}
stderr: fatal: 'origin' does not appear to be a git repository
fatal: The remote end hung up unexpectedly

msg: fatal: 'origin' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
...ignoring

The Ansible git module makes incorrect assumptions about bare repositories here. It needs to be fixed, but it serves as a good example of the ignore_errors directive.

PaaS deployment

Add the new repository as a remote of the current repository.

Example: Git command to add deploy remote
$ git remote add deploy /root/deploy

Now commit all files (especially the deployment playbook) and push:

Example: Git commands to push content
[root@vm-master workshop]# git status
# On branch master
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#       modified:   hosts
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       ansible.cfg
#       apache.yml
#       auto-deploy.yml
#       config.yml
#       deploy.yml
#       mysql.yml
#       templates/webapp.conf
no changes added to commit (use "git add" and/or "git commit -a")
[root@vm-master workshop]# git add hosts ansible.cfg apache.yml auto-deploy.yml config.yml deploy.yml mysql.yml templates/webapp.conf
[root@vm-master workshop]# git commit -m "Prepare for deployment"
[master 94350cd] Prepare for deployment
 Committer: root <root@vm-master.(none)>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly:

    git config --global user.name "Your Name"
    git config --global user.email you@example.com

If the identity used for this commit is wrong, you can fix it with:

    git commit --amend --author='Your Name <you@example.com>'

 8 files changed, 119 insertions(+), 3 deletions(-)
 create mode 100644 ansible.cfg
 create mode 100644 apache.yml
 create mode 100644 auto-deploy.yml
 create mode 100644 config.yml
 create mode 100644 deploy.yml
 create mode 100644 mysql.yml
 create mode 100644 templates/webapp.conf
[root@vm-master workshop]# git push deploy master
Counting objects: 14, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (11/11), 2.04 KiB, done.
Total 11 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (11/11), done.
remote: Deploying Ansible Workshop Web Application
remote: Initialized empty Git repository in /tmp/ws-deploy/.git/
remote:
remote: PLAY [Prepare deployment] *****************************************************
remote:
remote: GATHERING FACTS ***************************************************************
remote: ok: [vm-db]
remote:
remote: TASK: [Create a temporary directory for database migrations] ******************
remote: ok: [vm-db]
remote:
remote: TASK: [Create the database] ***************************************************
remote: ok: [vm-db]
remote:
remote: TASK: [Create a ws user] ******************************************************
remote: ok: [vm-db] => (item=vm-web)
remote:
remote: TASK: [Copy initial schema] ***************************************************
remote: skipping: [vm-db]
remote:
remote: TASK: [Initialize database] ***************************************************
remote: skipping: [vm-db]
remote:
remote: TASK: [Upload database migrations] ********************************************
remote: ok: [vm-db]
remote:
remote: TASK: [Run database migrations] ***********************************************
remote: changed: [vm-db]
remote:
remote: PLAY [Deploy the demo application] ********************************************
remote:
remote: GATHERING FACTS ***************************************************************
remote: ok: [vm-web]
remote:
remote: TASK: [Copy all php scripts to the web server] ********************************
remote: ok: [vm-web] => (item=/tmp/ws-deploy/app/web/index.php)
remote:
remote: PLAY RECAP ********************************************************************
remote: vm-db                      : ok=6    changed=1    unreachable=0    failed=0
remote: vm-web                     : ok=2    changed=0    unreachable=0    failed=0
remote:
To /root/deploy
   fde9e1c..94350cd  master -> master