Configure Ansible to Manage a Windows Host

To manage a Windows host using Ansible, we must first configure WinRM (Windows Remote Management) on the target.

Copy this script and execute it on the target host.

On your Ansible host, add the target server to your inventory (hosts) file.
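A minimal inventory entry for a WinRM-managed host might look like the following (the group name, IP address, credentials, and port are placeholders for your environment; 5986 is the default WinRM HTTPS port):

```
[windows]
192.168.0.60

[windows:vars]
ansible_user=Administrator
ansible_password=YourPasswordHere
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```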

Once these two are configured, we can test the connectivity.
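Connectivity to a Windows host can be verified with the win_ping module (assuming the Windows hosts are in an inventory group named windows):

```
ansible windows -m win_ping
```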

In the next post, let’s look at how to run PowerShell scripts using Ansible.

Ansible Custom Facts

When an Ansible playbook is executed, the first task that runs is the setup task. The setup task gathers information about the remote system, such as OS details, IP addresses, memory, etc. This information is called facts.

We can create custom facts which are then made available through Ansible. On any Ansible-managed host, we can place one or more fact files under /etc/ansible/facts.d. These are files or scripts that return JSON-formatted data.

The following steps show how to create a custom facts file and deploy it to Ansible-managed hosts via an Ansible playbook.

Let’s create a sample fact file that returns the host serial number using dmidecode. This should show the same value as ansible_product_serial.

On our Ansible server, create a file /etc/ansible/facts.d/getSerial.fact with the following content:

#!/bin/bash
# Emit the system serial number as JSON so Ansible can pick it up as a local fact.
SERIAL_NUM=$(dmidecode | grep "Serial Number" | head -n1 | sed -e 's/\tSerial Number: //g')
cat <<EOF
{
  "serial_number": "$SERIAL_NUM"
}
EOF
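Before deploying, the JSON emission can be sanity-checked locally. A quick sketch that substitutes a dummy serial (dmidecode itself requires root) just to see the JSON shape the script would produce:

```shell
# Use a dummy value in place of the dmidecode output (assumption: we only
# want to check the JSON shape here, not the real serial number).
SERIAL_NUM="TEST-1234"
cat <<EOF
{
  "serial_number": "$SERIAL_NUM"
}
EOF
```

Piping the output through a JSON parser such as `python3 -m json.tool` is an easy way to confirm it is valid JSON.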

Let’s create a new playbook which will deploy the facts file and make it executable.

publish_custom_facts.yml

- hosts: web
  tasks:
    - name: "Create custom fact directory"
      file:
        path: "/etc/ansible/facts.d"
        state: "directory"

    - name: "Insert custom fact file"
      copy:
        src: /etc/ansible/facts.d/getSerial.fact
        dest: /etc/ansible/facts.d/getSerial.fact
        mode: 0755

    - name: reload ansible_local
      setup: filter=ansible_local

Let’s run the playbook

[root@ansible GitTraining]# ansible-playbook publish_custom_facts.yml

PLAY [web] *********************************************************************

TASK [setup] *******************************************************************
ok: [192.168.0.53]

TASK [Create custom fact directory] ********************************************
changed: [192.168.0.53]

TASK [Insert custom fact file] *************************************************
changed: [192.168.0.53]

PLAY RECAP *********************************************************************
192.168.0.53 : ok=3 changed=2 unreachable=0 failed=0

[root@ansible GitTraining]#

On our target system, we can check that the new facts file was deployed.

[root@website ~]# ls -ltrh /etc/ansible/facts.d/getSerial.fact
-rwxr-xr-x. 1 root root 158 Mar 28 15:51 /etc/ansible/facts.d/getSerial.fact
[root@website ~]#

The fact can be found under the ansible_local namespace

[root@ansible GitTraining]# ansible -m setup web


192.168.0.53 | SUCCESS => {
 "ansible_facts": {
 "ansible_all_ipv4_addresses": [
 "10.10.1.120"
 ], 
 "ansible_all_ipv6_addresses": [
 "fe80::f816:3eff:fedf:fd6e"
 ], 
 "ansible_architecture": "x86_64", 
 "ansible_bios_date": "04/01/2014", 
 "ansible_bios_version": "seabios-1.7.5-11.el7", 
 "ansible_cmdline": {
 "BOOT_IMAGE": "/boot/vmlinuz-4.0.4-301.fc22.x86_64", 
 "LANG": "en_US.UTF-8", 
 "console": "tty1", 
 "initrd": "/boot/initramfs-4.0.4-301.fc22.x86_64.img", 
 "no_timer_check": true, 
 "quiet": true, 
 "rhgb": true, 
 "ro": true, 
 "root": "UUID=01bc7316-b1f4-45c9-a23a-00c5a2336ef2"
 }, 
 "ansible_date_time": {
 "date": "2017-03-28", 
 "day": "28", 
 "epoch": "1490712946", 
 "hour": "14", 
 "iso8601": "2017-03-28T14:55:46Z", 
 "iso8601_basic": "20170328T145546592104", 
 "iso8601_basic_short": "20170328T145546", 
 "iso8601_micro": "2017-03-28T14:55:46.592468Z", 
 "minute": "55", 
 "month": "03", 
 "second": "46", 
 "time": "14:55:46", 
 "tz": "UTC", 
 "tz_offset": "+0000", 
 "weekday": "Tuesday", 
 "weekday_number": "2", 
 "weeknumber": "13", 
 "year": "2017"
 }, 
 "ansible_default_ipv4": {
 "address": "10.10.1.120", 
 "alias": "eth0", 
 "broadcast": "10.10.1.255", 
 "gateway": "10.10.1.1", 
 "interface": "eth0", 
 "macaddress": "fa:16:3e:df:fd:6e", 
 "mtu": 1400, 
 "netmask": "255.255.255.0", 
 "network": "10.10.1.0", 
 "type": "ether"
 }, 
 "ansible_default_ipv6": {}, 
 "ansible_devices": {
 "vda": {
 "holders": [], 
 "host": "", 
 "model": null, 
 "partitions": {
 "vda1": {
 "sectors": "10483712", 
 "sectorsize": 512, 
 "size": "5.00 GB", 
 "start": "2048"
 }
 }, 
 "removable": "0", 
 "rotational": "1", 
 "sas_address": null, 
 "sas_device_handle": null, 
 "scheduler_mode": "", 
 "sectors": "10485760", 
 "sectorsize": "512", 
 "size": "5.00 GB", 
 "support_discard": "0", 
 "vendor": "0x1af4"
 }
 }, 
 "ansible_distribution": "Fedora", 
 "ansible_distribution_major_version": "22", 
 "ansible_distribution_release": "Twenty Two", 
 "ansible_distribution_version": "22", 
 "ansible_dns": {
 "nameservers": [
 "10.10.1.100", 
 "192.168.0.1", 
 "8.8.8.8"
 ], 
 "search": [
 "openstacklocal"
 ]
 }, 
 "ansible_domain": "", 
 "ansible_env": {
 "HOME": "/root", 
 "LANG": "en_US.UTF-8", 
 "LC_ALL": "en_US.UTF-8", 
 "LC_MESSAGES": "en_US.UTF-8", 
 "LESSOPEN": "||/usr/bin/lesspipe.sh %s", 
 "LOGNAME": "root", 
 "MAIL": "/var/mail/root", 
 "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin", 
 "PWD": "/root", 
 "SELINUX_LEVEL_REQUESTED": "", 
 "SELINUX_ROLE_REQUESTED": "", 
 "SELINUX_USE_CURRENT_RANGE": "", 
 "SHELL": "/bin/bash", 
 "SHLVL": "2", 
 "SSH_CLIENT": "192.168.0.56 53598 22", 
 "SSH_CONNECTION": "192.168.0.56 53598 10.10.1.120 22", 
 "SSH_TTY": "/dev/pts/1", 
 "TERM": "xterm-256color", 
 "USER": "root", 
 "XDG_RUNTIME_DIR": "/run/user/0", 
 "XDG_SESSION_ID": "15", 
 "XMODIFIERS": "@im=ibus", 
 "_": "/usr/bin/python"
 }, 
 "ansible_eth0": {
 "active": true, 
 "device": "eth0", 
 "ipv4": {
 "address": "10.10.1.120", 
 "broadcast": "10.10.1.255", 
 "netmask": "255.255.255.0", 
 "network": "10.10.1.0"
 }, 
 "ipv6": [
 {
 "address": "fe80::f816:3eff:fedf:fd6e", 
 "prefix": "64", 
 "scope": "link"
 }
 ], 
 "macaddress": "fa:16:3e:df:fd:6e", 
 "module": "virtio_net", 
 "mtu": 1400, 
 "pciid": "virtio0", 
 "promisc": false, 
 "type": "ether"
 }, 
 "ansible_fips": false, 
 "ansible_form_factor": "Other", 
 "ansible_fqdn": "website", 
 "ansible_gather_subset": [
 "hardware", 
 "network", 
 "virtual"
 ], 
 "ansible_hostname": "website", 
 "ansible_interfaces": [
 "lo", 
 "eth0"
 ], 
 "ansible_kernel": "4.0.4-301.fc22.x86_64", 
 "ansible_lo": {
 "active": true, 
 "device": "lo", 
 "ipv4": {
 "address": "127.0.0.1", 
 "broadcast": "host", 
 "netmask": "255.0.0.0", 
 "network": "127.0.0.0"
 }, 
 "ipv6": [
 {
 "address": "::1", 
 "prefix": "128", 
 "scope": "host"
 }
 ], 
 "mtu": 65536, 
 "promisc": false, 
 "type": "loopback"
 }, 
 "ansible_local": {
 "getSerial": {
 "serial_number": "cf918881-ad39-4e2d-8cfb-fe21b39b8efb"
 }
 }, 
 "ansible_machine": "x86_64", 
 "ansible_machine_id": "1cb6c3aa84d24db1b0bbb0bcaefc0ebd", 
 "ansible_memfree_mb": 16, 
 "ansible_memory_mb": {
 "nocache": {
 "free": 164, 
 "used": 325
 }, 
 "real": {
 "free": 16, 
 "total": 489, 
 "used": 473
 }, 
 "swap": {
 "cached": 0, 
 "free": 0, 
 "total": 0, 
 "used": 0
 }
 }, 
 "ansible_memtotal_mb": 489, 
 "ansible_mounts": [
 {
 "device": "/dev/vda1", 
 "fstype": "ext4", 
 "mount": "/", 
 "options": "rw,seclabel,relatime,data=ordered", 
 "size_available": 3770216448, 
 "size_total": 5218238464, 
 "uuid": "01bc7316-b1f4-45c9-a23a-00c5a2336ef2"
 }
 ], 
 "ansible_nodename": "website", 
 "ansible_os_family": "RedHat", 
 "ansible_pkg_mgr": "dnf", 
 "ansible_processor": [
 "GenuineIntel", 
 "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
 ], 
 "ansible_processor_cores": 1, 
 "ansible_processor_count": 1, 
 "ansible_processor_threads_per_core": 1, 
 "ansible_processor_vcpus": 1, 
 "ansible_product_name": "OpenStack Nova", 
 "ansible_product_serial": "cf918881-ad39-4e2d-8cfb-fe21b39b8efb", 
 "ansible_product_uuid": "86AB1D13-732D-844A-8987-575B8B22285A", 
 "ansible_product_version": "13.1.0-1.el7", 
 "ansible_python": {
 "executable": "/usr/bin/python", 
 "has_sslcontext": true, 
 "type": "CPython", 
 "version": {
 "major": 2, 
 "micro": 9, 
 "minor": 7, 
 "releaselevel": "final", 
 "serial": 0
 }, 
 "version_info": [
 2, 
 7, 
 9, 
 "final", 
 0
 ]
 }, 
 "ansible_python_version": "2.7.9", 
 "ansible_selinux": {
 "config_mode": "enforcing", 
 "mode": "enforcing", 
 "policyvers": 29, 
 "status": "enabled", 
 "type": "targeted"
 }, 
 "ansible_service_mgr": "systemd", 
 "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBRc7z7YqkdeEOUh/KZKsqgi35cYtmVfb3LdWyGenZKFRDNNLcjPc30vxiofER2zMjZ4HbmFb8D6RqqRk3hBFLA=", 
 "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIa/7/Fw+zKnM3caCzwS54flZcOa0dTVMfuiFf/mg6AF", 
 "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC52hRFlHbugr2Z3iyQAg3cl33SirS/n3gHg+7lJkHHqC9mN2kIVDLEyAwu2UHjXKLOWoCS7+nt1wbZLG/+nYe/OVXjJOvW/9MkjxQEsqFOgzKdE/0RVDx1Lnk3TTS+oublE6V49IF4h2+YTz3DqfXF8geg5+d5uKmcBDwYisDTeumgsRgHOFM3O6xWaJdyN887jsZnfv2DHoU144h/pFeSDLJm+evuxNKbLxp9zw+ypty75Div7jn4Shs25hMmwJhRodYCy2azwJ87ZuH5xRVYJznACmlSenddbKc7BetdeKYCvyrq1yfDPTjngOy1CNfsFiumxyVdUfRYC59kLPOt", 
 "ansible_swapfree_mb": 0, 
 "ansible_swaptotal_mb": 0, 
 "ansible_system": "Linux", 
 "ansible_system_capabilities": [
 "cap_chown", 
 "cap_dac_override", 
 "cap_dac_read_search", 
 "cap_fowner", 
 "cap_fsetid", 
 "cap_kill", 
 "cap_setgid", 
 "cap_setuid", 
 "cap_setpcap", 
 "cap_linux_immutable", 
 "cap_net_bind_service", 
 "cap_net_broadcast", 
 "cap_net_admin", 
 "cap_net_raw", 
 "cap_ipc_lock", 
 "cap_ipc_owner", 
 "cap_sys_module", 
 "cap_sys_rawio", 
 "cap_sys_chroot", 
 "cap_sys_ptrace", 
 "cap_sys_pacct", 
 "cap_sys_admin", 
 "cap_sys_boot", 
 "cap_sys_nice", 
 "cap_sys_resource", 
 "cap_sys_time", 
 "cap_sys_tty_config", 
 "cap_mknod", 
 "cap_lease", 
 "cap_audit_write", 
 "cap_audit_control", 
 "cap_setfcap", 
 "cap_mac_override", 
 "cap_mac_admin", 
 "cap_syslog", 
 "cap_wake_alarm", 
 "cap_block_suspend", 
 "37+ep"
 ], 
 "ansible_system_capabilities_enforced": "True", 
 "ansible_system_vendor": "Fedora Project", 
 "ansible_uptime_seconds": 284442, 
 "ansible_user_dir": "/root", 
 "ansible_user_gecos": "root", 
 "ansible_user_gid": 0, 
 "ansible_user_id": "root", 
 "ansible_user_shell": "/bin/bash", 
 "ansible_user_uid": 0, 
 "ansible_userspace_architecture": "x86_64", 
 "ansible_userspace_bits": "64", 
 "ansible_virtualization_role": "NA", 
 "ansible_virtualization_type": "NA", 
 "module_setup": true
 }, 
 "changed": false
}
[root@ansible GitTraining]#

We can now fetch the fact with:

[root@ansible GitTraining]# ansible -m setup web | grep -A 4 ansible_local
 "getSerial": {
 "serial_number": "cf918881-ad39-4e2d-8cfb-fe21b39b8efb"
 }
 }, 
[root@ansible GitTraining]#

Or by referencing it in our playbook:

{{ ansible_local.getSerial.serial_number }}
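For example, a hypothetical debug task that prints the custom fact:

```
- name: Show the custom serial number fact
  debug:
    msg: "Serial: {{ ansible_local.getSerial.serial_number }}"
```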

Configuring Ansible Hosts file

Ansible tracks all the servers and devices it manages using a “hosts” file. Before we configure this file, we need to set up how our Ansible management host communicates with the target servers.

One option is to configure passwordless SSH login between our Ansible host and the target servers. Let’s create a public/private key pair using ssh-keygen on the Ansible host.

Configure ssh key

[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
9f:48:14:e7:72:7d:dc:fe:ee:01:8f:46:13:1a:9e:be root@ansible
The key's randomart image is:
+--[ RSA 2048]----+
| . . |
| + . . . |
| o o...o .|
| . o. +... |
| S + + .|
| . o... = .|
| . o. o o.|
| o ..|
| E .o|
+-----------------+
[root@ansible ~]#

Copy the public key to the remote host using ssh-copy-id.

[root@ansible ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.0.45
The authenticity of host '192.168.0.45 (192.168.0.45)' can't be established.
ECDSA key fingerprint is ff:cd:87:c9:d1:c2:e7:26:f8:c7:24:77:11:9a:c1:9a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.0.45's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.0.45'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible ~]#

As suggested, try logging into the remote machine.

[root@ansible ~]# ssh root@192.168.0.45
Last login: Mon Mar 27 12:02:00 2017
[root@undercloud ~]#

Now that we have configured how the Ansible host communicates with our target servers, let’s open the hosts file with root privileges:

vi /etc/ansible/hosts

The Ansible hosts file contains several examples. At the end of the file, add the following:

[testservers]
192.168.0.45

Now that we have our hosts set up and enough configuration details to connect to them successfully, we can try our very first command.

[root@ansible ~]# ansible -m ping all
192.168.0.45 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}
[root@ansible ~]#

This is a basic test to make sure that Ansible has a connection to all of its hosts.
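We can also target just the new group instead of all hosts:

```
ansible -m ping testservers
```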

Install Ansible on CentOS

Ensure that the CentOS 7 EPEL repository is installed:

[root@ansible ~]# yum install epel-release
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
base | 3.6 kB 00:00 
extras | 3.4 kB 00:00 
updates | 3.4 kB 00:00 
(1/2): updates/7/x86_64/primary_db | 3.8 MB 00:00 
(2/2): extras/7/x86_64/primary_db | 139 kB 00:00 
Determining fastest mirrors
 * base: centos.usonyx.net
 * extras: centos.usonyx.net
 * updates: centos.usonyx.net
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-9 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package Arch Version Repository Size
================================================================================
Installing:
 epel-release noarch 7-9 extras 14 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 14 k
Installed size: 24 k
Is this ok [y/d/N]: y
Downloading packages:
epel-release-7-9.noarch.rpm | 14 kB 00:00 
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : epel-release-7-9.noarch 1/1 
 Verifying : epel-release-7-9.noarch 1/1 

Installed:
 epel-release.noarch 0:7-9 

Complete!
[root@ansible ~]#

We can now install Ansible

[root@ansible ~]# yum install ansible -y
Loaded plugins: fastestmirror
epel/x86_64/metalink | 6.3 kB 00:00 
epel | 4.3 kB 00:00 
(1/3): epel/x86_64/updateinfo | 762 kB 00:03 
(2/3): epel/x86_64/group_gz | 170 kB 00:03 
(3/3): epel/x86_64/primary_db | 4.6 MB 00:09 
Loading mirror speeds from cached hostfile
 * base: centos.usonyx.net
 * epel: kartolo.sby.datautama.net.id
 * extras: centos.usonyx.net
 * updates: centos.usonyx.net
Resolving Dependencies
--> Running transaction check
---> Package ansible.noarch 0:2.2.1.0-1.el7 will be installed
--> Processing Dependency: sshpass for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-six for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-setuptools for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-paramiko for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-keyczar for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-jinja2 for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-httplib2 for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: PyYAML for package: ansible-2.2.1.0-1.el7.noarch
--> Running transaction check
---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
---> Package python-httplib2.noarch 0:0.7.7-3.el7 will be installed
---> Package python-jinja2.noarch 0:2.7.2-2.el7 will be installed
--> Processing Dependency: python-babel >= 0.8 for package: python-jinja2-2.7.2-2.el7.noarch
--> Processing Dependency: python-markupsafe for package: python-jinja2-2.7.2-2.el7.noarch
---> Package python-keyczar.noarch 0:0.71c-2.el7 will be installed
--> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-2.el7.noarch
--> Processing Dependency: python-crypto for package: python-keyczar-0.71c-2.el7.noarch
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
--> Processing Dependency: python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-4.el7.noarch
---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
---> Package python2-paramiko.noarch 0:1.16.1-2.el7 will be installed
--> Processing Dependency: python2-ecdsa for package: python2-paramiko-1.16.1-2.el7.noarch
---> Package sshpass.x86_64 0:1.06-1.el7 will be installed
--> Running transaction check
---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
---> Package python-babel.noarch 0:0.9.6-8.el7 will be installed
---> Package python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7 will be installed
--> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch
---> Package python-markupsafe.x86_64 0:0.11-10.el7 will be installed
---> Package python2-crypto.x86_64 0:2.6.1-13.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: python2-crypto-2.6.1-13.el7.x86_64
---> Package python2-ecdsa.noarch 0:0.13-4.el7 will be installed
---> Package python2-pyasn1.noarch 0:0.1.9-7.el7 will be installed
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: libtomcrypt-1.17-23.el7.x86_64
---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package Arch Version Repository
 Size
================================================================================
Installing:
 ansible noarch 2.2.1.0-1.el7 epel 4.6 M
Installing for dependencies:
 PyYAML x86_64 3.10-11.el7 base 153 k
 libtomcrypt x86_64 1.17-23.el7 epel 224 k
 libtommath x86_64 0.42.0-4.el7 epel 35 k
 libyaml x86_64 0.1.4-11.el7_0 base 55 k
 python-babel noarch 0.9.6-8.el7 base 1.4 M
 python-backports x86_64 1.0-8.el7 base 5.8 k
 python-backports-ssl_match_hostname noarch 3.4.0.2-4.el7 base 12 k
 python-httplib2 noarch 0.7.7-3.el7 epel 70 k
 python-jinja2 noarch 2.7.2-2.el7 base 515 k
 python-keyczar noarch 0.71c-2.el7 epel 218 k
 python-markupsafe x86_64 0.11-10.el7 base 25 k
 python-setuptools noarch 0.9.8-4.el7 base 396 k
 python-six noarch 1.9.0-2.el7 base 29 k
 python2-crypto x86_64 2.6.1-13.el7 epel 476 k
 python2-ecdsa noarch 0.13-4.el7 epel 83 k
 python2-paramiko noarch 1.16.1-2.el7 epel 258 k
 python2-pyasn1 noarch 0.1.9-7.el7 base 100 k
 sshpass x86_64 1.06-1.el7 epel 21 k

Transaction Summary
================================================================================
Install 1 Package (+18 Dependent packages)

Total download size: 8.5 M
Installed size: 38 M
Downloading packages:
(1/19): PyYAML-3.10-11.el7.x86_64.rpm | 153 kB 00:00 
(2/19): python-babel-0.9.6-8.el7.noarch.rpm | 1.4 MB 00:00 
(3/19): libyaml-0.1.4-11.el7_0.x86_64.rpm | 55 kB 00:00 
(4/19): python-backports-1.0-8.el7.x86_64.rpm | 5.8 kB 00:00 
(5/19): python-backports-ssl_match_hostname-3.4.0.2-4.el7. | 12 kB 00:00 
(6/19): python-jinja2-2.7.2-2.el7.noarch.rpm | 515 kB 00:00 
warning: /var/cache/yum/x86_64/7/epel/packages/python-httplib2-0.7.7-3.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Public key for python-httplib2-0.7.7-3.el7.noarch.rpm is not installed
(7/19): python-httplib2-0.7.7-3.el7.noarch.rpm | 70 kB 00:00 
(8/19): python-markupsafe-0.11-10.el7.x86_64.rpm | 25 kB 00:00 
(9/19): python-setuptools-0.9.8-4.el7.noarch.rpm | 396 kB 00:00 
(10/19): python-six-1.9.0-2.el7.noarch.rpm | 29 kB 00:00 
(11/19): python2-crypto-2.6.1-13.el7.x86_64.rpm | 476 kB 00:00 
(12/19): python2-ecdsa-0.13-4.el7.noarch.rpm | 83 kB 00:00 
(13/19): python2-paramiko-1.16.1-2.el7.noarch.rpm | 258 kB 00:00 
(14/19): python2-pyasn1-0.1.9-7.el7.noarch.rpm | 100 kB 00:00 
(15/19): sshpass-1.06-1.el7.x86_64.rpm | 21 kB 00:00 
(16/19): libtomcrypt-1.17-23.el7.x86_64.rpm | 224 kB 00:01 
(17/19): libtommath-0.42.0-4.el7.x86_64.rpm | 35 kB 00:02 
(18/19): ansible-2.2.1.0-1.el7.noarch.rpm | 4.6 MB 00:02 
(19/19): python-keyczar-0.71c-2.el7.noarch.rpm | 218 kB 00:05 
--------------------------------------------------------------------------------
Total 1.3 MB/s | 8.5 MB 00:06 
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid : "Fedora EPEL (7) <epel@fedoraproject.org>"
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package : epel-release-7-9.noarch (@extras)
 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : python-six-1.9.0-2.el7.noarch 1/19 
 Installing : python2-ecdsa-0.13-4.el7.noarch 2/19 
 Installing : sshpass-1.06-1.el7.x86_64 3/19 
 Installing : python-babel-0.9.6-8.el7.noarch 4/19 
 Installing : python2-pyasn1-0.1.9-7.el7.noarch 5/19 
 Installing : libtommath-0.42.0-4.el7.x86_64 6/19 
 Installing : libtomcrypt-1.17-23.el7.x86_64 7/19 
 Installing : python2-crypto-2.6.1-13.el7.x86_64 8/19 
 Installing : python-keyczar-0.71c-2.el7.noarch 9/19 
 Installing : python2-paramiko-1.16.1-2.el7.noarch 10/19 
 Installing : python-backports-1.0-8.el7.x86_64 11/19 
 Installing : python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch 12/19 
 Installing : python-setuptools-0.9.8-4.el7.noarch 13/19 
 Installing : python-httplib2-0.7.7-3.el7.noarch 14/19 
 Installing : python-markupsafe-0.11-10.el7.x86_64 15/19 
 Installing : python-jinja2-2.7.2-2.el7.noarch 16/19 
 Installing : libyaml-0.1.4-11.el7_0.x86_64 17/19 
 Installing : PyYAML-3.10-11.el7.x86_64 18/19 
 Installing : ansible-2.2.1.0-1.el7.noarch 19/19 
 Verifying : python-keyczar-0.71c-2.el7.noarch 1/19 
 Verifying : libyaml-0.1.4-11.el7_0.x86_64 2/19 
 Verifying : python-jinja2-2.7.2-2.el7.noarch 3/19 
 Verifying : python-setuptools-0.9.8-4.el7.noarch 4/19 
 Verifying : python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch 5/19 
 Verifying : python-markupsafe-0.11-10.el7.x86_64 6/19 
 Verifying : python-httplib2-0.7.7-3.el7.noarch 7/19 
 Verifying : python2-ecdsa-0.13-4.el7.noarch 8/19 
 Verifying : libtomcrypt-1.17-23.el7.x86_64 9/19 
 Verifying : python-backports-1.0-8.el7.x86_64 10/19 
 Verifying : ansible-2.2.1.0-1.el7.noarch 11/19 
 Verifying : python2-paramiko-1.16.1-2.el7.noarch 12/19 
 Verifying : libtommath-0.42.0-4.el7.x86_64 13/19 
 Verifying : python2-pyasn1-0.1.9-7.el7.noarch 14/19 
 Verifying : PyYAML-3.10-11.el7.x86_64 15/19 
 Verifying : python-babel-0.9.6-8.el7.noarch 16/19 
 Verifying : python-six-1.9.0-2.el7.noarch 17/19 
 Verifying : sshpass-1.06-1.el7.x86_64 18/19 
 Verifying : python2-crypto-2.6.1-13.el7.x86_64 19/19

Installed:
 ansible.noarch 0:2.2.1.0-1.el7

Dependency Installed:
 PyYAML.x86_64 0:3.10-11.el7 
 libtomcrypt.x86_64 0:1.17-23.el7 
 libtommath.x86_64 0:0.42.0-4.el7 
 libyaml.x86_64 0:0.1.4-11.el7_0 
 python-babel.noarch 0:0.9.6-8.el7 
 python-backports.x86_64 0:1.0-8.el7 
 python-backports-ssl_match_hostname.noarch 0:3.4.0.2-4.el7 
 python-httplib2.noarch 0:0.7.7-3.el7 
 python-jinja2.noarch 0:2.7.2-2.el7 
 python-keyczar.noarch 0:0.71c-2.el7 
 python-markupsafe.x86_64 0:0.11-10.el7 
 python-setuptools.noarch 0:0.9.8-4.el7 
 python-six.noarch 0:1.9.0-2.el7 
 python2-crypto.x86_64 0:2.6.1-13.el7 
 python2-ecdsa.noarch 0:0.13-4.el7 
 python2-paramiko.noarch 0:1.16.1-2.el7 
 python2-pyasn1.noarch 0:0.1.9-7.el7 
 sshpass.x86_64 0:1.06-1.el7

Complete!
[root@ansible ~]#

We now have all of the software required to administer our servers through Ansible.
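A quick way to confirm the installation is to check the version:

```
ansible --version
```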

Configuring a Block Device on a Ceph Client

First, using the ceph-admin machine, check that the cluster is in a healthy state:

[root@ceph-1 ~]# ceph -s
 cluster 376331d2-4da0-4f41-8040-0cf433148a08
 health HEALTH_OK
 monmap e1: 3 mons at {ceph-1=192.168.0.42:6789/0,ceph-2=192.168.0.43:6789/0,ceph-3=192.168.0.44:6789/0}
 election epoch 10, quorum 0,1,2 ceph-1,ceph-2,ceph-3
 osdmap e73: 9 osds: 9 up, 9 in
 flags sortbitwise,require_jewel_osds
 pgmap v603: 112 pgs, 7 pools, 8833 kB data, 184 objects
 344 MB used, 8832 MB / 9176 MB avail
 112 active+clean
 [root@ceph-1 ~]#

Create the block device

[root@ceph-1 ~]# rbd create myblock --size 200 --image-format 1
 rbd: image format 1 is deprecated
 [root@ceph-1 ~]#

In the above command, myblock is the name of the RBD image and 200 is the size in MB. The image must be created in format 1; when I tried without that option, I got “write error: No such device or address” when mapping the RBD device on my client host.

We can check the block device by issuing the following command.

[root@ceph-1 ~]# rbd list
 myblock
 [root@ceph-1 ~]#

From one of the monitor nodes, retrieve the client name and key by looking at the /etc/ceph/ceph.client.admin.keyring file.

[root@ceph-1 ~]# cat /etc/ceph/ceph.client.admin.keyring 
 [client.admin]
 key = AQByd5xYfFmqFBAABrv/q2mUKrQdS2Uo5nVq+g==
 caps mds = "allow *"
 caps mon = "allow *"
 caps osd = "allow *"
 [root@ceph-1 ~]#

On the client host, verify that the kernel supports the rbd module.

[root@client ~]# modprobe rbd
 [root@client ~]#

If it returns an error, the RBD module is not available. Install the kmod-rbd and kmod-libceph packages and reload the RBD module.

Map the RBD device on the client host.

[root@client ~]# echo "192.168.0.42,192.168.0.43,192.168.0.44 name=admin,secret=AQByd5xYfFmqFBAABrv/q2mUKrQdS2Uo5nVq+g== rbd myblock" > /sys/bus/rbd/add

The above command will create a new device on the client host.

[root@client ~]# ll /dev/rbd*
 brw-rw---- 1 root disk 252, 0 Feb 16 16:36 /dev/rbd0
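As an aside, on clients that have the rbd CLI plus the Ceph config and keyring installed (an assumption about the client setup), the same mapping can typically be done without writing to sysfs:

```
rbd map myblock --name client.admin
```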

Time to format the device.

[root@client ~]# mkfs.ext4 /dev/rbd0 
 mke2fs 1.42.9 (28-Dec-2013)
 Discarding device blocks: done 
 Filesystem label=
 OS type: Linux
 Block size=1024 (log=0)
 Fragment size=1024 (log=0)
 Stride=4096 blocks, Stripe width=4096 blocks
 51200 inodes, 204800 blocks
 10240 blocks (5.00%) reserved for the super user
 First data block=1
 Maximum filesystem blocks=33816576
 25 block groups
 8192 blocks per group, 8192 fragments per group
 2048 inodes per group
 Superblock backups stored on blocks: 
 8193, 24577, 40961, 57345, 73729

Allocating group tables: done 
 Writing inode tables: done 
 Creating journal (4096 blocks): done
 Writing superblocks and filesystem accounting information: done

Create a mount point and mount the device

[root@client ~]# mkdir /mnt/cephblock
 [root@client ~]# mount /dev/rbd0 /mnt/cephblock/

The device is mounted and ready to be used.

[root@client ~]# lsblk
 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
 vda 253:0 0 20G 0 disk 
 └─vda1 253:1 0 20G 0 part /
 vdb 253:16 0 1G 0 disk 
 rbd0 252:0 0 200M 0 disk /mnt/cephblock
 [root@client ~]#
[root@client ~]# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/vda1 20G 1.7G 19G 9% /
 devtmpfs 902M 0 902M 0% /dev
 tmpfs 920M 0 920M 0% /dev/shm
 tmpfs 920M 25M 896M 3% /run
 tmpfs 920M 0 920M 0% /sys/fs/cgroup
 tmpfs 184M 0 184M 0% /run/user/0
 /dev/rbd0 190M 1.6M 175M 1% /mnt/cephblock

When needed, you can unmount the filesystem and remove the RBD device (the 0 here is the device index, matching /dev/rbd0):

umount /mnt/cephblock
echo "0" > /sys/bus/rbd/remove

Creating and attaching a Cinder volume to an OpenStack instance

Using the openstack-cli tools, let’s create a new 1 GB Cinder volume with the display name repvolume.

[root@localhost ~(keystone_demo)]# cinder create 1 --display-name repvolume
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-09-16T15:59:45.025134 |
| display_description | None |
| display_name | repvolume |
| encrypted | False |
| id | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
| metadata | {} |
| multiattach | false |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+

Check if the volume was created successfully

[root@localhost ~(keystone_demo)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 4efa3212-55bc-417d-a8ed-e88fa63f05d3 | available | repvolume | 1 | - | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

Let’s attach repvolume to one of our instances.

[root@localhost ~(keystone_demo)]# nova list
+--------------------------------------+----------+---------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+---------+------------+-------------+------------------+
| 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 | client01 | ACTIVE | - | Running | private=10.0.0.6 |
| 431ba9e5-616c-401e-8dd6-e8269420246c | web01 | SHUTOFF | - | Shutdown | private=10.0.0.3 |
| 99584428-2d5d-4d3d-bd19-5a7a77cb7f24 | web02 | SHUTOFF | - | Shutdown | private=10.0.0.4 |
+--------------------------------------+----------+---------+------------+-------------+------------------+

Using the client01 instance ID and the repvolume ID, let’s attach the Cinder volume to our instance with the following command.

[root@localhost ~(keystone_demo)]# nova volume-attach 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 4efa3212-55bc-417d-a8ed-e88fa63f05d3 auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
| serverId | 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 |
| volumeId | 4efa3212-55bc-417d-a8ed-e88fa63f05d3 |
+----------+--------------------------------------+

Issuing cinder list again should show that our volume is attached to client01

[root@localhost ~(keystone_demo)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 4efa3212-55bc-417d-a8ed-e88fa63f05d3 | in-use | repvolume | 1 | - | false | 38b5a3e7-e540-42fb-a25f-ea6e79c4d372 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Next, let’s log in to client01.

Issue lsblk to see the available block devices. Since this is a new block device, we need to format it:

mkfs.ext4 /dev/vdb

Now let’s create a new directory where we can mount the new volume.

mkdir /repvolume

Using the mount command, let’s mount the device.

mount /dev/vdb /repvolume

Issuing the df command should now show the new volume mounted at /repvolume.
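To have the volume mounted automatically after a reboot, an entry can be added to /etc/fstab. This is a sketch, not part of the original walkthrough; the nofail option keeps boot from blocking if the volume is detached, and using the UUID reported by blkid /dev/vdb is more robust than the device name, which can change:

```
/dev/vdb    /repvolume    ext4    defaults,nofail    0 2
```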

Launching a new instance using openstack-cli

Since we will be using the out-of-the-box demo project/account, execute the following to set the needed environment variables.

[root@localhost ~]# source keystonerc_demo

Let's list the available instances.

[root@localhost ~(keystone_demo)]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

Based on the above output, we don't have any instances under our project.

Let's list the available images (Glance).

[root@localhost ~(keystone_demo)]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 411d5791-c29a-47e3-8d73-c9ec91765beb | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+

We also need an available network (Neutron), so issue the following command.

[root@localhost ~(keystone_demo)]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------+------------------------------------------------------+
| bf7bfa1d-4f15-4883-a49e-9bfea12f30db | public | 23ab53f6-a805-42a9-a9c9-9ba2af16dc59 172.24.4.224/28 |
| b3c770f6-0d7e-4361-b644-0d73f8a152f3 | private | 3c05d2f6-9469-4e56-ad32-f7df9a414336 10.0.0.0/24 |
+--------------------------------------+---------+------------------------------------------------------+
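Rather than copying the network ID by hand for the boot command, it can be extracted from the neutron net-list output. A minimal sketch (the helper name get_net_id is my own; it assumes the pipe-delimited table layout shown above):

```shell
#!/bin/bash
# get_net_id: print the ID of the named network from `neutron net-list` output.
# A sketch; column 2 of the table is the id, column 3 is the name.
get_net_id() {
  neutron net-list | awk -F'|' -v n="$1" \
    '$3 ~ n { gsub(/ /, "", $2); print $2 }'
}
```

Usage would then look like `nova boot --image cirros --flavor m1.tiny web01 --nic net-id=$(get_net_id private)`.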

Time to create a new instance named web01, using the cirros image and attaching it to the private network.

[root@localhost ~(keystone_demo)]# nova boot --image cirros --flavor m1.tiny web01 --nic net-id=b3c770f6-0d7e-4361-b644-0d73f8a152f3
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 9KTrQ2mxzjGQ |
| config_drive | |
| created | 2015-09-15T14:34:30Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | a6926f88-6787-4559-b872-37572ad39b26 |
| image | cirros (411d5791-c29a-47e3-8d73-c9ec91765beb) |
| key_name | - |
| metadata | {} |
| name | web01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 27314e6ab1024e0b9ed1c83d9459352a |
| updated | 2015-09-15T14:34:32Z |
| user_id | 8eecc18e9ec2424c97170d3073d36847 |
+--------------------------------------+-----------------------------------------------+

Create a second instance named web02 and attach it to the same private network.

[root@localhost ~(keystone_demo)]# nova boot --image cirros --flavor m1.tiny web02 --nic net-id=b3c770f6-0d7e-4361-b644-0d73f8a152f3
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | MPi5MQYnJB4u |
| config_drive | |
| created | 2015-09-15T14:34:55Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 7d7200f3-7350-43b7-a5f0-01c60af1aacb |
| image | cirros (411d5791-c29a-47e3-8d73-c9ec91765beb) |
| key_name | - |
| metadata | {} |
| name | web02 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 27314e6ab1024e0b9ed1c83d9459352a |
| updated | 2015-09-15T14:34:56Z |
| user_id | 8eecc18e9ec2424c97170d3073d36847 |
+--------------------------------------+-----------------------------------------------+

You can check the status of the build by issuing nova list.

[root@localhost ~(keystone_demo)]# nova list
+--------------------------------------+----------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------+
| a6926f88-6787-4559-b872-37572ad39b26 | web01 | BUILD | spawning | NOSTATE | |
| 7d7200f3-7350-43b7-a5f0-01c60af1aacb | web02 | BUILD | spawning | NOSTATE | |
+--------------------------------------+----------+--------+------------+-------------+----------+
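Since instances take a short while to spawn, a small helper can poll nova list until a given instance reaches ACTIVE before you proceed. This is a sketch, not part of the original walkthrough; the function name and the 5-second interval are illustrative, and it assumes the table layout shown above:

```shell
#!/bin/bash
# wait_for_active: poll `nova list` until the named instance reports ACTIVE.
# A minimal sketch; assumes the standard pipe-delimited table output shown above.
wait_for_active() {
  local name="$1" status
  while true; do
    # Column 3 of the table is Name, column 4 is Status
    status=$(nova list | awk -F'|' -v n="$name" \
      '$3 ~ n { gsub(/ /, "", $4); print $4 }')
    echo "$name status: $status"
    [ "$status" = "ACTIVE" ] && break
    sleep 5
  done
}
```

For example, `wait_for_active web01` would block until web01 finishes building.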