
Ansible Tutorials for RHCE EX294 Certification Exam

Chapter #2: Running Ad-Hoc Commands

In the second chapter of RHCE Ansible EX 294 exam preparation series, you'll learn about configuring Ansible and running ad-hoc commands.

In the first part of the Ansible series, you got acquainted with Ansible and learned to install it.

In this tutorial, you will learn how to manage static inventory in Ansible. You will also understand various Ansible configuration settings.

Furthermore, you will explore a few Ansible modules and get to run Ansible ad-hoc commands.

Before you see all that, I would like to thank all the LHB Pro members. This Ansible series is made possible by their support. If you are not a Pro member yet, please do consider opting for the subscription.

Creating an Ansible user

Even though you can use the root user in Ansible to run ad-hoc commands and playbooks, it’s definitely not recommended and not considered best practice, due to the security risks of allowing root SSH access.

For this reason, it’s recommended that you create a dedicated Ansible user with sudo privileges (to all commands) on all hosts (control and managed hosts).

Remember, Ansible uses SSH and Python to do all the dirty work behind the scenes, so here are the four steps you have to follow after installing Ansible:

  1. Create a new user on all hosts.
  2. Grant sudo privileges to the new user on all nodes.
  3. Generate SSH keys for the new user on the control node.
  4. Copy the SSH public key to the managed nodes.

So, without further ado, let’s start with creating a new user named elliot on all hosts:

[root@control ~]# useradd -m elliot
[root@node1 ~]# useradd -m elliot
[root@node2 ~]# useradd -m elliot
[root@node3 ~]# useradd -m elliot
[root@node4 ~]# useradd -m elliot

After setting elliot’s password on all hosts, you can move to step 2: grant elliot sudo privileges for all commands, without a password, by adding the following entry to the /etc/sudoers file:

[root@control ~]# echo "elliot  ALL=(ALL)  NOPASSWD: ALL" >> /etc/sudoers
[root@node1 ~]# echo "elliot  ALL=(ALL)  NOPASSWD: ALL" >> /etc/sudoers
[root@node2 ~]# echo "elliot  ALL=(ALL)  NOPASSWD: ALL" >> /etc/sudoers
[root@node3 ~]# echo "elliot  ALL=(ALL)  NOPASSWD: ALL" >> /etc/sudoers
[root@node4 ~]# echo "elliot  ALL=(ALL)  NOPASSWD: ALL" >> /etc/sudoers

Now, log in as user elliot on your control node and generate an SSH key pair:

[elliot@control ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/elliot/.ssh/id_rsa):       
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/elliot/.ssh/id_rsa.
Your public key has been saved in /home/elliot/.ssh/
The key fingerprint is:
SHA256:Xf5bKx0kkBCsCQ/7rc6Kv6CxCRTH2XJajbNvpzel+Ik elliot@control
The key's randomart image is:
+---[RSA 3072]----+
|        .oo .    |
|  . ooo  . o     |
| . = *=.o   o    |
|  o =.o+ . o . . |
| . . .. S . . o  |
|.     .. . . . . |
|.. .   oo.o   o o|
|. = o oo++.  . +.|
| + ..++Eoo.   o. |
+----[SHA256]-----+

Finally, you can copy elliot’s public ssh key to all managed hosts using the ssh-copy-id command as follows:

[elliot@control ~]$ ssh-copy-id node1
[elliot@control ~]$ ssh-copy-id node2
[elliot@control ~]$ ssh-copy-id node3
[elliot@control ~]$ ssh-copy-id node4

You should now be able to SSH into all managed nodes without being prompted for a password; you will only be asked for your SSH key passphrase (if you didn’t leave it empty, that is).

Building your Ansible inventory

An Ansible inventory file is basically a file that lists the hostnames, groups of hosts, or IP addresses of the hosts you want Ansible to manage (the managed nodes).

The /etc/ansible/hosts file is the default inventory file. I will now show you how to create your own inventory files in Ansible.

Creating a project directory

You don’t want to mess with the /etc/ansible directory; keep everything in /etc/ansible intact and just use it as a reference when you are creating inventory files, editing Ansible project configuration files, etc.

Now, let’s make a new Ansible project directory in /home/elliot named plays, which you will use to store everything Ansible-related (playbooks, inventory files, roles, etc.) that you will create from this point onwards:

[elliot@control ~]$ mkdir /home/elliot/plays

Notice that everything you will create from this point moving forward will be on the control node.

Creating an inventory file

Change to the /home/elliot/plays directory and create an inventory file named myhosts, then add all your managed nodes’ hostnames so it ends up looking like this:

[elliot@control plays]$ cat myhosts 
node1
node2
node3
node4
You can now run the following Ansible command to list all your hosts in the myhosts inventory file:

[elliot@control plays]$ ansible all -i myhosts --list-hosts
  hosts (4):
    node1
    node2
    node3
    node4

The -i option was used to specify the myhosts inventory file. If you omit the -i option, Ansible will look for hosts in the /etc/ansible/hosts inventory file instead.

Keep in mind that I am using hostnames here; all the nodes (VMs) I have created on Azure are on the same subnet, and I don’t have to worry about DNS as it’s handled by Azure.

If you don’t have a working DNS server, you can add your nodes’ IP address/hostname entries to /etc/hosts; below is an example:

[Image: /etc/hosts entries for the Ansible nodes]
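In case the original image is missing, here is a sketch of what such /etc/hosts entries might look like; the IP addresses below are made up for illustration, so replace them with your nodes’ actual addresses:

```
# /etc/hosts — static name resolution for the Ansible lab (example addresses)
10.0.0.4    control
10.0.0.5    node1
10.0.0.6    node2
10.0.0.7    node3
10.0.0.8    node4
```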

Creating host groups and subgroups

You can organize your managed hosts into groups and subgroups. For example, you can edit the myhosts file to create two groups test and prod as follows:

[elliot@control plays]$ cat myhosts 
[test]
node1
node2

[prod]
node3
node4
You can list the hosts in the prod group by running the following command:

[elliot@control plays]$ ansible prod -i myhosts --list-hosts
  hosts (2):
    node3
    node4

There are two default groups in Ansible:

  1. all - contains all the hosts in the inventory
  2. ungrouped - contains all the hosts that are not a member of any group (aside from all).

Let’s add an imaginary host node5 to the myhosts inventory file to demonstrate the ungrouped group:

[elliot@control plays]$ cat myhosts 
node5

[test]
node1
node2

[prod]
node3
node4
Notice that I added node5 at the very beginning (and not the end) of the myhosts file; otherwise, it would be considered a member of the prod group.

Now you can run the following command to list all the ungrouped hosts:

[elliot@control plays]$ ansible ungrouped -i myhosts --list-hosts
  hosts (1):
    node5

You can also create a group (parent) that contains subgroups (children). Take a look at the following example:

[elliot@control plays]$ cat myhosts 
node5

[web_dev]
node1

[db_dev]
node2

[web_prod]
node3

[db_prod]
node4

[development:children]
web_dev
db_dev

[production:children]
web_prod
db_prod
The development group contains all the hosts that are in web_dev plus all the members that are in db_dev. Similarly, the production group contains all the hosts that are in web_prod plus all the members that are in db_prod.

[elliot@control plays]$ ansible development -i myhosts --list-hosts
  hosts (2):
    node1
    node2

[elliot@control plays]$ ansible production -i myhosts --list-hosts
  hosts (2):
    node3
    node4

Configuring Ansible

In this section, you will learn about the most important Ansible configuration settings. Throughout the series, we will discuss other configuration settings as the need arises.

The /etc/ansible/ansible.cfg file is the default configuration file. However, it’s recommended that you don’t mess with /etc/ansible/ansible.cfg either and just use it as a reference. You should create your own Ansible configuration file in your Ansible project directory.

The ansible --version command will show you which configuration file you are currently using:

[elliot@control plays]$ ansible --version
ansible 2.9.14
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/elliot/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Dec  5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]

As you can see from the output, /etc/ansible/ansible.cfg is currently in use as you haven’t yet created your own ansible.cfg file in the project directory.

The /etc/ansible/ansible.cfg file contains a whole lot of Ansible configuration settings and sections:

[elliot@control plays]$ wc -l /etc/ansible/ansible.cfg 
490 /etc/ansible/ansible.cfg

The two most important sections that you need to define in your Ansible configuration file are:

  1. [defaults]
  2. [privilege_escalation]

In the [defaults] section, here are the most important settings you need to be aware of:

  • inventory - specifies the path of your inventory file.
  • remote_user - specifies the user who will connect to the managed hosts and run the playbooks.
  • forks - specifies the number of hosts that Ansible can manage/process in parallel; the default is 5.
  • host_key_checking - specifies whether to turn SSH host key checking on or off; the default is True.

In the [privilege_escalation] section, you can configure the following settings:

  • become - specifies whether to allow privilege escalation; the default is False.
  • become_method - specifies the privilege escalation method; the default is sudo.
  • become_user - specifies the user you become through privilege escalation; the default is root.
  • become_ask_pass - specifies whether to prompt for a privilege escalation password; the default is False.

Keep in mind, you don’t need to commit any of these settings to memory. They are all documented in /etc/ansible/ansible.cfg.

Now create your own ansible.cfg configuration file in your Ansible project directory /home/elliot/plays and set the following settings:

[Image: Ansible configuration example]
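If the original image is not visible, here is a minimal sketch of what such an ansible.cfg could contain, based on the settings discussed in this section; the exact values are assumptions, so adjust them to your own environment:

```ini
[defaults]
# path to the inventory file in this project directory
inventory = ./myhosts
# user that connects to the managed hosts
remote_user = elliot

[privilege_escalation]
# escalate privileges via sudo to root, without prompting for a password
become = true
become_method = sudo
become_user = root
become_ask_pass = false
```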

Now run the ansible --version command one more time; you should see that your new configuration file is now in effect:

[elliot@control plays]$ ansible --version
ansible 2.9.14
  config file = /home/elliot/plays/ansible.cfg
  configured module search path = ['/home/elliot/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Dec  5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]

Running Ad-Hoc Commands in Ansible

Until this point, you have really just been installing, setting up your environment, and configuring Ansible. Now, the real fun begins!

An Ansible ad-hoc command is a great tool that you can use to run a single task on one or more managed nodes. A typical Ansible ad-hoc command follows this general syntax:

ansible host_pattern -m module_name -a "module_options"

The easiest way to understand how Ansible ad-hoc commands work is simply running one! So, go ahead and run the following ad-hoc command:

[elliot@control plays]$ ansible node1 -m command -a "uptime"
Enter passphrase for key '/home/elliot/.ssh/id_rsa':
node1 | CHANGED | rc=0 >>
18:53:01 up 5 days, 18:03,  1 user,  load average: 0.00, 0.01, 0.00

I was prompted to enter my SSH key passphrase, and then the uptime of node1 was displayed! Now, check the figure below to help you understand each element of the ad-hoc command you just ran:

[Image: anatomy of an Ansible ad-hoc command]

You have probably guessed it by now: Ansible modules are reusable, standalone scripts that can be used by the Ansible API, or by the ansible or ansible-playbook programs.

The command module is one of the many modules that Ansible has to offer. You can run the ansible-doc -l command to list all the available Ansible modules:

[elliot@control plays]$ ansible-doc -l | wc -l
3387
Currently, there are 3387 Ansible modules available, and they increase by the day! You can pass any command you wish to run as an option to the Ansible command module.

If your SSH key passphrase is not empty (like mine), then you will want to run ssh-agent to avoid the unnecessary headache of being prompted for the passphrase every single time Ansible tries to access your managed nodes:

[elliot@control plays]$ eval `ssh-agent`
Agent pid 218750
[elliot@control plays]$ ssh-add
Enter passphrase for /home/elliot/.ssh/id_rsa: 
Identity added: /home/elliot/.ssh/id_rsa (elliot@control)

Testing Connectivity

You may want to test if Ansible can connect to all your managed nodes before getting into the more serious tasks; for this, you can use the ping module and specify all your managed hosts as follows:

[elliot@control plays]$ ansible all -m ping 
node4 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}

As you can see from all the SUCCESS messages in the output, Ansible can reach every managed node. Notice that the ping module doesn’t need any options. Some Ansible modules require options and some do not, just as is the case with Linux commands.

Ansible Modules Documentation

If someone asked me what I like most about Ansible, I would quickly say it’s the documentation. Ansible is very well documented, and it’s all available from the comfort of your own terminal.

If you want to know how to use a specific Ansible module, you can run ansible-doc followed by the module name.

For example, you can view the description of the ping module and how to use it by running:

[elliot@control plays]$ ansible-doc ping

This will open up the ping module documentation page:

[Image: Ansible doc page example]

When reading module documentation, pay special attention to any option prefixed by an equals sign (=); such an option is mandatory and you must include it.

Also, if you scroll all the way down, you can see some examples of how to run the ad-hoc commands or Ansible playbooks (that we will discuss later).

Command vs. Shell vs. Raw Modules

There are three Ansible modules that people often confuse with one another; these are:

  1. command
  2. shell
  3. raw

These three modules achieve the same purpose: they run commands on the managed nodes. However, there are key differences that separate them.

You can’t use piping or redirection with the command module. For example, the following ad-hoc command will result in an error:

[elliot@control plays]$ ansible node2 -m command -a "lscpu | head -n 5"
node2 | FAILED | rc=1 >>
lscpu: invalid option -- 'n'
Try 'lscpu --help' for more information.
non-zero return code

That’s because the command module doesn’t support pipes or redirection. You can use the shell module instead if you want to use pipes or redirection. Run the same command again, but this time with the shell module:

[elliot@control plays]$ ansible node2 -m shell -a "lscpu | head -n 5"
node2 | CHANGED | rc=0 >>
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              1
On-line CPU(s) list: 0

Works like a charm! It successfully displayed the first five lines of the lscpu command output on node2.

Ansible uses SSH and Python scripts behind the scenes to do all the magic. The raw module, however, just uses SSH and bypasses the Ansible module subsystem. This means the raw module will work on a managed node even if Python is not installed there.

I tampered with my Python binaries on node4 (please don’t do that yourself) so I could mimic what happens when you run the shell or command module on a node that doesn’t have Python installed:

root@node4:/usr/bin# mkdir hide
root@node4:/usr/bin# mv python* hide/

Now check what happens if I run an Ansible ad-hoc command with the shell or command module targeting node4:

[elliot@control plays]$ ansible node4 -m shell -a "whoami"
node4 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "Shared connection to node4 closed.\r\n",
    "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n"
}
[elliot@control plays]$ ansible node4 -m command -a "cat /etc/os-release"
node4 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "Shared connection to node4 closed.\r\n",
    "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
    "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error",
    "rc": 127
}
I get errors! Now I will try to achieve the same task, but this time I will use the raw module:

[elliot@control plays]$ ansible node4 -m raw -a "cat /etc/os-release"
node4 | CHANGED | rc=0 >>
VERSION="18.04.5 LTS (Bionic Beaver)"
PRETTY_NAME="Ubuntu 18.04.5 LTS"
Shared connection to node4 closed.

As you can see, the raw module was the only one of the three to carry out the task successfully. Now I will go back and fix the mess I made on node4:

root@node4:/usr/bin/hide# mv * ..

I have created this table below to help summarize the different use cases for the three modules:

Description                      command   shell   raw
Run simple commands              Yes       Yes     Yes
Run commands with redirection    No        Yes     Yes
Run commands without Python      No        No      Yes

Alright! This takes us to the end of the second Ansible tutorial.


Stay tuned for the next tutorial, where you are going to learn how to create and run Ansible playbooks. Don't forget to become a member :)

Ahmed Alkabary