author     Alicia Cozine <879121+acozine@users.noreply.github.com>  2019-10-08 12:46:38 -0500
committer  Sandra McCann <samccann@redhat.com>  2019-10-08 13:46:38 -0400
commit     941a9b68fc262182172e081533e43ccbf60c453f (patch)
tree       a5f2e17ed3cc9e6f54a0974de6809bb11bf57e11
parent     da1a9450885bc51b0268b7ec7c0830e35e71583e (diff)
download   ansible-941a9b68fc262182172e081533e43ccbf60c453f.tar.gz
Docs: User guide overhaul, part 1 (#63056)
-rw-r--r--  docs/docsite/rst/index.rst  1
-rw-r--r--  docs/docsite/rst/installation_guide/intro_installation.rst  2
-rw-r--r--  docs/docsite/rst/network/getting_started/basic_concepts.rst  37
-rw-r--r--  docs/docsite/rst/reference_appendices/logging.rst  14
-rw-r--r--  docs/docsite/rst/shared_snippets/basic_concepts.txt  29
-rw-r--r--  docs/docsite/rst/user_guide/basic_concepts.rst  12
-rw-r--r--  docs/docsite/rst/user_guide/become.rst  211
-rw-r--r--  docs/docsite/rst/user_guide/collections_using.rst  2
-rw-r--r--  docs/docsite/rst/user_guide/command_line_tools.rst  2
-rw-r--r--  docs/docsite/rst/user_guide/connection_details.rst  80
-rw-r--r--  docs/docsite/rst/user_guide/guide_rolling_upgrade.rst  96
-rw-r--r--  docs/docsite/rst/user_guide/index.rst  12
-rw-r--r--  docs/docsite/rst/user_guide/intro_adhoc.rst  251
-rw-r--r--  docs/docsite/rst/user_guide/intro_bsd.rst  70
-rw-r--r--  docs/docsite/rst/user_guide/intro_dynamic_inventory.rst  113
-rw-r--r--  docs/docsite/rst/user_guide/intro_getting_started.rst  167
-rw-r--r--  docs/docsite/rst/user_guide/intro_inventory.rst  316
-rw-r--r--  docs/docsite/rst/user_guide/intro_patterns.rst  151
-rw-r--r--  docs/docsite/rst/user_guide/modules_intro.rst  18
-rw-r--r--  docs/docsite/rst/user_guide/playbooks_async.rst  31
20 files changed, 873 insertions, 742 deletions
diff --git a/docs/docsite/rst/index.rst b/docs/docsite/rst/index.rst
index 47f9069604..5a22b66013 100644
--- a/docs/docsite/rst/index.rst
+++ b/docs/docsite/rst/index.rst
@@ -81,6 +81,7 @@ Ansible releases a new major release of Ansible approximately three to four time
reference_appendices/module_utils
reference_appendices/special_variables
reference_appendices/tower
+ reference_appendices/logging
.. toctree::
diff --git a/docs/docsite/rst/installation_guide/intro_installation.rst b/docs/docsite/rst/installation_guide/intro_installation.rst
index c61bb33a4b..bbc7ecb949 100644
--- a/docs/docsite/rst/installation_guide/intro_installation.rst
+++ b/docs/docsite/rst/installation_guide/intro_installation.rst
@@ -45,6 +45,8 @@ Currently Ansible can be run from any machine with Python 2 (version 2.7) or Pyt
This includes Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on.
+When choosing a control node, bear in mind that any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.
+
.. note::
macOS by default is configured for a small number of file handles, so if you want to use 15 or more forks you'll need to raise the ulimit with ``sudo launchctl limit maxfiles unlimited``. This command can also fix any "Too many open files" error.
diff --git a/docs/docsite/rst/network/getting_started/basic_concepts.rst b/docs/docsite/rst/network/getting_started/basic_concepts.rst
index ec4c22359b..980b144d35 100644
--- a/docs/docsite/rst/network/getting_started/basic_concepts.rst
+++ b/docs/docsite/rst/network/getting_started/basic_concepts.rst
@@ -1,37 +1,10 @@
-***************************************
+**************
Basic Concepts
-***************************************
+**************
These concepts are common to all uses of Ansible, including network automation. You need to understand them to use Ansible for network automation. This basic introduction provides the background you need to follow the examples in this guide.
-.. contents:: Topics
+.. contents::
+ :local:
-Control Node
-================================================================================
-
-Any machine with Ansible installed. You can run commands and playbooks, invoking ``/usr/bin/ansible`` or ``/usr/bin/ansible-playbook``, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
-
-Managed Nodes
-================================================================================
-
-The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called "hosts". Ansible is not installed on managed nodes.
-
-Inventory
-================================================================================
-
-A list of managed nodes. An inventory file is also sometimes called a "hostfile". Your inventory can specify information like IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see :ref:`the Working with Inventory<intro_inventory>` section.
-
-Modules
-================================================================================
-
-The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the :ref:`list of all modules <modules_by_category>` or the :ref:`list of network modules<network_modules>`.
-
-Tasks
-================================================================================
-
-The units of action in Ansible. You can execute a single task once with an ad-hoc command.
-
-Playbooks
-================================================================================
-
-Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand. To learn more about playbooks, see :ref:`about_playbooks`.
+.. include:: ../../shared_snippets/basic_concepts.txt
diff --git a/docs/docsite/rst/reference_appendices/logging.rst b/docs/docsite/rst/reference_appendices/logging.rst
new file mode 100644
index 0000000000..6fbd044011
--- /dev/null
+++ b/docs/docsite/rst/reference_appendices/logging.rst
@@ -0,0 +1,14 @@
+**********************
+Logging Ansible output
+**********************
+
+By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
+
+* To save Ansible output in a single log on the control node, set the ``log_path`` :ref:`configuration file setting <intro_configuration>`. You may also want to set ``display_args_to_stdout``, which helps to differentiate similar tasks by including variable values in the Ansible output.
+* To save Ansible output in separate logs, one on each managed node, set the ``no_target_syslog`` and ``syslog_facility`` :ref:`configuration file settings <intro_configuration>`.
+* To save Ansible output to a secure database, use :ref:`Ansible Tower <ansible_tower>`. Tower allows you to review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
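+
+For example (a sketch; the log file path is just a placeholder), the first option might look like this in ``ansible.cfg``:
+
+.. code-block:: text
+
+   [defaults]
+   log_path = /var/log/ansible.log
+   display_args_to_stdout = True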
+
+Protecting sensitive data with ``no_log``
+=========================================
+
+If you save Ansible output to a log, you expose any secret data in your Ansible output, such as passwords and user names. To keep sensitive values out of your logs, mark tasks that expose them with the ``no_log: True`` attribute. However, the ``no_log`` attribute does not affect debugging output, so be careful not to debug playbooks in a production environment. See :ref:`keep_secret_data` for an example.
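+
+As a hedged illustration (the module, user name, and ``db_password`` variable are placeholders), a task protected with ``no_log`` might look like this:
+
+.. code-block:: yaml
+
+   - name: Create a database user without logging the password
+     mysql_user:
+       name: appuser
+       password: "{{ db_password }}"
+       state: present
+     no_log: True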
diff --git a/docs/docsite/rst/shared_snippets/basic_concepts.txt b/docs/docsite/rst/shared_snippets/basic_concepts.txt
new file mode 100644
index 0000000000..a611853a3b
--- /dev/null
+++ b/docs/docsite/rst/shared_snippets/basic_concepts.txt
@@ -0,0 +1,29 @@
+Control node
+============
+
+Any machine with Ansible installed. You can run commands and playbooks, invoking ``/usr/bin/ansible`` or ``/usr/bin/ansible-playbook``, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
+
+Managed nodes
+=============
+
+The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called "hosts". Ansible is not installed on managed nodes.
+
+Inventory
+=========
+
+A list of managed nodes. An inventory file is also sometimes called a "hostfile". Your inventory can specify information like IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see :ref:`the Working with Inventory<intro_inventory>` section.
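+
+For illustration only (the group name, hostnames, and IP address are placeholders), a small inventory in YAML format might look like this:
+
+.. code-block:: yaml
+
+   all:
+     children:
+       webservers:
+         hosts:
+           web1.example.com:
+             ansible_host: 203.0.113.10
+           web2.example.com: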
+
+Modules
+=======
+
+The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the :ref:`list of all modules <modules_by_category>`.
+
+Tasks
+=====
+
+The units of action in Ansible. You can execute a single task once with an ad-hoc command.
+
+Playbooks
+=========
+
+Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand. To learn more about playbooks, see :ref:`about_playbooks`.
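+
+As a minimal sketch (the group name and package are illustrative, not prescribed), a playbook with a single task might look like this:
+
+.. code-block:: yaml
+
+   ---
+   - name: Update web servers
+     hosts: webservers
+     tasks:
+       - name: Ensure apache is at the latest version
+         yum:
+           name: httpd
+           state: latest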
diff --git a/docs/docsite/rst/user_guide/basic_concepts.rst b/docs/docsite/rst/user_guide/basic_concepts.rst
new file mode 100644
index 0000000000..76adc6845f
--- /dev/null
+++ b/docs/docsite/rst/user_guide/basic_concepts.rst
@@ -0,0 +1,12 @@
+.. _basic_concepts:
+
+****************
+Ansible concepts
+****************
+
+These concepts are common to all uses of Ansible. You need to understand them to use Ansible for any kind of automation. This basic introduction provides the background you need to follow the rest of the User Guide.
+
+.. contents::
+ :local:
+
+.. include:: /shared_snippets/basic_concepts.txt
diff --git a/docs/docsite/rst/user_guide/become.rst b/docs/docsite/rst/user_guide/become.rst
index 26174594a2..f8257009c5 100644
--- a/docs/docsite/rst/user_guide/become.rst
+++ b/docs/docsite/rst/user_guide/become.rst
@@ -1,36 +1,31 @@
.. _become:
-**********************************
-Understanding Privilege Escalation
-**********************************
+******************************************
+Understanding privilege escalation: become
+******************************************
-Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
+Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user's permissions. Because this feature allows you to 'become' another user, different from the user that logged into the machine (remote user), we call it ``become``. The ``become`` keyword leverages existing privilege escalation tools like `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
-.. contents:: Topics
+.. contents::
+ :local:
-Become
-======
+Using become
+============
-Ansible allows you to 'become' another user, different from the user that logged into the machine (remote user). This is done using existing privilege escalation tools such as `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
+You can control the use of ``become`` with play or task directives, connection variables, or at the command line. If you set privilege escalation properties in multiple ways, review the :ref:`general precedence rules<general_precedence_rules>` to understand which settings will be used.
A full list of all become plugins that are included in Ansible can be found in the :ref:`become_plugin_list`.
+Become directives
+-----------------
-.. note:: Prior to version 1.9, Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user and execute tasks and create resources with the second user's permissions. As of Ansible version 1.9, `become` supersedes the old sudo/su, while still being backwards compatible. This new implementation also makes it easier to add other privilege escalation tools, including `pbrun` (Powerbroker), `pfexec`, `dzdo` (Centrify), and others.
-
-.. note:: Become vars and directives are independent. For example, setting ``become_user`` does not set ``become``.
-
-
-Directives
-==========
-
-These can be set from play to task level, but are overridden by connection variables as they can be host specific.
+You can set the directives that control ``become`` at the play or task level. You can override these by setting connection variables, which often differ from one host to another. These variables and directives are independent. For example, setting ``become_user`` does not set ``become``.
become
set to ``yes`` to activate privilege escalation.
become_user
- set to user with desired privileges — the user you `become`, NOT the user you login as. Does NOT imply ``become: yes``, to allow it to be set at host level.
+ set to user with desired privileges — the user you `become`, NOT the user you login as. Does NOT imply ``become: yes``, to allow it to be set at host level. Default value is ``root``.
become_method
(at play or task level) overrides the default method set in ansible.cfg, set to use any of the :ref:`become_plugins`.
@@ -38,7 +33,9 @@ become_method
become_flags
(at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to no login. Added in Ansible 2.2.
-For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user (this takes advantage of the fact that the default value of ``become_user`` is ``root``)::
+For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user, you can use the default value of ``become_user`` (``root``):
+
+.. code-block:: yaml
- name: Ensure the httpd service is running
service:
@@ -46,14 +43,18 @@ For example, to manage a system service (which requires ``root`` privileges) whe
state: started
become: yes
-To run a command as the ``apache`` user::
+To run a command as the ``apache`` user:
+
+.. code-block:: yaml
- name: Run a command as the apache user
command: somecommand
become: yes
become_user: apache
-To do something as the ``nobody`` user when the shell is nologin::
+To do something as the ``nobody`` user when the shell is nologin:
+
+.. code-block:: yaml
- name: Run a command as nobody
command: somecommand
@@ -62,9 +63,10 @@ To do something as the ``nobody`` user when the shell is nologin::
become_user: nobody
become_flags: '-s /bin/sh'
-Connection variables
---------------------
-Each allows you to set an option per group and/or host, these are normally defined in inventory but can be used as normal variables.
+Become connection variables
+---------------------------
+
+You can define different ``become`` options for each managed node or group. You can define these variables in inventory or use them as normal variables.
ansible_become
equivalent of the become directive, decides if privilege escalation is used or not.
@@ -78,7 +80,9 @@ ansible_become_user
ansible_become_password
set the privilege escalation password. See :ref:`playbooks_vault` for details on how to avoid having secrets in plain text
-For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this::
+For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this:
+
+.. code-block:: text
webserver ansible_user=manager ansible_become=yes
@@ -87,8 +91,8 @@ For example, if you want to run all tasks as ``root`` on a server named ``webser
Please see the documentation for each plugin for a list of all options the plugin has and how they can be defined.
A full list of become plugins in Ansible can be found at :ref:`become_plugins`.
-Command line options
---------------------
+Become command-line options
+---------------------------
--ask-become-pass, -K
ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts.
@@ -103,74 +107,53 @@ Command line options
--become-user=BECOME_USER
run operations as this user (default=root), does not imply --become/-b
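For example (``site.yml`` is a placeholder playbook name), you might combine these options on the command line:

.. code-block:: bash

   $ ansible-playbook site.yml --become --ask-become-pass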
-
-For those from Pre 1.9 , sudo and su still work!
-------------------------------------------------
-
-For those using old playbooks will not need to be changed, even though they are deprecated, sudo and su directives, variables and options
-will continue to work. It is recommended to move to become as they may be retired at one point.
-You cannot mix directives on the same object (become and sudo) though, Ansible will complain if you try to.
-
-Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the new ones.
-
-
-Limitations
------------
+Risks and limitations of become
+===============================
Although privilege escalation is mostly intuitive, there are a few limitations
on how it works. Users should be aware of these to avoid surprises.
-Becoming an Unprivileged User
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Risks of becoming an unprivileged user
+--------------------------------------
-Ansible 2.0.x and below has a limitation with regards to becoming an
-unprivileged user that can be a security risk if users are not aware of it.
Ansible modules are executed on the remote machine by first substituting the
parameters into the module file, then copying the file to the remote machine,
and finally executing it there.
Everything is fine if the module file is executed without using ``become``,
when the ``become_user`` is root, or when the connection to the remote machine
-is made as root. In these cases the module file is created with permissions
-that only allow reading by the user and root.
-
-The problem occurs when the ``become_user`` is an unprivileged user. Ansible
-2.0.x and below make the module file world readable in this case, as the module
-file is written as the user that Ansible connects as, but the file needs to
-be readable by the user Ansible is set to ``become``.
-
-.. note:: In Ansible 2.1, this window is further narrowed: If the connection
- is made as a privileged user (root), then Ansible 2.1 and above will use
- chown to set the file's owner to the unprivileged user being switched to.
- This means both the user making the connection and the user being switched
- to via ``become`` must be unprivileged in order to trigger this problem.
-
-If any of the parameters passed to the module are sensitive in nature, then
-those pieces of data are located in a world readable module file for the
-duration of the Ansible module execution. Once the module is done executing,
-Ansible will delete the temporary file. If you trust the client machines then
-there's no problem here. If you do not trust the client machines then this is
-a potential danger.
+is made as root. In these cases Ansible creates the module file with permissions
+that only allow reading by the user and root, or only allow reading by the unprivileged
+user being switched to.
+
+However, when both the connection user and the ``become_user`` are unprivileged,
+the module file is written as the user that Ansible connects as, but the file needs to
+be readable by the user Ansible is set to ``become``. In this case, Ansible makes
+the module file world-readable for the duration of the Ansible module execution.
+Once the module is done executing, Ansible deletes the temporary file.
+
+If any of the parameters passed to the module are sensitive in nature, and you do
+not trust the client machines, then this is a potential danger.
Ways to resolve this include:
-* Use `pipelining`. When pipelining is enabled, Ansible doesn't save the
+* Use `pipelining`. When pipelining is enabled, Ansible does not save the
module to a temporary file on the client. Instead it pipes the module to
the remote python interpreter's stdin. Pipelining does not work for
python modules involving file transfer (for example: :ref:`copy <copy_module>`,
:ref:`fetch <fetch_module>`, :ref:`template <template_module>`), or for non-python modules.
-* (Available in Ansible 2.1) Install POSIX.1e filesystem acl support on the
+* Install POSIX.1e filesystem acl support on the
managed host. If the temporary directory on the remote host is mounted with
POSIX acls enabled and the :command:`setfacl` tool is in the remote ``PATH``
then Ansible will use POSIX acls to share the module file with the second
unprivileged user instead of having to make the file readable by everyone.
-* Don't perform an action on the remote machine by becoming an unprivileged
+* Avoid becoming an unprivileged
user. Temporary files are protected by UNIX file permissions when you
``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX
file permissions are also secure if you make the connection to the managed
- machine as root and then use ``become`` to an unprivileged account.
+ machine as root and then use ``become`` to access an unprivileged account.
.. warning:: Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs
are not POSIX.1e filesystem acls (they are NFSv4 ACLs instead). Ansible
@@ -179,48 +162,49 @@ Ways to resolve this include:
.. versionchanged:: 2.1
-In addition to the additional means of doing this securely, Ansible 2.1 also
-makes it harder to unknowingly do this insecurely. Whereas in Ansible 2.0.x
-and below, Ansible will silently allow the insecure behaviour if it was unable
-to find another way to share the files with the unprivileged user, in Ansible
-2.1 and above Ansible defaults to issuing an error if it can't do this
-securely. If you can't make any of the changes above to resolve the problem,
-and you decide that the machine you're running on is secure enough for the
+Ansible makes it hard to unknowingly use ``become`` insecurely. Starting in Ansible 2.1,
+Ansible defaults to issuing an error if it cannot execute securely with ``become``.
+If you cannot use pipelining or POSIX ACLs, if you must connect as an unprivileged user
+and then use ``become`` to execute as a different unprivileged user,
+and if you decide that your managed nodes are secure enough for the
modules you want to run there to be world readable, you can turn on
``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file. Setting
``allow_world_readable_tmpfiles`` will change this from an error into
a warning and allow the task to run as it did prior to 2.1.
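As a concrete illustration of the pipelining option mentioned above (a sketch, assuming the default ``ssh`` connection plugin), you can enable pipelining in ``ansible.cfg``:

.. code-block:: text

   [ssh_connection]
   pipelining = True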
-Connection Plugin Support
-^^^^^^^^^^^^^^^^^^^^^^^^^
+Not supported by all connection plugins
+---------------------------------------
Privilege escalation methods must also be supported by the connection plugin
-used. Most connection plugins will warn if they do not support become. Some
+used. Most connection plugins will warn if they do not support become. Some
will just ignore it as they always run as root (jail, chroot, etc).
Only one method may be enabled per host
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------------------------
-Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
+Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
you need to have privileges to run the command as that user in sudo or be able
to su directly to it (the same for pbrun, pfexec or other supported methods).
-Can't limit escalation to certain commands
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Privilege escalation must be general
+------------------------------------
-Privilege escalation permissions have to be general. Ansible does not always
+You cannot limit privilege escalation permissions to certain commands.
+Ansible does not always
use a specific command to do something but runs modules (code) from
a temporary file name which changes every time. If you have '/sbin/service'
or '/bin/chmod' as the allowed commands this will fail with ansible as those
-paths won't match with the temporary file that ansible creates to run the
-module.
+paths won't match with the temporary file that Ansible creates to run the
+module. If you have security rules that constrain your sudo/pbrun/doas environment
+to running specific command paths only, use Ansible from a special account that
+does not have this constraint, or use :ref:`ansible_tower` to manage indirect access to SSH credentials.
-Environment variables populated by pam_systemd
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+May not access environment variables populated by pam_systemd
+--------------------------------------------------------------
For most Linux distributions using ``systemd`` as their init, the default
methods used by ``become`` do not open a new "session", in the sense of
-systemd. Because the ``pam_systemd`` module will not fully initialize a new
+systemd. Because the ``pam_systemd`` module will not fully initialize a new
session, you might have surprises compared to a normal session opened through
ssh: some environment variables set by ``pam_systemd``, most notably
``XDG_RUNTIME_DIR``, are not populated for the new user and instead inherited
@@ -244,10 +228,10 @@ For more information, see `this systemd issue
.. _become_network:
-Become and Networks
-===================
+Become and network automation
+=============================
-As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all :ref:`Ansible-maintained platforms<network_supported>` that support ``enable`` mode: ``eos``, ``ios``, and ``nxos``. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
+As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all :ref:`Ansible-maintained platforms<network_supported>` that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
You must set the connection type to either ``connection: network_cli`` or ``connection: httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.
@@ -298,7 +282,6 @@ Often you wish for all tasks in all plays to run using privilege mode, that is b
ansible_become: yes
ansible_become_method: enable
-
Passwords for enable mode
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -314,7 +297,7 @@ If you need a password to enter ``enable`` mode, you can specify it in one of tw
authorize and auth_pass
-----------------------
-Ansible still supports ``enable`` mode with ``connection: local`` for legacy playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``:
+Ansible still supports ``enable`` mode with ``connection: local`` for legacy network playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``:
.. code-block:: yaml
@@ -348,7 +331,7 @@ delegation or accessing forbidden system calls like the WUA API. You can use
``become`` with the same user as ``ansible_user`` to bypass these limitations
and run commands that are not normally accessible in a WinRM session.
-Administrative Rights
+Administrative rights
---------------------
Many tasks in Windows require administrative privileges to complete. When using
@@ -362,7 +345,9 @@ debug privilege is not available, the become process will run with a limited
set of privileges and groups.
To determine the type of token that Ansible was able to get, run the following
-task::
+task:
+
+.. code-block:: yaml
- win_whoami:
become: yes
@@ -486,7 +471,9 @@ If running on a version of Ansible that is older than 2.5 or the normal
default, and care should be taken if you grant this privilege to a user or group.
For more information on this privilege, please see
`Act as part of the operating system <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)>`_.
- You can use the below task to set this privilege on a Windows host::
+ You can use the below task to set this privilege on a Windows host:
+
+ .. code-block:: yaml
- name: grant the ansible user the SeTcbPrivilege right
win_user_right:
@@ -497,7 +484,9 @@ If running on a version of Ansible that is older than 2.5 or the normal
* Turn UAC off on the host and reboot before trying to become the user. UAC is
a security protocol that is designed to run accounts with the
``least privilege`` principle. You can turn UAC off by running the following
- tasks::
+ tasks:
+
+ .. code-block:: yaml
- name: turn UAC off
win_regedit:
@@ -515,10 +504,10 @@ If running on a version of Ansible that is older than 2.5 or the normal
.. Note:: Granting the ``SeTcbPrivilege`` or turning UAC off can cause Windows
security vulnerabilities and care should be given if these steps are taken.
-Local Service Accounts
+Local service accounts
----------------------
-Prior to Ansible version 2.5, ``become`` only worked with a local or domain
+Prior to Ansible version 2.5, ``become`` only worked on Windows with a local or domain
user account. Local service accounts like ``System`` or ``NetworkService``
could not be used as ``become_user`` in these older versions. This restriction
has been lifted since the 2.5 release of Ansible. The three service accounts
@@ -532,10 +521,10 @@ Because local service accounts do not have passwords, the
``ansible_become_password`` parameter is not required and is ignored if
specified.
-Become without setting a Password
+Become without setting a password
---------------------------------
-As of Ansible 2.8, ``become`` can be used to become a local or domain account
+As of Ansible 2.8, ``become`` can be used to become a Windows local or domain account
without requiring a password for that account. For this method to work, the
following requirements must be met:
@@ -568,12 +557,12 @@ undefined or set ``ansible_become_password:``.
have access to local resources. Use become with a password if the task needs
to access network resources
-Accounts without a Password
+Accounts without a password
---------------------------
.. Warning:: As a general security best practice, you should avoid allowing accounts without passwords.
-Ansible can be used to become an account that does not have a password (like the
+Ansible can be used to become a Windows account that does not have a password (like the
``Guest`` account). To become an account without a password, set up the
variables like normal but set ``ansible_become_password: ''``.
@@ -596,15 +585,15 @@ or with this Ansible task:
to set the account's password under ``ansible_become_password`` if the
become_user has a password.
-Become Flags
-------------
-Ansible 2.5 adds the ``become_flags`` parameter to the ``runas`` become method.
+Become flags for Windows
+------------------------
+
+Ansible 2.5 added the ``become_flags`` parameter to the ``runas`` become method.
This parameter can be set using the ``become_flags`` task directive or set in
Ansible's configuration using ``ansible_become_flags``. The two valid values
that are initially supported for this parameter are ``logon_type`` and
``logon_flags``.
-
.. Note:: These flags should only be set when becoming a normal user account, not a local service account like LocalSystem.
The key ``logon_type`` sets the type of logon operation to perform. The value
@@ -682,10 +671,8 @@ Here are some examples of how to use ``become_flags`` with Windows tasks:
become_flags: logon_flags=
-Limitations
------------
-
-Be aware of the following limitations with ``become`` on Windows:
+Limitations of become on Windows
+--------------------------------
* Running a task with ``async`` and ``become`` on Windows Server 2008, 2008 R2
and Windows 7 only works when using Ansible 2.7 or newer.
@@ -700,7 +687,7 @@ Be aware of the following limitations with ``become`` on Windows:
``ansible_winrm_transport`` was either ``basic`` or ``credssp``. This
restriction has been lifted since the 2.4 release of Ansible for all hosts
except Windows Server 2008 (non R2 version).
-
+
* The Secondary Logon service ``seclogon`` must be running to use ``ansible_become_method: runas``
.. seealso::
diff --git a/docs/docsite/rst/user_guide/collections_using.rst b/docs/docsite/rst/user_guide/collections_using.rst
index 61f04749eb..163aa2f206 100644
--- a/docs/docsite/rst/user_guide/collections_using.rst
+++ b/docs/docsite/rst/user_guide/collections_using.rst
@@ -42,7 +42,7 @@ the collection to the first path defined in :ref:`COLLECTIONS_PATHS`, which by d
You can also keep a collection adjacent to the current playbook, under a ``collections/ansible_collections/`` directory structure.
-::
+.. code-block:: text
play.yml
├── collections/
diff --git a/docs/docsite/rst/user_guide/command_line_tools.rst b/docs/docsite/rst/user_guide/command_line_tools.rst
index 681c742455..56561b5979 100644
--- a/docs/docsite/rst/user_guide/command_line_tools.rst
+++ b/docs/docsite/rst/user_guide/command_line_tools.rst
@@ -1,6 +1,6 @@
.. _command_line_tools:
-Working with Command Line Tools
+Working with command line tools
===============================
Most users are familiar with `ansible` and `ansible-playbook`, but those are not the only utilities Ansible provides.
diff --git a/docs/docsite/rst/user_guide/connection_details.rst b/docs/docsite/rst/user_guide/connection_details.rst
new file mode 100644
index 0000000000..9ebb88b250
--- /dev/null
+++ b/docs/docsite/rst/user_guide/connection_details.rst
@@ -0,0 +1,80 @@
+.. _connections:
+
+******************************
+Connection methods and details
+******************************
+
+This section shows you how to expand and refine the connection methods Ansible uses for your inventory.
+
+ControlPersist and paramiko
+---------------------------
+
+By default, Ansible uses native OpenSSH, because it supports ControlPersist (a performance feature), Kerberos, and options in ``~/.ssh/config`` such as Jump Host setup. If your control machine uses an older version of OpenSSH that does not support ControlPersist, Ansible will fall back to a Python implementation of the SSH protocol called 'paramiko'.
+
+SSH key setup
+-------------
+
+By default, Ansible assumes you are using SSH keys to connect to remote machines. SSH keys are encouraged, but you can use password authentication if needed with the ``--ask-pass`` option. If you need to provide a password for :ref:`privilege escalation <become>` (sudo, pbrun, etc.), use ``--ask-become-pass``.
+
+.. include:: shared_snippets/SSH_password_prompt.txt
+
+To set up SSH agent to avoid retyping passwords, you can do:
+
+.. code-block:: bash
+
+ $ ssh-agent bash
+ $ ssh-add ~/.ssh/id_rsa
+
+Depending on your setup, you may wish to use Ansible's ``--private-key`` command line option to specify a pem file instead. You can also add the private key file:
+
+.. code-block:: bash
+
+ $ ssh-agent bash
+ $ ssh-add ~/.ssh/keypair.pem
+
+Another way to add private key files without using ssh-agent is using ``ansible_ssh_private_key_file`` in an inventory file as explained here: :ref:`intro_inventory`.
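+
+For example (the host alias and key path are placeholders), an inventory entry that points at a private key file might look like this:
+
+.. code-block:: text
+
+   webserver ansible_ssh_private_key_file=/home/exampleuser/.ssh/keypair.pem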
+
+Running against localhost
+-------------------------
+
+You can run commands against the control node by using "localhost" or "127.0.0.1" for the server name:
+
+.. code-block:: bash
+
+ $ ansible localhost -m ping -e 'ansible_python_interpreter="/usr/bin/env python"'
+
+You can specify localhost explicitly by adding this to your inventory file:
+
+.. code-block:: bash
+
+ localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
+
+.. _host_key_checking_on:
+
+Host key checking
+-----------------
+
+Ansible enables host key checking by default. Checking host keys guards against server spoofing and man-in-the-middle attacks, but it does require some maintenance.
+
+If a host is reinstalled and has a different key than the one in 'known_hosts', this will result in an error message until the entry is corrected. If a new host is not in 'known_hosts', your control node may prompt for confirmation of the key, which makes Ansible interactive when you run it from, say, cron. You might not want this.
+
+If you understand the implications and wish to disable this behavior, you can do so by editing ``/etc/ansible/ansible.cfg`` or ``~/.ansible.cfg``:
+
+.. code-block:: text
+
+ [defaults]
+ host_key_checking = False
+
+Alternatively this can be set by the :envvar:`ANSIBLE_HOST_KEY_CHECKING` environment variable:
+
+.. code-block:: bash
+
+ $ export ANSIBLE_HOST_KEY_CHECKING=False
+
+Also note that host key checking in paramiko mode is reasonably slow, so switching to 'ssh' is recommended if you keep host key checking enabled.
+
+Other connection methods
+------------------------
+
+Ansible can use a variety of connection methods beyond SSH. You can select any connection plugin, including managing things locally and managing chroot, lxc, and jail containers.
+A mode called 'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration directives from a central repository.
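+
+For example (a sketch; the repository URL and playbook name are placeholders), an ansible-pull invocation might look like this:
+
+.. code-block:: bash
+
+   $ ansible-pull -U https://github.com/example/ansible-config.git local.yml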
diff --git a/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst b/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst
index ee85de9670..282f51339f 100644
--- a/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst
+++ b/docs/docsite/rst/user_guide/guide_rolling_upgrade.rst
@@ -39,7 +39,9 @@ Site deployment
===============
Let's start with ``site.yml``. This is our site-wide deployment playbook. It can be used to initially deploy the site, as well
-as push updates to all of the servers::
+as push updates to all of the servers:
+
+.. code-block:: yaml
---
# This playbook deploys the whole application stack in this site.
@@ -114,7 +116,9 @@ Configuration: group variables
Group variables are variables that are applied to groups of servers. They can be used in templates and in
playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in
a directory called ``group_vars`` in the same location as your inventory.
-Here is lamp_haproxy's ``group_vars/all`` file. As you might expect, these variables are applied to all of the machines in your inventory::
+Here is lamp_haproxy's ``group_vars/all`` file. As you might expect, these variables are applied to all of the machines in your inventory:
+
+.. code-block:: yaml
---
httpd_port: 80
@@ -124,7 +128,9 @@ This is a YAML file, and you can create lists and dictionaries for more complex
In this case, we are just setting two variables, one for the port for the web server, and one for the
NTP server that our machines should use for time synchronization.
-Here's another group variables file. This is ``group_vars/dbservers`` which applies to the hosts in the ``dbservers`` group::
+Here's another group variables file. This is ``group_vars/dbservers`` which applies to the hosts in the ``dbservers`` group:
+
+.. code-block:: yaml
---
mysqlservice: mysqld
@@ -135,7 +141,9 @@ Here's another group variables file. This is ``group_vars/dbservers`` which appl
If you look in the example, there are group variables for the ``webservers`` group and the ``lbservers`` group, similarly.
-These variables are used in a variety of places. You can use them in playbooks, like this, in ``roles/db/tasks/main.yml``::
+These variables are used in a variety of places. You can use them in playbooks, like this, in ``roles/db/tasks/main.yml``:
+
+.. code-block:: yaml
- name: Create Application Database
mysql_db:
@@ -150,7 +158,9 @@ These variables are used in a variety of places. You can use them in playbooks,
host: '%'
state: present
-You can also use these variables in templates, like this, in ``roles/common/templates/ntp.conf.j2``::
+You can also use these variables in templates, like this, in ``roles/common/templates/ntp.conf.j2``:
+
+.. code-block:: text
driftfile /var/lib/ntp/drift
@@ -202,14 +212,18 @@ refers to orchestration as 'conducting machines like an orchestra', and has a pr
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_update.yml``.
-Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this::
+Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:
+
+.. code-block:: yaml
- hosts: monitoring
tasks: []
What's going on here, and why are there no tasks? You might know that Ansible gathers "facts" from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it's a useful trick to know.
-The next part is the update play. The first part looks like this::
+The next part is the update play. The first part looks like this:
+
+.. code-block:: yaml
- hosts: webservers
user: root
@@ -217,21 +231,23 @@ The next part is the update play. The first part looks like this::
This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.
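For instance (a sketch, not necessarily matching the example repository), a play that updates one host at a time would start like this:

.. code-block:: yaml

   - hosts: webservers
     serial: 1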
-Here is the next part of the update play::
+Here is the next part of the update play:
+
+.. code-block:: yaml
- pre_tasks:
- - name: disable nagios alerts for this host webserver service
- nagios:
- action: disable_alerts
- host: "{{ inventory_hostname }}"
- services: webserver
- delegate_to: "{{ item }}"
- loop: "{{ groups.monitoring }}"
+ pre_tasks:
+ - name: disable nagios alerts for this host webserver service
+ nagios:
+ action: disable_alerts
+ host: "{{ inventory_hostname }}"
+ services: webserver
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.monitoring }}"
- - name: disable the server in haproxy
- shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
- delegate_to: "{{ item }}"
- loop: "{{ groups.lbservers }}"
+ - name: disable the server in haproxy
+ shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.lbservers }}"
.. note::
- The ``serial`` keyword forces the play to be executed in 'batches'. Each batch counts as a full play with a subselection of hosts.
@@ -243,28 +259,32 @@ The ``delegate_to`` and ``loop`` arguments, used together, cause Ansible to loop
Note that the HAProxy step looks a little complicated. We're using HAProxy in this example because it's freely available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the 'pre tasks' section -- take the server out of monitoring, and take it out of rotation.
-The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in ``web`` and ``base-apache`` roles to be applied to the web servers, including an update of the web application code itself. We don't have to do it this way--we could instead just purely update the web application, but this is a good example of how roles can be used to reuse tasks::
+The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in ``web`` and ``base-apache`` roles to be applied to the web servers, including an update of the web application code itself. We don't have to do it this way--we could instead just purely update the web application, but this is a good example of how roles can be used to reuse tasks:
+
+.. code-block:: yaml
+
+ roles:
+ - common
+ - base-apache
+ - web
- roles:
- - common
- - base-apache
- - web
+Finally, in the ``post_tasks`` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:
-Finally, in the ``post_tasks`` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool::
+.. code-block:: yaml
- post_tasks:
- - name: Enable the server in haproxy
- shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
- delegate_to: "{{ item }}"
- loop: "{{ groups.lbservers }}"
+ post_tasks:
+ - name: Enable the server in haproxy
+ shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.lbservers }}"
- - name: re-enable nagios alerts
- nagios:
- action: enable_alerts
- host: "{{ inventory_hostname }}"
- services: webserver
- delegate_to: "{{ item }}"
- loop: "{{ groups.monitoring }}"
+ - name: re-enable nagios alerts
+ nagios:
+ action: enable_alerts
+ host: "{{ inventory_hostname }}"
+ services: webserver
+ delegate_to: "{{ item }}"
+ loop: "{{ groups.monitoring }}"
Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead.
diff --git a/docs/docsite/rst/user_guide/index.rst b/docs/docsite/rst/user_guide/index.rst
index a20393b26f..eadb0db7ab 100644
--- a/docs/docsite/rst/user_guide/index.rst
+++ b/docs/docsite/rst/user_guide/index.rst
@@ -1,6 +1,6 @@
-**********
+##########
User Guide
-**********
+##########
Welcome to the Ansible User Guide!
@@ -10,15 +10,17 @@ This guide covers how to work with Ansible, including using the command line, wo
:maxdepth: 2
quickstart
+ basic_concepts
intro_getting_started
- command_line_tools
- intro_adhoc
intro_inventory
intro_dynamic_inventory
+ intro_patterns
+ intro_adhoc
+ connection_details
+ command_line_tools
playbooks
become
vault
- intro_patterns
modules
../plugins/plugins
intro_bsd
diff --git a/docs/docsite/rst/user_guide/intro_adhoc.rst b/docs/docsite/rst/user_guide/intro_adhoc.rst
index 05ffb17c5e..c455d6b2b9 100644
--- a/docs/docsite/rst/user_guide/intro_adhoc.rst
+++ b/docs/docsite/rst/user_guide/intro_adhoc.rst
@@ -1,97 +1,60 @@
.. _intro_adhoc:
-Introduction To Ad-Hoc Commands
-===============================
+*******************************
+Introduction to ad-hoc commands
+*******************************
-.. contents:: Topics
+An Ansible ad-hoc command uses the `/usr/bin/ansible` command-line tool to automate a single task on one or more managed nodes. Ad-hoc commands are quick and easy, but they are not re-usable. So why learn about ad-hoc commands first? Ad-hoc commands demonstrate the simplicity and power of Ansible. The concepts you learn here will port over directly to the playbook language. Before reading and executing these examples, please read :ref:`intro_inventory`.
-The following examples show how to use `/usr/bin/ansible` for running
-ad hoc tasks.
+.. contents::
+ :local:
-What's an ad-hoc command?
+Why use ad-hoc commands?
+========================
-An ad-hoc command is something that you might type in to do something really
-quick, but don't want to save for later.
+Ad-hoc commands are great for tasks you repeat rarely. For example, if you want to power off all the machines in your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook. An ad-hoc command looks like this:
-This is a good place to start to understand the basics of what Ansible can do
-prior to learning the playbooks language -- ad-hoc commands can also be used
-to do quick things that you might not necessarily want to write a full playbook for.
+.. code-block:: bash
-Generally speaking, the true power of Ansible lies in playbooks.
-Why would you use ad-hoc tasks versus playbooks?
+ $ ansible [pattern] -m [module] -a "[module options]"
-For instance, if you wanted to power off all of your lab for Christmas vacation,
-you could execute a quick one-liner in Ansible without writing a playbook.
+You can learn more about :ref:`patterns<intro_patterns>` and :ref:`modules<working_with_modules>` on other pages.
-For configuration management and deployments, though, you'll want to pick up on
-using '/usr/bin/ansible-playbook' -- the concepts you will learn here will
-port over directly to the playbook language.
+Use cases for ad-hoc tasks
+==========================
-(See :ref:`working_with_playbooks` for more information about those)
+Ad-hoc tasks can be used to reboot servers, copy files, manage packages and users, and much more. You can use any Ansible module in an ad-hoc task. Ad-hoc tasks, like playbooks, use a declarative model,
+calculating and executing the actions required to reach a specified final state. They
+achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.
-If you haven't read :ref:`intro_inventory` already, please look that over a bit first
-and then we'll get going.
+Rebooting servers
+-----------------
-.. _parallelism_and_shell_commands:
+The default module for the ``ansible`` command-line utility is the :ref:`command module<command_module>`. You can use an ad-hoc task to call the command module and reboot all web servers in Atlanta, 10 at a time. Before Ansible can do this, you must have all servers in Atlanta listed in a group called [atlanta] in your inventory, and you must have working SSH credentials for each machine in that group. To reboot all the servers in the [atlanta] group:
-Parallelism and Shell Commands
-``````````````````````````````
+.. code-block:: bash
-Arbitrary example.
+ $ ansible atlanta -a "/sbin/reboot"
-Let's use Ansible's command line tool to reboot all web servers in Atlanta, 10 at a time. First, let's
-set up SSH-agent so it can remember our credentials::
+By default Ansible uses only 5 simultaneous processes. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. To reboot the [atlanta] servers with 10 parallel forks:
- $ ssh-agent bash
- $ ssh-add ~/.ssh/id_rsa
-
-If you don't want to use ssh-agent and want to instead SSH with a
-password instead of keys, you can with ``--ask-pass`` (``-k``), but
-it's much better to just use ssh-agent.
-
-Now to run the command on all servers in a group, in this case,
-*atlanta*, in 10 parallel forks::
+.. code-block:: bash
$ ansible atlanta -a "/sbin/reboot" -f 10
-/usr/bin/ansible will default to running from your user account. If you do not like this
-behavior, pass in "-u username". If you want to run commands as a different user, it looks like this::
+/usr/bin/ansible will default to running from your user account. To connect as a different user:
- $ ansible atlanta -a "/usr/bin/foo" -u username
+.. code-block:: bash
-Often you'll not want to just do things from your user account. If you want to run commands through privilege escalation::
+ $ ansible atlanta -a "/sbin/reboot" -f 10 -u username
- $ ansible atlanta -a "/usr/bin/foo" -u username --become [--ask-become-pass]
+Rebooting probably requires privilege escalation. You can connect to the server as ``username`` and run the command as the ``root`` user by using the :ref:`become <become>` keyword:
-Use ``--ask-become-pass`` (``-K``) if you are not using a passwordless privilege escalation method (sudo/su/pfexec/doas/etc).
-This will interactively prompt you for the password to use.
-Use of a passwordless setup makes things easier to automate, but it's not required.
+.. code-block:: bash
-It is also possible to become a user other than root using
-``--become-user``::
+ $ ansible atlanta -a "/sbin/reboot" -f 10 -u username --become [--ask-become-pass]
- $ ansible atlanta -a "/usr/bin/foo" -u username --become --become-user otheruser [--ask-become-pass]
-
-.. note::
-
- Rarely, some users have security rules where they constrain their sudo/pbrun/doas environment to running specific command paths only.
- This does not work with Ansible's no-bootstrapping philosophy and hundreds of different modules.
- If doing this, use Ansible from a special account that does not have this constraint.
- One way of doing this without sharing access to unauthorized users would be gating Ansible with :ref:`ansible_tower`, which
- can hold on to an SSH credential and let members of certain organizations use it on their behalf without having direct access.
-
-Ok, so those are basics. If you didn't read about patterns and groups yet, go back and read :ref:`intro_patterns`.
-
-The ``-f 10`` in the above specifies the usage of 10 simultaneous
-processes to use. You can also set this in :ref:`intro_configuration` to avoid setting it again. The default is actually 5, which
-is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts so feel free
-to crank this up. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will
-take a little longer. Feel free to push this value as high as your system can handle!
-
-You can also select what Ansible "module" you want to run. Normally commands also take a ``-m`` for module name, but
-the default module name is 'command', so we didn't need to
-specify that all of the time. We'll use ``-m`` in later examples to
-run some other modules.
+If you add ``--ask-become-pass`` or ``-K``, Ansible prompts you for the password to use for privilege escalation (sudo/su/pfexec/doas/etc).
.. note::
The :ref:`command module <command_module>` does not support extended shell syntax like piping and
@@ -99,173 +62,135 @@ run some other modules.
syntax, use the `shell` module instead. Read more about the differences on the
:ref:`working_with_modules` page.
-Using the :ref:`shell module <shell_module>` looks like this::
+So far all our examples have used the default 'command' module. To use a different module, pass ``-m`` for module name. For example, to use the :ref:`shell module <shell_module>`:
+
+.. code-block:: bash
$ ansible raleigh -m shell -a 'echo $TERM'
When running any command with the Ansible *ad hoc* CLI (as opposed to
:ref:`Playbooks <working_with_playbooks>`), pay particular attention to shell quoting rules, so
-the local shell doesn't eat a variable before it gets passed to Ansible.
+the local shell retains the variable and passes it to Ansible.
For example, using double rather than single quotes in the above example would
evaluate the variable on the box you were on.
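As a quick contrast (a sketch reusing the same example), the two quoting styles behave differently:

.. code-block:: bash

   $ ansible raleigh -m shell -a 'echo $TERM'   # $TERM is expanded on each managed node
   $ ansible raleigh -m shell -a "echo $TERM"   # $TERM is expanded by your local shell first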
-So far we've been demoing simple command execution, but most Ansible modules are not simple imperative scripts. Instead, they use a declarative model,
-calculating and executing the actions required to reach a specified final state.
-Furthermore, they achieve a form of idempotence by checking the current state
-before they begin, and if the current state matches the specified final state,
-doing nothing.
-However, we also recognize that running arbitrary commands can be valuable, so Ansible easily supports both.
-
.. _file_transfer:
-File Transfer
-`````````````
+Managing files
+--------------
-Here's another use case for the `/usr/bin/ansible` command line. Ansible can SCP lots of files to multiple machines in parallel.
+An ad-hoc task can harness the power of Ansible and SCP to transfer many files to multiple machines in parallel. To transfer a file directly to all servers in the [atlanta] group:
-To transfer a file directly to many servers::
+.. code-block:: bash
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
-If you use playbooks, you can also take advantage of the ``template`` module,
-which takes this another step further. (See module and playbook documentation).
+If you plan to repeat a task like this, use the :ref:`template<template_module>` module in a playbook.
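+
+You can also run the ``template`` module in an ad-hoc task; for example, assuming a hypothetical Jinja2 template at ``/srv/templates/hosts.j2``:
+
+.. code-block:: bash
+
+ $ ansible atlanta -m template -a "src=/srv/templates/hosts.j2 dest=/tmp/hosts"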
+
+The :ref:`file<file_module>` module allows changing ownership and permissions on files. These
+same options can be passed directly to the ``copy`` module as well:
-The ``file`` module allows changing ownership and permissions on files. These
-same options can be passed directly to the ``copy`` module as well::
+.. code-block:: bash
$ ansible webservers -m file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
-The ``file`` module can also create directories, similar to ``mkdir -p``::
+The ``file`` module can also create directories, similar to ``mkdir -p``:
+
+.. code-block:: bash
$ ansible webservers -m file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
-As well as delete directories (recursively) and delete files::
+As well as delete directories (recursively) and delete files:
+
+.. code-block:: bash
$ ansible webservers -m file -a "dest=/path/to/c state=absent"
.. _managing_packages:
-Managing Packages
-`````````````````
+Managing packages
+-----------------
-There are modules available for yum and apt. Here are some examples
-with yum.
+You might also use an ad-hoc task to install, update, or remove packages on managed nodes using a package management module like yum. To ensure a package is installed without updating it:
-Ensure a package is installed, but don't update it::
+.. code-block:: bash
$ ansible webservers -m yum -a "name=acme state=present"
-Ensure a package is installed to a specific version::
+To ensure a specific version of a package is installed:
+
+.. code-block:: bash
$ ansible webservers -m yum -a "name=acme-1.5 state=present"
-Ensure a package is at the latest version::
+To ensure a package is at the latest version:
+
+.. code-block:: bash
$ ansible webservers -m yum -a "name=acme state=latest"
-Ensure a package is not installed::
+To ensure a package is not installed:
+
+.. code-block:: bash
$ ansible webservers -m yum -a "name=acme state=absent"
-Ansible has modules for managing packages under many platforms. If there isn't
-a module for your package manager, you can install packages using the
-command module or (better!) contribute a module for your package manager.
-Stop by the mailing list for info/details.
+Ansible has modules for managing packages under many platforms. If there is no module for your package manager, you can install packages using the command module or create a module for your package manager.
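+
+For example, with a hypothetical package manager called ``somepkg`` that has no dedicated module, you could fall back to the ``command`` module:
+
+.. code-block:: bash
+
+ $ ansible webservers -m command -a "somepkg install -y acme"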
.. _users_and_groups:
-Users and Groups
-````````````````
+Managing users and groups
+-------------------------
-The 'user' module allows easy creation and manipulation of
-existing user accounts, as well as removal of user accounts that may
-exist::
+You can create, manage, and remove user accounts on your managed nodes with ad-hoc tasks:
+
+.. code-block:: bash
$ ansible all -m user -a "name=foo password=<crypted password here>"
$ ansible all -m user -a "name=foo state=absent"
-See the :ref:`Module Docs <modules_by_category>` section for details on all of the available options, including
+See the :ref:`user <user_module>` module documentation for details on all of the available options, including
how to manipulate groups and group membership.
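+
+For example, this sketch creates a group and adds an existing user to it (the ``developers`` group name is just an illustration):
+
+.. code-block:: bash
+
+ $ ansible all -m group -a "name=developers state=present"
+ $ ansible all -m user -a "name=foo groups=developers append=yes"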
-.. _from_source_control:
-
-Deploying From Source Control
-`````````````````````````````
-
-Deploy your webapp straight from git::
-
- $ ansible webservers -m git -a "repo=https://foo.example.org/repo.git dest=/srv/myapp version=HEAD"
-
-Since Ansible modules can notify change handlers it is possible to
-tell Ansible to run specific tasks when the code is updated, such as
-deploying Perl/Python/PHP/Ruby directly from git and then restarting
-apache.
-
.. _managing_services:
-Managing Services
-`````````````````
-
-Ensure a service is started on all webservers::
+Managing services
+-----------------
- $ ansible webservers -m service -a "name=httpd state=started"
-
-Alternatively, restart a service on all webservers::
-
- $ ansible webservers -m service -a "name=httpd state=restarted"
+Ensure a service is started on all webservers:
-Ensure a service is stopped::
+.. code-block:: bash
- $ ansible webservers -m service -a "name=httpd state=stopped"
-
-.. _time_limited_background_operations:
-
-Time Limited Background Operations
-``````````````````````````````````
-
-Long running operations can be run in the background, and it is possible to
-check their status later. For example, to execute ``long_running_operation``
-asynchronously in the background, with a timeout of 3600 seconds (``-B``),
-and without polling (``-P``)::
-
- $ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
+ $ ansible webservers -m service -a "name=httpd state=started"
-If you do decide you want to check on the job status later, you can use the
-async_status module, passing it the job id that was returned when you ran
-the original job in the background::
+Alternatively, restart a service on all webservers:
- $ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
+.. code-block:: bash
-Polling is built-in and looks like this::
+ $ ansible webservers -m service -a "name=httpd state=restarted"
- $ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
+Ensure a service is stopped:
-The above example says "run for 30 minutes max (``-B`` 30*60=1800),
-poll for status (``-P``) every 60 seconds".
+.. code-block:: bash
-Poll mode is smart so all jobs will be started before polling will begin on any machine.
-Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started
-very quickly. After the time limit (in seconds) runs out (``-B``), the process on
-the remote nodes will be terminated.
+ $ ansible webservers -m service -a "name=httpd state=stopped"
-Typically you'll only be backgrounding long-running
-shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :ref:`Playbooks <working_with_playbooks>` also support polling, and have a simplified syntax for this.
+.. _gathering_facts:
-.. _checking_facts:
+Gathering facts
+---------------
-Gathering Facts
-```````````````
+Facts represent discovered variables about a system. You can use facts to implement conditional execution of tasks, or simply to get ad-hoc information about your systems. To see all facts:
-Facts are described in the playbooks section and represent discovered variables about a
-system. These can be used to implement conditional execution of tasks but also just to get ad-hoc information about your system. You can see all facts via::
+.. code-block:: bash
$ ansible all -m setup
-It's also possible to filter this output to just export certain facts, see the "setup" module documentation for details.
+You can also filter this output to display only certain facts; see the :ref:`setup <setup_module>` module documentation for details.
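+
+For example, to display only the memory-related facts (one possible filter pattern):
+
+.. code-block:: bash
+
+ $ ansible all -m setup -a "filter=ansible_*_mb"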
-Read more about facts at :ref:`playbooks_variables` once you're ready to read up on :ref:`Playbooks <playbooks_intro>`.
+Now that you understand the basic elements of Ansible execution, you are ready to learn to automate repetitive tasks using :ref:`Ansible Playbooks <playbooks_intro>`.
.. seealso::
diff --git a/docs/docsite/rst/user_guide/intro_bsd.rst b/docs/docsite/rst/user_guide/intro_bsd.rst
index d93cc35047..106acd536a 100644
--- a/docs/docsite/rst/user_guide/intro_bsd.rst
+++ b/docs/docsite/rst/user_guide/intro_bsd.rst
@@ -1,59 +1,66 @@
-BSD Support
-===========
+.. _working_with_bsd:
-.. contents:: Topics
+Ansible and BSD
+===============
-.. _working_with_bsd:
+Managing BSD machines is different from managing Linux/Unix machines. If you have managed nodes running BSD, review these topics.
-Working with BSD
-````````````````
+.. contents::
+ :local:
-Ansible manages Linux/Unix machines using SSH by default. BSD machines are no exception, however this document covers some of the differences you may encounter with Ansible when working with BSD variants.
+Connecting to BSD nodes
+-----------------------
-Typically, Ansible will try to default to using OpenSSH as a connection method. This is suitable when using SSH keys to authenticate, but when using SSH passwords, Ansible relies on sshpass. Most
-versions of sshpass do not deal particularly well with BSD login prompts, so when using SSH passwords against BSD machines, it is recommended to change the transport method to paramiko. You can do this in ansible.cfg globally or you can set it as an inventory/group/host variable. For example::
+Ansible connects to managed nodes using OpenSSH by default. This works on BSD if you use SSH keys for authentication. However, if you use SSH passwords for authentication, Ansible relies on sshpass. Most
+versions of sshpass do not deal well with BSD login prompts, so when using SSH passwords against BSD machines, use ``paramiko`` to connect instead of OpenSSH. You can do this in ansible.cfg globally or you can set it as an inventory/group/host variable. For example:
+
+.. code-block:: text
[freebsd]
mybsdhost1 ansible_connection=paramiko
-Ansible is agentless by default, however certain software is required on the target machines.
-
-Operating without Python is possible with the ``raw`` module. Although this module can be used to bootstrap Ansible and install Python on BSD variants (see below), it is very limited and the use of Python is required to make full use of Ansible's features.
-
.. _bootstrap_bsd:
Bootstrapping BSD
-`````````````````
+-----------------
-As mentioned above, you can bootstrap Ansible with the ``raw`` module and remotely install Python on targets. The following example installs Python 2.7 which includes the json library required for full functionality of Ansible.
-On your control machine you can execute the following for most versions of FreeBSD::
+Ansible is agentless by default; however, it requires Python on managed nodes. Only the :ref:`raw <raw_module>` module operates without Python. Although the ``raw`` module can be used to bootstrap Ansible and install Python on BSD variants (see below), it is very limited, and Python is required to make full use of Ansible's features.
- ansible -m raw -a "pkg install -y python27" mybsdhost1
+The following example installs Python 2.7 which includes the json library required for full functionality of Ansible.
+On your control machine you can execute the following for most versions of FreeBSD:
-Or for most versions of OpenBSD::
+.. code-block:: bash
- ansible -m raw -a "pkg_add -z python-2.7"
+ ansible -m raw -a "pkg install -y python27" mybsdhost1
+
+Or for most versions of OpenBSD:
+.. code-block:: bash
+ ansible -m raw -a "pkg_add -z python-2.7"
Once this is done you can now use other Ansible modules apart from the ``raw`` module.
.. note::
This example demonstrates using pkg on FreeBSD and pkg_add on OpenBSD; however, you should be able to substitute the appropriate package tool for your BSD. The package name may also differ. Refer to the package list or documentation of the BSD variant you are using for the exact Python package name you intend to install.
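+
+For example, on a NetBSD-style system that uses ``pkgin``, an equivalent (unverified) bootstrap command might look like this:
+
+.. code-block:: bash
+
+ ansible -m raw -a "pkgin -y install python27" mybsdhost1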
-.. _python_location:
+.. _BSD_python_location:
Setting the Python interpreter
-``````````````````````````````
+------------------------------
-To support a variety of Unix/Linux operating systems and distributions, Ansible cannot always rely on the existing environment or ``env`` variables to locate the correct Python binary. By default, modules point at ``/usr/bin/python`` as this is the most common location. On BSD variants, this path may differ, so it is advised to inform Ansible of the binary's location, through the ``ansible_python_interpreter`` inventory variable. For example::
+To support a variety of Unix/Linux operating systems and distributions, Ansible cannot always rely on the existing environment or ``env`` variables to locate the correct Python binary. By default, modules point at ``/usr/bin/python`` as this is the most common location. On BSD variants, this path may differ, so it is advised to inform Ansible of the binary's location, through the ``ansible_python_interpreter`` inventory variable. For example:
+
+.. code-block:: text
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python2.7
[openbsd:vars]
ansible_python_interpreter=/usr/local/bin/python2.7
-If you use additional plugins beyond those bundled with Ansible, you can set similar variables for ``bash``, ``perl`` or ``ruby``, depending on how the plugin is written. For example::
+If you use additional plugins beyond those bundled with Ansible, you can set similar variables for ``bash``, ``perl`` or ``ruby``, depending on how the plugin is written. For example:
+
+.. code-block:: text
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python
@@ -61,28 +68,28 @@ If you use additional plugins beyond those bundled with Ansible, you can set sim
Which modules are available?
-````````````````````````````
+----------------------------
The majority of the core Ansible modules are written for a combination of Linux/Unix machines and other generic services, so most should function well on the BSDs with the obvious exception of those that are aimed at Linux-only technologies (such as LVG).
-Using BSD as the control machine
-````````````````````````````````
+Using BSD as the control node
+-----------------------------
Using BSD as the control node is as simple as installing the Ansible package for your BSD variant or following the ``pip`` or 'from source' instructions.
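+
+For example, to install Ansible from PyPI with ``pip``:
+
+.. code-block:: bash
+
+ pip install ansible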
.. _bsd_facts:
-BSD Facts
-`````````
+BSD facts
+---------
Ansible gathers facts from the BSDs in a similar manner to Linux machines, but since the data, names and structures can vary for network, disks and other devices, one should expect the output to be slightly different yet still familiar to a BSD administrator.
.. _bsd_contributions:
-BSD Efforts and Contributions
-`````````````````````````````
+BSD efforts and contributions
+-----------------------------
-BSD support is important to us at Ansible. Even though the majority of our contributors use and target Linux we have an active BSD community and strive to be as BSD friendly as possible.
+BSD support is important to us at Ansible. Even though the majority of our contributors use and target Linux we have an active BSD community and strive to be as BSD-friendly as possible.
Please feel free to report any issues or incompatibilities you discover with BSD; pull requests with an included fix are also welcome!
.. seealso::
@@ -97,4 +104,3 @@ Please feel free to report any issues or incompatibilities you discover with BSD
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
-
diff --git a/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst b/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst
index 9004079dd4..e214ce70d6 100644
--- a/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst
+++ b/docs/docsite/rst/user_guide/intro_dynamic_inventory.rst
@@ -2,25 +2,25 @@
.. _dynamic_inventory:
******************************
-Working With Dynamic Inventory
+Working with dynamic inventory
******************************
-.. contents:: Topics
+.. contents::
:local:
If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in :ref:`inventory` will not serve your needs. You may need to track hosts from multiple sources: cloud providers, LDAP, `Cobbler <https://cobbler.github.io>`_, and/or enterprise CMDB systems.
Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: :ref:`inventory_plugins` and `inventory scripts <https://github.com/ansible/ansible/tree/devel/contrib/inventory>`_.
-Inventory plugins take advantage of the most recent updates to Ansible's core code. We recommend plugins over scripts for dynamic inventory. You can :ref:`write your own plugin <developing_inventory>` to connect to additional dynamic inventory sources.
+Inventory plugins take advantage of the most recent updates to the Ansible core code. We recommend plugins over scripts for dynamic inventory. You can :ref:`write your own plugin <developing_inventory>` to connect to additional dynamic inventory sources.
You can still use inventory scripts if you choose. When we implemented inventory plugins, we ensured backwards compatibility via the script inventory plugin. The examples below illustrate how to use inventory scripts.
-If you'd like a GUI for handling dynamic inventory, the :ref:`ansible_tower` inventory database syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor. With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs.
+If you would like a GUI for handling dynamic inventory, the :ref:`ansible_tower` inventory database syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor. With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs.
.. _cobbler_example:
-Inventory Script Example: Cobbler
+Inventory script example: Cobbler
=================================
Ansible integrates seamlessly with `Cobbler <https://cobbler.github.io>`_, a Linux installation server originally written by Michael DeHaan and now led by James Cammarata, who works for Ansible.
@@ -28,10 +28,11 @@ Ansible integrates seamlessly with `Cobbler <https://cobbler.github.io>`_, a Lin
While primarily used to kickoff OS installations and manage DHCP and DNS, Cobbler has a generic
layer that can represent data for multiple configuration management systems (even at the same time) and serve as a 'lightweight CMDB'.
-To tie Ansible's inventory to Cobbler, copy `this script <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/cobbler.py>`_ to ``/etc/ansible`` and ``chmod +x`` the file. Run ``cobblerd`` any time you use Ansible and use the ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``) to communicate with Cobbler using Cobbler's XMLRPC API.
+To tie your Ansible inventory to Cobbler, copy `this script <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/cobbler.py>`_ to ``/etc/ansible`` and ``chmod +x`` the file. Run ``cobblerd`` any time you use Ansible and use the ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``) to communicate with Cobbler using Cobbler's XMLRPC API.
-Add a ``cobbler.ini`` file in ``/etc/ansible`` so Ansible knows where the Cobbler server is and some cache improvements can be used. For example::
+Add a ``cobbler.ini`` file in ``/etc/ansible`` so Ansible knows where the Cobbler server is and some cache improvements can be used. For example:
+.. code-block:: text
[cobbler]
@@ -54,21 +55,27 @@ Add a ``cobbler.ini`` file in ``/etc/ansible`` so Ansible knows where the Cobble
First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet.
-Let's explore what this does. In Cobbler, assume a scenario somewhat like the following::
+Let's explore what this does. In Cobbler, assume a scenario somewhat like the following:
+
+.. code-block:: bash
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
-In the example above, the system 'foo.example.com' will be addressable by ansible directly, but will also be addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, we'll try to contact system foo over 'foo.example.com', only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but "ansible 'foo*'" would, because the system DNS name starts with 'foo'.
+In the example above, the system 'foo.example.com' is addressable by Ansible directly, but is also addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, it contacts system foo only over 'foo.example.com', never just 'foo'. Similarly, if you try "ansible foo" it would not find the system... but "ansible 'foo*'" would, because the system DNS name starts with 'foo'.
+
+The script provides more than host and group info. When the 'setup' module runs (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' are all auto-populated in the templates:
-The script doesn't just provide host and group info. In addition, as a bonus, when the 'setup' module is run (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' will all be auto-populated in the templates::
+.. code-block:: text
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
-Which could be executed just like this::
+Which could be executed just like this:
+
+.. code-block:: bash
ansible webserver -m setup
ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"
@@ -79,15 +86,21 @@ Which could be executed just like this::
normal in Ansible, but variables from the external inventory script
will override any that have the same name.
-So, with the template above (``motd.j2``), this would result in the following data being written to ``/etc/motd`` for system 'foo'::
+So, with the template above (``motd.j2``), this would result in the following data being written to ``/etc/motd`` for system 'foo':
+
+.. code-block:: text
Welcome, I am templated with a value of a=2, b=3, and c=4
-And on system 'bar' (bar.example.com)::
+And on system 'bar' (bar.example.com):
+
+.. code-block:: text
Welcome, I am templated with a value of a=2, b=3, and c=5
-And technically, though there is no major good reason to do it, this also works too::
+Technically, though there is no compelling reason to do it, this also works:
+
+.. code-block:: bash
ansible webserver -m shell -a "echo {{ a }}"
@@ -95,31 +108,38 @@ So in other words, you can use those variables in arguments/actions as well.
.. _aws_example:
-Inventory Script Example: AWS EC2
+Inventory script example: AWS EC2
=================================
If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the `EC2 external inventory <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py>`_ script.
-You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script after
-marking it executable::
+You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script after marking it executable:
+
+.. code-block:: bash
ansible -i ec2.py -u ubuntu us-east-1d -m ping
-The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You will also need to copy the `ec2.ini <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
+The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You must also copy the `ec2.ini <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
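+
+One possible sequence of commands for this second option (adjust paths and privileges to suit your environment):
+
+.. code-block:: bash
+
+ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
+ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
+ chmod +x ec2.py
+ sudo cp ec2.py /etc/ansible/hosts
+ sudo cp ec2.ini /etc/ansible/ec2.ini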
+
+To make a successful API call to AWS, you must configure Boto (the Python interface to AWS). You can do this in `several ways <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_, but the simplest is to export two environment variables:
-To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a `variety of methods <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_ available, but the simplest is just to export two environment variables::
+.. code-block:: bash
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
-You can test the script by itself to make sure your config is correct::
+You can test the script by itself to make sure your config is correct:
+
+.. code-block:: bash
cd contrib/inventory
./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
-If you use Boto profiles to manage multiple AWS accounts, you can pass ``--profile PROFILE`` name to the ``ec2.py`` script. An example profile might be::
+If you use Boto profiles to manage multiple AWS accounts, you can pass a profile name with ``--profile PROFILE`` to the ``ec2.py`` script. An example profile might be:
+
+.. code-block:: text
[profile dev]
aws_access_key_id = <dev access key>
@@ -134,7 +154,9 @@ You can also use the ``AWS_PROFILE`` variable - for example: ``AWS_PROFILE=prod
Since each region requires its own API call, if you are only using a small set of regions, you can edit the ``ec2.ini`` file and comment out the regions you are not using.
-There are other config options in ``ec2.ini``, including cache control and destination variables. By default, the ``ec2.ini`` file is configured for **all Amazon cloud services**, but you can comment out any features that aren't applicable. For example, if you don't have ``RDS`` or ``elasticache``, you can set them to ``False`` ::
+There are other config options in ``ec2.ini``, including cache control and destination variables. By default, the ``ec2.ini`` file is configured for **all Amazon cloud services**, but you can comment out any features that aren't applicable. For example, if you don't have ``RDS`` or ``elasticache``, you can set them to ``False``:
+
+.. code-block:: text
[ec2]
...
@@ -226,19 +248,23 @@ When the Ansible is interacting with a specific server, the EC2 inventory script
Both ``ec2_security_group_ids`` and ``ec2_security_group_names`` are comma-separated lists of all security groups. Each EC2 tag is a variable in the format ``ec2_tag_KEY``.
-To see the complete list of variables available for an instance, run the script by itself::
+To see the complete list of variables available for an instance, run the script by itself:
+
+.. code-block:: bash
cd contrib/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in ec2.ini. To
-explicitly clear the cache, you can run the ec2.py script with the ``--refresh-cache`` parameter::
+explicitly clear the cache, you can run the ec2.py script with the ``--refresh-cache`` parameter:
+
+.. code-block:: bash
./ec2.py --refresh-cache
.. _openstack_example:
-Inventory Script Example: OpenStack
+Inventory script example: OpenStack
===================================
If you use an OpenStack-based cloud, instead of manually maintaining your own inventory file, you can use the ``openstack_inventory.py`` dynamic inventory to pull information about your compute instances directly from OpenStack.
@@ -258,7 +284,9 @@ Download the latest version of the OpenStack dynamic inventory script and make i
.. note::
Do not name it `openstack.py`. This name will conflict with imports from openstacksdk.
-Source an OpenStack RC file::
+Source an OpenStack RC file:
+
+.. code-block:: bash
source openstack.rc
@@ -278,26 +306,34 @@ You can test the OpenStack dynamic inventory script manually to confirm it is wo
After a few moments you should see some JSON output with information about your compute instances.
-Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack_inventory.py` script as an inventory file, as illustrated below::
+Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack_inventory.py` script as an inventory file, as illustrated below:
+
+.. code-block:: bash
ansible -i openstack_inventory.py all -m ping
Implicit use of OpenStack inventory script
------------------------------------------
-Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`::
+Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`:
+
+.. code-block:: bash
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack_inventory.py
chmod +x openstack_inventory.py
sudo cp openstack_inventory.py /etc/ansible/hosts
-Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`::
+Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`:
+
+.. code-block:: bash
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.yml
vi openstack.yml
sudo cp openstack.yml /etc/ansible/
-You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
+You can test the OpenStack dynamic inventory script manually to confirm it is working as expected:
+
+.. code-block:: bash
/etc/ansible/hosts --list
@@ -306,7 +342,9 @@ After a few moments you should see some JSON output with information about your
Refreshing the cache
--------------------
-Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack_inventory.py (or hosts) script with the ``--refresh`` parameter::
+Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack_inventory.py (or hosts) script with the ``--refresh`` parameter:
+
+.. code-block:: bash
./openstack_inventory.py --refresh --list
@@ -319,14 +357,16 @@ You can find all included inventory scripts in the `contrib/inventory directory
.. _using_multiple_sources:
-Using Inventory Directories and Multiple Inventory Sources
+Using inventory directories and multiple inventory sources
==========================================================
If the location given to ``-i`` in Ansible is a directory (or as so configured in ``ansible.cfg``), Ansible can use multiple inventory sources
at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant
hybrid cloud!
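+
+For example, you might point ``-i`` at a hypothetical directory named ``inventory/`` that mixes a static hosts file with an executable dynamic inventory script:
+
+.. code-block:: bash
+
+ ansible all -i inventory/ -m ping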
-In an inventory directory, executable files will be treated as dynamic inventory sources and most other files as static sources. Files which end with any of the following will be ignored::
+In an inventory directory, executable files will be treated as dynamic inventory sources and most other files as static sources. Files which end with any of the following will be ignored:
+
+.. code-block:: text
~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo
@@ -336,13 +376,15 @@ Any ``group_vars`` and ``host_vars`` subdirectories in an inventory directory wi
.. _static_groups_of_dynamic:
-Static Groups of Dynamic Groups
+Static groups of dynamic groups
===============================
When defining groups of groups in the static inventory file, the child groups
must also be defined in the static inventory file, or ansible will return an
error. If you want to define a static group of dynamic child groups, define
-the dynamic groups as empty in the static inventory file. For example::
+the dynamic groups as empty in the static inventory file. For example:
+
+.. code-block:: text
[tag_Name_staging_foo]
@@ -353,7 +395,6 @@ the dynamic groups as empty in the static inventory file. For example::
tag_Name_staging_bar
-
.. seealso::
:ref:`intro_inventory`
diff --git a/docs/docsite/rst/user_guide/intro_getting_started.rst b/docs/docsite/rst/user_guide/intro_getting_started.rst
index e3755657fe..b7a986265c 100644
--- a/docs/docsite/rst/user_guide/intro_getting_started.rst
+++ b/docs/docsite/rst/user_guide/intro_getting_started.rst
@@ -1,88 +1,95 @@
.. _intro_getting_started:
+***************
Getting Started
-===============
+***************
-.. contents::
- :local:
-
-.. _gs_about:
-
-Foreword
-````````
-
-Now that you've read the :ref:`installation guide<installation_guide>` and installed Ansible, you can get
-started with some ad-hoc commands.
-
-What we are showing first are not the powerful configuration/deployment/orchestration features of Ansible.
-These features are handled by playbooks which are covered in a separate section.
+Now that you have read the :ref:`installation guide<installation_guide>` and installed Ansible on a control node, you are ready to learn how Ansible works. A basic Ansible command or playbook:
+ * selects machines to execute against from inventory
+ * connects to those machines (or network devices, or other managed nodes), usually over SSH
+ * copies one or more modules to the remote machines and starts execution there
-This section is about how to initially get Ansible running. Once you understand these concepts, read :ref:`intro_adhoc` for some more detail, and then you'll be ready to begin learning about playbooks and explore the most interesting parts!
+Ansible can do much more, but you should understand the most common use case before exploring all the powerful configuration, deployment, and orchestration features of Ansible. This page illustrates the basic process with a simple inventory and an ad-hoc command. Once you understand how Ansible works, you can read more details about :ref:`ad-hoc commands<intro_adhoc>`, organize your infrastructure with :ref:`inventory<intro_inventory>`, and harness the full power of Ansible with :ref:`playbooks<playbooks_intro>`.
-.. _remote_connection_information:
-
-Remote Connection Information
-`````````````````````````````
+.. contents::
+ :local:
-Before we get started, it is important to understand how Ansible communicates with remote
-machines over the `SSH protocol <https://www.ssh.com/ssh/protocol/>`_.
+Selecting machines from inventory
+=================================
-By default, Ansible will try to use native
-OpenSSH for remote communication when possible. This enables ControlPersist (a performance feature), Kerberos, and options in ``~/.ssh/config`` such as Jump Host setup. However, when using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives such as CentOS), the version of OpenSSH may be too old to support ControlPersist. On these operating systems, Ansible will fallback into using a high-quality Python implementation of
-OpenSSH called 'paramiko'. If you wish to use features like Kerberized SSH and more, consider using Fedora, macOS, or Ubuntu as your control machine until a newer version of OpenSSH is available for your platform.
+Ansible reads information about which machines you want to manage from your inventory. Although you can pass an IP address to an ad-hoc command, you need inventory to take advantage of the full flexibility and repeatability of Ansible.
-Occasionally you will encounter a device that does not support SFTP. This is rare, but should it occur, you can switch to SCP mode in :ref:`intro_configuration`.
+Action: create a basic inventory
+--------------------------------
+For this basic inventory, edit (or create) ``/etc/ansible/hosts`` and add a few remote systems to it. For this example, use either IP addresses or FQDNs:
-When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option ``--ask-pass``. If using sudo features and when sudo requires a password, also supply ``--ask-become-pass`` (previously ``--ask-sudo-pass`` which has been deprecated).
+.. code-block:: text
-.. include:: shared_snippets/SSH_password_prompt.txt
+ 192.0.2.50
+ aserver.example.org
+ bserver.example.org
-While it may be common sense, it is worth sharing: Any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.
+Beyond the basics
+-----------------
+Your inventory can store much more than IPs and FQDNs. You can create :ref:`aliases<inventory_aliases>`, set variable values for a single host with :ref:`host vars<host_variables>`, or set variable values for multiple hosts with :ref:`group vars<group_variables>`.
-Ansible is not limited to remote connections over SSH. The transports are pluggable, and there are options for managing things locally, as well as managing chroot, lxc, and jail containers. A mode called 'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration directives from a central repository.
+.. _remote_connection_information:
-.. _your_first_commands:
+Connecting to remote nodes
+==========================
-Your first commands
-```````````````````
+Ansible communicates with remote machines over the `SSH protocol <https://www.ssh.com/ssh/protocol/>`_. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
-Now that you've installed Ansible, try some basics.
+Action: check your SSH connections
+----------------------------------
+Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the ``authorized_keys`` file on those systems.
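+
+For example, assuming a remote user named ``username`` and one of the example hosts above, you could copy your key and verify the connection like this:
+
+.. code-block:: bash
+
+ $ ssh-copy-id username@aserver.example.org
+ $ ssh username@aserver.example.org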
-Edit (or create) ``/etc/ansible/hosts`` and put one or more remote systems in it. Your
-public SSH key should be located in ``authorized_keys`` on those systems::
+Beyond the basics
+-----------------
+You can override the default remote user name in several ways, including:
+* passing the ``-u`` parameter at the command line
+* setting user information in your inventory file
+* setting user information in your configuration file
+* setting environment variables
- 192.0.2.50
- aserver.example.org
- bserver.example.org
+See :ref:`general_precedence_rules` for details on the (sometimes unintuitive) precedence of each method of passing user information. You can read more about connections in :ref:`connections`.
+Copying and executing modules
+=============================
-This is an inventory file, which is also explained in greater depth here: :ref:`intro_inventory`.
+Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.
-We assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can
-do:
+Action: run your first Ansible commands
+---------------------------------------
+Use the ping module to ping all the nodes in your inventory:
.. code-block:: bash
- $ ssh-agent bash
- $ ssh-add ~/.ssh/id_rsa
+ $ ansible all -m ping
-Depending on your setup, you may wish to use Ansible's ``--private-key`` command line option to specify a pem file instead. You can also add the private key file:
+Now run a live command on all of your nodes:
- $ ssh-agent bash
- $ ssh-add ~/.ssh/keypair.pem
+.. code-block:: bash
-Another way to add private key files without using ssh-agent is using ``ansible_ssh_private_key_file`` in an inventory file as explained here: :ref:`intro_inventory`.
+ $ ansible all -a "/bin/echo hello"
-Now ping all your nodes:
+You should see output for each host in your inventory, similar to this:
-.. code-block:: bash
+.. code-block:: ansible-output
- $ ansible all -m ping
+ aserver.example.org | SUCCESS => {
+ "ansible_facts": {
+ "discovered_interpreter_python": "/usr/bin/python"
+ },
+ "changed": false,
+ "ping": "pong"
+ }
-Ansible will attempt to remote connect to the machines using your current user name, just like SSH would.
-You can override the default remote user name in several ways, including passing the ``-u`` parameter at the command line, setting user information in your inventory file, setting user information in your configuration file, and setting environment variables. See :ref:`general_precedence_rules` for details on the (sometimes unintuitive) precedence of each method of passing user information.
+Beyond the basics
+-----------------
+By default Ansible uses SFTP to transfer files. If the machine or device you want to manage does not support SFTP, you can switch to SCP mode in :ref:`intro_configuration`. The files are placed in a temporary directory and executed from there.
-If you would like to access sudo mode, there are also flags to do that:
+If you need privilege escalation (sudo and similar) to run a command, pass the ``become`` flags:
.. code-block:: bash
@@ -93,63 +100,17 @@ If you would like to access sudo mode, there are also flags to do that:
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --become --become-user batman
-The sudo implementation (and other methods of changing the current user) can be modified in Ansible configuration
-if you happen to want to use a sudo replacement. Flags passed to sudo (like -H) can also be set.
+You can read more about privilege escalation in :ref:`become`.
-Now run a live command on all of your nodes:
-
-.. code-block:: bash
-
- $ ansible all -a "/bin/echo hello"
+Congratulations! You have contacted your nodes using Ansible. You used a basic inventory file and an ad-hoc command to direct Ansible to connect to specific remote nodes, copy a module file there, execute it, and return output. You have a fully working infrastructure.
-Congratulations! You have contacted your nodes with Ansible. You have a fully working infrastructure.
+Next steps
+==========
Next you can read about more real-world cases in :ref:`intro_adhoc`,
explore what you can do with different modules, or read about the Ansible
:ref:`working_with_playbooks` language. Ansible is not just about running commands, it
also has powerful configuration management and deployment features.
-Tips
-
-When running commands, you can specify the local server by using "localhost" or "127.0.0.1" for the server name.
-
-Example:
-
-.. code-block:: bash
-
- $ ansible localhost -m ping -e 'ansible_python_interpreter="/usr/bin/env python"'
-
-You can specify localhost explicitly by adding this to your inventory file::
-
- localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
-
-.. _a_note_about_host_key_checking:
-
-Host Key Checking
-`````````````````
-
-Ansible has host key checking enabled by default.
-
-If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.
-
-If you understand the implications and wish to disable this behavior, you can do so by editing ``/etc/ansible/ansible.cfg`` or ``~/.ansible.cfg``::
-
- [defaults]
- host_key_checking = False
-
-Alternatively this can be set by the :envvar:`ANSIBLE_HOST_KEY_CHECKING` environment variable:
-
-.. code-block:: bash
-
- $ export ANSIBLE_HOST_KEY_CHECKING=False
-
-Also note that host key checking in paramiko mode is reasonably slow, therefore switching to 'ssh' is also recommended when using this feature.
-
-.. _a_note_about_logging:
-
-Ansible will log some information about module arguments on the remote system in the remote syslog, unless a task or play is marked with a "no_log: True" attribute. This is explained later.
-
-To enable basic logging on the control machine see :ref:`intro_configuration` document and set the 'log_path' configuration file setting. Enterprise users may also be interested in :ref:`ansible_tower`. Tower provides a very robust database logging feature where it is possible to drill down and see history based on hosts, projects, and particular inventories over time -- explorable both graphically and through a REST API.
-
.. seealso::
:ref:`intro_inventory`
diff --git a/docs/docsite/rst/user_guide/intro_inventory.rst b/docs/docsite/rst/user_guide/intro_inventory.rst
index 74711565a7..6af82b5b92 100644
--- a/docs/docsite/rst/user_guide/intro_inventory.rst
+++ b/docs/docsite/rst/user_guide/intro_inventory.rst
@@ -1,31 +1,27 @@
.. _intro_inventory:
.. _inventory:
-**********************
-Working with Inventory
-**********************
+***************************
+How to build your inventory
+***************************
-.. contents::
- :local:
+Ansible works against multiple managed nodes or "hosts" in your infrastructure at the same time, using a list or group of lists known as inventory. Once your inventory is defined, you use :ref:`patterns <intro_patterns>` to select the hosts or groups you want Ansible to run against.
-Ansible works against multiple systems in your infrastructure at the same time.
-It does this by selecting portions of systems listed in Ansible's inventory,
-which defaults to being saved in the location ``/etc/ansible/hosts``.
-You can specify a different inventory file using the ``-i <path>`` option on the command line.
-
-Not only is this inventory configurable, but you can also use multiple inventory files at the same time and
-pull inventory from dynamic or cloud sources or different formats (YAML, ini, etc), as described in :ref:`intro_dynamic_inventory`.
+The default location for inventory is a file called ``/etc/ansible/hosts``. You can specify a different inventory file at the command line using the ``-i <path>`` option. You can also use multiple inventory files at the same time, and pull inventory from dynamic or cloud sources in different formats (YAML, INI, and so on), as described in :ref:`intro_dynamic_inventory`.
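+
+For example, to target every host in an inventory file at a hypothetical path:
+
+.. code-block:: bash
+
+ ansible all -i /path/to/my_inventory -m ping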
Introduced in version 2.4, Ansible has :ref:`inventory_plugins` to make this flexible and customizable.
+.. contents::
+ :local:
+
.. _inventoryformat:
-Inventory basics: hosts and groups
-==================================
+Inventory basics: formats, hosts, and groups
+============================================
The inventory file can be in one of many formats, depending on the inventory plugins you have.
-For this example, the format for ``/etc/ansible/hosts`` is an INI-like (one of Ansible's defaults) and looks like this:
+The most common formats are INI and YAML. A basic INI ``/etc/ansible/hosts`` might look like this:
-.. code-block:: guess
+.. code-block:: text
mail.example.com
@@ -38,10 +34,10 @@ For this example, the format for ``/etc/ansible/hosts`` is an INI-like (one of A
two.example.com
three.example.com
-The headings in brackets are group names, which are used in classifying systems
-and deciding what systems you are controlling at what times and for what purpose.
+The headings in brackets are group names, which are used in classifying hosts
+and deciding what hosts you are controlling at what times and for what purpose.
-A YAML version would look like:
+Here's that same basic inventory file in YAML format:
.. code-block:: yaml
@@ -59,12 +55,21 @@ A YAML version would look like:
two.example.com:
three.example.com:
+.. _default_groups:
+
+Default groups
+--------------
+
+There are two default groups: ``all`` and ``ungrouped``. The ``all`` group contains every host.
+The ``ungrouped`` group contains all hosts that don't have another group aside from ``all``.
+Every host will always belong to at least 2 groups (``all`` and ``ungrouped`` or ``all`` and some other group). Though ``all`` and ``ungrouped`` are always present, they can be implicit and not appear in group listings like ``group_names``.
+
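+To see which hosts land in these groups, you can list them without executing any module; for example:
+
+.. code-block:: bash
+
+ ansible all --list-hosts
+ ansible ungrouped --list-hosts
+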
.. _host_multiple_groups:
Hosts in multiple groups
------------------------
-You can put systems in more than one group, for instance a server could be both a webserver and in a specific datacenter. For example, you could create groups that track:
+You can (and probably will) put each host in more than one group. For example, a production webserver in a datacenter in Atlanta might be included in groups called [prod], [atlanta], and [webservers]. You can create groups that track:
* What - An application, stack or microservice. (For example, database servers, web servers, etc).
* Where - A datacenter or region, to talk to local DNS, storage, etc. (For example, east, west).
@@ -108,7 +113,7 @@ Extending the previous YAML inventory to include what, when, and where would loo
You can see that ``one.example.com`` exists in the ``dbservers``, ``east``, and ``prod`` groups.
-You could also use nested groups to simplify ``prod`` and ``test`` in this inventory, for the same result:
+You can also use nested groups to simplify ``prod`` and ``test`` in this inventory, for the same result:
.. code-block:: yaml
@@ -143,55 +148,14 @@ You could also use nested groups to simplify ``prod`` and ``test`` in this inven
You can find more examples on how to organize your inventories and group your hosts in :ref:`inventory_setup_examples`.
-If you do have systems in multiple groups, note that variables will come from all of the groups they are a member of. Variable precedence is detailed in :ref:`ansible_variable_precedence`.
-
-
-Hosts and non-standard ports
------------------------------
-If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon.
-Ports listed in your SSH config file won't be used with the `paramiko` connection but will be used with the `openssh` connection.
-
-To make things explicit, it is suggested that you set them if things are not running on the default port:
-
-.. code-block:: guess
-
- badwolf.example.com:5309
-
-Suppose you have just static IPs and want to set up some aliases that live in your host file, or you are connecting through tunnels.
-You can also describe hosts via variables:
-
-In INI:
-
-.. code-block:: guess
-
- jumper ansible_port=5555 ansible_host=192.0.2.50
-
-In YAML:
-
-.. code-block:: yaml
-
- ...
- hosts:
- jumper:
- ansible_port: 5555
- ansible_host: 192.0.2.50
+Adding ranges of hosts
+----------------------
-In the above example, trying to ansible against the host alias "jumper" (which may not even be a real hostname) will contact 192.0.2.50 on port 5555.
-Note that this is using a feature of the inventory file to define some special variables.
-Generally speaking, this is not the best way to define variables that describe your system policy, but we'll share suggestions on doing this later.
-
-.. note:: Values passed in the INI format using the ``key=value`` syntax are interpreted differently depending on where they are declared.
- * When declared inline with the host, INI values are interpreted as Python literal structures
- (strings, numbers, tuples, lists, dicts, booleans, None). Host lines accept multiple ``key=value`` parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator.
- * When declared in a ``:vars`` section, INI values are interpreted as strings. For example ``var=FALSE`` would create a string equal to 'FALSE'. Unlike host lines, ``:vars`` sections accept only a single entry per line, so everything after the ``=`` must be the value for the entry.
- * Do not rely on types set during definition, always make sure you specify type with a filter when needed when consuming the variable.
- * Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly.
-
-If you are adding a lot of hosts following similar patterns, you can do this rather than listing each hostname:
+If you have a lot of hosts with a similar pattern, you can add them as a range rather than listing each hostname separately:
In INI:
-.. code-block:: guess
+.. code-block:: text
[webservers]
www[01:50].example.com
@@ -205,39 +169,32 @@ In YAML:
hosts:
www[01:50].example.com:
-For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:
+For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:
-.. code-block:: guess
+.. code-block:: text
[databases]
db-[a:f].example.com
-You can also select the connection type and user on a per host basis:
-
-.. code-block:: guess
-
- [targets]
-
- localhost ansible_connection=local
- other1.example.com ansible_connection=ssh ansible_user=mpdehaan
- other2.example.com ansible_connection=ssh ansible_user=mdehaan
+Adding variables to inventory
+=============================
-As mentioned above, setting these in the inventory file is only a shorthand, and we'll discuss how to store them in individual files in the 'host_vars' directory a bit later on.
+You can store variable values that relate to a specific host or group in inventory. To start with, you may add variables directly to the hosts and groups in your main inventory file. As you add more and more managed nodes to your Ansible inventory, however, you will likely want to store variables in separate host and group variable files.
.. _host_variables:
Assigning a variable to one machine: host variables
===================================================
-As described above, it is easy to assign variables to hosts that will be used later in playbooks:
+You can easily assign a variable to a single host, then use it later in playbooks. In INI:
-.. code-block:: guess
+.. code-block:: text
[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909
-The YAML version:
+In YAML:
.. code-block:: yaml
@@ -249,16 +206,71 @@ The YAML version:
http_port: 303
maxRequestsPerChild: 909
+Unique values like non-standard SSH ports work well as host variables. You can add them to your Ansible inventory by adding the port number after the hostname with a colon:
+
+.. code-block:: text
+
+ badwolf.example.com:5309
+
+Connection variables also work well as host variables:
+
+.. code-block:: text
+
+ [targets]
+
+ localhost ansible_connection=local
+ other1.example.com ansible_connection=ssh ansible_user=myuser
+ other2.example.com ansible_connection=ssh ansible_user=myotheruser
+
+.. note:: If you list non-standard SSH ports in your SSH config file, the ``openssh`` connection will find and use them, but the ``paramiko`` connection will not.
+
+.. _inventory_aliases:
+
+Inventory aliases
+-----------------
+
+You can also define aliases in your inventory:
+
+In INI:
+
+.. code-block:: text
+
+ jumper ansible_port=5555 ansible_host=192.0.2.50
+
+In YAML:
+
+.. code-block:: yaml
+
+ ...
+ hosts:
+ jumper:
+ ansible_port: 5555
+ ansible_host: 192.0.2.50
+
+In the above example, running Ansible against the host alias "jumper" will connect to 192.0.2.50 on port 5555.
+This only works for hosts with static IPs, or when you are connecting through tunnels.
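+
+For example, to test the connection to the ``jumper`` alias defined above:
+
+.. code-block:: bash
+
+ ansible jumper -m ping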
+
+.. note::
+ Values passed in the INI format using the ``key=value`` syntax are interpreted differently depending on where they are declared:
+
+ * When declared inline with the host, INI values are interpreted as Python literal structures (strings, numbers, tuples, lists, dicts, booleans, None). Host lines accept multiple ``key=value`` parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator.
+
+ * When declared in a ``:vars`` section, INI values are interpreted as strings. For example ``var=FALSE`` would create a string equal to 'FALSE'. Unlike host lines, ``:vars`` sections accept only a single entry per line, so everything after the ``=`` must be the value for the entry.
+
+ * If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables.
+
+   * Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly, as the sketch after this note illustrates.
+
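+For instance, a minimal YAML inventory sketch (the group, host, and variable names are hypothetical) keeps the native types of its values, so ``max_requests`` stays an integer and ``use_tls`` stays a boolean:
+
+.. code-block:: yaml
+
+   atlanta:
+     hosts:
+       host1:
+         max_requests: 808   # integer, not the string '808'
+         use_tls: true       # boolean, not the string 'true'
+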
+Generally speaking, this is not the best way to define variables that describe your system policy. Setting variables in the main inventory file is only a shorthand. See :ref:`splitting_out_vars` for guidelines on storing variable values in individual files in the 'host_vars' directory.
+
.. _group_variables:
Assigning a variable to many machines: group variables
======================================================
-Variables can also be applied to an entire group at once:
-
-The INI way:
+If all hosts in a group share a variable value, you can apply that variable to an entire group at once. In INI:
-.. code-block:: guess
+.. code-block:: text
[atlanta]
host1
@@ -268,7 +280,7 @@ The INI way:
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
-The YAML version:
+In YAML:
.. code-block:: yaml
@@ -280,7 +292,7 @@ The YAML version:
ntp_server: ntp.atlanta.example.com
proxy: proxy.atlanta.example.com
-Be aware that this is only a convenient way to apply variables to multiple hosts at once; even though you can target hosts by group, **variables are always flattened to the host level** before a play is executed.
+Group variables are a convenient way to apply variables to multiple hosts at once. Before executing, however, Ansible always flattens variables, including inventory variables, to the host level. If a host is a member of multiple groups, Ansible reads variable values from all of those groups. If you assign different values to the same variable in different groups, Ansible chooses which value to use based on internal :ref:`rules for merging <how_we_merge>`.
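+
+For instance, in this hypothetical sketch the host ``web1`` belongs to both ``atlanta`` and ``raleigh``, and each group assigns a different ``ntp_server``:
+
+.. code-block:: yaml
+
+   atlanta:
+     hosts:
+       web1:
+     vars:
+       ntp_server: ntp.atlanta.example.com
+   raleigh:
+     hosts:
+       web1:
+     vars:
+       ntp_server: ntp.raleigh.example.com
+
+Under the default :ref:`merge rules <how_we_merge>`, ``raleigh`` merges last alphabetically, so ``web1`` ends up with ``ntp_server: ntp.raleigh.example.com``.
+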
.. _subgroups:
@@ -290,8 +302,9 @@ Inheriting variable values: group variables for groups of groups
You can make groups of groups using the ``:children`` suffix in INI or the ``children:`` entry in YAML.
You can apply variables to these groups of groups using ``:vars`` or ``vars:``:
+In INI:
-.. code-block:: guess
+.. code-block:: text
[atlanta]
host1
@@ -317,6 +330,8 @@ You can apply variables to these groups of groups using ``:vars`` or ``vars:``:
southwest
northwest
+In YAML:
+
.. code-block:: yaml
all:
@@ -342,7 +357,8 @@ You can apply variables to these groups of groups using ``:vars`` or ``vars:``:
northwest:
southwest:
-If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see the next section.
+If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see :ref:`splitting_out_vars`.
+
Child groups have a couple of properties to note:
- Any host that is member of a child group is automatically a member of the parent group.
@@ -350,65 +366,45 @@ Child groups have a couple of properties to note:
- Groups can have multiple parents and children, but not circular relationships (see the sketch after this list).
- Hosts can also be in multiple groups, but there will only be **one** instance of a host, merging the data from the multiple groups.
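+
+For instance, in this hypothetical sketch the ``southeast`` group is a child of both ``usa`` and ``production``, so every host in ``southeast`` is automatically a member of both parent groups:
+
+.. code-block:: yaml
+
+   all:
+     children:
+       usa:
+         children:
+           southeast:
+             hosts:
+               atlanta.example.com:
+               raleigh.example.com:
+       production:
+         children:
+           southeast:
+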
-.. _default_groups:
-
-Default groups
-==============
-
-There are two default groups: ``all`` and ``ungrouped``. ``all`` contains every host.
-``ungrouped`` contains all hosts that don't have another group aside from ``all``.
-Every host will always belong to at least 2 groups (``all`` and ``ungrouped`` or ``all`` and some other group).
-Though ``all`` and ``ungrouped`` are always present, they can be implicit and not appear in group listings like ``group_names``.
-
.. _splitting_out_vars:
Organizing host and group variables
===================================
-Although you can store variables in the main inventory file, storing separate host and group variables files may help you track your variable values more easily.
-
-Host and group variables can be stored in individual files relative to the inventory file (not directory, it is always the file).
-
-These variable files are in YAML format. Valid file extensions include '.yml', '.yaml', '.json', or no file extension.
+Although you can store variables in the main inventory file, storing separate host and group variable files may help you organize your variable values more easily. Host and group variable files must use YAML syntax. Valid file extensions include '.yml', '.yaml', '.json', or no file extension.
See :ref:`yaml_syntax` if you are new to YAML.
-Let's say, for example, that you keep your inventory file at ``/etc/ansible/hosts``. You have a host named 'foosball' that's a member of two groups: 'raleigh' and 'webservers'. That host will use variables
-in YAML files at the following locations::
+Ansible loads host and group variable files by searching paths relative to the inventory file or the playbook file. If your inventory file at ``/etc/ansible/hosts`` contains a host named 'foosball' that belongs to two groups, 'raleigh' and 'webservers', that host will use variables in YAML files at the following locations:
+
+.. code-block:: bash
/etc/ansible/group_vars/raleigh # can optionally end in '.yml', '.yaml', or '.json'
/etc/ansible/group_vars/webservers
/etc/ansible/host_vars/foosball
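+
+A host variables file holds values that apply only to that host. A minimal sketch of what ``/etc/ansible/host_vars/foosball`` might contain (the values are hypothetical):
+
+.. code-block:: yaml
+
+   ---
+   ansible_user: foosball_admin
+   http_port: 8080
+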
-For instance, suppose you have hosts grouped by datacenter, and each datacenter
-uses some different servers. The data in the groupfile '/etc/ansible/group_vars/raleigh' for
-the 'raleigh' group might look like::
+For example, if you group hosts in your inventory by datacenter, and each datacenter uses its own NTP server and database server, you can create a file called ``/etc/ansible/group_vars/raleigh`` to store the variables for the ``raleigh`` group:
+
+.. code-block:: yaml
---
ntp_server: acme.example.org
database_server: storage.example.org
-It is okay if these files do not exist, as this is an optional feature.
+You can also create *directories* named after your groups or hosts. Ansible will read all the files in these directories in lexicographical order. An example with the 'raleigh' group:
-As an advanced use case, you can create *directories* named after your groups or hosts, and
-Ansible will read all the files in these directories in lexicographical order. An example with the 'raleigh' group::
+.. code-block:: bash
/etc/ansible/group_vars/raleigh/db_settings
/etc/ansible/group_vars/raleigh/cluster_settings
-All hosts that are in the 'raleigh' group will have the variables defined in these files
+All hosts in the 'raleigh' group will have the variables defined in these files
available to them. This can be very useful to keep your variables organized when a single
-file starts to be too big, or when you want to use :ref:`Ansible Vault<playbooks_vault>` on a part of a group's
-variables.
-
-Tip: The ``group_vars/`` and ``host_vars/`` directories can exist in
-the playbook directory OR the inventory directory. If both paths exist, variables in the playbook
-directory will override variables set in the inventory directory.
+file gets too big, or when you want to use :ref:`Ansible Vault<playbooks_vault>` on some group variables.
-Tip: The ``ansible-playbook`` command looks for playbooks in the current working directory by default. Other Ansible commands (for example, ``ansible``, ``ansible-console``, etc.) will only look for ``group_vars/`` and ``host_vars/`` in the
-inventory directory unless you provide the ``--playbook-dir`` option
-on the command line.
+You can also add ``group_vars/`` and ``host_vars/`` directories to your playbook directory. The ``ansible-playbook`` command looks for these directories in the current working directory by default. Other Ansible commands (for example, ``ansible``, ``ansible-console``, etc.) will only look for ``group_vars/`` and ``host_vars/`` in the inventory directory. If you want other commands to load group and host variables from a playbook directory, you must provide the ``--playbook-dir`` option on the command line.
+If you load inventory files from both the playbook directory and the inventory directory, variables in the playbook directory will override variables set in the inventory directory.
-Tip: Keeping your inventory file and variables in a git repo (or other version control)
+Keeping your inventory file and variables in a git repo (or other version control)
is an excellent way to track changes to your inventory and host variables.
.. _how_we_merge:
@@ -423,11 +419,9 @@ By default variables are merged/flattened to the specific host before a play is
- child group
- host
-When groups of the same parent/child level are merged, it is done alphabetically, and the last group loaded overwrites the previous groups. For example, an a_group will be merged with b_group and b_group vars that match will overwrite the ones in a_group.
+By default Ansible merges groups at the same parent/child level alphabetically, and the last group loaded overwrites the previous groups. For example, a_group is merged with b_group, and matching b_group vars overwrite the ones in a_group.
-.. versionadded:: 2.4
-
-Starting in Ansible version 2.4, users can use the group variable ``ansible_group_priority`` to change the merge order for groups of the same level (after the parent/child order is resolved). The larger the number, the later it will be merged, giving it higher priority. This variable defaults to ``1`` if not set. For example:
+You can change this behavior by setting the group variable ``ansible_group_priority`` to change the merge order for groups of the same level (after the parent/child order is resolved). The larger the number, the later it will be merged, giving it higher priority. This variable defaults to ``1`` if not set. For example:
.. code-block:: yaml
@@ -439,19 +433,21 @@ Starting in Ansible version 2.4, users can use the group variable ``ansible_grou
In this example, if both groups have the same priority, the result would normally have been ``testvar == b``, but since we are giving the ``a_group`` a higher priority the result will be ``testvar == a``.
-.. note:: ``ansible_group_priority`` can only be set in the inventory source and not in group_vars/ as the variable is used in the loading of group_vars.
+.. note:: ``ansible_group_priority`` can only be set in the inventory source and not in group_vars/, as the variable is used in the loading of group_vars.
.. _using_multiple_inventory_sources:
Using multiple inventory sources
================================
-As an advanced use case you can target multiple inventory sources (directories, dynamic inventory scripts
+You can target multiple inventory sources (directories, dynamic inventory scripts
or files supported by inventory plugins) at the same time by giving multiple inventory parameters from the command
line or by configuring :envvar:`ANSIBLE_INVENTORY`. This can be useful when you want to target normally
separate environments, like staging and production, at the same time for a specific action.
-Target two sources from the command line like this::
+Target two sources from the command line like this:
+
+.. code-block:: bash
ansible-playbook get_logs.yml -i staging -i production
@@ -467,7 +463,9 @@ the playbook will be run with ``myvar = 2``. The result would be reversed if the
You can also create an inventory by combining multiple inventory sources and source types under a directory.
This can be useful for combining static and dynamic hosts and managing them as one inventory.
The following inventory combines an inventory plugin source, a dynamic inventory script,
-and a file with static hosts::
+and a file with static hosts:
+
+.. code-block:: text
inventory/
openstack.yml # configure inventory plugin to get hosts from Openstack cloud
@@ -476,14 +474,18 @@ and a file with static hosts::
group_vars/
all.yml # assign variables to all hosts
-You can target this inventory directory simply like this::
+You can target this inventory directory simply like this:
+
+.. code-block:: bash
ansible-playbook example.yml -i inventory
It can be useful to control the merging order of the inventory sources if there are variable
conflicts or group-of-groups dependencies between the inventory sources. The inventories
are merged in alphabetical order according to the filenames, so the result can
-be controlled by adding prefixes to the files::
+be controlled by adding prefixes to the files:
+
+.. code-block:: text
inventory/
01-openstack.yml # configure inventory plugin to get hosts from Openstack cloud
@@ -592,7 +594,9 @@ ansible_shell_executable
to use :command:`/bin/sh` (i.e. :command:`/bin/sh` is not installed on the target
machine or cannot be run from sudo).
-Examples from an Ansible-INI host file::
+Examples from an Ansible-INI host file:
+
+.. code-block:: text
some_host ansible_port=2222 ansible_user=manager
aws_host ansible_ssh_private_key_file=/home/example/.ssh/aws.pem
@@ -623,27 +627,29 @@ ansible_become
ansible_docker_extra_args
Could be a string with any additional arguments understood by Docker, which are not command specific. This parameter is mainly used to configure a remote Docker daemon to use.
-Here is an example of how to instantly deploy to created containers::
-
- - name: create jenkins container
- docker_container:
- docker_host: myserver.net:4243
- name: my_jenkins
- image: jenkins
-
- - name: add container to inventory
- add_host:
- name: my_jenkins
- ansible_connection: docker
- ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
- ansible_user: jenkins
- changed_when: false
-
- - name: create directory for ssh keys
- delegate_to: my_jenkins
- file:
- path: "/var/jenkins_home/.ssh/jupiter"
- state: directory
+Here is an example of how to instantly deploy to created containers:
+
+.. code-block:: yaml
+
+ - name: create jenkins container
+ docker_container:
+ docker_host: myserver.net:4243
+ name: my_jenkins
+ image: jenkins
+
+ - name: add container to inventory
+ add_host:
+ name: my_jenkins
+ ansible_connection: docker
+ ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
+ ansible_user: jenkins
+ changed_when: false
+
+ - name: create directory for ssh keys
+ delegate_to: my_jenkins
+ file:
+ path: "/var/jenkins_home/.ssh/jupiter"
+ state: directory
For a full list with available plugins and examples, see :ref:`connection_plugin_list`.
diff --git a/docs/docsite/rst/user_guide/intro_patterns.rst b/docs/docsite/rst/user_guide/intro_patterns.rst
index 8360eb90a2..a016a568de 100644
--- a/docs/docsite/rst/user_guide/intro_patterns.rst
+++ b/docs/docsite/rst/user_guide/intro_patterns.rst
@@ -1,85 +1,132 @@
.. _intro_patterns:
-Working with Patterns
-=====================
+Patterns: targeting hosts and groups
+====================================
-.. contents:: Topics
+When you execute Ansible through an ad-hoc command or by running a playbook, you must choose which managed nodes or groups you want to execute against. Patterns let you run commands and playbooks against specific hosts and/or groups in your inventory. An Ansible pattern can refer to a single host, an IP address, an inventory group, a set of groups, or all hosts in your inventory. Patterns are highly flexible - you can exclude or require subsets of hosts, use wildcards or regular expressions, and more. Ansible executes on all inventory hosts included in the pattern.
-Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms
-of :ref:`playbooks<playbooks_intro>` it actually means what hosts to apply a particular configuration or IT process to.
+.. contents::
+ :local:
-We'll go over how to use the command line in :ref:`intro_adhoc` section, however, basically it looks like this::
+Using patterns
+--------------
- ansible <pattern_goes_here> -m <module_name> -a <arguments>
+You use a pattern almost any time you execute an ad-hoc command or a playbook. The pattern is the only element of an :ref:`ad-hoc command<intro_adhoc>` that has no flag. It is usually the second element::
-Such as::
 ansible <pattern> -m <module_name> -a "<module options>"
+
+For example::
ansible webservers -m service -a "name=httpd state=restarted"
-A pattern usually refers to a set of groups (which are sets of hosts) -- in the above case, machines in the "webservers" group.
+In a playbook the pattern is the content of the ``hosts:`` line for each play:
-Anyway, to use Ansible, you'll first need to know how to tell Ansible which hosts in your inventory to talk to.
-This is done by designating particular host names or groups of hosts.
+.. code-block:: yaml
-The following patterns are equivalent and target all hosts in the inventory::
+ - name: <play_name>
+ hosts: <pattern>
- all
- *
+For example::
-It is also possible to address a specific host or set of hosts by name::
+ - name: restart webservers
+ hosts: webservers
- one.example.com
- one.example.com:two.example.com
- 192.0.2.50
- 192.0.2.*
+Since you often want to run a command or playbook against multiple hosts at once, patterns often refer to inventory groups. Both the ad-hoc command and the playbook above will execute against all machines in the ``webservers`` group.
-The following patterns address one or more groups. Groups separated by a colon indicate an "OR" configuration.
-This means the host may be in either one group or the other::
+.. _common_patterns:
- webservers
- webservers:dbservers
+Common patterns
+---------------
-You can exclude groups as well, for instance, all machines must be in the group webservers but not in the group phoenix::
+This table lists common patterns for targeting inventory hosts and groups.
- webservers:!phoenix
+.. table::
+ :class: documentation-table
-You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and
-the host must also be in the group staging::
+ ====================== ================================ ===================================================
+ Description Pattern(s) Targets
+ ====================== ================================ ===================================================
+ All hosts all (or \*)
- webservers:&staging
+ One host host1
-You can do combinations::
+ Multiple hosts host1:host2 (or host1,host2)
- webservers:dbservers:&staging:!phoenix
+ One group webservers
+
+ Multiple groups webservers:dbservers all hosts in webservers plus all hosts in dbservers
-The above configuration means "all machines in the groups 'webservers' and 'dbservers' are to be managed if they are also in
-the group 'staging', but the machines are not to be managed if they are in the group 'phoenix'." Whew!
+ Excluding groups webservers:!atlanta all hosts in webservers except those in atlanta
-You can also use variables if you want to pass some group specifiers via the ``-e`` argument to ansible-playbook, but this
-is uncommonly used::
+ Intersection of groups webservers:&staging any hosts in webservers that are also in staging
+ ====================== ================================ ===================================================
- webservers:!{{excluded}}:&{{required}}
+.. note:: You can use either a comma (``,``) or a colon (``:``) to separate a list of hosts. The comma is preferred when dealing with ranges and IPv6 addresses.
+
+Once you know the basic patterns, you can combine them. This example::
+
+ webservers:dbservers:&staging:!phoenix
-You also don't have to manage by strictly defined groups. Individual host names, IPs, and groups can also be referenced using
-wildcards:
+targets all machines in the groups 'webservers' and 'dbservers' that are also in
+the group 'staging', except any machines in the group 'phoenix'.
-.. code-block:: none
+You can use wildcard patterns with FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address::
- *.example.com
- *.com
+ 192.0.*
+ *.example.com
+ *.com
-It's also ok to mix wildcard patterns and groups at the same time::
+You can mix wildcard patterns and groups at the same time::
one*.com:dbservers
-You can select a host or subset of hosts from a group by their position. For example, given the following group::
+Limitations of patterns
+-----------------------
+
+Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it. If your pattern includes an IP address or hostname that does not appear in your inventory, you will see an error like this:
+
+.. code-block:: text
+
+ [WARNING]: No inventory was parsed, only implicit localhost is available
+ [WARNING]: Could not match supplied host pattern, ignoring: *.not_in_inventory.com
+
+Your pattern must match your inventory syntax. If you define a host as an :ref:`alias<inventory_aliases>`:
+
+.. code-block:: yaml
+
+ atlanta:
+   hosts:
+     host1:
+       http_port: 80
+       maxRequestsPerChild: 808
+       ansible_host: 127.0.0.2
+
+you must use the alias in your pattern. In the example above, you must use ``host1``. If you use the IP address, you will once again get the error::
+
+ [WARNING]: Could not match supplied host pattern, ignoring: 127.0.0.2
+
+Advanced pattern options
+------------------------
+
+The common patterns described above will meet most of your needs, but Ansible offers several other ways to define the hosts and groups you want to target.
+
+Using variables in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can use variables to enable passing group specifiers via the ``-e`` argument to ansible-playbook::
+
+ webservers:!{{ excluded }}:&{{ required }}
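+
+A minimal sketch of how this might look in a play (the play name, group names, and variable values are hypothetical): the pattern goes in the ``hosts:`` line, and you supply the variables at runtime, for example with ``-e "excluded=phoenix required=staging"``:
+
+.. code-block:: yaml
+
+   - name: restart a filtered set of webservers
+     hosts: "webservers:!{{ excluded }}:&{{ required }}"
+     tasks:
+       - name: restart httpd
+         service:
+           name: httpd
+           state: restarted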
+
+Using group position in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can define a host or subset of hosts by its position in a group. For example, given the following group::
[webservers]
cobweb
webbing
weber
-You can refer to hosts within the group by adding a subscript to the group name::
+you can use subscripts to select individual hosts or ranges within the webservers group::
webservers[0] # == cobweb
webservers[-1] # == weber
@@ -88,28 +135,32 @@ You can refer to hosts within the group by adding a subscript to the group name:
webservers[1:] # == webbing,weber
webservers[:3] # == cobweb,webbing,weber
-Most people don't specify patterns as regular expressions, but you can. Just start the pattern with a ``~``::
+Using regexes in patterns
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can specify a pattern as a regular expression by starting the pattern with ``~``::
~(web|db).*\.example\.com
-While we're jumping a bit ahead, additionally, you can add an exclusion criteria just by supplying the ``--limit`` flag to /usr/bin/ansible or /usr/bin/ansible-playbook::
+Patterns and ansible-playbook flags
+-----------------------------------
+
+You can change the behavior of the patterns defined in playbooks using command-line options. For example, you can run a playbook that defines ``hosts: all`` on a single host by specifying ``-i 127.0.0.2,``. This works even if the host you target is not defined in your inventory. You can also limit the hosts you target on a particular run with the ``--limit`` flag::
ansible-playbook site.yml --limit datacenter2
-And if you want to read the list of hosts from a file, prefix the file name with ``@``::
+Finally, you can use ``--limit`` to read the list of hosts from a file by prefixing the file name with ``@``::
ansible-playbook site.yml --limit @retry_hosts.txt
-Easy enough. See :ref:`intro_adhoc` and then :ref:`playbooks_intro` for how to apply this knowledge.
-
-.. note:: You can use a comma (``,``) as a host list separator instead of a colon (``:``). The comma is preferred when dealing with ranges and IPv6 addresses.
+To apply your knowledge of patterns with Ansible commands and playbooks, read :ref:`intro_adhoc` and :ref:`playbooks_intro`.
.. seealso::
:ref:`intro_adhoc`
Examples of basic commands
:ref:`working_with_playbooks`
- Learning ansible's configuration management language
+ Learning the Ansible configuration management language
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
diff --git a/docs/docsite/rst/user_guide/modules_intro.rst b/docs/docsite/rst/user_guide/modules_intro.rst
index 7f4237d67f..482b49aca3 100644
--- a/docs/docsite/rst/user_guide/modules_intro.rst
+++ b/docs/docsite/rst/user_guide/modules_intro.rst
@@ -1,11 +1,11 @@
.. _intro_modules:
-Introduction
-============
+Introduction to modules
+=======================
-Modules (also referred to as "task plugins" or "library plugins") are discrete units of code that can be used from the command line or in a playbook task.
+Modules (also referred to as "task plugins" or "library plugins") are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote target node, and collects return values.
-Let's review how we execute three different modules from the command line::
+You can execute modules from the command line::
ansible webservers -m service -a "name=httpd state=started"
ansible webservers -m ping
@@ -25,20 +25,14 @@ Which can be abbreviated to::
- name: reboot the servers
command: /sbin/reboot -t now
-Another way to pass arguments to a module is using yaml syntax also called 'complex args' ::
+Another way to pass arguments to a module is to use YAML syntax, also called 'complex args' ::
- name: restart webserver
service:
name: httpd
state: restarted
-All modules technically return JSON format data, though if you are using the command line or playbooks, you don't really need to know much about
-that. If you're writing your own module, you care, and this means you do not have to write modules in any particular language -- you get to choose.
-
-Modules should be idempotent, and should avoid making any changes if
-they detect that the current state matches the desired final state. When using
-Ansible playbooks, these modules can trigger 'change events' in the form of
-notifying 'handlers' to run additional tasks.
+All modules return JSON format data. This means modules can be written in any programming language. Modules should be idempotent, and should avoid making any changes if they detect that the current state matches the desired final state. When used in an Ansible playbook, modules can trigger 'change events' in the form of notifying 'handlers' to run additional tasks.
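+
+As a brief sketch of that handler mechanism (the file names and handler name are illustrative), a task notifies a handler, and the handler runs only when the task reports a change:
+
+.. code-block:: yaml
+
+   - hosts: webservers
+     tasks:
+       - name: write the apache config file
+         template:
+           src: httpd.conf.j2
+           dest: /etc/httpd/conf/httpd.conf
+         notify:
+           - restart apache
+     handlers:
+       - name: restart apache
+         service:
+           name: httpd
+           state: restarted
+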
Documentation for each module can be accessed from the command line with the ansible-doc tool::
diff --git a/docs/docsite/rst/user_guide/playbooks_async.rst b/docs/docsite/rst/user_guide/playbooks_async.rst
index 0ad66c7b7c..a99342a975 100644
--- a/docs/docsite/rst/user_guide/playbooks_async.rst
+++ b/docs/docsite/rst/user_guide/playbooks_async.rst
@@ -7,10 +7,37 @@ By default tasks in playbooks block, meaning the connections stay open
until the task is done on each node. This may not always be desirable, or you may
be running operations that take longer than the SSH timeout.
-To avoid blocking or timeout issues, you can use asynchronous mode to run all of your tasks at once and then poll until they are done.
+Time-limited background operations
+----------------------------------
+
+You can run long-running operations in the background and check their status later.
+For example, to execute ``long_running_operation``
+asynchronously in the background, with a timeout of 3600 seconds (``-B``),
+and without polling (``-P``)::
+
+ $ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
+
+If you want to check on the job status later, you can use the
+``async_status`` module, passing it the job ID that was returned when you ran
+the original job in the background::
+
+ $ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
-The behaviour of asynchronous mode depends on the value of `poll`.
+To run for 30 minutes and poll for status every 60 seconds::
+
+ $ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
+
+Poll mode is smart, so all jobs are started before polling begins on any machine.
+Be sure to use a high enough ``--forks`` value if you want all of your jobs started
+very quickly. After the time limit (in seconds) runs out (``-B``), the process on
+the remote nodes will be terminated.
+
+Typically you'll only be backgrounding long-running
+shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :ref:`Playbooks <working_with_playbooks>` also support polling, and have a simplified syntax for this.
+
+To avoid blocking or timeout issues, you can use asynchronous mode to run all of your tasks at once and then poll until they are done.
+The behavior of asynchronous mode depends on the value of `poll`.
Avoid connection timeouts: poll > 0
-----------------------------------