author     Aine Riordan <44700011+ariordan-redhat@users.noreply.github.com>  2021-10-18 06:19:20 -0700
committer  GitHub <noreply@github.com>  2021-10-18 08:19:20 -0500
commit     069e0ec058a6acc3bc2c3bc2f0d9ddac8ede4bd1 (patch)
tree       eadf3efb248c1d6bed677a95c1e9a91d99fef82e
parent     d0abb7e08104d3b821889d7c6f3eb1d22af9cedf (diff)
download   ansible-069e0ec058a6acc3bc2c3bc2f0d9ddac8ede4bd1.tar.gz
[Backport][Docs] Docs 2.12 backportapalooza2 (#76055)
* uri: add documentation for return value path (#75795)

* Move ansible-5.1 to 2021-12-21 (#75865)
  We moved the target date for ansible-5.0 one week back so there's now time to do 5.1 before the holidays start.

* Corrected mytask.yml file name (#75860)

* docs - Use code-block to format code examples in Developer Guide (#75849)
  Fixes #75663

* docs - Use code-block to format examples in Network Guide (#75850)
  Fixes #75676

* docs - Use code-block to format code examples in Community Guide (#75847)
  Fixes #75675

* Docs: Clarify adding newlines in YAML folded block scalar (#75875)

* Removed translatable words from code blocks
  SUMMARY: Removed translatable words from code blocks as per #59449
  ISSUE TYPE: Docs Pull Request (label: docsite_pr)

* Maintaining intent as well as clarity

* Preserving "save_as" as the key

* Showing equivalence and keeping the same context

* Docs: Moves the AWS Scenario Guide out of the main docs (#74704)
  * removes AWS scenario guide, moving to collection
  * first attempt to replace TOC entries
  * not sure what I did, but saving it
  * updates TOCs to reflect the new location of the AWS guide
  * reinstates the original page as a stub
  * adds links to the new location for the AWS guide, updates header

Co-authored-by: Alicia Cozine <acozine@users.noreply.github.com>
Co-authored-by: Daniel Ziegenberg <daniel@ziegenberg.at>
Co-authored-by: Toshio Kuratomi <a.badger@gmail.com>
Co-authored-by: RamblingPSTech <34667559+RamblingPSTech@users.noreply.github.com>
Co-authored-by: Samuel Gaist <samuel.gaist@idiap.ch>
Co-authored-by: Ankur H. Singh <49074231+sankur-codes@users.noreply.github.com>
Co-authored-by: Alicia Cozine <879121+acozine@users.noreply.github.com>
-rw-r--r--  docs/docsite/rst/community/communication.rst                            4
-rw-r--r--  docs/docsite/rst/community/development_process.rst                      6
-rw-r--r--  docs/docsite/rst/dev_guide/developing_collections_contributing.rst     12
-rw-r--r--  docs/docsite/rst/dev_guide/developing_collections_structure.rst         4
-rw-r--r--  docs/docsite/rst/dev_guide/developing_collections_testing.rst          24
-rw-r--r--  docs/docsite/rst/dev_guide/developing_inventory.rst                    17
-rw-r--r--  docs/docsite/rst/dev_guide/developing_module_utilities.rst              4
-rw-r--r--  docs/docsite/rst/dev_guide/developing_modules_documenting.rst           8
-rw-r--r--  docs/docsite/rst/dev_guide/developing_modules_general.rst               4
-rw-r--r--  docs/docsite/rst/dev_guide/developing_modules_general_windows.rst       4
-rw-r--r--  docs/docsite/rst/dev_guide/developing_program_flow_modules.rst          4
-rw-r--r--  docs/docsite/rst/dev_guide/developing_rebasing.rst                     28
-rw-r--r--  docs/docsite/rst/dev_guide/overview_architecture.rst                    8
-rw-r--r--  docs/docsite/rst/dev_guide/testing.rst                                 23
-rw-r--r--  docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst           4
-rw-r--r--  docs/docsite/rst/dev_guide/testing_integration.rst                     48
-rw-r--r--  docs/docsite/rst/dev_guide/testing_integration_legacy.rst              12
-rw-r--r--  docs/docsite/rst/dev_guide/testing_running_locally.rst                  8
-rw-r--r--  docs/docsite/rst/dev_guide/testing_units.rst                           28
-rw-r--r--  docs/docsite/rst/dev_guide/testing_units_modules.rst                   52
-rw-r--r--  docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst   8
-rw-r--r--  docs/docsite/rst/network/user_guide/network_working_with_command_output.rst  8
-rw-r--r--  docs/docsite/rst/reference_appendices/YAMLSyntax.rst                    5
-rw-r--r--  docs/docsite/rst/roadmap/COLLECTIONS_5.rst                              2
-rw-r--r--  docs/docsite/rst/scenario_guides/cloud_guides.rst                      13
-rw-r--r--  docs/docsite/rst/scenario_guides/guide_aws.rst                        284
-rw-r--r--  docs/docsite/rst/scenario_guides/guides.rst                            13
-rw-r--r--  docs/docsite/rst/user_guide/intro_getting_started.rst                   2
-rw-r--r--  lib/ansible/modules/uri.py                                              5
29 files changed, 262 insertions, 380 deletions
diff --git a/docs/docsite/rst/community/communication.rst b/docs/docsite/rst/community/communication.rst
index d35128fb8d..010c61f2a5 100644
--- a/docs/docsite/rst/community/communication.rst
+++ b/docs/docsite/rst/community/communication.rst
@@ -71,7 +71,9 @@ IRC chat supports:
* simple text interface
* bridging from Matrix
-Our IRC channels may require you to register your IRC nickname. If you receive an error when you connect or when posting a message, see `libera.chat's Nickname Registration guide <https://libera.chat/guides/registration>`_ for instructions. To find all ``ansible`` specific channels on the libera.chat network, use the following command in your IRC client::
+Our IRC channels may require you to register your IRC nickname. If you receive an error when you connect or when posting a message, see `libera.chat's Nickname Registration guide <https://libera.chat/guides/registration>`_ for instructions. To find all ``ansible`` specific channels on the libera.chat network, use the following command in your IRC client:
+
+.. code-block:: text
/msg alis LIST #ansible* -min 5
diff --git a/docs/docsite/rst/community/development_process.rst b/docs/docsite/rst/community/development_process.rst
index b98d4a3711..12d0d419cd 100644
--- a/docs/docsite/rst/community/development_process.rst
+++ b/docs/docsite/rst/community/development_process.rst
@@ -313,14 +313,14 @@ We do **not** backport features.
#. Prepare your devel, stable, and feature branches:
- ::
+.. code-block:: shell
git fetch upstream
git checkout -b backport/2.11/[PR_NUMBER_FROM_DEVEL] upstream/stable-2.11
#. Cherry pick the relevant commit SHA from the devel branch into your feature branch, handling merge conflicts as necessary:
- ::
+.. code-block:: shell
git cherry-pick -x [SHA_FROM_DEVEL]
@@ -328,7 +328,7 @@ We do **not** backport features.
#. Push your feature branch to your fork on GitHub:
- ::
+.. code-block:: shell
git push origin backport/2.11/[PR_NUMBER_FROM_DEVEL]
diff --git a/docs/docsite/rst/dev_guide/developing_collections_contributing.rst b/docs/docsite/rst/dev_guide/developing_collections_contributing.rst
index 37a43e3498..86f5fea663 100644
--- a/docs/docsite/rst/dev_guide/developing_collections_contributing.rst
+++ b/docs/docsite/rst/dev_guide/developing_collections_contributing.rst
@@ -25,16 +25,22 @@ Creating a PR
-* Create the directory ``~/dev/ansible/collections/ansible_collections/community``::
+* Create the directory ``~/dev/ansible/collections/ansible_collections/community``:
+
+.. code-block:: shell
mkdir -p ~/dev/ansible/collections/ansible_collections/community
-* Clone `the community.general Git repository <https://github.com/ansible-collections/community.general/>`_ or a fork of it into the directory ``general``::
+* Clone `the community.general Git repository <https://github.com/ansible-collections/community.general/>`_ or a fork of it into the directory ``general``:
+
+.. code-block:: shell
cd ~/dev/ansible/collections/ansible_collections/community
git clone git@github.com:ansible-collections/community.general.git general
-* If you clone from a fork, add the original repository as a remote ``upstream``::
+* If you clone from a fork, add the original repository as a remote ``upstream``:
+
+.. code-block:: shell
cd ~/dev/ansible/collections/ansible_collections/community/general
git remote add upstream git@github.com:ansible-collections/community.general.git
diff --git a/docs/docsite/rst/dev_guide/developing_collections_structure.rst b/docs/docsite/rst/dev_guide/developing_collections_structure.rst
index b08e4e1144..bbba79fb7c 100644
--- a/docs/docsite/rst/dev_guide/developing_collections_structure.rst
+++ b/docs/docsite/rst/dev_guide/developing_collections_structure.rst
@@ -13,7 +13,9 @@ A collection is a simple data structure. None of the directories are required un
Collection directories and files
================================
-A collection can contain these directories and files::
+A collection can contain these directories and files:
+
+.. code-block:: shell-session
collection/
├── docs/
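The directory tree in this hunk is truncated after ``docs/``. As an aside, a skeleton of this shape can be scaffolded with a short stdlib-only script; the subdirectories other than ``docs/`` are taken from the full structure guide and are assumptions here, not part of the excerpt:

```python
from pathlib import Path
import tempfile

def scaffold_collection(root: Path) -> Path:
    """Create an illustrative subset of the collection skeleton.

    Only ``docs/`` appears in the diff excerpt above; the other
    directory names are common collection directories and should be
    checked against the full guide.
    """
    collection = root / "collection"
    for sub in ("docs", "plugins/modules", "roles", "tests"):
        (collection / sub).mkdir(parents=True, exist_ok=True)
    (collection / "README.md").touch()
    return collection

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        created = scaffold_collection(Path(tmp))
        print(sorted(p.name for p in created.iterdir()))
```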
diff --git a/docs/docsite/rst/dev_guide/developing_collections_testing.rst b/docs/docsite/rst/dev_guide/developing_collections_testing.rst
index 11ddcf1350..4820788369 100644
--- a/docs/docsite/rst/dev_guide/developing_collections_testing.rst
+++ b/docs/docsite/rst/dev_guide/developing_collections_testing.rst
@@ -20,7 +20,9 @@ You must always execute ``ansible-test`` from the root directory of a collection
Compile and sanity tests
------------------------
-To run all compile and sanity tests::
+To run all compile and sanity tests:
+
+.. code-block:: shell-session
ansible-test sanity --docker default -v
@@ -31,15 +33,21 @@ Adding unit tests
You must place unit tests in the appropriate ``tests/unit/plugins/`` directory. For example, you would place tests for ``plugins/module_utils/foo/bar.py`` in ``tests/unit/plugins/module_utils/foo/test_bar.py`` or ``tests/unit/plugins/module_utils/foo/bar/test_bar.py``. For examples, see the `unit tests in community.general <https://github.com/ansible-collections/community.general/tree/master/tests/unit/>`_.
-To run all unit tests for all supported Python versions::
+To run all unit tests for all supported Python versions:
+
+.. code-block:: shell-session
ansible-test units --docker default -v
-To run all unit tests only for a specific Python version::
+To run all unit tests only for a specific Python version:
+
+.. code-block:: shell-session
ansible-test units --docker default -v --python 3.6
-To run only a specific unit test::
+To run only a specific unit test:
+
+.. code-block:: shell-session
ansible-test units --docker default -v --python 3.6 tests/unit/plugins/module_utils/foo/test_bar.py
@@ -59,13 +67,17 @@ For examples, see the `integration tests in community.general <https://github.co
Since integration tests can install requirements and set up, start, and stop services, we recommend running them in docker containers or otherwise restricted environments whenever possible. By default, ``ansible-test`` supports Docker images for several operating systems. See the `list of supported docker images <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_data/completion/docker.txt>`_ for all options. Use the ``default`` image mainly for platform-independent integration tests, such as those for cloud modules. The following examples use the ``centos8`` image.
-To execute all integration tests for a collection::
+To execute all integration tests for a collection:
+
+.. code-block:: shell-session
ansible-test integration --docker centos8 -v
If you want more detailed output, run the command with ``-vvv`` instead of ``-v``. Alternatively, specify ``--retry-on-error`` to automatically re-run failed tests with higher verbosity levels.
-To execute only the integration tests in a specific directory::
+To execute only the integration tests in a specific directory:
+
+.. code-block:: shell-session
ansible-test integration --docker centos8 -v connection_bar
diff --git a/docs/docsite/rst/dev_guide/developing_inventory.rst b/docs/docsite/rst/dev_guide/developing_inventory.rst
index 5b6dc906bd..8cc66aadd0 100644
--- a/docs/docsite/rst/dev_guide/developing_inventory.rst
+++ b/docs/docsite/rst/dev_guide/developing_inventory.rst
@@ -360,9 +360,11 @@ Inventory script conventions
Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments. Although other arguments are allowed, Ansible will not use them.
Such arguments might still be useful for executing the scripts directly.
-When the script is called with the single argument ``--list``, the script must output to stdout a JSON object that contains all the groups to be managed. Each group's value should be either an object containing a list of each host, any child groups, and potential group variables, or simply a list of hosts::
+When the script is called with the single argument ``--list``, the script must output to stdout a JSON object that contains all the groups to be managed. Each group's value should be either an object containing a list of each host, any child groups, and potential group variables, or simply a list of hosts:
+
+.. code-block:: json
{
"group001": {
"hosts": ["host001", "host002"],
@@ -383,12 +385,13 @@ When the script is called with the single argument ``--list``, the script must o
If any of the elements of a group are empty, they may be omitted from the output.
-When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print a JSON object, either empty or containing variables to make them available to templates and playbooks. For example::
+When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print a JSON object, either empty or containing variables to make them available to templates and playbooks. For example:
+
+.. code-block:: json
{
"VAR001": "VALUE",
- "VAR002": "VALUE",
+ "VAR002": "VALUE"
}
Printing variables is optional. If the script does not print variables, it should print an empty JSON object.
@@ -404,7 +407,9 @@ The stock inventory script system mentioned above works for all versions of Ansi
To avoid this inefficiency, if the inventory script returns a top-level element called "_meta", it is possible to return all the host variables in a single script execution. When this meta element contains a value for "hostvars", the inventory script will not be invoked with ``--host`` for each host. This behavior results in a significant performance increase for large numbers of hosts.
-The data to be added to the top-level JSON object looks like this::
+The data to be added to the top-level JSON object looks like this:
+
+.. code-block:: text
{
@@ -424,7 +429,9 @@ The data to be added to the top-level JSON object looks like this::
}
To satisfy the requirements of using ``_meta`` and prevent Ansible from calling your inventory with ``--host`` for each host, you must at least populate ``_meta`` with an empty ``hostvars`` object.
-For example::
+For example:
+
+.. code-block:: text
{
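The JSON excerpts in this hunk are truncated. The ``--list``/``--host``/``_meta`` contract the guide describes can be sketched as a self-contained, stdlib-only inventory script; the host and variable names below are hypothetical:

```python
#!/usr/bin/env python
"""Minimal sketch of a dynamic inventory script (hypothetical hosts)."""
import json
import sys

INVENTORY = {
    "group001": {
        "hosts": ["host001", "host002"],
        "vars": {"var1": True},
        "children": [],
    },
    "_meta": {
        # Populating _meta.hostvars stops Ansible from invoking the
        # script once per host with --host.
        "hostvars": {
            "host001": {"VAR001": "VALUE"},
            "host002": {},
        }
    },
}

def main(argv):
    if "--list" in argv:
        return INVENTORY
    if "--host" in argv:
        host = argv[argv.index("--host") + 1]
        # An empty object is the correct answer for unknown hosts or
        # hosts without variables.
        return INVENTORY["_meta"]["hostvars"].get(host, {})
    return {}

if __name__ == "__main__":
    print(json.dumps(main(sys.argv[1:])))
```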
diff --git a/docs/docsite/rst/dev_guide/developing_module_utilities.rst b/docs/docsite/rst/dev_guide/developing_module_utilities.rst
index dfeaef55a9..504a7911fe 100644
--- a/docs/docsite/rst/dev_guide/developing_module_utilities.rst
+++ b/docs/docsite/rst/dev_guide/developing_module_utilities.rst
@@ -8,7 +8,9 @@ Ansible provides a number of module utilities, or snippets of shared code, that
provide helper functions you can use when developing your own modules. The
``basic.py`` module utility provides the main entry point for accessing the
Ansible library, and all Python Ansible modules must import something from
-``ansible.module_utils``. A common option is to import ``AnsibleModule``::
+``ansible.module_utils``. A common option is to import ``AnsibleModule``:
+
+.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
diff --git a/docs/docsite/rst/dev_guide/developing_modules_documenting.rst b/docs/docsite/rst/dev_guide/developing_modules_documenting.rst
index 49e2d7b552..7d4ab7e450 100644
--- a/docs/docsite/rst/dev_guide/developing_modules_documenting.rst
+++ b/docs/docsite/rst/dev_guide/developing_modules_documenting.rst
@@ -325,7 +325,9 @@ EXAMPLES block
After the shebang, the UTF-8 coding, the copyright line, the license section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.
-Per playbook best practices, each example should include a ``name:`` line::
+Per playbook best practices, each example should include a ``name:`` line:
+
+.. code-block:: text
EXAMPLES = r'''
- name: Ensure foo is installed
@@ -371,7 +373,9 @@ Otherwise, for each value returned, provide the following fields. All fields are
:contains:
Optional. To describe nested return values, set ``type: dict``, or ``type: list``/``elements: dict``, or if you really have to, ``type: complex``, and repeat the elements above for each sub-field.
-Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field::
+Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field:
+
+.. code-block:: text
RETURN = r'''
dest:
diff --git a/docs/docsite/rst/dev_guide/developing_modules_general.rst b/docs/docsite/rst/dev_guide/developing_modules_general.rst
index cb18672d72..7572d6d8d9 100644
--- a/docs/docsite/rst/dev_guide/developing_modules_general.rst
+++ b/docs/docsite/rst/dev_guide/developing_modules_general.rst
@@ -153,7 +153,9 @@ Verifying your module code in a playbook
The next step in verifying your new module is to consume it with an Ansible playbook.
- Create a playbook in any directory: ``$ touch testmod.yml``
-- Add the following to the new playbook file::
+- Add the following to the new playbook file:
+
+.. code-block:: yaml
- name: test my new module
hosts: localhost
diff --git a/docs/docsite/rst/dev_guide/developing_modules_general_windows.rst b/docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
index 44247fa9e6..30e5220b03 100644
--- a/docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
+++ b/docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
@@ -579,7 +579,9 @@ are some steps that need to be followed to set this up:
#!powershell
You can add more args to ``$complex_args`` as required by the module or define the module options through a JSON file
-with the structure::
+with the structure:
+
+.. code-block:: json
{
"ANSIBLE_MODULE_ARGS": {
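The JSON structure in this hunk is cut off after the opening of ``ANSIBLE_MODULE_ARGS``. A small helper of the following shape can generate such a module-arguments file for manual module testing; the option names passed in are hypothetical, since each module defines its own parameters:

```python
import json
import os
import tempfile

def write_module_args(path, **options):
    """Write a module-arguments JSON file of the shape shown above."""
    payload = {"ANSIBLE_MODULE_ARGS": dict(options)}
    with open(path, "w") as fh:
        json.dump(payload, fh, indent=2)
    return payload

if __name__ == "__main__":
    # Hypothetical options for illustration only.
    target = os.path.join(tempfile.mkdtemp(), "args.json")
    write_module_args(target, name="foo", state="present")
    with open(target) as fh:
        print(fh.read())
```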
diff --git a/docs/docsite/rst/dev_guide/developing_program_flow_modules.rst b/docs/docsite/rst/dev_guide/developing_program_flow_modules.rst
index c443bfa7a6..4e50171fdf 100644
--- a/docs/docsite/rst/dev_guide/developing_program_flow_modules.rst
+++ b/docs/docsite/rst/dev_guide/developing_program_flow_modules.rst
@@ -425,7 +425,9 @@ _ansible_selinux_special_fs
List. Names of filesystems which should have a special SELinux
context. They are used by the `AnsibleModule` methods which operate on
-files (changing attributes, moving, and copying). To set, add a comma separated string of filesystem names in :file:`ansible.cfg`::
+files (changing attributes, moving, and copying). To set, add a comma separated string of filesystem names in :file:`ansible.cfg`:
+
+.. code-block:: ini
# ansible.cfg
[selinux]
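The ``ansible.cfg`` excerpt in this hunk is truncated after the section header. In the stock configuration this list is exposed as ``special_context_filesystems``; the values below are the commonly cited defaults and should be verified against your Ansible version:

```ini
# ansible.cfg
[selinux]
# Comma-separated list of filesystems that require a special SELinux context.
special_context_filesystems = fuse, nfs, vboxsf, ramfs, 9p, vfat
```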
diff --git a/docs/docsite/rst/dev_guide/developing_rebasing.rst b/docs/docsite/rst/dev_guide/developing_rebasing.rst
index 7c71446aca..dcd1fb07c4 100644
--- a/docs/docsite/rst/dev_guide/developing_rebasing.rst
+++ b/docs/docsite/rst/dev_guide/developing_rebasing.rst
@@ -14,17 +14,23 @@ Rebasing the branch used to create your PR will resolve both of these issues.
Configuring your remotes
========================
-Before you can rebase your PR, you need to make sure you have the proper remotes configured. These instructions apply to any repository on GitHub, including collections repositories. On other platforms (bitbucket, gitlab), the same principles and commands apply but the syntax may be different. We use the ansible/ansible repository here as an example. In other repositories, the branch names may be different. Assuming you cloned your fork in the usual fashion, the ``origin`` remote will point to your fork::
+Before you can rebase your PR, you need to make sure you have the proper remotes configured. These instructions apply to any repository on GitHub, including collections repositories. On other platforms (bitbucket, gitlab), the same principles and commands apply but the syntax may be different. We use the ansible/ansible repository here as an example. In other repositories, the branch names may be different. Assuming you cloned your fork in the usual fashion, the ``origin`` remote will point to your fork:
+
+.. code-block:: shell-session
$ git remote -v
origin git@github.com:YOUR_GITHUB_USERNAME/ansible.git (fetch)
origin git@github.com:YOUR_GITHUB_USERNAME/ansible.git (push)
-However, you also need to add a remote which points to the upstream repository::
+However, you also need to add a remote which points to the upstream repository:
+
+.. code-block:: shell-session
$ git remote add upstream https://github.com/ansible/ansible.git
-Which should leave you with the following remotes::
+Which should leave you with the following remotes:
+
+.. code-block:: shell-session
$ git remote -v
origin git@github.com:YOUR_GITHUB_USERNAME/ansible.git (fetch)
@@ -32,7 +38,9 @@ Which should leave you with the following remotes::
upstream https://github.com/ansible/ansible.git (fetch)
upstream https://github.com/ansible/ansible.git (push)
-Checking the status of your branch should show your fork is up-to-date with the ``origin`` remote::
+Checking the status of your branch should show your fork is up-to-date with the ``origin`` remote:
+
+.. code-block:: shell-session
$ git status
On branch YOUR_BRANCH
@@ -42,14 +50,18 @@ Checking the status of your branch should show your fork is up-to-date with the
Rebasing your branch
====================
-Once you have an ``upstream`` remote configured, you can rebase the branch for your PR::
+Once you have an ``upstream`` remote configured, you can rebase the branch for your PR:
+
+.. code-block:: shell-session
$ git pull --rebase upstream devel
This will replay the changes in your branch on top of the changes made in the upstream ``devel`` branch.
If there are merge conflicts, you will be prompted to resolve those before you can continue.
-After you rebase, the status of your branch changes::
+After you rebase, the status of your branch changes:
+
+.. code-block:: shell-session
$ git status
On branch YOUR_BRANCH
@@ -65,7 +77,9 @@ Updating your pull request
Now that you've rebased your branch, you need to push your changes to GitHub to update your PR.
-Since rebasing re-writes git history, you will need to use a force push::
+Since rebasing re-writes git history, you will need to use a force push:
+
+.. code-block:: shell-session
$ git push --force-with-lease
diff --git a/docs/docsite/rst/dev_guide/overview_architecture.rst b/docs/docsite/rst/dev_guide/overview_architecture.rst
index fdd90625b6..2a3fe55bab 100644
--- a/docs/docsite/rst/dev_guide/overview_architecture.rst
+++ b/docs/docsite/rst/dev_guide/overview_architecture.rst
@@ -40,7 +40,9 @@ To add new machines, there is no additional SSL signing server involved, so ther
If there's another source of truth in your infrastructure, Ansible can also connect to that. Ansible can draw inventory, group, and variable information from sources like EC2, Rackspace, OpenStack, and more.
-Here's what a plain text inventory file looks like::
+Here's what a plain text inventory file looks like:
+
+.. code-block:: text
---
[webservers]
@@ -62,7 +64,9 @@ Playbooks can finely orchestrate multiple slices of your infrastructure topology
Ansible's approach to orchestration is one of finely-tuned simplicity, as we believe your automation code should make perfect sense to you years down the road and there should be very little to remember about special syntax or features.
-Here's what a simple playbook looks like::
+Here's what a simple playbook looks like:
+
+.. code-block:: yaml
---
- hosts: webservers
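The playbook excerpt in this hunk ends after its first line. A minimal playbook of the shape the guide describes might look like the following; the host group matches the excerpt, while the task is purely illustrative:

```yaml
---
- hosts: webservers
  become: true
  tasks:
    - name: Ensure the web server package is present  # illustrative task
      ansible.builtin.package:
        name: nginx
        state: present
```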
diff --git a/docs/docsite/rst/dev_guide/testing.rst b/docs/docsite/rst/dev_guide/testing.rst
index 0170f114eb..d111973616 100644
--- a/docs/docsite/rst/dev_guide/testing.rst
+++ b/docs/docsite/rst/dev_guide/testing.rst
@@ -60,7 +60,9 @@ Organization
When Pull Requests (PRs) are created they are tested using Azure Pipelines, a Continuous Integration (CI) tool. Results are shown at the end of every PR.
-When Azure Pipelines detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example::
+When Azure Pipelines detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example:
+
+.. code-block:: text
The test `ansible-test sanity --test pep8` failed with the following errors:
@@ -71,11 +73,15 @@ When Azure Pipelines detects an error and it can be linked back to a file that h
From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified an issue. The commands given allow you to run the same tests locally to ensure you've fixed all issues without having to push your changes to GitHub and wait for Azure Pipelines, for example:
-If you haven't already got Ansible available, use the local checkout by running::
+If you haven't already got Ansible available, use the local checkout by running:
+
+.. code-block:: shell-session
source hacking/env-setup
-Then run the tests detailed in the GitHub comment::
+Then run the tests detailed in the GitHub comment:
+
+.. code-block:: shell-session
ansible-test sanity --test pep8
ansible-test sanity --test validate-modules
@@ -126,8 +132,9 @@ Here's how:
other flavors, since some features (for example, package managers such as apt or yum) are specific to those OS versions.
-Create a fresh area to work::
+Create a fresh area to work:
+
+.. code-block:: shell-session
git clone https://github.com/ansible/ansible.git ansible-pr-testing
cd ansible-pr-testing
@@ -140,7 +147,9 @@ Next, find the pull request you'd like to test and make note of its number. It w
It is important that the PR request target be ``ansible:devel``, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
-Use the pull request number when you fetch the proposed changes and create your branch for testing::
+Use the pull request number when you fetch the proposed changes and create your branch for testing:
+
+.. code-block:: shell-session
git fetch origin refs/pull/XXXX/head:testing_PRXXXX
git checkout testing_PRXXXX
@@ -156,7 +165,9 @@ The first command fetches the proposed changes from the pull request and creates
The Ansible source includes a script, frequently used by developers on Ansible, that allows you to use Ansible
directly from source without requiring a full installation.
-Simply source it (to use the Linux/Unix terminology) to begin using it immediately::
+Simply source it (to use the Linux/Unix terminology) to begin using it immediately:
+
+.. code-block:: shell-session
source ./hacking/env-setup
diff --git a/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst b/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst
index 7ccf0dc702..271f88f188 100644
--- a/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst
+++ b/docs/docsite/rst/dev_guide/testing/sanity/no-main-display.rst
@@ -3,7 +3,9 @@ no-main-display
As of Ansible 2.8, ``Display`` should no longer be imported from ``__main__``.
-``Display`` is now a singleton and should be utilized like the following::
+``Display`` is now a singleton and should be utilized like the following:
+
+.. code-block:: python
from ansible.utils.display import Display
display = Display()
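The diff above shows the usage side only. The singleton behavior the sanity test relies on can be sketched in plain Python; this stand-in class merely illustrates the pattern, while the real implementation lives at ``ansible.utils.display.Display`` and does much more:

```python
class Display:
    """Sketch of a singleton, illustrating the pattern the sanity test expects.

    Hypothetical stand-in; not the real ansible.utils.display.Display.
    """
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Always hand back the same instance, no matter where it is
        # constructed, so every importer shares one Display.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def display(self, msg):
        print(msg)

display = Display()
```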
diff --git a/docs/docsite/rst/dev_guide/testing_integration.rst b/docs/docsite/rst/dev_guide/testing_integration.rst
index 5402d64457..366ca68172 100644
--- a/docs/docsite/rst/dev_guide/testing_integration.rst
+++ b/docs/docsite/rst/dev_guide/testing_integration.rst
@@ -33,12 +33,16 @@ ansible-test command
--------------------
The example below assumes ``bin/`` is in your ``$PATH``. An easy way to achieve that
-is to initialize your environment with the ``env-setup`` command::
+is to initialize your environment with the ``env-setup`` command:
+
+.. code-block:: shell-session
source hacking/env-setup
ansible-test --help
-You can also call ``ansible-test`` with the full path::
+You can also call ``ansible-test`` with the full path:
+
+.. code-block:: shell-session
bin/ansible-test --help
@@ -74,19 +78,27 @@ outside of those test subdirectories. They will also not reconfigure or bounce
Use the ``--docker-no-pull`` option to avoid pulling the latest container image. This is required when using custom local images that are not available for download.
-Run as follows for all POSIX platform tests executed by our CI system in a fedora32 docker container::
+Run as follows for all POSIX platform tests executed by our CI system in a fedora32 docker container:
+
+.. code-block:: shell-session
ansible-test integration shippable/ --docker fedora32
-You can target a specific tests as well, such as for individual modules::
+You can target a specific test as well, such as for individual modules:
+
+.. code-block:: shell-session
ansible-test integration ping
-You can use the ``-v`` option to make the output more verbose::
+You can use the ``-v`` option to make the output more verbose:
+
+.. code-block:: shell-session
ansible-test integration lineinfile -vvv
-Use the following command to list all the available targets::
+Use the following command to list all the available targets:
+
+.. code-block:: shell-session
ansible-test integration --list-targets
@@ -98,7 +110,9 @@ Destructive Tests
=================
These tests are allowed to install and remove some trivial packages. You will likely want to devote these
-to a virtual environment, such as Docker. They won't reformat your filesystem::
+to a virtual environment, such as Docker. They won't reformat your filesystem:
+
+.. code-block:: shell-session
ansible-test integration destructive/ --docker fedora32
@@ -112,16 +126,22 @@ for testing, and enable PowerShell Remoting to continue.
Running these tests may result in changes to your Windows host, so don't run
them against a production/critical Windows environment.
-Enable PowerShell Remoting (run on the Windows host via Remote Desktop)::
+Enable PowerShell Remoting (run on the Windows host via Remote Desktop):
+
+.. code-block:: shell-session
Enable-PSRemoting -Force
-Define Windows inventory::
+Define Windows inventory:
+
+.. code-block:: shell-session
cp inventory.winrm.template inventory.winrm
${EDITOR:-vi} inventory.winrm
-Run the Windows tests executed by our CI system::
+Run the Windows tests executed by our CI system:
+
+.. code-block:: shell-session
ansible-test windows-integration -v shippable/
@@ -140,12 +160,16 @@ the Ansible continuous integration (CI) system is recommended.
Running Integration Tests
-------------------------
-To run all CI integration test targets for POSIX platforms in a Ubuntu 18.04 container::
+To run all CI integration test targets for POSIX platforms in an Ubuntu 18.04 container:
+
+.. code-block:: shell-session
ansible-test integration shippable/ --docker ubuntu1804
You can also run specific tests or select a different Linux distribution.
-For example, to run tests for the ``ping`` module on a Ubuntu 18.04 container::
+For example, to run tests for the ``ping`` module on an Ubuntu 18.04 container:
+
+.. code-block:: shell-session
ansible-test integration ping --docker ubuntu1804
diff --git a/docs/docsite/rst/dev_guide/testing_integration_legacy.rst b/docs/docsite/rst/dev_guide/testing_integration_legacy.rst
index d3656369f7..02c88bb136 100644
--- a/docs/docsite/rst/dev_guide/testing_integration_legacy.rst
+++ b/docs/docsite/rst/dev_guide/testing_integration_legacy.rst
@@ -41,7 +41,9 @@ In order to run cloud tests, you must provide access credentials in a file
named ``credentials.yml``. A sample credentials file named
``credentials.template`` is available for syntax help.
-Provide cloud credentials::
+Provide cloud credentials:
+
+.. code-block:: shell-session
cp credentials.template credentials.yml
${EDITOR:-vi} credentials.yml
@@ -85,11 +87,15 @@ Running Tests
The tests are invoked via a ``Makefile``.
-If you haven't already got Ansible available use the local checkout by doing::
+If you don't already have Ansible available, use the local checkout by running:
+
+.. code-block:: shell-session
source hacking/env-setup
-Run the tests by doing::
+Run the tests with:
+
+.. code-block:: shell-session
cd test/integration/
# TARGET is the name of the test from the list at the top of this page
diff --git a/docs/docsite/rst/dev_guide/testing_running_locally.rst b/docs/docsite/rst/dev_guide/testing_running_locally.rst
index 1a53ddcc10..dcf7e6d9f7 100644
--- a/docs/docsite/rst/dev_guide/testing_running_locally.rst
+++ b/docs/docsite/rst/dev_guide/testing_running_locally.rst
@@ -70,7 +70,9 @@ be written. Online reports are available but only cover the ``devel`` branch (s
Add the ``--coverage`` option to any test command to collect code coverage data. If you
aren't using the ``--venv`` or ``--docker`` options which create an isolated python
environment then you may have to use the ``--requirements`` option to ensure that the
-correct version of the coverage module is installed::
+correct version of the coverage module is installed:
+
+.. code-block:: shell-session
ansible-test coverage erase
ansible-test units --coverage apt
@@ -84,6 +86,8 @@ Reports can be generated in several different formats:
* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.
-To clear data between test runs, use the ``ansible-test coverage erase`` command. For a full list of features see the online help::
+To clear data between test runs, use the ``ansible-test coverage erase`` command. For a full list of features, see the online help:
+
+.. code-block:: shell-session
ansible-test coverage --help
diff --git a/docs/docsite/rst/dev_guide/testing_units.rst b/docs/docsite/rst/dev_guide/testing_units.rst
index dda2d0349d..3b876455a7 100644
--- a/docs/docsite/rst/dev_guide/testing_units.rst
+++ b/docs/docsite/rst/dev_guide/testing_units.rst
@@ -53,7 +53,9 @@ If you are running unit tests against things other than modules, such as module
ansible-test units --docker -v test/units/module_utils/basic/test_imports.py
-For advanced usage see the online help::
+For advanced usage, see the online help:
+
+.. code-block:: shell-session
ansible-test units --help
@@ -104,35 +106,39 @@ Ansible drives unit tests through `pytest <https://docs.pytest.org/en/latest/>`_
means that tests can be written either as simple functions, included in any file
named like ``test_<something>.py``, or as classes.
-Here is an example of a function::
+Here is an example of a function:
+
+.. code-block:: python
# this function will be called simply because it is named test_*()
- def test_add()
+ def test_add():
a = 10
b = 23
c = 33
- assert a + b = c
+ assert a + b == c
+
+Here is an example of a class:
-Here is an example of a class::
+.. code-block:: python
import unittest
- class AddTester(unittest.TestCase)
+ class AddTester(unittest.TestCase):
- def SetUp()
+ def setUp(self):
self.a = 10
self.b = 23
# this function will be called because its name starts with test_
- def test_add()
+ def test_add(self):
c = 33
- assert self.a + self.b = c
+ assert self.a + self.b == c
# this function will be called because its name starts with test_
- def test_subtract()
+ def test_subtract(self):
c = -13
- assert self.a - self.b = c
+ assert self.a - self.b == c
Both methods work fine in most circumstances; the function-based interface is simpler and
quicker and so that's probably where you should start when you are just trying to add a
diff --git a/docs/docsite/rst/dev_guide/testing_units_modules.rst b/docs/docsite/rst/dev_guide/testing_units_modules.rst
index 9dd2ee9401..fe715d8ad5 100644
--- a/docs/docsite/rst/dev_guide/testing_units_modules.rst
+++ b/docs/docsite/rst/dev_guide/testing_units_modules.rst
@@ -168,7 +168,9 @@ Ensuring failure cases are visible with mock objects
Functions like :meth:`module.fail_json` are normally expected to terminate execution. When you
run with a mock module object this doesn't happen since the mock always returns another mock
from a function call. You can set up the mock to raise an exception as shown above, or you can
-assert that these functions have not been called in each test. For example::
+assert that these functions have not been called in each test. For example:
+
+.. code-block:: python
module = MagicMock()
function_to_test(module, argument)
@@ -185,7 +187,9 @@ The setup of an actual module is quite complex (see `Passing Arguments`_ below)
isn't needed for most functions which use a module. Instead you can use a mock object as
the module and create any module attributes needed by the function you are testing. If
you do this, beware that the module exit functions need special handling as mentioned
-above, either by throwing an exception or ensuring that they haven't been called. For example::
+above, either by throwing an exception or ensuring that they haven't been called. For example:
+
+.. code-block:: python
class AnsibleExitJson(Exception):
"""Exception class to be raised by module.exit_json and caught by the test case"""
@@ -218,7 +222,9 @@ present in the message. This means that we can check that we use the correct
parameters and nothing else.
-*Example: in rds_instance unit tests a simple instance state is defined*::
+*Example: in rds_instance unit tests a simple instance state is defined*:
+
+.. code-block:: python
def simple_instance_list(status, pending):
return {u'DBInstances': [{u'DBInstanceArn': 'arn:aws:rds:us-east-1:1234567890:db:fakedb',
@@ -226,7 +232,9 @@ parameters and nothing else.
u'PendingModifiedValues': pending,
u'DBInstanceIdentifier': 'fakedb'}]}
-This is then used to create a list of states::
+This is then used to create a list of states:
+
+.. code-block:: python
rds_client_double = MagicMock()
rds_client_double.describe_db_instances.side_effect = [
@@ -243,7 +251,9 @@ This is then used to create a list of states::
These states are then used as returns from a mock object to ensure that the ``await`` function
waits through all of the states that would mean the RDS instance has not yet completed
-configuration::
+configuration:
+
+.. code-block:: python
rds_i.await_resource(rds_client_double, "some-instance", "available", mod_mock,
await_pending=1)
@@ -292,7 +302,9 @@ To pass arguments to a module correctly, use the ``set_module_args`` method whic
as its parameter. Module creation and argument processing is
handled through the :class:`AnsibleModule` object in the basic section of the utilities. Normally
this accepts input on ``STDIN``, which is not convenient for unit testing. When the special
-variable is set it will be treated as if the input came on ``STDIN`` to the module. Simply call that function before setting up your module::
+variable is set, it will be treated as if the input came on ``STDIN`` to the module. Call that function before setting up your module:
+
+.. code-block:: python
import json
from units.modules.utils import set_module_args
@@ -314,7 +326,9 @@ Handling exit correctly
The :meth:`module.exit_json` function won't work properly in a testing environment since it
writes error information to ``STDOUT`` upon exit, where it
is difficult to examine. This can be mitigated by replacing it (and :meth:`module.fail_json`) with
-a function that raises an exception::
+a function that raises an exception:
+
+.. code-block:: python
def exit_json(*args, **kwargs):
if 'changed' not in kwargs:
@@ -322,7 +336,9 @@ a function that raises an exception::
raise AnsibleExitJson(kwargs)
Now you can ensure that the first function called is the one you expected simply by
-testing for the correct exception::
+testing for the correct exception:
+
+.. code-block:: python
def test_returned_value(self):
set_module_args({
@@ -342,7 +358,9 @@ Running the main function
-------------------------
If you do want to run the actual main function of a module you must import the module, set
-the arguments as above, set up the appropriate exit exception and then run the module::
+the arguments as above, set up the appropriate exit exception and then run the module:
+
+.. code-block:: python
# This test is based around pytest's features for individual test functions
import pytest
@@ -364,7 +382,9 @@ Handling calls to external executables
Modules must use :meth:`AnsibleModule.run_command` in order to execute an external command. This
method needs to be mocked:
-Here is a simple mock of :meth:`AnsibleModule.run_command` (taken from :file:`test/units/modules/packaging/os/test_rhn_register.py`)::
+Here is a simple mock of :meth:`AnsibleModule.run_command` (taken from :file:`test/units/modules/packaging/os/test_rhn_register.py`):
+
+.. code-block:: python
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.return_value = 0, '', '' # successful execution, no output
@@ -381,7 +401,9 @@ A Complete Example
------------------
The following example is a complete skeleton that reuses the mocks explained above and adds a new
-mock for :meth:`Ansible.get_bin_path`::
+mock for :meth:`Ansible.get_bin_path`:
+
+.. code-block:: python
import json
@@ -470,7 +492,9 @@ Restructuring modules to enable testing module set up and other processes
Often modules have a ``main()`` function which sets up the module and then performs other
actions. This can make it difficult to check argument processing. This can be made easier by
-moving module configuration and initialization into a separate function. For example::
+moving module configuration and initialization into a separate function. For example:
+
+.. code-block:: python
argument_spec = dict(
# module function variables
@@ -498,7 +522,9 @@ moving module configuration and initialization into a separate function. For exa
return_dict = run_task(module, conn)
module.exit_json(**return_dict)
-This now makes it possible to run tests against the module initiation function::
+This now makes it possible to run tests against the module initialization function:
+
+.. code-block:: python
def test_rds_module_setup_fails_if_db_instance_identifier_parameter_missing():
# db_instance_identifier parameter is missing
diff --git a/docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst b/docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
index 18a348b363..ef56aa8722 100644
--- a/docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
+++ b/docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
@@ -42,7 +42,9 @@ Ansible includes logging to help diagnose and troubleshoot issues regarding Ansi
Because logging is very verbose, it is disabled by default. It can be enabled with the :envvar:`ANSIBLE_LOG_PATH` and :envvar:`ANSIBLE_DEBUG` options on the ansible-controller, that is, the machine running ``ansible-playbook``.
-Before running ``ansible-playbook``, run the following commands to enable logging::
+Before running ``ansible-playbook``, run the following commands to enable logging:
+
+.. code-block:: shell
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
@@ -379,7 +381,9 @@ or
You can tell Ansible to automatically accept the keys
-Environment variable method::
+Environment variable method:
+
+.. code-block:: shell
export ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD=True
ansible-playbook ...
diff --git a/docs/docsite/rst/network/user_guide/network_working_with_command_output.rst b/docs/docsite/rst/network/user_guide/network_working_with_command_output.rst
index 12040d4b52..6215df97ef 100644
--- a/docs/docsite/rst/network/user_guide/network_working_with_command_output.rst
+++ b/docs/docsite/rst/network/user_guide/network_working_with_command_output.rst
@@ -26,7 +26,9 @@ executed remotely on the device. Once the task executes the command
set, the ``wait_for`` argument can be used to evaluate the results before
returning control to the Ansible playbook.
-For example::
+For example:
+
+.. code-block:: yaml
---
- name: wait for interface to be admin enabled
@@ -45,7 +47,9 @@ until either the condition is satisfied or the number of retries has
expired (by default, this is 10 retries at 1 second intervals).
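These defaults can also be made explicit; here is a minimal sketch, assuming a Cisco IOS device and the ``ios_command`` module, with a hypothetical interface name and condition:

.. code-block:: yaml

    ---
    - name: wait for interface to be admin enabled, with explicit retry settings
      ios_command:
        commands:
          - show interface GigabitEthernet0/1
        wait_for:
          - result[0] contains Administratively
        retries: 10   # number of attempts before failing (the default)
        interval: 1   # seconds to wait between attempts (the default)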
The commands module can also evaluate more than one set of command
-results in an interface. For instance::
+results in an interface. For instance:
+
+.. code-block:: yaml
---
- name: wait for interfaces to be admin enabled
diff --git a/docs/docsite/rst/reference_appendices/YAMLSyntax.rst b/docs/docsite/rst/reference_appendices/YAMLSyntax.rst
index fe1f06ee1d..5cdd2f10b5 100644
--- a/docs/docsite/rst/reference_appendices/YAMLSyntax.rst
+++ b/docs/docsite/rst/reference_appendices/YAMLSyntax.rst
@@ -107,7 +107,10 @@ While in the above ``>`` example all newlines are folded into spaces, there are
d
e
f
- same_as: "a b\nc d\n e\nf\n"
+
+Alternatively, newlines can be enforced by including ``\n`` characters::
+
+ fold_same_newlines: "a b\nc d\n e\nf\n"
Let's combine what we learned so far in an arbitrary YAML example.
This really has nothing to do with Ansible, but will give you a feel for the format::
diff --git a/docs/docsite/rst/roadmap/COLLECTIONS_5.rst b/docs/docsite/rst/roadmap/COLLECTIONS_5.rst
index 01070b8994..9789cd8b93 100644
--- a/docs/docsite/rst/roadmap/COLLECTIONS_5.rst
+++ b/docs/docsite/rst/roadmap/COLLECTIONS_5.rst
@@ -31,7 +31,7 @@ Release schedule
:2021-11-16: Ansible-5.0.0 beta2.
:2021-11-23: Ansible-5.0.0 rc1 [2]_ [3]_ (weekly release candidates as needed; test and alert us to any blocker bugs). Blocker bugs will slip release.
:2021-11-30: Ansible-5.0.0 release.
-:2022-01-04: Release of Ansible-5.1.0 (bugfix + compatible features: every three weeks. Note: this comes 4 week after 5.0.0 due to the winter holiday season).
+:2021-12-21: Release of Ansible-5.1.0 (bugfix + compatible features: every three weeks).
.. [1] No new modules or major features accepted after this date. In practice, this means we will freeze the semver collection versions to compatible release versions. For example, if the version of community.crypto on this date was community.crypto 2.1.0; Ansible-5.0.0 could ship with community.crypto 2.1.1. It would not ship with community.crypto 2.2.0.
diff --git a/docs/docsite/rst/scenario_guides/cloud_guides.rst b/docs/docsite/rst/scenario_guides/cloud_guides.rst
index d430bddae9..a0e6e8ed97 100644
--- a/docs/docsite/rst/scenario_guides/cloud_guides.rst
+++ b/docs/docsite/rst/scenario_guides/cloud_guides.rst
@@ -1,16 +1,19 @@
.. _cloud_guides:
-*******************
-Public Cloud Guides
-*******************
+**************************
+Legacy Public Cloud Guides
+**************************
-The guides in this section cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+The legacy guides in this section may be out of date. They cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+Guides for using public clouds are moving into collections. Please update your links for the following guides:
+
+:ref:`ansible_collections.amazon.aws.docsite.aws_intro`
.. toctree::
:maxdepth: 1
guide_alicloud
- guide_aws
guide_cloudstack
guide_gce
guide_azure
diff --git a/docs/docsite/rst/scenario_guides/guide_aws.rst b/docs/docsite/rst/scenario_guides/guide_aws.rst
index 46a3132337..f293155639 100644
--- a/docs/docsite/rst/scenario_guides/guide_aws.rst
+++ b/docs/docsite/rst/scenario_guides/guide_aws.rst
@@ -1,284 +1,6 @@
+:orphan:
+
Amazon Web Services Guide
=========================
-.. _aws_intro:
-
-Introduction
-````````````
-
-Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this
-section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in AWS context.
-
-Requirements for the AWS modules are minimal.
-
-All of the modules require and are tested against recent versions of botocore and boto3. Starting with the 2.0 AWS collection releases, it is generally the policy of the collections to support the versions of these libraries released 12 months prior to the most recent major collection revision. Individual modules may require a more recent library version to support specific features or may require the boto library, check the module documentation for the minimum required version for each module. You must have the boto3 Python module installed on your control machine. You can install these modules from your OS distribution or using the python package installer: ``pip install boto3``.
-
-Starting with the 2.0 releases of both collections, Python 2.7 support will be ended in accordance with AWS' `end of Python 2.7 support <https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/>`_ and Python 3.6 or greater will be required.
-
-
-Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.
-
-In your playbook steps we'll typically be using the following pattern for provisioning steps::
-
- - hosts: localhost
- gather_facts: False
- tasks:
- - ...
-
-.. _aws_authentication:
-
-Authentication
-``````````````
-
-Authentication with the AWS-related modules is handled by either
-specifying your access and secret key as ENV variables or module arguments.
-
-For environment variables::
-
- export AWS_ACCESS_KEY_ID='AK123'
- export AWS_SECRET_ACCESS_KEY='abc123'
-
-For storing these in a vars_file, ideally encrypted with ansible-vault::
-
- ---
- ec2_access_key: "--REMOVED--"
- ec2_secret_key: "--REMOVED--"
-
-Note that if you store your credentials in vars_file, you need to refer to them in each AWS-module. For example::
-
- - ec2
- aws_access_key: "{{ec2_access_key}}"
- aws_secret_key: "{{ec2_secret_key}}"
- image: "..."
-
-.. _aws_provisioning:
-
-Provisioning
-````````````
-
-The ec2 module provisions and de-provisions instances within EC2.
-
-An example of making sure there are only 5 instances tagged 'Demo' in EC2 follows.
-
-In the example below, the "exact_count" of instances is set to 5. This means if there are 0 instances already existing, then
-5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would
-be terminated.
-
-What is being counted is specified by the "count_tag" parameter. The parameter "instance_tags" is used to apply tags to the newly created
-instance.::
-
- # demo_setup.yml
-
- - hosts: localhost
- gather_facts: False
-
- tasks:
-
- - name: Provision a set of instances
- ec2:
- key_name: my_key
- group: test
- instance_type: t2.micro
- image: "{{ ami_id }}"
- wait: true
- exact_count: 5
- count_tag:
- Name: Demo
- instance_tags:
- Name: Demo
- register: ec2
-
-The data about what instances are created is being saved by the "register" keyword in the variable named "ec2".
-
-From this, we'll use the add_host module to dynamically create a host group consisting of these new instances. This facilitates performing configuration actions on the hosts immediately in a subsequent task.::
-
- # demo_setup.yml
-
- - hosts: localhost
- gather_facts: False
-
- tasks:
-
- - name: Provision a set of instances
- ec2:
- key_name: my_key
- group: test
- instance_type: t2.micro
- image: "{{ ami_id }}"
- wait: true
- exact_count: 5
- count_tag:
- Name: Demo
- instance_tags:
- Name: Demo
- register: ec2
-
- - name: Add all instance public IPs to host group
- add_host: hostname={{ item.public_ip }} groups=ec2hosts
- loop: "{{ ec2.instances }}"
-
-With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::
-
- # demo_setup.yml
-
- - name: Provision a set of instances
- hosts: localhost
- # ... AS ABOVE ...
-
- - hosts: ec2hosts
- name: configuration play
- user: ec2-user
- gather_facts: true
-
- tasks:
-
- - name: Check NTP service
- service: name=ntpd state=started
-
-.. _aws_security_groups:
-
-Security Groups
-```````````````
-
-Security groups on AWS are stateful. The response of a request from your instance is allowed to flow in regardless of inbound security group rules and vice-versa.
-In case you only want allow traffic with AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as an egress rule.::
-
- - name: fetch raw ip ranges for aws s3
- set_fact:
- raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"
-
- - name: prepare list structure for ec2_group module
- set_fact:
- s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
- loop: "{{ raw_s3_ranges }}"
-
- - name: set S3 IP ranges to egress rules
- ec2_group:
- name: aws_s3_ip_ranges
- description: allow outgoing traffic to aws S3 service
- region: eu-central-1
- state: present
- vpc_id: vpc-123456
- purge_rules: true
- purge_rules_egress: true
- rules: []
- rules_egress: "{{ s3_ranges }}"
- tags:
- Name: aws_s3_ip_ranges
-
-.. _aws_host_inventory:
-
-Host Inventory
-``````````````
-
-Once your nodes are spun up, you'll probably want to talk to them again. With a cloud setup, it's best to not maintain a static list of cloud hostnames
-in text files. Rather, the best way to handle this is to use the aws_ec2 inventory plugin. See :ref:`dynamic_inventory`.
-
-The plugin will also return instances that were created outside of Ansible and allow Ansible to manage them.
-
-.. _aws_tags_and_groups:
-
-Tags And Groups And Variables
-`````````````````````````````
-
-When using the inventory plugin, you can configure extra inventory structure based on the metadata returned by AWS.
-
-For instance, you might use ``keyed_groups`` to create groups from instance tags::
-
- plugin: aws_ec2
- keyed_groups:
- - prefix: tag
- key: tags
-
-
-You can then target all instances with a "class" tag where the value is "webserver" in a play::
-
- - hosts: tag_class_webserver
- tasks:
- - ping
-
-You can also use these groups with 'group_vars' to set variables that are automatically applied to matching instances. See :ref:`splitting_out_vars`.
-
-.. _aws_pull:
-
-Autoscaling with Ansible Pull
-`````````````````````````````
-
-Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that
-can configure autoscaling policy.
-
-When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.
-
-To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.
-
-One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context.
-For this reason, the autoscaling solution provided below in the next section can be a better approach.
-
-Read :ref:`ansible-pull` for more information on pull-mode playbooks.
-
-.. _aws_autoscale:
-
-Autoscaling with Ansible Tower
-``````````````````````````````
-
-:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
-a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
-to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.
-
-A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
-with remote hosts.
-
-.. _aws_cloudformation_example:
-
-Ansible With (And Versus) CloudFormation
-````````````````````````````````````````
-
-CloudFormation is a Amazon technology for defining a cloud stack as a JSON or YAML document.
-
-Ansible modules provide an easier to use interface than CloudFormation in many examples, without defining a complex JSON/YAML document.
-This is recommended for most users.
-
-However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
-to Amazon.
-
-When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch
-those images, or ansible will be invoked through user data once the image comes online, or a combination of the two.
-
-Please see the examples in the Ansible CloudFormation module for more details.
-
-.. _aws_image_build:
-
-AWS Image Building With Ansible
-```````````````````````````````
-
-Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this,
-one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with
-the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible's
-ec2_ami module.
-
-Generally speaking, we find most users using Packer.
-
-See the Packer documentation of the `Ansible local Packer provisioner <https://www.packer.io/docs/provisioners/ansible/ansible-local>`_ and `Ansible remote Packer provisioner <https://www.packer.io/docs/provisioners/ansible/ansible>`_.
-
-If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.
-
-.. _aws_next_steps:
-
-Next Steps: Explore Modules
-```````````````````````````
-
-Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the "Cloud" category of the module
-documentation for a full list with examples.
-
-.. seealso::
-
- :ref:`list_of_collections`
- Browse existing collections, modules, and plugins
- :ref:`working_with_playbooks`
- An introduction to playbooks
- :ref:`playbooks_delegation`
- Delegation, useful for working with loud balancers, clouds, and locally executed steps.
- `User Mailing List <https://groups.google.com/group/ansible-devel>`_
- Have a question? Stop by the google group!
- :ref:`communication_irc`
- How to join Ansible chat channels
+The content on this page has moved. Please see the updated :ref:`ansible_collections.amazon.aws.docsite.aws_intro` in the AWS collection.
diff --git a/docs/docsite/rst/scenario_guides/guides.rst b/docs/docsite/rst/scenario_guides/guides.rst
index ef3a02eed8..2545c452d9 100644
--- a/docs/docsite/rst/scenario_guides/guides.rst
+++ b/docs/docsite/rst/scenario_guides/guides.rst
@@ -6,15 +6,19 @@
Scenario Guides
******************
-The guides in this section cover integrating Ansible with a variety of
-platforms, products, and technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+The guides in this section are migrating into collections. Remaining guides may be out of date.
+
+These guides cover integrating Ansible with a variety of platforms, products, and technologies. They explore particular use cases in greater depth and provide a more "top-down" explanation of some basic features.
+
+Please update your links for the following guides:
+
+:ref:`ansible_collections.amazon.aws.docsite.aws_intro`
.. toctree::
:maxdepth: 1
- :caption: Public Cloud Guides
+ :caption: Legacy Public Cloud Guides
guide_alicloud
- guide_aws
guide_cloudstack
guide_gce
guide_azure
@@ -40,4 +44,3 @@ platforms, products, and technologies. They explore particular use cases in grea
guide_kubernetes
guide_vagrant
guide_vmware
- guide_vmware_rest
diff --git a/docs/docsite/rst/user_guide/intro_getting_started.rst b/docs/docsite/rst/user_guide/intro_getting_started.rst
index 2d70624d87..bee9565ba8 100644
--- a/docs/docsite/rst/user_guide/intro_getting_started.rst
+++ b/docs/docsite/rst/user_guide/intro_getting_started.rst
@@ -103,7 +103,7 @@ Playbooks are used to pull together tasks into reusable units.
Ansible does not store playbooks for you; they are simply YAML documents that you store and manage, passing them to Ansible to run as needed.
-In a directory of your choice you can create your first playbook in a file called mytask.yml:
+In a directory of your choice, you can create your first playbook in a file called mytask.yaml:
.. code-block:: yaml
diff --git a/lib/ansible/modules/uri.py b/lib/ansible/modules/uri.py
index 08860a5ff1..b75c1bff8e 100644
--- a/lib/ansible/modules/uri.py
+++ b/lib/ansible/modules/uri.py
@@ -407,6 +407,11 @@ msg:
returned: always
type: str
sample: OK (unknown bytes)
+path:
+ description: The destination file path.
+ returned: when dest is defined
+ type: str
+ sample: /path/to/file.txt
redirected:
description: Whether the request was redirected.
returned: on success
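
The new ``path`` return value documented above can be registered and inspected from a playbook; a minimal sketch, with an illustrative URL and destination:

.. code-block:: yaml

    - name: download a file to a known destination
      uri:
        url: https://example.com/file.txt
        dest: /tmp/file.txt
      register: result

    - name: show where the file was written
      debug:
        var: result.path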