author     Matt Clay <mclay@redhat.com>   2020-10-28 09:12:09 -0700
committer  GitHub <noreply@github.com>    2020-10-28 09:12:09 -0700
commit     522f167d279ad3bd75d986ee8a0db8e0ab4ecee8 (patch)
tree       177dd0ec94bd50c13afd898cea0993e5eba7a4d2
parent     dcae6d20d257ff8fbbea7e0f05669c8817e019b9 (diff)
download   ansible-522f167d279ad3bd75d986ee8a0db8e0ab4ecee8.tar.gz
Replace incidental tests with intentional argspec tests (#72370)
* Remove incidental_consul tests (#71811)
  * Add explicit intg tests for argspec functionality
  * ci_complete ci_coverage
  * Remove incidental_consul and incidental_setup_openssl
  * ci_complete ci_coverage
  (cherry picked from commit a99212464c1e5de0245a8429047fa2e05458f20b)

* Remove incidental_nios_txt_record (#72009)
  * Add explicit coverage of argspec type=dict
  * Non string mapping failure
  * ci_complete ci_coverage
  * Remove incidental_nios_txt_record and associated files
  * Don't forget the ignore.txt changes
  * ci_complete ci_coverage
  (cherry picked from commit 6f4aed53772f2b5ff83262575865f2be5b36933b)

* Remove incidental_vyos_static_route (#72024)
  * Add explicit tests for required_together suboptions
  * ci_complete ci_coverage
  * Remove incidental_vyos_static_route
  * ci_complete ci_coverage
  * Add explicit coverage of suboptions required_if
  * ci_complete ci_coverage
  * Remove incidental_vyos_logging
  * ci_complete ci_coverage
  (cherry picked from commit 9081b228689ca97261fae42ee84fde7e0bb48d6e)

* More explicit argspec tests (#72064)
  * Add more explicit coverage of argspec functionality
  * fail_on_missing_params
  * ci_complete ci_coverage
  * Remove incidental_aws_step_functions_state_machine
  * ci_complete ci_coverage
  * Remove incidental_cs_service_offering
  * ci_complete ci_coverage
  (cherry picked from commit ab2b339dd6d27c4b06001e88480eabe9a94a8e92)

* Add explicit coverage of required_together (#72107)
  * Add explicit coverage of required_together
  * ci_complete ci_coverage
  * Remove incidental_hcloud_server
  * Remove hcloud from shippable matrix
  * ci_complete ci_coverage
  (cherry picked from commit 460ba041c850acb11fab67a8dd36e1bb7f7c9b0c)

* Add explicit coverage of suboptions=list without elements (#72108)
  * Add explicit coverage of suboptions=list without elements
  * ci_complete ci_coverage
  * Remove incidental_vmware_guest_custom_attributes
  * ci_complete ci_coverage
  (cherry picked from commit 50c8c87fe2889612999f45d11d9f35a614518f1b)

* Add explicit coverage of argspec choices with strings that shadow YAML bools (#72122)
  * Add explicit coverage of argspec choices with strings that shadow YAML bools
  * ci_complete ci_coverage
  * Remove incidental_ufw
  * ci_complete ci_coverage
  (cherry picked from commit cfa41898c4c1b349acd3d8d54018a5cc7571dcbc)

* Adds argspec tests for required, required_one_of and required_by (#72245)
  * Improve variable names.
  * Add test for required.
  * Add test for required_one_of.
  * Add test for required_by.
  (cherry picked from commit 1489bf9190f0463392ea67c2370f7fd342014bfa)

* Remove incidentals without coverage (#71788)
  * Remove incidental_lookup_hashi_vault
  * Remove incidental_connection_chroot
  * Remove incidental_selinux
  * Remove incidental_win_hosts
  (cherry picked from commit e6e98407178556c1eb60101abef1df08c753d31d)

Co-authored-by: Matt Martz <matt@sivel.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
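The pattern behind these intentional argspec tests is a small test-only module that declares the argument-spec constraints under test and simply echoes the validated parameters back, so integration tasks can register the result and assert on the validation behaviour. Below is a trimmed, illustrative sketch of that pattern (the complete version is the test/integration/targets/argspec/library/argspec.py added in this diff); it is not part of the commit itself:

#!/usr/bin/python
# Illustrative sketch only: a trimmed-down variant of the argspec test module added by this commit.
from __future__ import absolute_import, division, print_function
__metaclass__ = type

from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec={
            # 'on'/'off' are strings that unquoted YAML would parse as booleans,
            # so choice validation has to map the boolean back to the matching string.
            'choices_with_strings_like_bools': {
                'type': 'str',
                'choices': ['on', 'off'],
            },
            'required_together_one': {},
            'required_together_two': {},
        },
        required_together=(
            ('required_together_one', 'required_together_two'),
        ),
    )
    # Echo the validated parameters so test tasks can assert on them.
    module.exit_json(**module.params)


if __name__ == '__main__':
    main()

A task can then pass choices_with_strings_like_bools: on unquoted and assert that the registered result still contains the string 'on' rather than a boolean, which is exactly what the tasks/main.yml added below does.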
-rw-r--r--  shippable.yml | 1
-rw-r--r--  test/integration/targets/argspec/aliases | 1
-rw-r--r--  test/integration/targets/argspec/library/argspec.py | 118
-rw-r--r--  test/integration/targets/argspec/tasks/main.yml | 312
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/aliases | 2
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/defaults/main.yml | 4
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/files/alternative_state_machine.json | 15
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machine.json | 10
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machines_iam_trust_policy.json | 12
-rw-r--r--  test/integration/targets/incidental_aws_step_functions_state_machine/tasks/main.yml | 296
-rw-r--r--  test/integration/targets/incidental_connection_chroot/aliases | 3
-rwxr-xr-x  test/integration/targets/incidental_connection_chroot/runme.sh | 18
-rw-r--r--  test/integration/targets/incidental_connection_chroot/test_connection.inventory | 7
-rw-r--r--  test/integration/targets/incidental_consul/aliases | 4
-rw-r--r--  test/integration/targets/incidental_consul/meta/main.yml | 3
-rw-r--r--  test/integration/targets/incidental_consul/tasks/consul_session.yml | 162
-rw-r--r--  test/integration/targets/incidental_consul/tasks/main.yml | 97
-rw-r--r--  test/integration/targets/incidental_consul/templates/consul_config.hcl.j2 | 13
-rw-r--r--  test/integration/targets/incidental_cs_service_offering/aliases | 2
-rw-r--r--  test/integration/targets/incidental_cs_service_offering/meta/main.yml | 3
-rw-r--r--  test/integration/targets/incidental_cs_service_offering/tasks/guest_vm_service_offering.yml | 223
-rw-r--r--  test/integration/targets/incidental_cs_service_offering/tasks/main.yml | 3
-rw-r--r--  test/integration/targets/incidental_cs_service_offering/tasks/system_vm_service_offering.yml | 151
-rw-r--r--  test/integration/targets/incidental_hcloud_server/aliases | 2
-rw-r--r--  test/integration/targets/incidental_hcloud_server/defaults/main.yml | 5
-rw-r--r--  test/integration/targets/incidental_hcloud_server/tasks/main.yml | 517
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/aliases | 7
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/defaults/main.yml | 4
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_setup.yml | 21
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_test.yml | 45
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/main.yml | 155
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/tests.yml | 35
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_setup.yml | 3
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_test.yml | 58
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/templates/vault_config.hcl.j2 | 10
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/playbooks/install_dependencies.yml | 19
-rw-r--r--  test/integration/targets/incidental_lookup_hashi_vault/playbooks/test_lookup_hashi_vault.yml | 9
-rwxr-xr-x  test/integration/targets/incidental_lookup_hashi_vault/runme.sh | 23
-rw-r--r--  test/integration/targets/incidental_nios_prepare_tests/aliases | 1
-rw-r--r--  test/integration/targets/incidental_nios_prepare_tests/tasks/main.yml | 0
-rw-r--r--  test/integration/targets/incidental_nios_txt_record/aliases | 3
-rw-r--r--  test/integration/targets/incidental_nios_txt_record/defaults/main.yaml | 3
-rw-r--r--  test/integration/targets/incidental_nios_txt_record/meta/main.yaml | 2
-rw-r--r--  test/integration/targets/incidental_nios_txt_record/tasks/main.yml | 1
-rw-r--r--  test/integration/targets/incidental_nios_txt_record/tasks/nios_txt_record_idempotence.yml | 80
-rw-r--r--  test/integration/targets/incidental_selinux/aliases | 3
-rw-r--r--  test/integration/targets/incidental_selinux/tasks/main.yml | 36
-rw-r--r--  test/integration/targets/incidental_selinux/tasks/selinux.yml | 364
-rw-r--r--  test/integration/targets/incidental_selinux/tasks/selogin.yml | 81
-rw-r--r--  test/integration/targets/incidental_setup_openssl/aliases | 2
-rw-r--r--  test/integration/targets/incidental_setup_openssl/tasks/main.yml | 48
-rw-r--r--  test/integration/targets/incidental_setup_openssl/vars/Debian.yml | 3
-rw-r--r--  test/integration/targets/incidental_setup_openssl/vars/FreeBSD.yml | 3
-rw-r--r--  test/integration/targets/incidental_setup_openssl/vars/RedHat.yml | 3
-rw-r--r--  test/integration/targets/incidental_setup_openssl/vars/Suse.yml | 3
-rw-r--r--  test/integration/targets/incidental_ufw/aliases | 13
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/main.yml | 34
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/run-test.yml | 21
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/tests/basic.yml | 402
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/tests/global-state.yml | 150
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/tests/insert_relative_to.yml | 80
-rw-r--r--  test/integration/targets/incidental_ufw/tasks/tests/interface.yml | 81
-rw-r--r--  test/integration/targets/incidental_vmware_guest_custom_attributes/aliases | 3
-rw-r--r--  test/integration/targets/incidental_vmware_guest_custom_attributes/tasks/main.yml | 110
-rw-r--r--  test/integration/targets/incidental_vyos_logging/aliases | 2
-rw-r--r--  test/integration/targets/incidental_vyos_logging/defaults/main.yaml | 3
-rw-r--r--  test/integration/targets/incidental_vyos_logging/tasks/cli.yaml | 22
-rw-r--r--  test/integration/targets/incidental_vyos_logging/tasks/main.yaml | 2
-rw-r--r--  test/integration/targets/incidental_vyos_logging/tests/cli/basic.yaml | 126
-rw-r--r--  test/integration/targets/incidental_vyos_logging/tests/cli/net_logging.yaml | 39
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/aliases | 2
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/defaults/main.yaml | 3
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/tasks/cli.yaml | 22
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/tasks/main.yaml | 2
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/tests/cli/basic.yaml | 120
-rw-r--r--  test/integration/targets/incidental_vyos_static_route/tests/cli/net_static_route.yaml | 33
-rw-r--r--  test/integration/targets/incidental_win_hosts/aliases | 2
-rw-r--r--  test/integration/targets/incidental_win_hosts/defaults/main.yml | 13
-rw-r--r--  test/integration/targets/incidental_win_hosts/meta/main.yml | 2
-rw-r--r--  test/integration/targets/incidental_win_hosts/tasks/main.yml | 17
-rw-r--r--  test/integration/targets/incidental_win_hosts/tasks/tests.yml | 189
-rw-r--r--  test/sanity/ignore.txt | 6
-rw-r--r--  test/support/integration/plugins/connection/chroot.py | 208
-rw-r--r--  test/support/integration/plugins/lookup/hashi_vault.py | 302
-rw-r--r--  test/support/integration/plugins/module_utils/hcloud.py | 63
-rw-r--r--  test/support/integration/plugins/module_utils/net_tools/nios/__init__.py | 0
-rw-r--r--  test/support/integration/plugins/module_utils/net_tools/nios/api.py | 601
-rw-r--r--  test/support/integration/plugins/modules/aws_step_functions_state_machine.py | 232
-rw-r--r--  test/support/integration/plugins/modules/aws_step_functions_state_machine_execution.py | 197
-rw-r--r--  test/support/integration/plugins/modules/consul_session.py | 284
-rw-r--r--  test/support/integration/plugins/modules/cs_service_offering.py | 583
-rw-r--r--  test/support/integration/plugins/modules/hcloud_server.py | 555
-rw-r--r--  test/support/integration/plugins/modules/nios_txt_record.py | 134
-rw-r--r--  test/support/integration/plugins/modules/nios_zone.py | 228
-rw-r--r--  test/support/integration/plugins/modules/openssl_certificate.py | 2757
-rw-r--r--  test/support/integration/plugins/modules/openssl_certificate_info.py | 864
-rw-r--r--  test/support/integration/plugins/modules/openssl_csr.py | 1161
-rw-r--r--  test/support/integration/plugins/modules/openssl_privatekey.py | 944
-rw-r--r--  test/support/integration/plugins/modules/selinux.py | 266
-rw-r--r--  test/support/integration/plugins/modules/ufw.py | 598
-rw-r--r--  test/support/integration/plugins/modules/vmware_guest_custom_attributes.py | 259
-rw-r--r--  test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_logging.py | 30
-rw-r--r--  test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_static_route.py | 31
-rw-r--r--  test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_logging.py | 110
-rw-r--r--  test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_static_route.py | 98
-rw-r--r--  test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py | 300
-rw-r--r--  test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py | 302
-rw-r--r--  test/support/windows-integration/plugins/modules/win_hosts.ps1 | 257
-rw-r--r--  test/support/windows-integration/plugins/modules/win_hosts.py | 126
109 files changed, 431 insertions, 15567 deletions
diff --git a/shippable.yml b/shippable.yml
index 9f19d381c1..47359138f0 100644
--- a/shippable.yml
+++ b/shippable.yml
@@ -140,7 +140,6 @@ matrix:
- env: T=i/cs//1
- env: T=i/tower//1
- env: T=i/cloud//1
- - env: T=i/hcloud//1
branches:
except:
diff --git a/test/integration/targets/argspec/aliases b/test/integration/targets/argspec/aliases
new file mode 100644
index 0000000000..70a7b7a9f3
--- /dev/null
+++ b/test/integration/targets/argspec/aliases
@@ -0,0 +1 @@
+shippable/posix/group5
diff --git a/test/integration/targets/argspec/library/argspec.py b/test/integration/targets/argspec/library/argspec.py
new file mode 100644
index 0000000000..724b34e0c8
--- /dev/null
+++ b/test/integration/targets/argspec/library/argspec.py
@@ -0,0 +1,118 @@
+#!/usr/bin/python
+# Copyright: (c) 2020, Matt Martz <matt@sivel.net>
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+from ansible.module_utils.basic import AnsibleModule
+
+
+def main():
+ module = AnsibleModule(
+ {
+ 'required': {
+ 'required': True,
+ },
+ 'required_one_of_one': {},
+ 'required_one_of_two': {},
+ 'required_by_one': {},
+ 'required_by_two': {},
+ 'required_by_three': {},
+ 'state': {
+ 'type': 'str',
+ 'choices': ['absent', 'present'],
+ },
+ 'path': {},
+ 'content': {},
+ 'mapping': {
+ 'type': 'dict',
+ },
+ 'required_one_of': {
+ 'required_one_of': [['thing', 'other']],
+ 'type': 'list',
+ 'elements': 'dict',
+ 'options': {
+ 'thing': {},
+ 'other': {},
+ },
+ },
+ 'required_by': {
+ 'required_by': {'thing': 'other'},
+ 'type': 'list',
+ 'elements': 'dict',
+ 'options': {
+ 'thing': {},
+ 'other': {},
+ },
+ },
+ 'required_together': {
+ 'required_together': [['thing', 'other']],
+ 'type': 'list',
+ 'elements': 'dict',
+ 'options': {
+ 'thing': {},
+ 'other': {},
+ 'another': {},
+ },
+ },
+ 'required_if': {
+ 'required_if': (
+ ('thing', 'foo', ('other',), True),
+ ),
+ 'type': 'list',
+ 'elements': 'dict',
+ 'options': {
+ 'thing': {},
+ 'other': {},
+ 'another': {},
+ },
+ },
+ 'json': {
+ 'type': 'json',
+ },
+ 'fail_on_missing_params': {
+ 'type': 'list',
+ 'default': [],
+ },
+ 'needed_param': {},
+ 'required_together_one': {},
+ 'required_together_two': {},
+ 'suboptions_list_no_elements': {
+ 'type': 'list',
+ 'options': {
+ 'thing': {},
+ },
+ },
+ 'choices_with_strings_like_bools': {
+ 'type': 'str',
+ 'choices': [
+ 'on',
+ 'off',
+ ],
+ },
+ },
+ required_if=(
+ ('state', 'present', ('path', 'content'), True),
+ ),
+ mutually_exclusive=(
+ ('path', 'content'),
+ ),
+ required_one_of=(
+ ('required_one_of_one', 'required_one_of_two'),
+ ),
+ required_by={
+ 'required_by_one': ('required_by_two', 'required_by_three'),
+ },
+ required_together=(
+ ('required_together_one', 'required_together_two'),
+ ),
+ )
+
+ module.fail_on_missing_params(module.params['fail_on_missing_params'])
+
+ module.exit_json(**module.params)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/test/integration/targets/argspec/tasks/main.yml b/test/integration/targets/argspec/tasks/main.yml
new file mode 100644
index 0000000000..6fcaa7b501
--- /dev/null
+++ b/test/integration/targets/argspec/tasks/main.yml
@@ -0,0 +1,312 @@
+- argspec:
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_one_of_one: value
+ register: argspec_required_fail
+ ignore_errors: true
+
+- argspec:
+ required: value
+ required_one_of_two: value
+
+- argspec:
+ required: value
+ register: argspec_required_one_of_fail
+ ignore_errors: true
+
+- argspec:
+ required: value
+ required_one_of_two: value
+ required_by_one: value
+ required_by_two: value
+ required_by_three: value
+
+- argspec:
+ required: value
+ required_one_of_two: value
+ required_by_one: value
+ required_by_two: value
+ register: argspec_required_by_fail
+ ignore_errors: true
+
+- argspec:
+ state: absent
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ state: present
+ required: value
+ required_one_of_one: value
+ register: argspec_required_if_fail
+ ignore_errors: true
+
+- argspec:
+ state: present
+ path: foo
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ state: present
+ content: foo
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ state: present
+ content: foo
+ path: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_mutually_exclusive_fail
+ ignore_errors: true
+
+- argspec:
+ mapping:
+ foo: bar
+ required: value
+ required_one_of_one: value
+ register: argspec_good_mapping
+
+- argspec:
+ mapping: foo=bar
+ required: value
+ required_one_of_one: value
+ register: argspec_good_mapping_kv
+
+- argspec:
+ mapping: !!str '{"foo": "bar"}'
+ required: value
+ required_one_of_one: value
+ register: argspec_good_mapping_json
+
+- argspec:
+ mapping: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_bad_mapping_string
+ ignore_errors: true
+
+- argspec:
+ mapping: 1
+ required: value
+ required_one_of_one: value
+ register: argspec_bad_mapping_int
+ ignore_errors: true
+
+- argspec:
+ mapping:
+ - foo
+ - bar
+ required: value
+ required_one_of_one: value
+ register: argspec_bad_mapping_list
+ ignore_errors: true
+
+- argspec:
+ required_together:
+ - thing: foo
+ other: bar
+ another: baz
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_together:
+ - another: baz
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_together:
+ - thing: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_required_together_fail
+ ignore_errors: true
+
+- argspec:
+ required_together:
+ - thing: foo
+ other: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_if:
+ - thing: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_if:
+ - thing: foo
+ other: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_if:
+ - thing: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_required_if_fail_2
+ ignore_errors: true
+
+- argspec:
+ required_one_of:
+ - thing: foo
+ other: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_one_of:
+ - {}
+ required: value
+ required_one_of_one: value
+ register: argspec_required_one_of_fail_2
+ ignore_errors: true
+
+- argspec:
+ required_by:
+ - thing: foo
+ other: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_by:
+ - thing: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_required_by_fail_2
+ ignore_errors: true
+
+- argspec:
+ json: !!str '{"foo": "bar"}'
+ required: value
+ required_one_of_one: value
+ register: argspec_good_json_string
+
+- argspec:
+ json:
+ foo: bar
+ required: value
+ required_one_of_one: value
+ register: argspec_good_json_dict
+
+- argspec:
+ json: 1
+ required: value
+ required_one_of_one: value
+ register: argspec_bad_json
+ ignore_errors: true
+
+- argspec:
+ fail_on_missing_params:
+ - needed_param
+ needed_param: whatever
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ fail_on_missing_params:
+ - needed_param
+ required: value
+ required_one_of_one: value
+ register: argspec_fail_on_missing_params_bad
+ ignore_errors: true
+
+- argspec:
+ required_together_one: foo
+ required_together_two: bar
+ required: value
+ required_one_of_one: value
+
+- argspec:
+ required_together_one: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_fail_required_together_2
+ ignore_errors: true
+
+- argspec:
+ suboptions_list_no_elements:
+ - thing: foo
+ required: value
+ required_one_of_one: value
+ register: argspec_suboptions_list_no_elements
+
+- argspec:
+ choices_with_strings_like_bools: on
+ required: value
+ required_one_of_one: value
+ register: argspec_choices_with_strings_like_bools_true
+
+- argspec:
+ choices_with_strings_like_bools: 'on'
+ required: value
+ required_one_of_one: value
+ register: argspec_choices_with_strings_like_bools_true_bool
+
+- argspec:
+ choices_with_strings_like_bools: off
+ required: value
+ required_one_of_one: value
+ register: argspec_choices_with_strings_like_bools_false
+
+- assert:
+ that:
+ - argspec_required_fail is failed
+
+ - argspec_required_one_of_fail is failed
+
+ - argspec_required_by_fail is failed
+
+ - argspec_required_if_fail is failed
+
+ - argspec_mutually_exclusive_fail is failed
+
+ - argspec_good_mapping is successful
+ - >-
+ argspec_good_mapping.mapping == {'foo': 'bar'}
+ - argspec_good_mapping_json is successful
+ - >-
+ argspec_good_mapping_json.mapping == {'foo': 'bar'}
+ - argspec_good_mapping_kv is successful
+ - >-
+ argspec_good_mapping_kv.mapping == {'foo': 'bar'}
+ - argspec_bad_mapping_string is failed
+ - argspec_bad_mapping_int is failed
+ - argspec_bad_mapping_list is failed
+
+ - argspec_required_together_fail is failed
+
+ - argspec_required_if_fail_2 is failed
+
+ - argspec_required_one_of_fail_2 is failed
+
+ - argspec_required_by_fail_2 is failed
+
+ - argspec_good_json_string is successful
+ - >-
+ argspec_good_json_string.json == '{"foo": "bar"}'
+ - argspec_good_json_dict is successful
+ - >-
+ argspec_good_json_dict.json == '{"foo": "bar"}'
+ - argspec_bad_json is failed
+
+ - argspec_fail_on_missing_params_bad is failed
+
+ - argspec_fail_required_together_2 is failed
+
+ - >-
+ argspec_suboptions_list_no_elements.suboptions_list_no_elements.0 == {'thing': 'foo'}
+
+ - argspec_choices_with_strings_like_bools_true.choices_with_strings_like_bools == 'on'
+ - argspec_choices_with_strings_like_bools_true_bool.choices_with_strings_like_bools == 'on'
+ - argspec_choices_with_strings_like_bools_false.choices_with_strings_like_bools == 'off'
\ No newline at end of file
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/aliases b/test/integration/targets/incidental_aws_step_functions_state_machine/aliases
deleted file mode 100644
index 29f60feb44..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-cloud/aws
-shippable/aws/incidental
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/defaults/main.yml b/test/integration/targets/incidental_aws_step_functions_state_machine/defaults/main.yml
deleted file mode 100644
index 273a0c783b..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/defaults/main.yml
+++ /dev/null
@@ -1,4 +0,0 @@
-# the random_num is generated in a set_fact task at the start of the testsuite
-state_machine_name: "{{ resource_prefix }}_step_functions_state_machine_ansible_test_{{ random_num }}"
-step_functions_role_name: "ansible-test-sts-{{ resource_prefix }}-step_functions-role"
-execution_name: "{{ resource_prefix }}_sfn_execution"
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/files/alternative_state_machine.json b/test/integration/targets/incidental_aws_step_functions_state_machine/files/alternative_state_machine.json
deleted file mode 100644
index 7b51bebb1a..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/files/alternative_state_machine.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "StartAt": "HelloWorld",
- "States": {
- "HelloWorld": {
- "Type": "Pass",
- "Result": "Some other result",
- "Next": "Wait"
- },
- "Wait": {
- "Type": "Wait",
- "Seconds": 30,
- "End": true
- }
- }
-}
\ No newline at end of file
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machine.json b/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machine.json
deleted file mode 100644
index c07d5cebad..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machine.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "StartAt": "HelloWorld",
- "States": {
- "HelloWorld": {
- "Type": "Pass",
- "Result": "Hello World!",
- "End": true
- }
- }
-}
\ No newline at end of file
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machines_iam_trust_policy.json b/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machines_iam_trust_policy.json
deleted file mode 100644
index 48d627220f..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/files/state_machines_iam_trust_policy.json
+++ /dev/null
@@ -1,12 +0,0 @@
-{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Principal": {
- "Service": "states.amazonaws.com"
- },
- "Action": "sts:AssumeRole"
- }
- ]
-}
\ No newline at end of file
diff --git a/test/integration/targets/incidental_aws_step_functions_state_machine/tasks/main.yml b/test/integration/targets/incidental_aws_step_functions_state_machine/tasks/main.yml
deleted file mode 100644
index 23e71dcebf..0000000000
--- a/test/integration/targets/incidental_aws_step_functions_state_machine/tasks/main.yml
+++ /dev/null
@@ -1,296 +0,0 @@
----
-
-- name: Integration test for AWS Step Function state machine module
- module_defaults:
- iam_role:
- aws_access_key: "{{ aws_access_key }}"
- aws_secret_key: "{{ aws_secret_key }}"
- security_token: "{{ security_token | default(omit) }}"
- region: "{{ aws_region }}"
- aws_step_functions_state_machine:
- aws_access_key: "{{ aws_access_key }}"
- aws_secret_key: "{{ aws_secret_key }}"
- security_token: "{{ security_token | default(omit) }}"
- region: "{{ aws_region }}"
- aws_step_functions_state_machine_execution:
- aws_access_key: "{{ aws_access_key }}"
- aws_secret_key: "{{ aws_secret_key }}"
- security_token: "{{ security_token | default(omit) }}"
- region: "{{ aws_region }}"
- block:
-
- # ==== Setup ==================================================
-
- - name: Create IAM service role needed for Step Functions
- iam_role:
- name: "{{ step_functions_role_name }}"
- description: Role with permissions for AWS Step Functions actions.
- assume_role_policy_document: "{{ lookup('file', 'state_machines_iam_trust_policy.json') }}"
- state: present
- register: step_functions_role
-
- - name: Pause a few seconds to ensure IAM role is available to next task
- pause:
- seconds: 10
-
- # ==== Tests ===================================================
-
- - name: Create a random component for state machine name
- set_fact:
- random_num: "{{ 999999999 | random }}"
-
- - name: Create a new state machine -- check_mode
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- project: helloWorld
- state: present
- register: creation_check
- check_mode: yes
-
- - assert:
- that:
- - creation_check.changed == True
- - creation_check.output == 'State machine would be created.'
-
- - name: Create a new state machine
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- project: helloWorld
- state: present
- register: creation_output
-
- - assert:
- that:
- - creation_output.changed == True
-
- - name: Pause a few seconds to ensure state machine role is available
- pause:
- seconds: 5
-
- - name: Idempotent rerun of same state function -- check_mode
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- project: helloWorld
- state: present
- register: result
- check_mode: yes
-
- - assert:
- that:
- - result.changed == False
- - result.output == 'State is up-to-date.'
-
- - name: Idempotent rerun of same state function
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- project: helloWorld
- state: present
- register: result
-
- - assert:
- that:
- - result.changed == False
-
- - name: Update an existing state machine -- check_mode
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','alternative_state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- differentTag: different_tag
- state: present
- register: update_check
- check_mode: yes
-
- - assert:
- that:
- - update_check.changed == True
- - "update_check.output == 'State machine would be updated: {{ creation_output.state_machine_arn }}'"
-
- - name: Update an existing state machine
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- definition: "{{ lookup('file','alternative_state_machine.json') }}"
- role_arn: "{{ step_functions_role.iam_role.arn }}"
- tags:
- differentTag: different_tag
- state: present
- register: update_output
-
- - assert:
- that:
- - update_output.changed == True
- - update_output.state_machine_arn == creation_output.state_machine_arn
-
- - name: Start execution of state machine -- check_mode
- aws_step_functions_state_machine_execution:
- name: "{{ execution_name }}"
- execution_input: "{}"
- state_machine_arn: "{{ creation_output.state_machine_arn }}"
- register: start_execution_output
- check_mode: yes
-
- - assert:
- that:
- - start_execution_output.changed == True
- - "start_execution_output.output == 'State machine execution would be started.'"
-
- - name: Start execution of state machine
- aws_step_functions_state_machine_execution:
- name: "{{ execution_name }}"
- execution_input: "{}"
- state_machine_arn: "{{ creation_output.state_machine_arn }}"
- register: start_execution_output
-
- - assert:
- that:
- - start_execution_output.changed
- - "'execution_arn' in start_execution_output"
- - "'start_date' in start_execution_output"
-
- - name: Start execution of state machine (check for idempotency) (check mode)
- aws_step_functions_state_machine_execution:
- name: "{{ execution_name }}"
- execution_input: "{}"
- state_machine_arn: "{{ creation_output.state_machine_arn }}"
- register: start_execution_output_idem_check
- check_mode: yes
-
- - assert:
- that:
- - not start_execution_output_idem_check.changed
- - "start_execution_output_idem_check.output == 'State machine execution already exists.'"
-
- - name: Start execution of state machine (check for idempotency)
- aws_step_functions_state_machine_execution:
- name: "{{ execution_name }}"
- execution_input: "{}"
- state_machine_arn: "{{ creation_output.state_machine_arn }}"
- register: start_execution_output_idem
-
- - assert:
- that:
- - not start_execution_output_idem.changed
-
- - name: Stop execution of state machine -- check_mode
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "{{ start_execution_output.execution_arn }}"
- cause: "cause of the failure"
- error: "error code of the failure"
- register: stop_execution_output
- check_mode: yes
-
- - name: Stop execution of state machine
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "{{ start_execution_output.execution_arn }}"
- cause: "cause of the failure"
- error: "error code of the failure"
- register: stop_execution_output
-
- - name: Stop execution of state machine (check for idempotency)
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "{{ start_execution_output.execution_arn }}"
- cause: "cause of the failure"
- error: "error code of the failure"
- register: stop_execution_output
-
- - name: Try stopping a non-running execution -- check_mode
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "{{ start_execution_output.execution_arn }}"
- cause: "cause of the failure"
- error: "error code of the failure"
- register: stop_execution_output
- check_mode: yes
-
- - assert:
- that:
- - not stop_execution_output.changed
- - "stop_execution_output.output == 'State machine execution is not running.'"
-
- - name: Try stopping a non-running execution
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "{{ start_execution_output.execution_arn }}"
- cause: "cause of the failure"
- error: "error code of the failure"
- register: stop_execution_output
- check_mode: yes
-
- - assert:
- that:
- - not stop_execution_output.changed
-
- - name: Start execution of state machine with the same execution name
- aws_step_functions_state_machine_execution:
- name: "{{ execution_name }}"
- state_machine_arn: "{{ creation_output.state_machine_arn }}"
- register: start_execution_output_again
-
- - assert:
- that:
- - not start_execution_output_again.changed
-
- - name: Remove state machine -- check_mode
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- state: absent
- register: deletion_check
- check_mode: yes
-
- - assert:
- that:
- - deletion_check.changed == True
- - "deletion_check.output == 'State machine would be deleted: {{ creation_output.state_machine_arn }}'"
-
- - name: Remove state machine
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- state: absent
- register: deletion_output
-
- - assert:
- that:
- - deletion_output.changed == True
- - deletion_output.state_machine_arn == creation_output.state_machine_arn
-
- - name: Non-existent state machine is absent
- aws_step_functions_state_machine:
- name: "non_existing_state_machine"
- state: absent
- register: result
-
- - assert:
- that:
- - result.changed == False
-
- # ==== Cleanup ====================================================
-
- always:
-
- - name: Cleanup - delete state machine
- aws_step_functions_state_machine:
- name: "{{ state_machine_name }}"
- state: absent
- ignore_errors: true
-
- - name: Cleanup - delete IAM role needed for Step Functions test
- iam_role:
- name: "{{ step_functions_role_name }}"
- state: absent
- ignore_errors: true
diff --git a/test/integration/targets/incidental_connection_chroot/aliases b/test/integration/targets/incidental_connection_chroot/aliases
deleted file mode 100644
index 01f0bd4e61..0000000000
--- a/test/integration/targets/incidental_connection_chroot/aliases
+++ /dev/null
@@ -1,3 +0,0 @@
-needs/root
-shippable/posix/incidental
-needs/target/connection
diff --git a/test/integration/targets/incidental_connection_chroot/runme.sh b/test/integration/targets/incidental_connection_chroot/runme.sh
deleted file mode 100755
index e7eb01d3c7..0000000000
--- a/test/integration/targets/incidental_connection_chroot/runme.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env bash
-
-set -eux
-
-# Connection tests for POSIX platforms use this script by linking to it from the appropriate 'connection_' target dir.
-# The name of the inventory group to test is extracted from the directory name following the 'connection_' prefix.
-
-group=$(python -c \
- "from os import path; print(path.basename(path.abspath(path.dirname('$0'))).replace('incidental_connection_', ''))")
-
-cd ../connection
-
-INVENTORY="../incidental_connection_${group}/test_connection.inventory" ./test.sh \
- -e target_hosts="${group}" \
- -e action_prefix= \
- -e local_tmp=/tmp/ansible-local \
- -e remote_tmp=/tmp/ansible-remote \
- "$@"
diff --git a/test/integration/targets/incidental_connection_chroot/test_connection.inventory b/test/integration/targets/incidental_connection_chroot/test_connection.inventory
deleted file mode 100644
index 5f78393f21..0000000000
--- a/test/integration/targets/incidental_connection_chroot/test_connection.inventory
+++ /dev/null
@@ -1,7 +0,0 @@
-[chroot]
-chroot-pipelining ansible_ssh_pipelining=true
-chroot-no-pipelining ansible_ssh_pipelining=false
-[chroot:vars]
-ansible_host=/
-ansible_connection=chroot
-ansible_python_interpreter="{{ ansible_playbook_python }}"
diff --git a/test/integration/targets/incidental_consul/aliases b/test/integration/targets/incidental_consul/aliases
deleted file mode 100644
index 0a22af0f92..0000000000
--- a/test/integration/targets/incidental_consul/aliases
+++ /dev/null
@@ -1,4 +0,0 @@
-shippable/posix/incidental
-destructive
-skip/aix
-skip/power/centos
diff --git a/test/integration/targets/incidental_consul/meta/main.yml b/test/integration/targets/incidental_consul/meta/main.yml
deleted file mode 100644
index 1039151126..0000000000
--- a/test/integration/targets/incidental_consul/meta/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-dependencies:
- - incidental_setup_openssl
diff --git a/test/integration/targets/incidental_consul/tasks/consul_session.yml b/test/integration/targets/incidental_consul/tasks/consul_session.yml
deleted file mode 100644
index a5490ec6c2..0000000000
--- a/test/integration/targets/incidental_consul/tasks/consul_session.yml
+++ /dev/null
@@ -1,162 +0,0 @@
-- name: list sessions
- consul_session:
- state: list
- register: result
-
-- assert:
- that:
- - result is changed
- - "'sessions' in result"
-
-- name: create a session
- consul_session:
- state: present
- name: testsession
- register: result
-
-- assert:
- that:
- - result is changed
- - result['name'] == 'testsession'
- - "'session_id' in result"
-
-- set_fact:
- session_id: "{{ result['session_id'] }}"
-
-- name: list sessions after creation
- consul_session:
- state: list
- register: result
-
-- set_fact:
- session_count: "{{ result['sessions'] | length }}"
-
-- assert:
- that:
- - result is changed
- # selectattr not available on Jinja 2.2 provided by CentOS 6
- # hence the two following tasks (set_fact/assert) are used
- # - (result['sessions'] | selectattr('ID', 'match', '^' ~ session_id ~ '$') | first)['Name'] == 'testsession'
-
-- name: search created session
- set_fact:
- test_session_found: True
- loop: "{{ result['sessions'] }}"
- when: "item.get('ID') == session_id and item.get('Name') == 'testsession'"
-
-- name: ensure session was created
- assert:
- that:
- - test_session_found|default(False)
-
-- name: fetch info about a session
- consul_session:
- state: info
- id: '{{ session_id }}'
- register: result
-
-- assert:
- that:
- - result is changed
-
-- name: ensure 'id' parameter is required when state=info
- consul_session:
- state: info
- name: test
- register: result
- ignore_errors: True
-
-- assert:
- that:
- - result is failed
-
-- name: ensure unknown scheme fails
- consul_session:
- state: info
- id: '{{ session_id }}'
- scheme: non_existent
- register: result
- ignore_errors: True
-
-- assert:
- that:
- - result is failed
-
-- when: pyopenssl_version.stdout is version('0.15', '>=')
- block:
- - name: ensure SSL certificate is checked
- consul_session:
- state: info
- id: '{{ session_id }}'
- port: 8501
- scheme: https
- register: result
- ignore_errors: True
-
- - name: previous task should fail since certificate is not known
- assert:
- that:
- - result is failed
- - "'certificate verify failed' in result.msg"
-
- - name: ensure SSL certificate isn't checked when validate_certs is disabled
- consul_session:
- state: info
- id: '{{ session_id }}'
- port: 8501
- scheme: https
- validate_certs: False
- register: result
-
- - name: previous task should succeed since certificate isn't checked
- assert:
- that:
- - result is changed
-
- - name: ensure a secure connection is possible
- consul_session:
- state: info
- id: '{{ session_id }}'
- port: 8501
- scheme: https
- environment:
- REQUESTS_CA_BUNDLE: '{{ remote_dir }}/cert.pem'
- register: result
-
- - assert:
- that:
- - result is changed
-
-- name: delete a session
- consul_session:
- state: absent
- id: '{{ session_id }}'
- register: result
-
-- assert:
- that:
- - result is changed
-
-- name: list sessions after deletion
- consul_session:
- state: list
- register: result
-
-- assert:
- that:
- - result is changed
- # selectattr and equalto not available on Jinja 2.2 provided by CentOS 6
- # hence the two following tasks (command/assert) are used
- # - (result['sessions'] | selectattr('ID', 'equalto', session_id) | list | length) == 0
-
-- name: search deleted session
- command: echo 'session found'
- loop: "{{ result['sessions'] }}"
- when: "item.get('ID') == session_id and item.get('Name') == 'testsession'"
- register: search_deleted
-
-- name: ensure session was deleted
- assert:
- that:
- - search_deleted is skipped # each iteration is skipped
- - search_deleted is not changed # and then unchanged
diff --git a/test/integration/targets/incidental_consul/tasks/main.yml b/test/integration/targets/incidental_consul/tasks/main.yml
deleted file mode 100644
index 575c2ed9fb..0000000000
--- a/test/integration/targets/incidental_consul/tasks/main.yml
+++ /dev/null
@@ -1,97 +0,0 @@
----
-- name: Install Consul and test
-
- vars:
- consul_version: '1.5.0'
- consul_uri: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/consul/consul_{{ consul_version }}_{{ ansible_system | lower }}_{{ consul_arch }}.zip
- consul_cmd: '{{ output_dir }}/consul'
-
- block:
- - name: register pyOpenSSL version
- command: "{{ ansible_python_interpreter }} -c 'import OpenSSL; print(OpenSSL.__version__)'"
- register: pyopenssl_version
-
- - name: Install requests<2.20 (CentOS/RHEL 6)
- pip:
- name: requests<2.20
- register: result
- until: result is success
- when: ansible_distribution_file_variety|default() == 'RedHat' and ansible_distribution_major_version is version('6', '<=')
-
- - name: Install python-consul
- pip:
- name: python-consul
- register: result
- until: result is success
-
- - when: pyopenssl_version.stdout is version('0.15', '>=')
- block:
- - name: Generate privatekey
- openssl_privatekey:
- path: '{{ output_dir }}/privatekey.pem'
-
- - name: Generate CSR
- openssl_csr:
- path: '{{ output_dir }}/csr.csr'
- privatekey_path: '{{ output_dir }}/privatekey.pem'
- subject:
- commonName: localhost
-
- - name: Generate selfsigned certificate
- openssl_certificate:
- path: '{{ output_dir }}/cert.pem'
- csr_path: '{{ output_dir }}/csr.csr'
- privatekey_path: '{{ output_dir }}/privatekey.pem'
- provider: selfsigned
- selfsigned_digest: sha256
- register: selfsigned_certificate
-
- - name: 'Install unzip'
- package:
- name: unzip
- register: result
- until: result is success
- when: ansible_distribution != "MacOSX" # unzip already installed
-
- - assert:
- # Linux: x86_64, FreeBSD: amd64
- that: ansible_architecture in ['i386', 'x86_64', 'amd64']
- - set_fact:
- consul_arch: '386'
- when: ansible_architecture == 'i386'
- - set_fact:
- consul_arch: amd64
- when: ansible_architecture in ['x86_64', 'amd64']
-
- - name: 'Download consul binary'
- unarchive:
- src: '{{ consul_uri }}'
- dest: '{{ output_dir }}'
- remote_src: true
- register: result
- until: result is success
-
- - vars:
- remote_dir: '{{ echo_output_dir.stdout }}'
- block:
- - command: 'echo {{ output_dir }}'
- register: echo_output_dir
-
- - name: 'Create configuration file'
- template:
- src: consul_config.hcl.j2
- dest: '{{ output_dir }}/consul_config.hcl'
-
- - name: 'Start Consul (dev mode enabled)'
- shell: 'nohup {{ consul_cmd }} agent -dev -config-file {{ output_dir }}/consul_config.hcl </dev/null >/dev/null 2>&1 &'
-
- - name: 'Create some data'
- command: '{{ consul_cmd }} kv put data/value{{ item }} foo{{ item }}'
- loop: [1, 2, 3]
-
- - import_tasks: consul_session.yml
-
- always:
- - name: 'Kill consul process'
- shell: "kill $(cat {{ output_dir }}/consul.pid)"
- ignore_errors: true
diff --git a/test/integration/targets/incidental_consul/templates/consul_config.hcl.j2 b/test/integration/targets/incidental_consul/templates/consul_config.hcl.j2
deleted file mode 100644
index 9af06f02e9..0000000000
--- a/test/integration/targets/incidental_consul/templates/consul_config.hcl.j2
+++ /dev/null
@@ -1,13 +0,0 @@
-# {{ ansible_managed }}
-server = true
-pid_file = "{{ remote_dir }}/consul.pid"
-ports {
- http = 8500
- {% if pyopenssl_version.stdout is version('0.15', '>=') %}
- https = 8501
- {% endif %}
-}
-{% if pyopenssl_version.stdout is version('0.15', '>=') %}
-key_file = "{{ remote_dir }}/privatekey.pem"
-cert_file = "{{ remote_dir }}/cert.pem"
-{% endif %}
diff --git a/test/integration/targets/incidental_cs_service_offering/aliases b/test/integration/targets/incidental_cs_service_offering/aliases
deleted file mode 100644
index e50e650e98..0000000000
--- a/test/integration/targets/incidental_cs_service_offering/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-cloud/cs
-shippable/cs/incidental
diff --git a/test/integration/targets/incidental_cs_service_offering/meta/main.yml b/test/integration/targets/incidental_cs_service_offering/meta/main.yml
deleted file mode 100644
index d46613c55f..0000000000
--- a/test/integration/targets/incidental_cs_service_offering/meta/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-dependencies:
- - incidental_cs_common
diff --git a/test/integration/targets/incidental_cs_service_offering/tasks/guest_vm_service_offering.yml b/test/integration/targets/incidental_cs_service_offering/tasks/guest_vm_service_offering.yml
deleted file mode 100644
index f7aee3c8a2..0000000000
--- a/test/integration/targets/incidental_cs_service_offering/tasks/guest_vm_service_offering.yml
+++ /dev/null
@@ -1,223 +0,0 @@
----
-- name: setup service offering
- cs_service_offering:
- name: Micro
- state: absent
- register: so
-- name: verify setup service offering
- assert:
- that:
- - so is successful
-
-- name: create service offering in check mode
- cs_service_offering:
- name: Micro
- display_text: Micro 512mb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 512
- host_tags: eco
- storage_tags:
- - eco
- - backup
- storage_type: local
- register: so
- check_mode: true
-- name: verify create service offering in check mode
- assert:
- that:
- - so is changed
-
-- name: create service offering
- cs_service_offering:
- name: Micro
- display_text: Micro 512mb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 512
- host_tags: eco
- storage_tags:
- - eco
- - backup
- storage_type: local
- register: so
-- name: verify create service offering
- assert:
- that:
- - so is changed
- - so.name == "Micro"
- - so.display_text == "Micro 512mb 1cpu"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: create service offering idempotence
- cs_service_offering:
- name: Micro
- display_text: Micro 512mb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 512
- host_tags: eco
- storage_tags:
- - eco
- - backup
- storage_type: local
- register: so
-- name: verify create service offering idempotence
- assert:
- that:
- - so is not changed
- - so.name == "Micro"
- - so.display_text == "Micro 512mb 1cpu"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: update service offering in check mode
- cs_service_offering:
- name: Micro
- display_text: Micro RAM 512MB 1vCPU
- register: so
- check_mode: true
-- name: verify create update offering in check mode
- assert:
- that:
- - so is changed
- - so.name == "Micro"
- - so.display_text == "Micro 512mb 1cpu"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: update service offering
- cs_service_offering:
- name: Micro
- display_text: Micro RAM 512MB 1vCPU
- register: so
-- name: verify update service offerin
- assert:
- that:
- - so is changed
- - so.name == "Micro"
- - so.display_text == "Micro RAM 512MB 1vCPU"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: update service offering idempotence
- cs_service_offering:
- name: Micro
- display_text: Micro RAM 512MB 1vCPU
- register: so
-- name: verify update service offering idempotence
- assert:
- that:
- - so is not changed
- - so.name == "Micro"
- - so.display_text == "Micro RAM 512MB 1vCPU"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: remove service offering in check mode
- cs_service_offering:
- name: Micro
- state: absent
- check_mode: true
- register: so
-- name: verify remove service offering in check mode
- assert:
- that:
- - so is changed
- - so.name == "Micro"
- - so.display_text == "Micro RAM 512MB 1vCPU"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: remove service offering
- cs_service_offering:
- name: Micro
- state: absent
- register: so
-- name: verify remove service offering
- assert:
- that:
- - so is changed
- - so.name == "Micro"
- - so.display_text == "Micro RAM 512MB 1vCPU"
- - so.cpu_number == 1
- - so.cpu_speed == 2198
- - so.memory == 512
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: remove service offering idempotence
- cs_service_offering:
- name: Micro
- state: absent
- register: so
-- name: verify remove service offering idempotence
- assert:
- that:
- - so is not changed
-
-- name: create custom service offering
- cs_service_offering:
- name: custom
- display_text: custom offer
- is_customized: yes
- host_tags: eco
- storage_tags:
- - eco
- - backup
- storage_type: local
- register: so
-- name: verify create custom service offering
- assert:
- that:
- - so is changed
- - so.name == "custom"
- - so.display_text == "custom offer"
- - so.is_customized == True
- - so.cpu_number is not defined
- - so.cpu_speed is not defined
- - so.memory is not defined
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
-
-- name: remove custom service offering
- cs_service_offering:
- name: custom
- state: absent
- register: so
-- name: verify remove service offering
- assert:
- that:
- - so is changed
- - so.name == "custom"
- - so.display_text == "custom offer"
- - so.host_tags == ['eco']
- - so.storage_tags == ['eco', 'backup']
- - so.storage_type == "local"
diff --git a/test/integration/targets/incidental_cs_service_offering/tasks/main.yml b/test/integration/targets/incidental_cs_service_offering/tasks/main.yml
deleted file mode 100644
index 581f7d74de..0000000000
--- a/test/integration/targets/incidental_cs_service_offering/tasks/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-- import_tasks: guest_vm_service_offering.yml
-- import_tasks: system_vm_service_offering.yml
\ No newline at end of file
diff --git a/test/integration/targets/incidental_cs_service_offering/tasks/system_vm_service_offering.yml b/test/integration/targets/incidental_cs_service_offering/tasks/system_vm_service_offering.yml
deleted file mode 100644
index 4c63a4b9c8..0000000000
--- a/test/integration/targets/incidental_cs_service_offering/tasks/system_vm_service_offering.yml
+++ /dev/null
@@ -1,151 +0,0 @@
----
-- name: setup system offering
- cs_service_offering:
- name: System Offering for Ansible
- is_system: true
- state: absent
- register: so
-- name: verify setup system offering
- assert:
- that:
- - so is successful
-
-- name: fail missing storage type and is_system
- cs_service_offering:
- name: System Offering for Ansible
- cpu_number: 1
- cpu_speed: 500
- memory: 512
- host_tag: perf
- storage_tag: perf
- storage_type: shared
- offer_ha: true
- limit_cpu_usage: false
- is_system: true
- register: so
- ignore_errors: true
-- name: verify create system service offering in check mode
- assert:
- that:
- - so is failed
- - so.msg.startswith('missing required arguments:')
-
-- name: create system service offering in check mode
- cs_service_offering:
- name: System Offering for Ansible
- cpu_number: 1
- cpu_speed: 500
- memory: 512
- host_tag: perf
- storage_tag: perf
- storage_type: shared
- offer_ha: true
- limit_cpu_usage: false
- system_vm_type: domainrouter
- is_system: true
- register: so
- check_mode: true
-- name: verify create system service offering in check mode
- assert:
- that:
- - so is changed
-
-- name: create system service offering
- cs_service_offering:
- name: System Offering for Ansible
- cpu_number: 1
- cpu_speed: 500
- memory: 512
- host_tag: perf
- storage_tag: perf
- storage_type: shared
- offer_ha: true
- limit_cpu_usage: false
- system_vm_type: domainrouter
- is_system: true
- register: so
-- name: verify create system service offering
- assert:
- that:
- - so is changed
- - so.name == "System Offering for Ansible"
- - so.display_text == "System Offering for Ansible"
- - so.cpu_number == 1
- - so.cpu_speed == 500
- - so.memory == 512
- - so.host_tags == ['perf']
- - so.storage_tags == ['perf']
- - so.storage_type == "shared"
- - so.offer_ha == true
- - so.limit_cpu_usage == false
- - so.system_vm_type == "domainrouter"
- - so.is_system == true
-
-- name: create system service offering idempotence
- cs_service_offering:
- name: System Offering for Ansible
- cpu_number: 1
- cpu_speed: 500
- memory: 512
- host_tag: perf
- storage_tag: perf
- storage_type: shared
- offer_ha: true
- limit_cpu_usage: false
- system_vm_type: domainrouter
- is_system: true
- register: so
-- name: verify create system service offering idempotence
- assert:
- that:
- - so is not changed
- - so.name == "System Offering for Ansible"
- - so.display_text == "System Offering for Ansible"
- - so.cpu_number == 1
- - so.cpu_speed == 500
- - so.memory == 512
- - so.host_tags == ['perf']
- - so.storage_tags == ['perf']
- - so.storage_type == "shared"
- - so.offer_ha == true
- - so.limit_cpu_usage == false
- - so.system_vm_type == "domainrouter"
- - so.is_system == true
-
-- name: remove system service offering in check mode
- cs_service_offering:
- name: System Offering for Ansible
- is_system: true
- state: absent
- check_mode: true
- register: so
-- name: verify remove system service offering in check mode
- assert:
- that:
- - so is changed
- - so.name == "System Offering for Ansible"
- - so.is_system == true
-
-- name: remove system service offering
- cs_service_offering:
- name: System Offering for Ansible
- is_system: true
- state: absent
- register: so
-- name: verify remove system service offering
- assert:
- that:
- - so is changed
- - so.name == "System Offering for Ansible"
- - so.is_system == true
-
-- name: remove system service offering idempotence
- cs_service_offering:
- name: System Offering for Ansible
- is_system: true
- state: absent
- register: so
-- name: verify remove system service offering idempotence
- assert:
- that:
- - so is not changed
diff --git a/test/integration/targets/incidental_hcloud_server/aliases b/test/integration/targets/incidental_hcloud_server/aliases
deleted file mode 100644
index 6c43c27cf9..0000000000
--- a/test/integration/targets/incidental_hcloud_server/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-cloud/hcloud
-shippable/hcloud/incidental
diff --git a/test/integration/targets/incidental_hcloud_server/defaults/main.yml b/test/integration/targets/incidental_hcloud_server/defaults/main.yml
deleted file mode 100644
index b9a9a8df7b..0000000000
--- a/test/integration/targets/incidental_hcloud_server/defaults/main.yml
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright: (c) 2019, Hetzner Cloud GmbH <info@hetzner-cloud.de>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
----
-hcloud_prefix: "tests"
-hcloud_server_name: "{{hcloud_prefix}}-integration"
diff --git a/test/integration/targets/incidental_hcloud_server/tasks/main.yml b/test/integration/targets/incidental_hcloud_server/tasks/main.yml
deleted file mode 100644
index 31c7ad97e0..0000000000
--- a/test/integration/targets/incidental_hcloud_server/tasks/main.yml
+++ /dev/null
@@ -1,517 +0,0 @@
-# Copyright: (c) 2019, Hetzner Cloud GmbH <info@hetzner-cloud.de>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
----
-- name: setup
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify setup
- assert:
- that:
- - result is success
-- name: test missing required parameters on create server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- register: result
- ignore_errors: yes
-- name: verify fail test missing required parameters on create server
- assert:
- that:
- - result is failed
- - 'result.msg == "missing required arguments: server_type, image"'
-
-- name: test create server with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- server_type: cx11
- image: ubuntu-18.04
- state: present
- register: result
- check_mode: yes
-- name: verify create server with check mode
- assert:
- that:
- - result is changed
-
-- name: test create server
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: ubuntu-18.04
- state: started
- register: main_server
-- name: verify create server
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.name == "{{ hcloud_server_name }}"
- - main_server.hcloud_server.server_type == "cx11"
- - main_server.hcloud_server.status == "running"
- - main_server.root_password != ""
-
-- name: test create server idempotence
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: started
- register: result
-- name: verify create server idempotence
- assert:
- that:
- - result is not changed
-
-- name: test stop server with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: stopped
- register: result
- check_mode: yes
-- name: verify stop server with check mode
- assert:
- that:
- - result is changed
- - result.hcloud_server.status == "running"
-
-- name: test stop server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: stopped
- register: result
-- name: verify stop server
- assert:
- that:
- - result is changed
- - result.hcloud_server.status == "off"
-
-- name: test start server with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: started
- register: result
- check_mode: true
-- name: verify start server with check mode
- assert:
- that:
- - result is changed
-
-- name: test start server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: started
- register: result
-- name: verify start server
- assert:
- that:
- - result is changed
- - result.hcloud_server.status == "running"
-
-- name: test start server idempotence
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: started
- register: result
-- name: verify start server idempotence
- assert:
- that:
- - result is not changed
- - result.hcloud_server.status == "running"
-
-- name: test stop server by its id
- hcloud_server:
- id: "{{ main_server.hcloud_server.id }}"
- state: stopped
- register: result
-- name: verify stop server by its id
- assert:
- that:
- - result is changed
- - result.hcloud_server.status == "off"
-
-- name: test resize server running without force
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- server_type: "cx21"
- state: present
- register: result
- check_mode: true
-- name: verify test resize server running without force
- assert:
- that:
- - result is changed
- - result.hcloud_server.server_type == "cx11"
-
-- name: test resize server with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- server_type: "cx21"
- state: stopped
- register: result
- check_mode: true
-- name: verify resize server with check mode
- assert:
- that:
- - result is changed
-
-- name: test enable backups with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- backups: true
- state: stopped
- register: result
- check_mode: true
-- name: verify enable backups with check mode
- assert:
- that:
- - result is changed
-
-- name: test enable backups
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- backups: true
- state: stopped
- register: result
-- name: verify enable backups
- assert:
- that:
- - result is changed
- - result.hcloud_server.backup_window != ""
-
-- name: test enable backups idempotence
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- backups: true
- state: stopped
- register: result
-- name: verify enable backups idempotence
- assert:
- that:
- - result is not changed
- - result.hcloud_server.backup_window != ""
-
-- name: test rebuild server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- image: ubuntu-18.04
- state: rebuild
- register: result_after_test
-- name: verify rebuild server
- assert:
- that:
- - result_after_test is changed
- - result.hcloud_server.id == result_after_test.hcloud_server.id
-
-- name: test rebuild server with check mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- image: ubuntu-18.04
- state: rebuild
- register: result_after_test
- check_mode: true
-- name: verify rebuild server with check mode
- assert:
- that:
- - result_after_test is changed
-
-- name: test update server protection requires both protection arguments
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- delete_protection: true
- state: present
- register: result_after_test
- ignore_errors: true
-- name: verify update server protection requires both protection arguments
- assert:
- that:
- - result_after_test is failed
- - 'result_after_test.msg == "parameters are required together: delete_protection, rebuild_protection"'
-
-- name: test update server protection fails if they are not the same
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- delete_protection: true
- rebuild_protection: false
- state: present
- register: result_after_test
- ignore_errors: true
-- name: verify update server protection fails if they are not the same
- assert:
- that:
- - result_after_test is failed
-
-- name: test update server protection
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- delete_protection: true
- rebuild_protection: true
- state: present
- register: result_after_test
- ignore_errors: true
-- name: verify update server protection
- assert:
- that:
- - result_after_test is changed
- - result_after_test.hcloud_server.delete_protection is sameas true
- - result_after_test.hcloud_server.rebuild_protection is sameas true
-
-- name: test server without protection set to be idempotent
- hcloud_server:
- name: "{{hcloud_server_name}}"
- register: result_after_test
-- name: verify test server without protection set to be idempotent
- assert:
- that:
- - result_after_test is not changed
- - result_after_test.hcloud_server.delete_protection is sameas true
- - result_after_test.hcloud_server.rebuild_protection is sameas true
-
-- name: test delete server fails if it is protected
- hcloud_server:
- name: "{{hcloud_server_name}}"
- state: absent
- ignore_errors: yes
- register: result
-- name: verify delete server fails if it is protected
- assert:
- that:
- - result is failed
- - 'result.msg == "server deletion is protected"'
-
-- name: test rebuild server fails if it is protected
- hcloud_server:
- name: "{{hcloud_server_name}}"
- image: ubuntu-18.04
- state: rebuild
- ignore_errors: yes
- register: result
-- name: verify rebuild server fails if it is protected
- assert:
- that:
- - result is failed
- - 'result.msg == "server rebuild is protected"'
-
-- name: test remove server protection
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- delete_protection: false
- rebuild_protection: false
- state: present
- register: result_after_test
- ignore_errors: true
-- name: verify remove server protection
- assert:
- that:
- - result_after_test is changed
- - result_after_test.hcloud_server.delete_protection is sameas false
- - result_after_test.hcloud_server.rebuild_protection is sameas false
-
-- name: absent server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify absent server
- assert:
- that:
- - result is success
-
-- name: test create server with ssh key
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: "ubuntu-18.04"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- state: started
- register: main_server
-- name: verify create server with ssh key
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.name == "{{ hcloud_server_name }}"
- - main_server.hcloud_server.server_type == "cx11"
- - main_server.hcloud_server.status == "running"
- - main_server.root_password != ""
-
-- name: absent server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify absent server
- assert:
- that:
- - result is success
-
-- name: test create server with rescue_mode
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: "ubuntu-18.04"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- rescue_mode: "linux64"
- state: started
- register: main_server
-- name: verify create server with rescue_mode
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.name == "{{ hcloud_server_name }}"
- - main_server.hcloud_server.server_type == "cx11"
- - main_server.hcloud_server.status == "running"
- - main_server.root_password != ""
- - main_server.hcloud_server.rescue_enabled is sameas true
-
-- name: absent server
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify absent server
- assert:
- that:
- - result is success
-
-- name: setup server
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: ubuntu-18.04
- state: started
- register: main_server
-- name: verify setup server
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.name == "{{ hcloud_server_name }}"
- - main_server.hcloud_server.server_type == "cx11"
- - main_server.hcloud_server.status == "running"
- - main_server.root_password != ""
-
-- name: test activate rescue mode with check_mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- rescue_mode: "linux64"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- state: present
- register: main_server
- check_mode: true
-- name: verify activate rescue mode with check mode
- assert:
- that:
- - main_server is changed
-
-- name: test activate rescue mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- rescue_mode: "linux64"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- state: present
- register: main_server
-- name: verify activate rescue mode
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.rescue_enabled is sameas true
-
-- name: test disable rescue mode
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- state: present
- register: main_server
-- name: verify disable rescue mode
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.rescue_enabled is sameas false
-
-- name: test activate rescue mode without ssh keys
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- rescue_mode: "linux64"
- state: present
- register: main_server
-- name: verify activate rescue mode without ssh keys
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.rescue_enabled is sameas true
-
-- name: cleanup
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify cleanup
- assert:
- that:
- - result is success
-
-- name: test create server with labels
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: "ubuntu-18.04"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- labels:
- key: value
- mylabel: "val123"
- state: started
- register: main_server
-- name: verify create server with labels
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.labels.key == "value"
- - main_server.hcloud_server.labels.mylabel == "val123"
-
-- name: test update server with labels
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: "ubuntu-18.04"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- labels:
- key: other
- mylabel: "val123"
- state: started
- register: main_server
-- name: verify update server with labels
- assert:
- that:
- - main_server is changed
- - main_server.hcloud_server.labels.key == "other"
- - main_server.hcloud_server.labels.mylabel == "val123"
-
-- name: test update server with labels in other order
- hcloud_server:
- name: "{{ hcloud_server_name}}"
- server_type: cx11
- image: "ubuntu-18.04"
- ssh_keys:
- - ci@ansible.hetzner.cloud
- labels:
- mylabel: "val123"
- key: other
- state: started
- register: main_server
-- name: verify update server with labels in other order
- assert:
- that:
- - main_server is not changed
-
-- name: cleanup with labels
- hcloud_server:
- name: "{{ hcloud_server_name }}"
- state: absent
- register: result
-- name: verify cleanup
- assert:
- that:
- - result is success
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/aliases b/test/integration/targets/incidental_lookup_hashi_vault/aliases
deleted file mode 100644
index 3dded09a49..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/aliases
+++ /dev/null
@@ -1,7 +0,0 @@
-shippable/posix/incidental
-destructive
-needs/target/incidental_setup_openssl
-needs/file/test/lib/ansible_test/_data/requirements/constraints.txt
-skip/aix
-skip/power/centos
-skip/python2.6
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/defaults/main.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/defaults/main.yml
deleted file mode 100644
index f1f6dd981d..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/defaults/main.yml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-vault_gen_path: 'gen/testproject'
-vault_kv1_path: 'kv1/testproject'
-vault_kv2_path: 'kv2/data/testproject'
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_setup.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_setup.yml
deleted file mode 100644
index 63307728a3..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_setup.yml
+++ /dev/null
@@ -1,21 +0,0 @@
-- name: 'Create an approle policy'
- shell: "echo '{{ policy }}' | {{ vault_cmd }} policy write approle-policy -"
- vars:
- policy: |
- path "auth/approle/login" {
- capabilities = [ "create", "read" ]
- }
-
-- name: 'Enable the AppRole auth method'
- command: '{{ vault_cmd }} auth enable approle'
-
-- name: 'Create a named role'
- command: '{{ vault_cmd }} write auth/approle/role/test-role policies="test-policy,approle-policy"'
-
-- name: 'Fetch the RoleID of the AppRole'
- command: '{{ vault_cmd }} read -field=role_id auth/approle/role/test-role/role-id'
- register: role_id_cmd
-
-- name: 'Get a SecretID issued against the AppRole'
- command: '{{ vault_cmd }} write -field=secret_id -f auth/approle/role/test-role/secret-id'
- register: secret_id_cmd
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_test.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_test.yml
deleted file mode 100644
index 44eb5ed18d..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/approle_test.yml
+++ /dev/null
@@ -1,45 +0,0 @@
-- vars:
- role_id: '{{ role_id_cmd.stdout }}'
- secret_id: '{{ secret_id_cmd.stdout }}'
- block:
- - name: 'Fetch secrets using "hashi_vault" lookup'
- set_fact:
- secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret1 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}"
- secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret2 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}"
-
- - name: 'Check secret values'
- fail:
- msg: 'unexpected secret values'
- when: secret1['value'] != 'foo1' or secret2['value'] != 'foo2'
-
- - name: 'Failure expected when erroneous credentials are used'
- vars:
- secret_wrong_cred: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret2 auth_method=approle secret_id=toto role_id=' ~ role_id) }}"
- debug:
- msg: 'Failure is expected ({{ secret_wrong_cred }})'
- register: test_wrong_cred
- ignore_errors: true
-
- - name: 'Failure expected when unauthorized secret is read'
- vars:
- secret_unauthorized: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret3 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}"
- debug:
- msg: 'Failure is expected ({{ secret_unauthorized }})'
- register: test_unauthorized
- ignore_errors: true
-
- - name: 'Failure expected when inexistent secret is read'
- vars:
- secret_inexistent: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret4 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}"
- debug:
- msg: 'Failure is expected ({{ secret_inexistent }})'
- register: test_inexistent
- ignore_errors: true
-
- - name: 'Check expected failures'
- assert:
- msg: "an expected failure didn't occur"
- that:
- - test_wrong_cred is failed
- - test_unauthorized is failed
- - test_inexistent is failed
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/main.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/main.yml
deleted file mode 100644
index 42fd0907f3..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/main.yml
+++ /dev/null
@@ -1,155 +0,0 @@
----
-- name: Install Hashi Vault on the control node and test
-
- vars:
- vault_version: '0.11.0'
- vault_uri: 'https://ansible-ci-files.s3.amazonaws.com/test/integration/targets/lookup_hashi_vault/vault_{{ vault_version }}_{{ ansible_system | lower }}_{{ vault_arch }}.zip'
- vault_cmd: '{{ local_temp_dir }}/vault'
-
- block:
- - name: Create a local temporary directory
- tempfile:
- state: directory
- register: tempfile_result
-
- - set_fact:
- local_temp_dir: '{{ tempfile_result.path }}'
-
- - when: pyopenssl_version.stdout is version('0.15', '>=')
- block:
- - name: Generate privatekey
- openssl_privatekey:
- path: '{{ local_temp_dir }}/privatekey.pem'
-
- - name: Generate CSR
- openssl_csr:
- path: '{{ local_temp_dir }}/csr.csr'
- privatekey_path: '{{ local_temp_dir }}/privatekey.pem'
- subject:
- commonName: localhost
-
- - name: Generate selfsigned certificate
- openssl_certificate:
- path: '{{ local_temp_dir }}/cert.pem'
- csr_path: '{{ local_temp_dir }}/csr.csr'
- privatekey_path: '{{ local_temp_dir }}/privatekey.pem'
- provider: selfsigned
- selfsigned_digest: sha256
- register: selfsigned_certificate
-
- - name: 'Install unzip'
- package:
- name: unzip
- when: ansible_distribution != "MacOSX" # unzip already installed
-
- - assert:
- # Linux: x86_64, FreeBSD: amd64
- that: ansible_architecture in ['i386', 'x86_64', 'amd64']
- - set_fact:
- vault_arch: '386'
- when: ansible_architecture == 'i386'
- - set_fact:
- vault_arch: amd64
- when: ansible_architecture in ['x86_64', 'amd64']
-
- - name: 'Download vault binary'
- unarchive:
- src: '{{ vault_uri }}'
- dest: '{{ local_temp_dir }}'
- remote_src: true
-
- - environment:
- # used by vault command
- VAULT_DEV_ROOT_TOKEN_ID: '47542cbc-6bf8-4fba-8eda-02e0a0d29a0a'
- block:
- - name: 'Create configuration file'
- template:
- src: vault_config.hcl.j2
- dest: '{{ local_temp_dir }}/vault_config.hcl'
-
- - name: 'Start vault service'
- environment:
- VAULT_ADDR: 'http://localhost:8200'
- block:
- - name: 'Start vault server (dev mode enabled)'
- shell: 'nohup {{ vault_cmd }} server -dev -config {{ local_temp_dir }}/vault_config.hcl </dev/null >/dev/null 2>&1 &'
-
- - name: 'Create generic secrets engine'
- command: '{{ vault_cmd }} secrets enable -path=gen generic'
-
- - name: 'Create KV v1 secrets engine'
- command: '{{ vault_cmd }} secrets enable -path=kv1 -version=1 kv'
-
- - name: 'Create KV v2 secrets engine'
- command: '{{ vault_cmd }} secrets enable -path=kv2 -version=2 kv'
-
- - name: 'Create a test policy'
- shell: "echo '{{ policy }}' | {{ vault_cmd }} policy write test-policy -"
- vars:
- policy: |
- path "{{ vault_gen_path }}/secret1" {
- capabilities = ["read"]
- }
- path "{{ vault_gen_path }}/secret2" {
- capabilities = ["read", "update"]
- }
- path "{{ vault_gen_path }}/secret3" {
- capabilities = ["deny"]
- }
- path "{{ vault_kv1_path }}/secret1" {
- capabilities = ["read"]
- }
- path "{{ vault_kv1_path }}/secret2" {
- capabilities = ["read", "update"]
- }
- path "{{ vault_kv1_path }}/secret3" {
- capabilities = ["deny"]
- }
- path "{{ vault_kv2_path }}/secret1" {
- capabilities = ["read"]
- }
- path "{{ vault_kv2_path }}/secret2" {
- capabilities = ["read", "update"]
- }
- path "{{ vault_kv2_path }}/secret3" {
- capabilities = ["deny"]
- }
-
- - name: 'Create generic secrets'
- command: '{{ vault_cmd }} write {{ vault_gen_path }}/secret{{ item }} value=foo{{ item }}'
- loop: [1, 2, 3]
-
- - name: 'Create KV v1 secrets'
- command: '{{ vault_cmd }} kv put {{ vault_kv1_path }}/secret{{ item }} value=foo{{ item }}'
- loop: [1, 2, 3]
-
- - name: 'Create KV v2 secrets'
- command: '{{ vault_cmd }} kv put {{ vault_kv2_path | regex_replace("/data") }}/secret{{ item }} value=foo{{ item }}'
- loop: [1, 2, 3]
-
- - name: setup approle auth
- import_tasks: approle_setup.yml
- when: ansible_distribution != 'RedHat' or ansible_distribution_major_version is version('7', '>')
-
- - name: setup token auth
- import_tasks: token_setup.yml
-
- - import_tasks: tests.yml
- vars:
- auth_type: approle
- when: ansible_distribution != 'RedHat' or ansible_distribution_major_version is version('7', '>')
-
- - import_tasks: tests.yml
- vars:
- auth_type: token
-
- always:
- - name: 'Kill vault process'
- shell: "kill $(cat {{ local_temp_dir }}/vault.pid)"
- ignore_errors: true
-
- always:
- - name: 'Delete temp dir'
- file:
- path: '{{ local_temp_dir }}'
- state: absent
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/tests.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/tests.yml
deleted file mode 100644
index 198f587a77..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/tests.yml
+++ /dev/null
@@ -1,35 +0,0 @@
-- name: 'test {{ auth_type }} auth without SSL (lookup parameters)'
- include_tasks: '{{ auth_type }}_test.yml'
- vars:
- conn_params: 'url=http://localhost:8200 '
-
-- name: 'test {{ auth_type }} auth without SSL (environment variable)'
- include_tasks: '{{ auth_type }}_test.yml'
- args:
- apply:
- vars:
- conn_params: ''
- environment:
- VAULT_ADDR: 'http://localhost:8200'
-
-- when: pyopenssl_version.stdout is version('0.15', '>=')
- block:
- - name: 'test {{ auth_type }} auth with certs (validation enabled, lookup parameters)'
- include_tasks: '{{ auth_type }}_test.yml'
- vars:
- conn_params: 'url=https://localhost:8201 ca_cert={{ local_temp_dir }}/cert.pem validate_certs=True '
-
- - name: 'test {{ auth_type }} auth with certs (validation enabled, environment variables)'
- include_tasks: '{{ auth_type }}_test.yml'
- args:
- apply:
- vars:
- conn_params: ''
- environment:
- VAULT_ADDR: 'https://localhost:8201'
- VAULT_CACERT: '{{ local_temp_dir }}/cert.pem'
-
- - name: 'test {{ auth_type }} auth with certs (validation disabled, lookup parameters)'
- include_tasks: '{{ auth_type }}_test.yml'
- vars:
- conn_params: 'url=https://localhost:8201 validate_certs=False '
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_setup.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_setup.yml
deleted file mode 100644
index d5ce280346..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_setup.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-- name: 'Create a test credential (token)'
- command: '{{ vault_cmd }} token create -policy test-policy -field token'
- register: user_token_cmd
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_test.yml b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_test.yml
deleted file mode 100644
index 20c1af791e..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/tasks/token_test.yml
+++ /dev/null
@@ -1,58 +0,0 @@
-- vars:
- user_token: '{{ user_token_cmd.stdout }}'
- block:
- - name: 'Fetch secrets using "hashi_vault" lookup'
- set_fact:
- gen_secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_gen_path ~ '/secret1 auth_method=token token=' ~ user_token) }}"
- gen_secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_gen_path ~ '/secret2 token=' ~ user_token) }}"
- kv1_secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv1_path ~ '/secret1 auth_method=token token=' ~ user_token) }}"
- kv1_secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv1_path ~ '/secret2 token=' ~ user_token) }}"
- kv2_secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret1 auth_method=token token=' ~ user_token) }}"
- kv2_secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret2 token=' ~ user_token) }}"
-
- - name: 'Check secret generic values'
- fail:
- msg: 'unexpected secret values'
- when: gen_secret1['value'] != 'foo1' or gen_secret2['value'] != 'foo2'
-
- - name: 'Check secret kv1 values'
- fail:
- msg: 'unexpected secret values'
- when: kv1_secret1['value'] != 'foo1' or kv1_secret2['value'] != 'foo2'
-
- - name: 'Check secret kv2 values'
- fail:
- msg: 'unexpected secret values'
- when: kv2_secret1['value'] != 'foo1' or kv2_secret2['value'] != 'foo2'
-
- - name: 'Failure expected when erroneous credentials are used'
- vars:
- secret_wrong_cred: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret2 auth_method=token token=wrong_token') }}"
- debug:
- msg: 'Failure is expected ({{ secret_wrong_cred }})'
- register: test_wrong_cred
- ignore_errors: true
-
- - name: 'Failure expected when unauthorized secret is read'
- vars:
- secret_unauthorized: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret3 token=' ~ user_token) }}"
- debug:
- msg: 'Failure is expected ({{ secret_unauthorized }})'
- register: test_unauthorized
- ignore_errors: true
-
- - name: 'Failure expected when inexistent secret is read'
- vars:
- secret_inexistent: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_kv2_path ~ '/secret4 token=' ~ user_token) }}"
- debug:
- msg: 'Failure is expected ({{ secret_inexistent }})'
- register: test_inexistent
- ignore_errors: true
-
- - name: 'Check expected failures'
- assert:
- msg: "an expected failure didn't occur"
- that:
- - test_wrong_cred is failed
- - test_unauthorized is failed
- - test_inexistent is failed
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/templates/vault_config.hcl.j2 b/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/templates/vault_config.hcl.j2
deleted file mode 100644
index effc90ba90..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/lookup_hashi_vault/templates/vault_config.hcl.j2
+++ /dev/null
@@ -1,10 +0,0 @@
-# {{ ansible_managed }}
-pid_file = "{{ local_temp_dir }}/vault.pid"
-{% if pyopenssl_version.stdout is version('0.15', '>=') %}
-listener "tcp" {
- tls_key_file = "{{ local_temp_dir }}/privatekey.pem"
- tls_cert_file = "{{ local_temp_dir }}/cert.pem"
- tls_disable = false
- address = "localhost:8201"
-}
-{% endif %}
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/playbooks/install_dependencies.yml b/test/integration/targets/incidental_lookup_hashi_vault/playbooks/install_dependencies.yml
deleted file mode 100644
index 9edbdbd631..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/playbooks/install_dependencies.yml
+++ /dev/null
@@ -1,19 +0,0 @@
-- hosts: localhost
- tasks:
- - name: Install openssl
- import_role:
- name: incidental_setup_openssl
-
- - name: "RedHat <= 7, select last version compatible with request 2.6.0 (this version doesn't support approle auth)"
- set_fact:
- hvac_package: 'hvac==0.2.5'
- when: ansible_distribution == 'RedHat' and ansible_distribution_major_version is version('7', '<=')
-
- - name: 'CentOS < 7, select last version compatible with Python 2.6'
- set_fact:
- hvac_package: 'hvac==0.5.0'
- when: ansible_distribution == 'CentOS' and ansible_distribution_major_version is version('7', '<')
-
- - name: 'Install hvac Python package'
- pip:
- name: "{{ hvac_package|default('hvac') }}"
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/playbooks/test_lookup_hashi_vault.yml b/test/integration/targets/incidental_lookup_hashi_vault/playbooks/test_lookup_hashi_vault.yml
deleted file mode 100644
index 343763af09..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/playbooks/test_lookup_hashi_vault.yml
+++ /dev/null
@@ -1,9 +0,0 @@
-- hosts: localhost
- tasks:
- - name: register pyOpenSSL version
- command: "{{ ansible_python.executable }} -c 'import OpenSSL; print(OpenSSL.__version__)'"
- register: pyopenssl_version
-
- - name: Test lookup hashi_vault
- import_role:
- name: incidental_lookup_hashi_vault/lookup_hashi_vault
diff --git a/test/integration/targets/incidental_lookup_hashi_vault/runme.sh b/test/integration/targets/incidental_lookup_hashi_vault/runme.sh
deleted file mode 100755
index e5e0df347f..0000000000
--- a/test/integration/targets/incidental_lookup_hashi_vault/runme.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env bash
-
-set -eux
-
-# First install pyOpenSSL, then test lookup in a second playbook in order to
-# work around this error, which occurs on OS X 10.11 only:
-#
-# TASK [lookup_hashi_vault : test token auth with certs (validation enabled, lookup parameters)] ***
-# included: lookup_hashi_vault/tasks/token_test.yml for testhost
-#
-# TASK [lookup_hashi_vault : Fetch secrets using "hashi_vault" lookup] ***
-# From cffi callback <function _verify_callback at 0x106f995f0>:
-# Traceback (most recent call last):
-# File "/usr/local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 309, in wrapper
-# _lib.X509_up_ref(x509)
-# AttributeError: 'module' object has no attribute 'X509_up_ref'
-# fatal: [testhost]: FAILED! => { "msg": "An unhandled exception occurred while running the lookup plugin 'hashi_vault'. Error was a <class 'requests.exceptions.SSLError'>, original message: HTTPSConnectionPool(host='localhost', port=8201): Max retries exceeded with url: /v1/auth/token/lookup-self (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)\",),))"}
-
-ANSIBLE_ROLES_PATH=../ \
- ansible-playbook playbooks/install_dependencies.yml -v "$@"
-
-ANSIBLE_ROLES_PATH=../ \
- ansible-playbook playbooks/test_lookup_hashi_vault.yml -v "$@"
diff --git a/test/integration/targets/incidental_nios_prepare_tests/aliases b/test/integration/targets/incidental_nios_prepare_tests/aliases
deleted file mode 100644
index 136c05e0d0..0000000000
--- a/test/integration/targets/incidental_nios_prepare_tests/aliases
+++ /dev/null
@@ -1 +0,0 @@
-hidden
diff --git a/test/integration/targets/incidental_nios_prepare_tests/tasks/main.yml b/test/integration/targets/incidental_nios_prepare_tests/tasks/main.yml
deleted file mode 100644
index e69de29bb2..0000000000
--- a/test/integration/targets/incidental_nios_prepare_tests/tasks/main.yml
+++ /dev/null
diff --git a/test/integration/targets/incidental_nios_txt_record/aliases b/test/integration/targets/incidental_nios_txt_record/aliases
deleted file mode 100644
index dfb77b8152..0000000000
--- a/test/integration/targets/incidental_nios_txt_record/aliases
+++ /dev/null
@@ -1,3 +0,0 @@
-shippable/cloud/incidental
-cloud/nios
-destructive
diff --git a/test/integration/targets/incidental_nios_txt_record/defaults/main.yaml b/test/integration/targets/incidental_nios_txt_record/defaults/main.yaml
deleted file mode 100644
index ebf6ffc903..0000000000
--- a/test/integration/targets/incidental_nios_txt_record/defaults/main.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-testcase: "*"
-test_items: []
\ No newline at end of file
diff --git a/test/integration/targets/incidental_nios_txt_record/meta/main.yaml b/test/integration/targets/incidental_nios_txt_record/meta/main.yaml
deleted file mode 100644
index c7c538f4e7..0000000000
--- a/test/integration/targets/incidental_nios_txt_record/meta/main.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
-dependencies:
- - incidental_nios_prepare_tests
diff --git a/test/integration/targets/incidental_nios_txt_record/tasks/main.yml b/test/integration/targets/incidental_nios_txt_record/tasks/main.yml
deleted file mode 100644
index e15b4c55db..0000000000
--- a/test/integration/targets/incidental_nios_txt_record/tasks/main.yml
+++ /dev/null
@@ -1 +0,0 @@
-- include: nios_txt_record_idempotence.yml
diff --git a/test/integration/targets/incidental_nios_txt_record/tasks/nios_txt_record_idempotence.yml b/test/integration/targets/incidental_nios_txt_record/tasks/nios_txt_record_idempotence.yml
deleted file mode 100644
index 3b7357afaf..0000000000
--- a/test/integration/targets/incidental_nios_txt_record/tasks/nios_txt_record_idempotence.yml
+++ /dev/null
@@ -1,80 +0,0 @@
-- name: cleanup the parent object
- nios_zone:
- name: ansible.com
- state: absent
- provider: "{{ nios_provider }}"
-
-- name: create the parent object
- nios_zone:
- name: ansible.com
- state: present
- provider: "{{ nios_provider }}"
-
-- name: cleanup txt record
- nios_txt_record:
- name: txt.ansible.com
- text: mytext
- state: absent
- provider: "{{ nios_provider }}"
-
-- name: create txt record
- nios_txt_record:
- name: txt.ansible.com
- text: mytext
- state: present
- provider: "{{ nios_provider }}"
- register: txt_create1
-
-- name: create txt record
- nios_txt_record:
- name: txt.ansible.com
- text: mytext
- state: present
- provider: "{{ nios_provider }}"
- register: txt_create2
-
-- assert:
- that:
- - "txt_create1.changed"
- - "not txt_create2.changed"
-
-- name: add a comment to an existing txt record
- nios_txt_record:
- name: txt.ansible.com
- text: mytext
- state: present
- comment: mycomment
- provider: "{{ nios_provider }}"
- register: txt_update1
-
-- name: add a comment to an existing txt record
- nios_txt_record:
- name: txt.ansible.com
- text: mytext
- state: present
- comment: mycomment
- provider: "{{ nios_provider }}"
- register: txt_update2
-
-- name: remove a txt record from the system
- nios_txt_record:
- name: txt.ansible.com
- state: absent
- provider: "{{ nios_provider }}"
- register: txt_delete1
-
-- name: remove a txt record from the system
- nios_txt_record:
- name: txt.ansible.com
- state: absent
- provider: "{{ nios_provider }}"
- register: txt_delete2
-
-- assert:
- that:
- - "txt_create1.changed"
- - "not txt_create2.changed"
- - "txt_update1.changed"
- - "not txt_update2.changed"
- - "txt_delete1.changed"
- - "not txt_delete2.changed"
diff --git a/test/integration/targets/incidental_selinux/aliases b/test/integration/targets/incidental_selinux/aliases
deleted file mode 100644
index 6bda43bced..0000000000
--- a/test/integration/targets/incidental_selinux/aliases
+++ /dev/null
@@ -1,3 +0,0 @@
-needs/root
-shippable/posix/incidental
-skip/aix
diff --git a/test/integration/targets/incidental_selinux/tasks/main.yml b/test/integration/targets/incidental_selinux/tasks/main.yml
deleted file mode 100644
index 41fdca5220..0000000000
--- a/test/integration/targets/incidental_selinux/tasks/main.yml
+++ /dev/null
@@ -1,36 +0,0 @@
-# (c) 2017, Sam Doran <sdoran@redhat.com>
-
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-
-- debug:
- msg: SELinux is disabled
- when: ansible_selinux is defined and ansible_selinux == False
-
-- debug:
- msg: SELinux is {{ ansible_selinux.status }}
- when: ansible_selinux is defined and ansible_selinux != False
-
-- include: selinux.yml
- when:
- - ansible_selinux is defined
- - ansible_selinux != False
- - ansible_selinux.status == 'enabled'
-
-- include: selogin.yml
- when:
- - ansible_selinux is defined
- - ansible_selinux != False
- - ansible_selinux.status == 'enabled'
diff --git a/test/integration/targets/incidental_selinux/tasks/selinux.yml b/test/integration/targets/incidental_selinux/tasks/selinux.yml
deleted file mode 100644
index 7fcba899cf..0000000000
--- a/test/integration/targets/incidental_selinux/tasks/selinux.yml
+++ /dev/null
@@ -1,364 +0,0 @@
-# (c) 2017, Sam Doran <sdoran@redhat.com>
-
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-
-
-# First Test
-# ##############################################################################
-# Test changing the state, which requires a reboot
-
-- name: TEST 1 | Get current SELinux config file contents
- set_fact:
- selinux_config_original: "{{ lookup('file', '/etc/sysconfig/selinux').split('\n') }}"
- before_test_sestatus: "{{ ansible_selinux }}"
-
-- debug:
- var: "{{ item }}"
- verbosity: 1
- with_items:
- - selinux_config_original
- - before_test_sestatus
- - ansible_selinux
-
-- name: TEST 1 | Setup SELinux configuration for tests
- selinux:
- state: enforcing
- policy: targeted
-
-- name: TEST 1 | Disable SELinux
- selinux:
- state: disabled
- policy: targeted
- register: _disable_test1
-
-- debug:
- var: _disable_test1
- verbosity: 1
-
-- name: TEST 1 | Re-gather facts
- setup:
-
-- name: TEST 1 | Assert that status was changed, reboot_required is True, a warning was displayed, and SELinux is configured properly
- assert:
- that:
- - _disable_test1 is changed
- - _disable_test1.reboot_required
- - (_disable_test1.warnings | length ) >= 1
- - ansible_selinux.config_mode == 'disabled'
- - ansible_selinux.type == 'targeted'
-
-- debug:
- var: ansible_selinux
- verbosity: 1
-
-- name: TEST 1 | Disable SELinux again
- selinux:
- state: disabled
- policy: targeted
- register: _disable_test2
-
-- debug:
- var: _disable_test2
- verbosity: 1
-
-- name: TEST 1 | Assert that no change is reported, a warning was displayed, and reboot_required is True
- assert:
- that:
- - _disable_test2 is not changed
- - (_disable_test1.warnings | length ) >= 1
- - _disable_test2.reboot_required
-
-- name: TEST 1 | Get modified config file
- set_fact:
- selinux_config_after: "{{ lookup('file', '/etc/sysconfig/selinux').split('\n') }}"
-
-- debug:
- var: selinux_config_after
- verbosity: 1
-
-- name: TEST 1 | Ensure SELinux config file is properly formatted
- assert:
- that:
- - selinux_config_original | length == selinux_config_after | length
- - selinux_config_after[selinux_config_after.index('SELINUX=disabled')] is search("^SELINUX=\w+$")
- - selinux_config_after[selinux_config_after.index('SELINUXTYPE=targeted')] is search("^SELINUXTYPE=\w+$")
-
-- name: TEST 1 | Reset SELinux configuration for next test
- selinux:
- state: enforcing
- policy: targeted
-
-
-# Second Test
-# ##############################################################################
-# Test changing only the policy, which does not require a reboot
-
-- name: TEST 2 | Make sure the policy is present
- package:
- name: selinux-policy-mls
- state: present
-
-- name: TEST 2 | Set SELinux policy
- selinux:
- state: enforcing
- policy: mls
- register: _state_test1
-
-- debug:
- var: _state_test1
- verbosity: 1
-
-- name: TEST 2 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- tags: debug
-
-- name: TEST 2 | Assert that status was changed, reboot_required is False, no warnings were displayed, and SELinux is configured properly
- assert:
- that:
- - _state_test1 is changed
- - not _state_test1.reboot_required
- - _state_test1.warnings is not defined
- - ansible_selinux.config_mode == 'enforcing'
- - ansible_selinux.type == 'mls'
-
-- name: TEST 2 | Set SELinux policy again
- selinux:
- state: enforcing
- policy: mls
- register: _state_test2
-
-- debug:
- var: _state_test2
- verbosity: 1
-
-- name: TEST 2 | Assert that no change was reported, no warnings were displayed, and reboot_required is False
- assert:
- that:
- - _state_test2 is not changed
- - _state_test2.warnings is not defined
- - not _state_test2.reboot_required
-
-- name: TEST 2 | Get modified config file
- set_fact:
- selinux_config_after: "{{ lookup('file', '/etc/sysconfig/selinux').split('\n') }}"
-
-- debug:
- var: selinux_config_after
- verbosity: 1
-
-- name: TEST 2 | Ensure SELinux config file is properly formatted
- assert:
- that:
- - selinux_config_original | length == selinux_config_after | length
- - selinux_config_after[selinux_config_after.index('SELINUX=enforcing')] is search("^SELINUX=\w+$")
- - selinux_config_after[selinux_config_after.index('SELINUXTYPE=mls')] is search("^SELINUXTYPE=\w+$")
-
-- name: TEST 2 | Reset SELinux configuration for next test
- selinux:
- state: enforcing
- policy: targeted
-
-
-# Third Test
-# ##############################################################################
-# Test changing non-existing policy
-
-- name: TEST 3 | Set SELinux policy
- selinux:
- state: enforcing
- policy: non-existing-selinux-policy
- register: _state_test1
- ignore_errors: yes
-
-- debug:
- var: _state_test1
- verbosity: 1
-
-- name: TEST 3 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- tags: debug
-
-- name: TEST 3 | Assert that status was not changed, the task failed, the msg contains proper information and SELinux was not changed
- assert:
- that:
- - _state_test1 is not changed
- - _state_test1 is failed
- - _state_test1.msg == 'Policy non-existing-selinux-policy does not exist in /etc/selinux/'
- - ansible_selinux.config_mode == 'enforcing'
- - ansible_selinux.type == 'targeted'
-
-
-# Fourth Test
-# ##############################################################################
-# Test if check mode returns correct changed values and
-# doesn't make any changes
-
-
-- name: TEST 4 | Set SELinux to enforcing
- selinux:
- state: enforcing
- policy: targeted
- register: _check_mode_test1
-
-- debug:
- var: _check_mode_test1
- verbosity: 1
-
-- name: TEST 4 | Set SELinux to enforcing in check mode
- selinux:
- state: enforcing
- policy: targeted
- register: _check_mode_test1
- check_mode: yes
-
-- name: TEST 4 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- verbosity: 1
- tags: debug
-
-- name: TEST 4 | Assert that check mode is idempotent
- assert:
- that:
- - _check_mode_test1 is success
- - not _check_mode_test1.reboot_required
- - ansible_selinux.config_mode == 'enforcing'
- - ansible_selinux.type == 'targeted'
-
-- name: TEST 4 | Set SELinux to permissive in check mode
- selinux:
- state: permissive
- policy: targeted
- register: _check_mode_test2
- check_mode: yes
-
-- name: TEST 4 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- verbosity: 1
- tags: debug
-
-- name: TEST 4 | Assert that check mode doesn't set state permissive and returns changed
- assert:
- that:
- - _check_mode_test2 is changed
- - not _check_mode_test2.reboot_required
- - ansible_selinux.config_mode == 'enforcing'
- - ansible_selinux.type == 'targeted'
-
-- name: TEST 4 | Disable SELinux in check mode
- selinux:
- state: disabled
- register: _check_mode_test3
- check_mode: yes
-
-- name: TEST 4 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- verbosity: 1
- tags: debug
-
-- name: TEST 4 | Assert that check mode didn't change anything, status is changed, reboot_required is True, a warning was displayed
- assert:
- that:
- - _check_mode_test3 is changed
- - _check_mode_test3.reboot_required
- - (_check_mode_test3.warnings | length ) >= 1
- - ansible_selinux.config_mode == 'enforcing'
- - ansible_selinux.type == 'targeted'
-
-- name: TEST 4 | Set SELinux to permissive
- selinux:
- state: permissive
- policy: targeted
- register: _check_mode_test4
-
-- debug:
- var: _check_mode_test4
- verbosity: 1
-
-- name: TEST 4 | Disable SELinux in check mode
- selinux:
- state: disabled
- register: _check_mode_test4
- check_mode: yes
-
-- name: TEST 4 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- verbosity: 1
- tags: debug
-
-- name: TEST 4 | Assert that check mode didn't change anything, status is changed, reboot_required is True, a warning was displayed
- assert:
- that:
- - _check_mode_test4 is changed
- - _check_mode_test4.reboot_required
- - (_check_mode_test3.warnings | length ) >= 1
- - ansible_selinux.config_mode == 'permissive'
- - ansible_selinux.type == 'targeted'
-
-- name: TEST 4 | Set SELinux to enforcing
- selinux:
- state: enforcing
- policy: targeted
- register: _check_mode_test5
-
-- debug:
- var: _check_mode_test5
- verbosity: 1
-
-- name: TEST 4 | Disable SELinux
- selinux:
- state: disabled
- register: _check_mode_test5
-
-- name: TEST 4 | Disable SELinux in check mode
- selinux:
- state: disabled
- register: _check_mode_test5
- check_mode: yes
-
-- name: TEST 4 | Re-gather facts
- setup:
-
-- debug:
- var: ansible_selinux
- verbosity: 1
- tags: debug
-
-- name: TEST 4 | Assert that in check mode status was not changed, reboot_required is True, a warning was displayed, and SELinux is configured properly
- assert:
- that:
- - _check_mode_test5 is success
- - _check_mode_test5.reboot_required
- - (_check_mode_test5.warnings | length ) >= 1
- - ansible_selinux.config_mode == 'disabled'
- - ansible_selinux.type == 'targeted'
diff --git a/test/integration/targets/incidental_selinux/tasks/selogin.yml b/test/integration/targets/incidental_selinux/tasks/selogin.yml
deleted file mode 100644
index a2c820ff38..0000000000
--- a/test/integration/targets/incidental_selinux/tasks/selogin.yml
+++ /dev/null
@@ -1,81 +0,0 @@
----
-
-- name: create user for testing
- user:
- name: seuser
-
-- name: attempt to add mapping without 'seuser'
- selogin:
- login: seuser
- register: selogin_error
- ignore_errors: yes
-
-- name: verify failure
- assert:
- that:
- - selogin_error is failed
-
-- name: map login to SELinux user
- selogin:
- login: seuser
- seuser: staff_u
- register: selogin_new_mapping
- check_mode: "{{ item }}"
- with_items:
- - yes
- - no
- - yes
- - no
-
-- name: new mapping- verify functionality and check_mode
- assert:
- that:
- - selogin_new_mapping.results[0] is changed
- - selogin_new_mapping.results[1] is changed
- - selogin_new_mapping.results[2] is not changed
- - selogin_new_mapping.results[3] is not changed
-
-- name: change SELinux user login mapping
- selogin:
- login: seuser
- seuser: user_u
- register: selogin_mod_mapping
- check_mode: "{{ item }}"
- with_items:
- - yes
- - no
- - yes
- - no
-
-- name: changed mapping- verify functionality and check_mode
- assert:
- that:
- - selogin_mod_mapping.results[0] is changed
- - selogin_mod_mapping.results[1] is changed
- - selogin_mod_mapping.results[2] is not changed
- - selogin_mod_mapping.results[3] is not changed
-
-- name: remove SELinux user mapping
- selogin:
- login: seuser
- state: absent
- register: selogin_del_mapping
- check_mode: "{{ item }}"
- with_items:
- - yes
- - no
- - yes
- - no
-
-- name: delete mapping- verify functionality and check_mode
- assert:
- that:
- - selogin_del_mapping.results[0] is changed
- - selogin_del_mapping.results[1] is changed
- - selogin_del_mapping.results[2] is not changed
- - selogin_del_mapping.results[3] is not changed
-
-- name: remove test user
- user:
- name: seuser
- state: absent
diff --git a/test/integration/targets/incidental_setup_openssl/aliases b/test/integration/targets/incidental_setup_openssl/aliases
deleted file mode 100644
index e5830e282b..0000000000
--- a/test/integration/targets/incidental_setup_openssl/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-hidden
-
diff --git a/test/integration/targets/incidental_setup_openssl/tasks/main.yml b/test/integration/targets/incidental_setup_openssl/tasks/main.yml
deleted file mode 100644
index 8960441296..0000000000
--- a/test/integration/targets/incidental_setup_openssl/tasks/main.yml
+++ /dev/null
@@ -1,48 +0,0 @@
----
-- name: Include OS-specific variables
- include_vars: '{{ ansible_os_family }}.yml'
- when: not ansible_os_family == "Darwin"
-
-- name: Install OpenSSL
- become: True
- package:
- name: '{{ openssl_package_name }}'
- when: not ansible_os_family == 'Darwin'
-
-- name: Install pyOpenSSL (Python 3)
- become: True
- package:
- name: '{{ pyopenssl_package_name_python3 }}'
- when: not ansible_os_family == 'Darwin' and ansible_python_version is version('3.0', '>=')
-
-- name: Install pyOpenSSL (Python 2)
- become: True
- package:
- name: '{{ pyopenssl_package_name }}'
- when: not ansible_os_family == 'Darwin' and ansible_python_version is version('3.0', '<')
-
-- name: Install pyOpenSSL (Darwin)
- become: True
- pip:
- name:
- - pyOpenSSL==19.1.0
- # dependencies for pyOpenSSL
- - cffi==1.14.2
- - cryptography==3.1
- - enum34==1.1.10
- - ipaddress==1.0.23
- - pycparser==2.20
- - six==1.15.0
- when: ansible_os_family == 'Darwin'
-
-- name: register pyOpenSSL version
- command: "{{ ansible_python.executable }} -c 'import OpenSSL; print(OpenSSL.__version__)'"
- register: pyopenssl_version
-
-- name: register openssl version
- shell: "openssl version | cut -d' ' -f2"
- register: openssl_version
-
-- name: register cryptography version
- command: "{{ ansible_python.executable }} -c 'import cryptography; print(cryptography.__version__)'"
- register: cryptography_version
diff --git a/test/integration/targets/incidental_setup_openssl/vars/Debian.yml b/test/integration/targets/incidental_setup_openssl/vars/Debian.yml
deleted file mode 100644
index 755c7a083c..0000000000
--- a/test/integration/targets/incidental_setup_openssl/vars/Debian.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-pyopenssl_package_name: python-openssl
-pyopenssl_package_name_python3: python3-openssl
-openssl_package_name: openssl
diff --git a/test/integration/targets/incidental_setup_openssl/vars/FreeBSD.yml b/test/integration/targets/incidental_setup_openssl/vars/FreeBSD.yml
deleted file mode 100644
index 608689158a..0000000000
--- a/test/integration/targets/incidental_setup_openssl/vars/FreeBSD.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-pyopenssl_package_name: py27-openssl
-pyopenssl_package_name_python3: py36-openssl
-openssl_package_name: openssl
diff --git a/test/integration/targets/incidental_setup_openssl/vars/RedHat.yml b/test/integration/targets/incidental_setup_openssl/vars/RedHat.yml
deleted file mode 100644
index 2959932cd7..0000000000
--- a/test/integration/targets/incidental_setup_openssl/vars/RedHat.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-pyopenssl_package_name: pyOpenSSL
-pyopenssl_package_name_python3: python3-pyOpenSSL
-openssl_package_name: openssl
diff --git a/test/integration/targets/incidental_setup_openssl/vars/Suse.yml b/test/integration/targets/incidental_setup_openssl/vars/Suse.yml
deleted file mode 100644
index 2d5200f341..0000000000
--- a/test/integration/targets/incidental_setup_openssl/vars/Suse.yml
+++ /dev/null
@@ -1,3 +0,0 @@
-pyopenssl_package_name: python-pyOpenSSL
-pyopenssl_package_name_python3: python3-pyOpenSSL
-openssl_package_name: openssl
diff --git a/test/integration/targets/incidental_ufw/aliases b/test/integration/targets/incidental_ufw/aliases
deleted file mode 100644
index 7407abe60a..0000000000
--- a/test/integration/targets/incidental_ufw/aliases
+++ /dev/null
@@ -1,13 +0,0 @@
-shippable/posix/incidental
-skip/aix
-skip/power/centos
-skip/osx
-skip/macos
-skip/freebsd
-skip/rhel8.0
-skip/rhel8.0b
-skip/rhel8.1b
-skip/docker
-needs/root
-destructive
-needs/target/setup_epel
diff --git a/test/integration/targets/incidental_ufw/tasks/main.yml b/test/integration/targets/incidental_ufw/tasks/main.yml
deleted file mode 100644
index 28198cd600..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/main.yml
+++ /dev/null
@@ -1,34 +0,0 @@
----
-# Make sure ufw is installed
-- name: Install EPEL repository (RHEL only)
- include_role:
- name: setup_epel
- when: ansible_distribution == 'RedHat'
-- name: Install iptables (SuSE only)
- package:
- name: iptables
- become: yes
- when: ansible_os_family == 'Suse'
-- name: Install ufw
- become: yes
- package:
- name: ufw
-
-# Run the tests
-- block:
- - include_tasks: run-test.yml
- with_fileglob:
- - "tests/*.yml"
- become: yes
-
- # Cleanup
- always:
- - pause:
- # ufw creates backups of the rule files with a timestamp; if reset is called
- # twice in a row fast enough (so that both timestamps are taken in the same second),
- # the second call will notice that the backup files are already there and fail.
- # Waiting one second fixes this problem.
- seconds: 1
- - name: Reset ufw to factory defaults and disable
- ufw:
- state: reset
diff --git a/test/integration/targets/incidental_ufw/tasks/run-test.yml b/test/integration/targets/incidental_ufw/tasks/run-test.yml
deleted file mode 100644
index e9c5d2929c..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/run-test.yml
+++ /dev/null
@@ -1,21 +0,0 @@
----
-- pause:
- # ufw creates backups of the rule files with a timestamp; if reset is called
- # twice in a row fast enough (so that both timestamps are taken in the same second),
- # the second call will notice that the backup files are already there and fail.
- # Waiting one second fixes this problem.
- seconds: 1
-- name: Reset ufw to factory defaults
- ufw:
- state: reset
-- name: Disable ufw
- ufw:
- # Some versions of ufw have a bug where ufw is not disabled on reset.
- # That's why we explicitly deactivate here. See
- # https://bugs.launchpad.net/ufw/+bug/1810082
- state: disabled
-- name: "Loading tasks from {{ item }}"
- include_tasks: "{{ item }}"
-- name: Reset to factory defaults
- ufw:
- state: reset
diff --git a/test/integration/targets/incidental_ufw/tasks/tests/basic.yml b/test/integration/targets/incidental_ufw/tasks/tests/basic.yml
deleted file mode 100644
index 3c625112f3..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/tests/basic.yml
+++ /dev/null
@@ -1,402 +0,0 @@
----
-# ############################################
-- name: Make sure it is off
- ufw:
- state: disabled
-- name: Enable (check mode)
- ufw:
- state: enabled
- check_mode: yes
- register: enable_check
-- name: Enable
- ufw:
- state: enabled
- register: enable
-- name: Enable (idempotency)
- ufw:
- state: enabled
- register: enable_idem
-- name: Enable (idempotency, check mode)
- ufw:
- state: enabled
- check_mode: yes
- register: enable_idem_check
-- assert:
- that:
- - enable_check is changed
- - enable is changed
- - enable_idem is not changed
- - enable_idem_check is not changed
-
-# ############################################
-- name: ipv4 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- check_mode: yes
- register: ipv4_allow_check
-- name: ipv4 allow
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- register: ipv4_allow
-- name: ipv4 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- register: ipv4_allow_idem
-- name: ipv4 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- check_mode: yes
- register: ipv4_allow_idem_check
-- assert:
- that:
- - ipv4_allow_check is changed
- - ipv4_allow is changed
- - ipv4_allow_idem is not changed
- - ipv4_allow_idem_check is not changed
-
-# ############################################
-- name: delete ipv4 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- check_mode: yes
- register: delete_ipv4_allow_check
-- name: delete ipv4 allow
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- register: delete_ipv4_allow
-- name: delete ipv4 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- register: delete_ipv4_allow_idem
-- name: delete ipv4 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- check_mode: yes
- register: delete_ipv4_allow_idem_check
-- assert:
- that:
- - delete_ipv4_allow_check is changed
- - delete_ipv4_allow is changed
- - delete_ipv4_allow_idem is not changed
- - delete_ipv4_allow_idem_check is not changed
-
-# ############################################
-- name: ipv6 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- check_mode: yes
- register: ipv6_allow_check
-- name: ipv6 allow
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- register: ipv6_allow
-- name: ipv6 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- register: ipv6_allow_idem
-- name: ipv6 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- check_mode: yes
- register: ipv6_allow_idem_check
-- assert:
- that:
- - ipv6_allow_check is changed
- - ipv6_allow is changed
- - ipv6_allow_idem is not changed
- - ipv6_allow_idem_check is not changed
-
-# ############################################
-- name: delete ipv6 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- check_mode: yes
- register: delete_ipv6_allow_check
-- name: delete ipv6 allow
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- register: delete_ipv6_allow
-- name: delete ipv6 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- register: delete_ipv6_allow_idem
-- name: delete ipv6 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- check_mode: yes
- register: delete_ipv6_allow_idem_check
-- assert:
- that:
- - delete_ipv6_allow_check is changed
- - delete_ipv6_allow is changed
- - delete_ipv6_allow_idem is not changed
- - delete_ipv6_allow_idem_check is not changed
-
-
-# ############################################
-- name: ipv4 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- check_mode: yes
- register: ipv4_allow_check
-- name: ipv4 allow
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- register: ipv4_allow
-- name: ipv4 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- register: ipv4_allow_idem
-- name: ipv4 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- check_mode: yes
- register: ipv4_allow_idem_check
-- assert:
- that:
- - ipv4_allow_check is changed
- - ipv4_allow is changed
- - ipv4_allow_idem is not changed
- - ipv4_allow_idem_check is not changed
-
-# ############################################
-- name: delete ipv4 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- check_mode: yes
- register: delete_ipv4_allow_check
-- name: delete ipv4 allow
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- register: delete_ipv4_allow
-- name: delete ipv4 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- register: delete_ipv4_allow_idem
-- name: delete ipv4 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: 0.0.0.0
- delete: yes
- check_mode: yes
- register: delete_ipv4_allow_idem_check
-- assert:
- that:
- - delete_ipv4_allow_check is changed
- - delete_ipv4_allow is changed
- - delete_ipv4_allow_idem is not changed
- - delete_ipv4_allow_idem_check is not changed
-
-# ############################################
-- name: ipv6 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- check_mode: yes
- register: ipv6_allow_check
-- name: ipv6 allow
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- register: ipv6_allow
-- name: ipv6 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- register: ipv6_allow_idem
-- name: ipv6 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- check_mode: yes
- register: ipv6_allow_idem_check
-- assert:
- that:
- - ipv6_allow_check is changed
- - ipv6_allow is changed
- - ipv6_allow_idem is not changed
- - ipv6_allow_idem_check is not changed
-
-# ############################################
-- name: delete ipv6 allow (check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- check_mode: yes
- register: delete_ipv6_allow_check
-- name: delete ipv6 allow
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- register: delete_ipv6_allow
-- name: delete ipv6 allow (idempotency)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- register: delete_ipv6_allow_idem
-- name: delete ipv6 allow (idempotency, check mode)
- ufw:
- rule: allow
- port: 23
- to_ip: "::"
- delete: yes
- check_mode: yes
- register: delete_ipv6_allow_idem_check
-- assert:
- that:
- - delete_ipv6_allow_check is changed
- - delete_ipv6_allow is changed
- - delete_ipv6_allow_idem is not changed
- - delete_ipv6_allow_idem_check is not changed
-
-# ############################################
-- name: Reload ufw
- ufw:
- state: reloaded
- register: reload
-- name: Reload ufw (check mode)
- ufw:
- state: reloaded
- check_mode: yes
- register: reload_check
-- assert:
- that:
- - reload is changed
- - reload_check is changed
-
-# ############################################
-- name: Disable (check mode)
- ufw:
- state: disabled
- check_mode: yes
- register: disable_check
-- name: Disable
- ufw:
- state: disabled
- register: disable
-- name: Disable (idempotency)
- ufw:
- state: disabled
- register: disable_idem
-- name: Disable (idempotency, check mode)
- ufw:
- state: disabled
- check_mode: yes
- register: disable_idem_check
-- assert:
- that:
- - disable_check is changed
- - disable is changed
- - disable_idem is not changed
- - disable_idem_check is not changed
-
-# ############################################
-- name: Re-enable
- ufw:
- state: enabled
-- name: Reset (check mode)
- ufw:
- state: reset
- check_mode: yes
- register: reset_check
-- pause:
- # Should not be needed, but since ufw is ignoring --dry-run for reset
- # (https://bugs.launchpad.net/ufw/+bug/1810082) we have to wait here as well.
- seconds: 1
-- name: Reset
- ufw:
- state: reset
- register: reset
-- pause:
- # ufw creates backups of the rule files with a timestamp; if reset is called
- # twice in a row fast enough (so that both timestamps are taken in the same second),
- # the second call will notice that the backup files are already there and fail.
- # Waiting one second fixes this problem.
- seconds: 1
-- name: Reset (idempotency)
- ufw:
- state: reset
- register: reset_idem
-- pause:
- # Should not be needed, but since ufw is ignoring --dry-run for reset
- # (https://bugs.launchpad.net/ufw/+bug/1810082) we have to wait here as well.
- seconds: 1
-- name: Reset (idempotency, check mode)
- ufw:
- state: reset
- check_mode: yes
- register: reset_idem_check
-- assert:
- that:
- - reset_check is changed
- - reset is changed
- - reset_idem is changed
- - reset_idem_check is changed
diff --git a/test/integration/targets/incidental_ufw/tasks/tests/global-state.yml b/test/integration/targets/incidental_ufw/tasks/tests/global-state.yml
deleted file mode 100644
index 69b2cde938..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/tests/global-state.yml
+++ /dev/null
@@ -1,150 +0,0 @@
----
-- name: Enable ufw
- ufw:
- state: enabled
-
-# ############################################
-- name: Make sure logging is off
- ufw:
- logging: no
-- name: Logging (check mode)
- ufw:
- logging: yes
- check_mode: yes
- register: logging_check
-- name: Logging
- ufw:
- logging: yes
- register: logging
-- name: Get logging
- shell: |
- ufw status verbose | grep "^Logging:"
- register: ufw_logging
- environment:
- LC_ALL: C
-- name: Logging (idempotency)
- ufw:
- logging: yes
- register: logging_idem
-- name: Logging (idempotency, check mode)
- ufw:
- logging: yes
- check_mode: yes
- register: logging_idem_check
-- name: Logging (change, check mode)
- ufw:
- logging: full
- check_mode: yes
- register: logging_change_check
-- name: Logging (change)
- ufw:
- logging: full
- register: logging_change
-- name: Get logging
- shell: |
- ufw status verbose | grep "^Logging:"
- register: ufw_logging_change
- environment:
- LC_ALL: C
-- assert:
- that:
- - logging_check is changed
- - logging is changed
- - "ufw_logging.stdout == 'Logging: on (low)'"
- - logging_idem is not changed
- - logging_idem_check is not changed
- - "ufw_logging_change.stdout == 'Logging: on (full)'"
- - logging_change is changed
- - logging_change_check is changed
-
-# ############################################
-- name: Default (check mode)
- ufw:
- default: reject
- direction: incoming
- check_mode: yes
- register: default_check
-- name: Default
- ufw:
- default: reject
- direction: incoming
- register: default
-- name: Get defaults
- shell: |
- ufw status verbose | grep "^Default:"
- register: ufw_defaults
- environment:
- LC_ALL: C
-- name: Default (idempotency)
- ufw:
- default: reject
- direction: incoming
- register: default_idem
-- name: Default (idempotency, check mode)
- ufw:
- default: reject
- direction: incoming
- check_mode: yes
- register: default_idem_check
-- name: Default (change, check mode)
- ufw:
- default: allow
- direction: incoming
- check_mode: yes
- register: default_change_check
-- name: Default (change)
- ufw:
- default: allow
- direction: incoming
- register: default_change
-- name: Get defaults
- shell: |
- ufw status verbose | grep "^Default:"
- register: ufw_defaults_change
- environment:
- LC_ALL: C
-- name: Default (change again)
- ufw:
- default: deny
- direction: incoming
- register: default_change_2
-- name: Default (change incoming implicitly, check mode)
- ufw:
- default: allow
- check_mode: yes
- register: default_change_implicit_check
-- name: Default (change incoming implicitly)
- ufw:
- default: allow
- register: default_change_implicit
-- name: Get defaults
- shell: |
- ufw status verbose | grep "^Default:"
- register: ufw_defaults_change_implicit
- environment:
- LC_ALL: C
-- name: Default (change incoming implicitly, idempotent, check mode)
- ufw:
- default: allow
- check_mode: yes
- register: default_change_implicit_idem_check
-- name: Default (change incoming implicitly, idempotent)
- ufw:
- default: allow
- register: default_change_implicit_idem
-- assert:
- that:
- - default_check is changed
- - default is changed
- - "'reject (incoming)' in ufw_defaults.stdout"
- - default_idem is not changed
- - default_idem_check is not changed
- - default_change_check is changed
- - default_change is changed
- - "'allow (incoming)' in ufw_defaults_change.stdout"
- - default_change_2 is changed
- - default_change_implicit_check is changed
- - default_change_implicit is changed
- - default_change_implicit_idem_check is not changed
- - default_change_implicit_idem is not changed
- - "'allow (incoming)' in ufw_defaults_change_implicit.stdout"
diff --git a/test/integration/targets/incidental_ufw/tasks/tests/insert_relative_to.yml b/test/integration/targets/incidental_ufw/tasks/tests/insert_relative_to.yml
deleted file mode 100644
index 3bb44a0e27..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/tests/insert_relative_to.yml
+++ /dev/null
@@ -1,80 +0,0 @@
----
-- name: Enable
- ufw:
- state: enabled
- register: enable
-
-# ## CREATE RULES ############################
-- name: ipv4
- ufw:
- rule: deny
- port: 22
- to_ip: 0.0.0.0
-- name: ipv4
- ufw:
- rule: deny
- port: 23
- to_ip: 0.0.0.0
-
-- name: ipv6
- ufw:
- rule: deny
- port: 122
- to_ip: "::"
-- name: ipv6
- ufw:
- rule: deny
- port: 123
- to_ip: "::"
-
-- name: first-ipv4
- ufw:
- rule: deny
- port: 10
- to_ip: 0.0.0.0
- insert: 0
- insert_relative_to: first-ipv4
-- name: last-ipv4
- ufw:
- rule: deny
- port: 11
- to_ip: 0.0.0.0
- insert: 0
- insert_relative_to: last-ipv4
-
-- name: first-ipv6
- ufw:
- rule: deny
- port: 110
- to_ip: "::"
- insert: 0
- insert_relative_to: first-ipv6
-- name: last-ipv6
- ufw:
- rule: deny
- port: 111
- to_ip: "::"
- insert: 0
- insert_relative_to: last-ipv6
-
-# ## CHECK RESULT ############################
-- name: Get rules
- shell: |
- ufw status | grep DENY | cut -f 1-2 -d ' ' | grep -E "^(0\.0\.0\.0|::) [123]+"
- # Note that there was also a rule "ff02::fb mDNS" on at least one CI run;
- # to ignore these, the extra filtering (grepping for DENY and the regex) makes
- # sure to remove all rules not added here.
- register: ufw_status
-- assert:
- that:
- - ufw_status.stdout_lines == expected_stdout
- vars:
- expected_stdout:
- - "0.0.0.0 10"
- - "0.0.0.0 22"
- - "0.0.0.0 11"
- - "0.0.0.0 23"
- - ":: 110"
- - ":: 122"
- - ":: 111"
- - ":: 123"
diff --git a/test/integration/targets/incidental_ufw/tasks/tests/interface.yml b/test/integration/targets/incidental_ufw/tasks/tests/interface.yml
deleted file mode 100644
index 776a72f879..0000000000
--- a/test/integration/targets/incidental_ufw/tasks/tests/interface.yml
+++ /dev/null
@@ -1,81 +0,0 @@
-- name: Enable
- ufw:
- state: enabled
-
-- name: Route with interface in and out
- ufw:
- rule: allow
- route: yes
- interface_in: foo
- interface_out: bar
- proto: tcp
- from_ip: 1.1.1.1
- to_ip: 8.8.8.8
- from_port: 1111
- to_port: 2222
-
-- name: Route with interface in
- ufw:
- rule: allow
- route: yes
- interface_in: foo
- proto: tcp
- from_ip: 1.1.1.1
- from_port: 1111
-
-- name: Route with interface out
- ufw:
- rule: allow
- route: yes
- interface_out: bar
- proto: tcp
- from_ip: 1.1.1.1
- from_port: 1111
-
-- name: Non-route with interface in
- ufw:
- rule: allow
- interface_in: foo
- proto: tcp
- from_ip: 1.1.1.1
- from_port: 3333
-
-- name: Non-route with interface out
- ufw:
- rule: allow
- interface_out: bar
- proto: tcp
- from_ip: 1.1.1.1
- from_port: 4444
-
-- name: Check result
- shell: ufw status |grep -E '(ALLOW|DENY|REJECT|LIMIT)' |sed -E 's/[ \t]+/ /g'
- register: ufw_status
-
-- assert:
- that:
- - '"8.8.8.8 2222/tcp on bar ALLOW FWD 1.1.1.1 1111/tcp on foo " in stdout'
- - '"Anywhere ALLOW FWD 1.1.1.1 1111/tcp on foo " in stdout'
- - '"Anywhere on bar ALLOW FWD 1.1.1.1 1111/tcp " in stdout'
- - '"Anywhere on foo ALLOW 1.1.1.1 3333/tcp " in stdout'
- - '"Anywhere ALLOW OUT 1.1.1.1 4444/tcp on bar " in stdout'
- vars:
- stdout: '{{ ufw_status.stdout_lines }}'
-
-- name: Non-route with interface_in and interface_out
- ufw:
- rule: allow
- interface_in: foo
- interface_out: bar
- proto: tcp
- from_ip: 1.1.1.1
- from_port: 1111
- to_ip: 8.8.8.8
- to_port: 2222
- ignore_errors: yes
- register: ufw_non_route_iface
-
-- assert:
- that:
- - ufw_non_route_iface is failed
- - '"Only route rules" in ufw_non_route_iface.msg'
diff --git a/test/integration/targets/incidental_vmware_guest_custom_attributes/aliases b/test/integration/targets/incidental_vmware_guest_custom_attributes/aliases
deleted file mode 100644
index 0eb73d761d..0000000000
--- a/test/integration/targets/incidental_vmware_guest_custom_attributes/aliases
+++ /dev/null
@@ -1,3 +0,0 @@
-cloud/vcenter
-shippable/vcenter/incidental
-needs/target/incidental_vmware_prepare_tests
diff --git a/test/integration/targets/incidental_vmware_guest_custom_attributes/tasks/main.yml b/test/integration/targets/incidental_vmware_guest_custom_attributes/tasks/main.yml
deleted file mode 100644
index c9f6bdb41f..0000000000
--- a/test/integration/targets/incidental_vmware_guest_custom_attributes/tasks/main.yml
+++ /dev/null
@@ -1,110 +0,0 @@
-# Test code for the vmware_guest_custom_attributes module.
-# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-# TODO: Current pinned version of vcsim does not support custom fields
-# commenting testcase below
-- import_role:
- name: incidental_vmware_prepare_tests
- vars:
- setup_attach_host: true
- setup_datastore: true
- setup_virtualmachines: true
-- name: Add custom attribute to the given virtual machine
- vmware_guest_custom_attributes:
- validate_certs: False
- hostname: '{{ vcenter_hostname }}'
- username: '{{ vcenter_username }}'
- password: '{{ vcenter_password }}'
- datacenter: "{{ dc1 }}"
- name: "{{ virtual_machines[0].name }}"
- folder: "{{ virtual_machines[0].folder }}"
- state: present
- attributes:
- - name: 'sample_1'
- value: 'sample_1_value'
- - name: 'sample_2'
- value: 'sample_2_value'
- - name: 'sample_3'
- value: 'sample_3_value'
- register: guest_info_0001
-
-- debug: var=guest_info_0001
-
-- assert:
- that:
- - guest_info_0001 is changed
-
-- name: Add custom attribute to the given virtual machine again
- vmware_guest_custom_attributes:
- validate_certs: False
- hostname: '{{ vcenter_hostname }}'
- username: '{{ vcenter_username }}'
- password: '{{ vcenter_password }}'
- datacenter: "{{ dc1 }}"
- name: "{{ virtual_machines[0].name }}"
- folder: "{{ virtual_machines[0].folder }}"
- state: present
- attributes:
- - name: 'sample_1'
- value: 'sample_1_value'
- - name: 'sample_2'
- value: 'sample_2_value'
- - name: 'sample_3'
- value: 'sample_3_value'
- register: guest_info_0002
-
-- debug: var=guest_info_0002
-
-- assert:
- that:
- - not (guest_info_0002 is changed)
-
-- name: Remove custom attribute from the given virtual machine
- vmware_guest_custom_attributes:
- validate_certs: False
- hostname: '{{ vcenter_hostname }}'
- username: '{{ vcenter_username }}'
- password: '{{ vcenter_password }}'
- datacenter: "{{ dc1 }}"
- name: "{{ virtual_machines[0].name }}"
- folder: "{{ virtual_machines[0].folder }}"
- state: absent
- attributes:
- - name: 'sample_1'
- - name: 'sample_2'
- - name: 'sample_3'
- register: guest_info_0004
-
-- debug: msg="{{ guest_info_0004 }}"
-
-- assert:
- that:
- - "guest_info_0004.changed"
-
-# TODO: vcsim returns duplicate values, so removing custom attributes
-# results in a change. vCenter shows the correct behavior. Commenting this
-# out until this is supported by vcsim.
-- when: vcsim is not defined
- block:
- - name: Remove custom attribute from the given virtual machine again
- vmware_guest_custom_attributes:
- validate_certs: False
- hostname: '{{ vcenter_hostname }}'
- username: '{{ vcenter_username }}'
- password: '{{ vcenter_password }}'
- datacenter: "{{ dc1 }}"
- name: "{{ virtual_machines[0].name }}"
- folder: "{{ virtual_machines[0].folder }}"
- state: absent
- attributes:
- - name: 'sample_1'
- - name: 'sample_2'
- - name: 'sample_3'
- register: guest_info_0005
-
- - debug: var=guest_info_0005
-
- - assert:
- that:
- - not (guest_info_0005 is changed)
diff --git a/test/integration/targets/incidental_vyos_logging/aliases b/test/integration/targets/incidental_vyos_logging/aliases
deleted file mode 100644
index fae06ba0e7..0000000000
--- a/test/integration/targets/incidental_vyos_logging/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-shippable/vyos/incidental
-network/vyos
diff --git a/test/integration/targets/incidental_vyos_logging/defaults/main.yaml b/test/integration/targets/incidental_vyos_logging/defaults/main.yaml
deleted file mode 100644
index 9ef5ba5165..0000000000
--- a/test/integration/targets/incidental_vyos_logging/defaults/main.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-testcase: "*"
-test_items: []
diff --git a/test/integration/targets/incidental_vyos_logging/tasks/cli.yaml b/test/integration/targets/incidental_vyos_logging/tasks/cli.yaml
deleted file mode 100644
index 22a71d96e6..0000000000
--- a/test/integration/targets/incidental_vyos_logging/tasks/cli.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-- name: collect all cli test cases
- find:
- paths: "{{ role_path }}/tests/cli"
- patterns: "{{ testcase }}.yaml"
- register: test_cases
- delegate_to: localhost
-
-- name: set test_items
- set_fact: test_items="{{ test_cases.files | map(attribute='path') | list }}"
-
-- name: run test case (connection=ansible.netcommon.network_cli)
- include: "{{ test_case_to_run }} ansible_connection=ansible.netcommon.network_cli"
- with_items: "{{ test_items }}"
- loop_control:
- loop_var: test_case_to_run
-
-- name: run test case (connection=local)
- include: "{{ test_case_to_run }} ansible_connection=local"
- with_first_found: "{{ test_items }}"
- loop_control:
- loop_var: test_case_to_run
diff --git a/test/integration/targets/incidental_vyos_logging/tasks/main.yaml b/test/integration/targets/incidental_vyos_logging/tasks/main.yaml
deleted file mode 100644
index d4cf26fc4a..0000000000
--- a/test/integration/targets/incidental_vyos_logging/tasks/main.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
----
-- {include: cli.yaml, tags: ['cli']}
diff --git a/test/integration/targets/incidental_vyos_logging/tests/cli/basic.yaml b/test/integration/targets/incidental_vyos_logging/tests/cli/basic.yaml
deleted file mode 100644
index d588456485..0000000000
--- a/test/integration/targets/incidental_vyos_logging/tests/cli/basic.yaml
+++ /dev/null
@@ -1,126 +0,0 @@
----
-- debug: msg="START cli/basic.yaml on connection={{ ansible_connection }}"
-
-- name: set-up logging
- vyos.vyos.vyos_logging:
- dest: console
- facility: all
- level: info
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set system syslog console facility all level info" in result.commands'
-
-- name: set-up logging again (idempotent)
- vyos.vyos.vyos_logging:
- dest: console
- facility: all
- level: info
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: file logging
- vyos.vyos.vyos_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set system syslog file test facility all level notice" in result.commands'
-
-- name: file logging again (idempotent)
- vyos.vyos.vyos_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: delete logging
- vyos.vyos.vyos_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete system syslog file test facility all level notice" in result.commands'
-
-- name: delete logging again (idempotent)
- vyos.vyos.vyos_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: Add logging collections
- vyos.vyos.vyos_logging:
- aggregate:
- - {dest: file, name: test1, facility: all, level: info}
- - {dest: file, name: test2, facility: news, level: debug}
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set system syslog file test1 facility all level info" in result.commands'
- - '"set system syslog file test2 facility news level debug" in result.commands'
-
-- name: Add and remove logging collections with overrides
- vyos.vyos.vyos_logging:
- aggregate:
- - {dest: console, facility: all, level: info}
- - {dest: file, name: test1, facility: all, level: info, state: absent}
- - {dest: console, facility: daemon, level: warning}
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete system syslog file test1 facility all level info" in result.commands'
- - '"set system syslog console facility daemon level warning" in result.commands'
-
-- name: Remove logging collections
- vyos.vyos.vyos_logging:
- aggregate:
- - {dest: console, facility: all, level: info}
- - {dest: console, facility: daemon, level: warning}
- - {dest: file, name: test2, facility: news, level: debug}
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete system syslog console facility all level info" in result.commands'
- - '"delete system syslog console facility daemon level warning" in result.commands'
- - '"delete system syslog file test2 facility news level debug" in result.commands'
diff --git a/test/integration/targets/incidental_vyos_logging/tests/cli/net_logging.yaml b/test/integration/targets/incidental_vyos_logging/tests/cli/net_logging.yaml
deleted file mode 100644
index 7940dd86ea..0000000000
--- a/test/integration/targets/incidental_vyos_logging/tests/cli/net_logging.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
----
-- debug: msg="START vyos cli/net_logging.yaml on connection={{ ansible_connection }}"
-
-# Add minimal testcase to check args are passed correctly to
-# implementation module and module run is successful.
-
-- name: delete logging - setup
- ansible.netcommon.net_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: absent
- register: result
-
-- name: file logging using platform agnostic module
- ansible.netcommon.net_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set system syslog file test facility all level notice" in result.commands'
-
-- name: delete logging - teardown
- ansible.netcommon.net_logging:
- dest: file
- name: test
- facility: all
- level: notice
- state: absent
- register: result
-
-- debug: msg="END vyos cli/net_logging.yaml on connection={{ ansible_connection }}"
diff --git a/test/integration/targets/incidental_vyos_static_route/aliases b/test/integration/targets/incidental_vyos_static_route/aliases
deleted file mode 100644
index fae06ba0e7..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-shippable/vyos/incidental
-network/vyos
diff --git a/test/integration/targets/incidental_vyos_static_route/defaults/main.yaml b/test/integration/targets/incidental_vyos_static_route/defaults/main.yaml
deleted file mode 100644
index 9ef5ba5165..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/defaults/main.yaml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-testcase: "*"
-test_items: []
diff --git a/test/integration/targets/incidental_vyos_static_route/tasks/cli.yaml b/test/integration/targets/incidental_vyos_static_route/tasks/cli.yaml
deleted file mode 100644
index 22a71d96e6..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/tasks/cli.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-- name: collect all cli test cases
- find:
- paths: "{{ role_path }}/tests/cli"
- patterns: "{{ testcase }}.yaml"
- register: test_cases
- delegate_to: localhost
-
-- name: set test_items
- set_fact: test_items="{{ test_cases.files | map(attribute='path') | list }}"
-
-- name: run test case (connection=ansible.netcommon.network_cli)
- include: "{{ test_case_to_run }} ansible_connection=ansible.netcommon.network_cli"
- with_items: "{{ test_items }}"
- loop_control:
- loop_var: test_case_to_run
-
-- name: run test case (connection=local)
- include: "{{ test_case_to_run }} ansible_connection=local"
- with_first_found: "{{ test_items }}"
- loop_control:
- loop_var: test_case_to_run
diff --git a/test/integration/targets/incidental_vyos_static_route/tasks/main.yaml b/test/integration/targets/incidental_vyos_static_route/tasks/main.yaml
deleted file mode 100644
index d4cf26fc4a..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/tasks/main.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
----
-- {include: cli.yaml, tags: ['cli']}
diff --git a/test/integration/targets/incidental_vyos_static_route/tests/cli/basic.yaml b/test/integration/targets/incidental_vyos_static_route/tests/cli/basic.yaml
deleted file mode 100644
index 4b1ef1c682..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/tests/cli/basic.yaml
+++ /dev/null
@@ -1,120 +0,0 @@
----
-- debug: msg="START cli/basic.yaml on connection={{ ansible_connection }}"
-
-- name: create static route
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set protocols static route 172.24.0.0/24 next-hop 192.168.42.64" in result.commands'
-
-- name: create static route again (idempotent)
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0
- mask: 24
- next_hop: 192.168.42.64
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: modify admin distance of static route
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- admin_distance: 1
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set protocols static route 172.24.0.0/24 next-hop 192.168.42.64 distance 1" in result.commands'
-
-- name: modify admin distance of static route again (idempotent)
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0
- mask: 24
- next_hop: 192.168.42.64
- admin_distance: 1
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: delete static route
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- admin_distance: 1
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete protocols static route 172.24.0.0/24" in result.commands'
-
-- name: delete static route again (idempotent)
- vyos.vyos.vyos_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- admin_distance: 1
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == false'
-
-- name: Add static route collections
- vyos.vyos.vyos_static_route:
- aggregate:
- - {prefix: 172.24.1.0/24, next_hop: 192.168.42.64}
- - {prefix: 172.24.2.0, mask: 24, next_hop: 192.168.42.64}
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set protocols static route 172.24.1.0/24 next-hop 192.168.42.64" in result.commands'
- - '"set protocols static route 172.24.2.0/24 next-hop 192.168.42.64" in result.commands'
-
-- name: Add and remove static route collections with overrides
- vyos.vyos.vyos_static_route:
- aggregate:
- - {prefix: 172.24.1.0/24, next_hop: 192.168.42.64}
- - {prefix: 172.24.2.0/24, next_hop: 192.168.42.64, state: absent}
- - {prefix: 172.24.3.0/24, next_hop: 192.168.42.64}
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete protocols static route 172.24.2.0/24" in result.commands'
- - '"set protocols static route 172.24.3.0/24 next-hop 192.168.42.64" in result.commands'
-
-- name: Remove static route collections
- vyos.vyos.vyos_static_route:
- aggregate:
- - {prefix: 172.24.1.0/24, next_hop: 192.168.42.64}
- - {prefix: 172.24.3.0/24, next_hop: 192.168.42.64}
- state: absent
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"delete protocols static route 172.24.1.0/24" in result.commands'
- - '"delete protocols static route 172.24.3.0/24" in result.commands'
diff --git a/test/integration/targets/incidental_vyos_static_route/tests/cli/net_static_route.yaml b/test/integration/targets/incidental_vyos_static_route/tests/cli/net_static_route.yaml
deleted file mode 100644
index 7f6906c510..0000000000
--- a/test/integration/targets/incidental_vyos_static_route/tests/cli/net_static_route.yaml
+++ /dev/null
@@ -1,33 +0,0 @@
----
-- debug: msg="START vyos cli/net_static_route.yaml on connection={{ ansible_connection }}"
-
-# Add minimal testcase to check args are passed correctly to
-# implementation module and module run is successful.
-
-- name: delete static route - setup
- ansible.netcommon.net_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- state: absent
- register: result
-
-- name: create static route using platform agnostic module
- ansible.netcommon.net_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- state: present
- register: result
-
-- assert:
- that:
- - 'result.changed == true'
- - '"set protocols static route 172.24.0.0/24 next-hop 192.168.42.64" in result.commands'
-
-- name: delete static route - teardown
- ansible.netcommon.net_static_route:
- prefix: 172.24.0.0/24
- next_hop: 192.168.42.64
- state: absent
- register: result
-
-- debug: msg="END vyos cli/net_static_route.yaml on connection={{ ansible_connection }}"
diff --git a/test/integration/targets/incidental_win_hosts/aliases b/test/integration/targets/incidental_win_hosts/aliases
deleted file mode 100644
index a5fc90dcf4..0000000000
--- a/test/integration/targets/incidental_win_hosts/aliases
+++ /dev/null
@@ -1,2 +0,0 @@
-shippable/windows/incidental
-windows
diff --git a/test/integration/targets/incidental_win_hosts/defaults/main.yml b/test/integration/targets/incidental_win_hosts/defaults/main.yml
deleted file mode 100644
index c6270216d6..0000000000
--- a/test/integration/targets/incidental_win_hosts/defaults/main.yml
+++ /dev/null
@@ -1,13 +0,0 @@
----
-test_win_hosts_cname: testhost
-test_win_hosts_ip: 192.168.168.1
-
-test_win_hosts_aliases_set:
- - alias1
- - alias2
- - alias3
- - alias4
-
-test_win_hosts_aliases_remove:
- - alias3
- - alias4
diff --git a/test/integration/targets/incidental_win_hosts/meta/main.yml b/test/integration/targets/incidental_win_hosts/meta/main.yml
deleted file mode 100644
index 9f37e96cd9..0000000000
--- a/test/integration/targets/incidental_win_hosts/meta/main.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-dependencies:
-- setup_remote_tmp_dir
diff --git a/test/integration/targets/incidental_win_hosts/tasks/main.yml b/test/integration/targets/incidental_win_hosts/tasks/main.yml
deleted file mode 100644
index 0997375f9f..0000000000
--- a/test/integration/targets/incidental_win_hosts/tasks/main.yml
+++ /dev/null
@@ -1,17 +0,0 @@
----
-- name: take a copy of the original hosts file
- win_copy:
- src: C:\Windows\System32\drivers\etc\hosts
- dest: '{{ remote_tmp_dir }}\hosts'
- remote_src: yes
-
-- block:
- - name: run tests
- include_tasks: tests.yml
-
- always:
- - name: restore hosts file
- win_copy:
- src: '{{ remote_tmp_dir }}\hosts'
- dest: C:\Windows\System32\drivers\etc\hosts
- remote_src: yes
diff --git a/test/integration/targets/incidental_win_hosts/tasks/tests.yml b/test/integration/targets/incidental_win_hosts/tasks/tests.yml
deleted file mode 100644
index a29e01a708..0000000000
--- a/test/integration/targets/incidental_win_hosts/tasks/tests.yml
+++ /dev/null
@@ -1,189 +0,0 @@
----
-
-- name: add a simple host with address
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- register: add_ip
-
-- assert:
- that:
- - "add_ip.changed == true"
-
-- name: get actual dns result
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ test_win_hosts_cname }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: add_ip_actual
-
-- assert:
- that:
- - "add_ip_actual.stdout_lines[0]|lower == 'true'"
-
-- name: add a simple host with ipv4 address (idempotent)
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- register: add_ip
-
-- assert:
- that:
- - "add_ip.changed == false"
-
-- name: remove simple host
- win_hosts:
- state: absent
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- register: remove_ip
-
-- assert:
- that:
- - "remove_ip.changed == true"
-
-- name: get actual dns result
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ test_win_hosts_cname}}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: remove_ip_actual
- failed_when: "remove_ip_actual.rc == 0"
-
-- assert:
- that:
- - "remove_ip_actual.stdout_lines[0]|lower == 'false'"
-
-- name: remove simple host (idempotent)
- win_hosts:
- state: absent
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- register: remove_ip
-
-- assert:
- that:
- - "remove_ip.changed == false"
-
-- name: add host and set aliases
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_set | union(test_win_hosts_aliases_remove) }}"
- action: set
- register: set_aliases
-
-- assert:
- that:
- - "set_aliases.changed == true"
-
-- name: get actual dns result for host
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ test_win_hosts_cname }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: set_aliases_actual_host
-
-- assert:
- that:
- - "set_aliases_actual_host.stdout_lines[0]|lower == 'true'"
-
-- name: get actual dns results for aliases
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ item }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: set_aliases_actual
- with_items: "{{ test_win_hosts_aliases_set | union(test_win_hosts_aliases_remove) }}"
-
-- assert:
- that:
- - "item.stdout_lines[0]|lower == 'true'"
- with_items: "{{ set_aliases_actual.results }}"
-
-- name: add host and set aliases (idempotent)
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_set | union(test_win_hosts_aliases_remove) }}"
- action: set
- register: set_aliases
-
-- assert:
- that:
- - "set_aliases.changed == false"
-
-- name: remove aliases from the list
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_remove }}"
- action: remove
- register: remove_aliases
-
-- assert:
- that:
- - "remove_aliases.changed == true"
-
-- name: get actual dns result for removed aliases
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ item }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: remove_aliases_removed_actual
- failed_when: "remove_aliases_removed_actual.rc == 0"
- with_items: "{{ test_win_hosts_aliases_remove }}"
-
-- assert:
- that:
- - "item.stdout_lines[0]|lower == 'false'"
- with_items: "{{ remove_aliases_removed_actual.results }}"
-
-- name: get actual dns result for remaining aliases
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ item }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: remove_aliases_remain_actual
- with_items: "{{ test_win_hosts_aliases_set | difference(test_win_hosts_aliases_remove) }}"
-
-- assert:
- that:
- - "item.stdout_lines[0]|lower == 'true'"
- with_items: "{{ remove_aliases_remain_actual.results }}"
-
-- name: remove aliases from the list (idempotent)
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_remove }}"
- action: remove
- register: remove_aliases
-
-- assert:
- that:
- - "remove_aliases.changed == false"
-
-- name: add aliases back
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_remove }}"
- action: add
- register: add_aliases
-
-- assert:
- that:
- - "add_aliases.changed == true"
-
-- name: get actual dns results for aliases
- win_shell: "try{ [array]$t = [Net.DNS]::GetHostEntry('{{ item }}') } catch { return 'false' } if ($t[0].HostName -eq '{{ test_win_hosts_cname }}' -and $t[0].AddressList[0].toString() -eq '{{ test_win_hosts_ip }}'){ return 'true' } else { return 'false' }"
- register: add_aliases_actual
- with_items: "{{ test_win_hosts_aliases_set | union(test_win_hosts_aliases_remove) }}"
-
-- assert:
- that:
- - "item.stdout_lines[0]|lower == 'true'"
- with_items: "{{ add_aliases_actual.results }}"
-
-- name: add aliases back (idempotent)
- win_hosts:
- state: present
- ip_address: "{{ test_win_hosts_ip }}"
- canonical_name: "{{ test_win_hosts_cname }}"
- aliases: "{{ test_win_hosts_aliases_remove }}"
- action: add
- register: add_aliases
-
-- assert:
- that:
- - "add_aliases.changed == false"
diff --git a/test/sanity/ignore.txt b/test/sanity/ignore.txt
index b3ff2275ca..cbf677e416 100644
--- a/test/sanity/ignore.txt
+++ b/test/sanity/ignore.txt
@@ -339,8 +339,6 @@ test/support/integration/plugins/module_utils/k8s/common.py metaclass-boilerplat
test/support/integration/plugins/module_utils/k8s/raw.py metaclass-boilerplate
test/support/integration/plugins/module_utils/mysql.py future-import-boilerplate
test/support/integration/plugins/module_utils/mysql.py metaclass-boilerplate
-test/support/integration/plugins/module_utils/net_tools/nios/api.py future-import-boilerplate
-test/support/integration/plugins/module_utils/net_tools/nios/api.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/postgres.py future-import-boilerplate
@@ -395,10 +393,6 @@ test/support/network-integration/collections/ansible_collections/vyos/vyos/plugi
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_config.py metaclass-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py future-import-boilerplate
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_facts.py metaclass-boilerplate
-test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py future-import-boilerplate
-test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py metaclass-boilerplate
-test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py future-import-boilerplate
-test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py metaclass-boilerplate
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
diff --git a/test/support/integration/plugins/connection/chroot.py b/test/support/integration/plugins/connection/chroot.py
deleted file mode 100644
index d95497b42b..0000000000
--- a/test/support/integration/plugins/connection/chroot.py
+++ /dev/null
@@ -1,208 +0,0 @@
-# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
-#
-# (c) 2013, Maykel Moya <mmoya@speedyrails.com>
-# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
-# Copyright (c) 2017 Ansible Project
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-DOCUMENTATION = """
- author: Maykel Moya <mmoya@speedyrails.com>
- connection: chroot
- short_description: Interact with local chroot
- description:
- - Run commands or put/fetch files to an existing chroot on the Ansible controller.
- version_added: "1.1"
- options:
- remote_addr:
- description:
- - The path of the chroot you want to access.
- default: inventory_hostname
- vars:
- - name: ansible_host
- executable:
- description:
- - User specified executable shell
- ini:
- - section: defaults
- key: executable
- env:
- - name: ANSIBLE_EXECUTABLE
- vars:
- - name: ansible_executable
- default: /bin/sh
- chroot_exe:
- version_added: '2.8'
- description:
- - User specified chroot binary
- ini:
- - section: chroot_connection
- key: exe
- env:
- - name: ANSIBLE_CHROOT_EXE
- vars:
- - name: ansible_chroot_exe
- default: chroot
-"""
-
-import os
-import os.path
-import subprocess
-import traceback
-
-from ansible.errors import AnsibleError
-from ansible.module_utils.basic import is_executable
-from ansible.module_utils.common.process import get_bin_path
-from ansible.module_utils.six.moves import shlex_quote
-from ansible.module_utils._text import to_bytes, to_native
-from ansible.plugins.connection import ConnectionBase, BUFSIZE
-from ansible.utils.display import Display
-
-display = Display()
-
-
-class Connection(ConnectionBase):
- ''' Local chroot based connections '''
-
- transport = 'chroot'
- has_pipelining = True
- # su currently has an undiagnosed issue with calculating the file
- # checksums (so copy, for instance, doesn't work right)
- # Have to look into that before re-enabling this
- has_tty = False
-
- default_user = 'root'
-
- def __init__(self, play_context, new_stdin, *args, **kwargs):
- super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
-
- self.chroot = self._play_context.remote_addr
-
- if os.geteuid() != 0:
- raise AnsibleError("chroot connection requires running as root")
-
- # we're running as root on the local system so do some
- # trivial checks for ensuring 'host' is actually a chroot'able dir
- if not os.path.isdir(self.chroot):
- raise AnsibleError("%s is not a directory" % self.chroot)
-
- chrootsh = os.path.join(self.chroot, 'bin/sh')
- # Want to check for a usable bourne shell inside the chroot.
- # is_executable() == True is sufficient. For symlinks it
- # gets really complicated really fast. So we punt on finding that
- # out. As long as it's a symlink we assume that it will work
- if not (is_executable(chrootsh) or (os.path.lexists(chrootsh) and os.path.islink(chrootsh))):
- raise AnsibleError("%s does not look like a chrootable dir (/bin/sh missing)" % self.chroot)
-
- def _connect(self):
- ''' connect to the chroot '''
- if os.path.isabs(self.get_option('chroot_exe')):
- self.chroot_cmd = self.get_option('chroot_exe')
- else:
- try:
- self.chroot_cmd = get_bin_path(self.get_option('chroot_exe'))
- except ValueError as e:
- raise AnsibleError(to_native(e))
-
- super(Connection, self)._connect()
- if not self._connected:
- display.vvv("THIS IS A LOCAL CHROOT DIR", host=self.chroot)
- self._connected = True
-
- def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
- ''' run a command on the chroot. This is only needed for implementing
- put_file() get_file() so that we don't have to read the whole file
- into memory.
-
- compared to exec_command() it loses some niceties like being able to
- return the process's exit code immediately.
- '''
- executable = self.get_option('executable')
- local_cmd = [self.chroot_cmd, self.chroot, executable, '-c', cmd]
-
- display.vvv("EXEC %s" % (local_cmd), host=self.chroot)
- local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
- p = subprocess.Popen(local_cmd, shell=False, stdin=stdin,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-
- return p
-
- def exec_command(self, cmd, in_data=None, sudoable=False):
- ''' run a command on the chroot '''
- super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
-
- p = self._buffered_exec_command(cmd)
-
- stdout, stderr = p.communicate(in_data)
- return (p.returncode, stdout, stderr)
-
- def _prefix_login_path(self, remote_path):
- ''' Make sure that we put files into a standard path
-
- If a path is relative, then we need to choose where to put it.
- ssh chooses $HOME but we aren't guaranteed that a home dir will
- exist in any given chroot. So for now we're choosing "/" instead.
- This also happens to be the former default.
-
- Can revisit using $HOME instead if it's a problem
- '''
- if not remote_path.startswith(os.path.sep):
- remote_path = os.path.join(os.path.sep, remote_path)
- return os.path.normpath(remote_path)
-
- def put_file(self, in_path, out_path):
- ''' transfer a file from local to chroot '''
- super(Connection, self).put_file(in_path, out_path)
- display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)
-
- out_path = shlex_quote(self._prefix_login_path(out_path))
- try:
- with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
- if not os.fstat(in_file.fileno()).st_size:
- count = ' count=0'
- else:
- count = ''
- try:
- p = self._buffered_exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), stdin=in_file)
- except OSError:
- raise AnsibleError("chroot connection requires dd command in the chroot")
- try:
- stdout, stderr = p.communicate()
- except Exception:
- traceback.print_exc()
- raise AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
- if p.returncode != 0:
- raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
- except IOError:
- raise AnsibleError("file or module does not exist at: %s" % in_path)
-
- def fetch_file(self, in_path, out_path):
- ''' fetch a file from chroot to local '''
- super(Connection, self).fetch_file(in_path, out_path)
- display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)
-
- in_path = shlex_quote(self._prefix_login_path(in_path))
- try:
- p = self._buffered_exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE))
- except OSError:
- raise AnsibleError("chroot connection requires dd command in the chroot")
-
- with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
- try:
- chunk = p.stdout.read(BUFSIZE)
- while chunk:
- out_file.write(chunk)
- chunk = p.stdout.read(BUFSIZE)
- except Exception:
- traceback.print_exc()
- raise AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
- stdout, stderr = p.communicate()
- if p.returncode != 0:
- raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
-
- def close(self):
- ''' terminate the connection; nothing to do here '''
- super(Connection, self).close()
- self._connected = False
diff --git a/test/support/integration/plugins/lookup/hashi_vault.py b/test/support/integration/plugins/lookup/hashi_vault.py
deleted file mode 100644
index b90fe586ca..0000000000
--- a/test/support/integration/plugins/lookup/hashi_vault.py
+++ /dev/null
@@ -1,302 +0,0 @@
-# (c) 2015, Jonathan Davila <jonathan(at)davila.io>
-# (c) 2017 Ansible Project
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-DOCUMENTATION = """
- lookup: hashi_vault
- author: Jonathan Davila <jdavila(at)ansible.com>
- version_added: "2.0"
- short_description: retrieve secrets from HashiCorp's vault
- requirements:
- - hvac (python library)
- description:
- - retrieve secrets from HashiCorp's vault
- notes:
- - Due to a current limitation in the HVAC library there won't necessarily be an error if a bad endpoint is specified.
- - As of Ansible 2.10, only the latest secret is returned when specifying a KV v2 path.
- options:
- secret:
- description: query you are making.
- required: True
- token:
- description: vault token.
- env:
- - name: VAULT_TOKEN
- url:
- description: URL to vault service.
- env:
- - name: VAULT_ADDR
- default: 'http://127.0.0.1:8200'
- username:
- description: Authentication user name.
- password:
- description: Authentication password.
- role_id:
- description: Role id for a vault AppRole auth.
- env:
- - name: VAULT_ROLE_ID
- secret_id:
- description: Secret id for a vault AppRole auth.
- env:
- - name: VAULT_SECRET_ID
- auth_method:
- description:
- - Authentication method to be used.
- - C(userpass) is added in version 2.8.
- env:
- - name: VAULT_AUTH_METHOD
- choices:
- - userpass
- - ldap
- - approle
- mount_point:
- description: vault mount point, only required if you have a custom mount point.
- default: ldap
- ca_cert:
- description: path to certificate to use for authentication.
- aliases: [ cacert ]
- validate_certs:
- description: controls verification and validation of SSL certificates; you usually only want to turn this off when using self-signed certificates.
- type: boolean
- default: True
- namespace:
- version_added: "2.8"
- description: namespace where secrets reside. Requires HVAC 0.7.0+ and Vault 0.11+.
-"""
-
-EXAMPLES = """
-- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}"
-
-- name: Return all secrets from a path
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}"
-
-- name: Vault that requires authentication via LDAP
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=ldap mount_point=ldap username=myuser password=mypas url=http://myvault:8200')}}"
-
-- name: Vault that requires authentication via username and password
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=userpass username=myuser password=mypas url=http://myvault:8200')}}"
-
-- name: Using an ssl vault
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hola:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=https://myvault:8200 validate_certs=False')}}"
-
-- name: using certificate auth
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hi:value token=xxxx-xxx-xxx url=https://myvault:8200 validate_certs=True cacert=/cacert/path/ca.pem')}}"
-
-- name: authenticate with a Vault app role
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=approle role_id=myroleid secret_id=mysecretid url=http://myvault:8200')}}"
-
-- name: Return all secrets from a path in a namespace
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200 namespace=teama/admins')}}"
-
-# When using KV v2 the PATH should include "data" between the secret engine mount and path (e.g. "secret/data/:path")
-# see: https://www.vaultproject.io/api/secret/kv/kv-v2.html#read-secret-version
-- name: Return latest KV v2 secret from path
- debug:
- msg: "{{ lookup('hashi_vault', 'secret=secret/data/hello token=my_vault_token url=http://myvault_url:8200') }}"
-
-
-"""
-
-RETURN = """
-_raw:
- description:
- - secret(s) requested
-"""
-
-import os
-
-from ansible.errors import AnsibleError
-from ansible.module_utils.parsing.convert_bool import boolean
-from ansible.plugins.lookup import LookupBase
-
-HAS_HVAC = False
-try:
- import hvac
- HAS_HVAC = True
-except ImportError:
- HAS_HVAC = False
-
-
-ANSIBLE_HASHI_VAULT_ADDR = 'http://127.0.0.1:8200'
-
-if os.getenv('VAULT_ADDR') is not None:
- ANSIBLE_HASHI_VAULT_ADDR = os.environ['VAULT_ADDR']
-
-
-class HashiVault:
- def __init__(self, **kwargs):
-
- self.url = kwargs.get('url', ANSIBLE_HASHI_VAULT_ADDR)
- self.namespace = kwargs.get('namespace', None)
- self.avail_auth_method = ['approle', 'userpass', 'ldap']
-
- # split secret arg, which has format 'secret/hello:value' into secret='secret/hello' and secret_field='value'
- s = kwargs.get('secret')
- if s is None:
- raise AnsibleError("No secret specified for hashi_vault lookup")
-
- s_f = s.rsplit(':', 1)
- self.secret = s_f[0]
- if len(s_f) >= 2:
- self.secret_field = s_f[1]
- else:
- self.secret_field = ''
-
- self.verify = self.boolean_or_cacert(kwargs.get('validate_certs', True), kwargs.get('cacert', ''))
-
- # If a particular backend is asked for (and its method exists) we call it, otherwise drop through to using
- # token auth. This means if a particular auth backend is requested and a token is also given, then we
- # ignore the token and attempt authentication against the specified backend.
- #
- # to enable a new auth backend, simply add a new 'def auth_<type>' method below.
- #
- self.auth_method = kwargs.get('auth_method', os.environ.get('VAULT_AUTH_METHOD'))
- self.verify = self.boolean_or_cacert(kwargs.get('validate_certs', True), kwargs.get('cacert', ''))
- if self.auth_method and self.auth_method != 'token':
- try:
- if self.namespace is not None:
- self.client = hvac.Client(url=self.url, verify=self.verify, namespace=self.namespace)
- else:
- self.client = hvac.Client(url=self.url, verify=self.verify)
- # prefixing with auth_ to limit which methods can be accessed
- getattr(self, 'auth_' + self.auth_method)(**kwargs)
- except AttributeError:
- raise AnsibleError("Authentication method '%s' not supported."
- " Available options are %r" % (self.auth_method, self.avail_auth_method))
- else:
- self.token = kwargs.get('token', os.environ.get('VAULT_TOKEN', None))
- if self.token is None and os.environ.get('HOME'):
- token_filename = os.path.join(
- os.environ.get('HOME'),
- '.vault-token'
- )
- if os.path.exists(token_filename):
- with open(token_filename) as token_file:
- self.token = token_file.read().strip()
-
- if self.token is None:
- raise AnsibleError("No Vault Token specified")
-
- if self.namespace is not None:
- self.client = hvac.Client(url=self.url, token=self.token, verify=self.verify, namespace=self.namespace)
- else:
- self.client = hvac.Client(url=self.url, token=self.token, verify=self.verify)
-
- if not self.client.is_authenticated():
- raise AnsibleError("Invalid Hashicorp Vault Token Specified for hashi_vault lookup")
-
- def get(self):
- data = self.client.read(self.secret)
-
- # Check response for KV v2 fields and flatten nested secret data.
- #
- # https://vaultproject.io/api/secret/kv/kv-v2.html#sample-response-1
- try:
- # sentinel field checks
- check_dd = data['data']['data']
- check_md = data['data']['metadata']
- # unwrap nested data
- data = data['data']
- except KeyError:
- pass
-
- if data is None:
- raise AnsibleError("The secret %s doesn't seem to exist for hashi_vault lookup" % self.secret)
-
- if self.secret_field == '':
- return data['data']
-
- if self.secret_field not in data['data']:
-            raise AnsibleError("The secret %s does not contain the field '%s' for hashi_vault lookup" % (self.secret, self.secret_field))
-
- return data['data'][self.secret_field]
-
- def check_params(self, **kwargs):
- username = kwargs.get('username')
- if username is None:
- raise AnsibleError("Authentication method %s requires a username" % self.auth_method)
-
- password = kwargs.get('password')
- if password is None:
- raise AnsibleError("Authentication method %s requires a password" % self.auth_method)
-
- mount_point = kwargs.get('mount_point')
-
- return username, password, mount_point
-
- def auth_userpass(self, **kwargs):
- username, password, mount_point = self.check_params(**kwargs)
- if mount_point is None:
- mount_point = 'userpass'
-
- self.client.auth_userpass(username, password, mount_point=mount_point)
-
- def auth_ldap(self, **kwargs):
- username, password, mount_point = self.check_params(**kwargs)
- if mount_point is None:
- mount_point = 'ldap'
-
- self.client.auth.ldap.login(username, password, mount_point=mount_point)
-
- def boolean_or_cacert(self, validate_certs, cacert):
-        ''' return a bool or cacert '''
-        validate_certs = boolean(validate_certs, strict=False)
- if validate_certs is True:
- if cacert != '':
- return cacert
- else:
- return True
- else:
- return False
-
- def auth_approle(self, **kwargs):
- role_id = kwargs.get('role_id', os.environ.get('VAULT_ROLE_ID', None))
- if role_id is None:
- raise AnsibleError("Authentication method app role requires a role_id")
-
- secret_id = kwargs.get('secret_id', os.environ.get('VAULT_SECRET_ID', None))
- if secret_id is None:
- raise AnsibleError("Authentication method app role requires a secret_id")
-
- self.client.auth_approle(role_id, secret_id)
-
-
-class LookupModule(LookupBase):
- def run(self, terms, variables=None, **kwargs):
- if not HAS_HVAC:
- raise AnsibleError("Please pip install hvac to use the hashi_vault lookup module.")
-
- vault_args = terms[0].split()
- vault_dict = {}
- ret = []
-
- for param in vault_args:
- try:
- key, value = param.split('=')
- except ValueError:
- raise AnsibleError("hashi_vault lookup plugin needs key=value pairs, but received %s" % terms)
- vault_dict[key] = value
-
- if 'ca_cert' in vault_dict.keys():
- vault_dict['cacert'] = vault_dict['ca_cert']
- vault_dict.pop('ca_cert', None)
-
- vault_conn = HashiVault(**vault_dict)
-
- for term in terms:
- key = term.split()[0]
- value = vault_conn.get()
- ret.append(value)
-
- return ret
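
For context on what the removed lookup did: each term is a whitespace-separated string of key=value pairs, and the secret argument may carry an optional field after the last ':'. A minimal standalone sketch of that parsing, assuming nothing beyond the standard library (the term string and names below are illustrative, not part of the plugin):

    def parse_term(term):
        # 'secret=secret/data/hello:greeting token=t url=http://host:8200' -> dict of parameters
        params = {}
        for pair in term.split():
            key, value = pair.split('=', 1)  # maxsplit added here; the plugin used a bare split('=')
            params[key] = value
        return params

    def split_secret(secret):
        # 'secret/hello:value' -> ('secret/hello', 'value'); no field -> ('secret/hello', '')
        parts = secret.rsplit(':', 1)
        return (parts[0], parts[1]) if len(parts) == 2 else (parts[0], '')

    params = parse_term('secret=secret/data/hello:greeting token=my_vault_token url=http://myvault_url:8200')
    print(split_secret(params['secret']))  # ('secret/data/hello', 'greeting')
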
diff --git a/test/support/integration/plugins/module_utils/hcloud.py b/test/support/integration/plugins/module_utils/hcloud.py
deleted file mode 100644
index 932b0c5294..0000000000
--- a/test/support/integration/plugins/module_utils/hcloud.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright: (c) 2019, Hetzner Cloud GmbH <info@hetzner-cloud.de>
-
-# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
-
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-from ansible.module_utils.ansible_release import __version__
-from ansible.module_utils.basic import env_fallback, missing_required_lib
-
-try:
- import hcloud
-
- HAS_HCLOUD = True
-except ImportError:
- HAS_HCLOUD = False
-
-
-class Hcloud(object):
- def __init__(self, module, represent):
- self.module = module
- self.represent = represent
- self.result = {"changed": False, self.represent: None}
- if not HAS_HCLOUD:
- module.fail_json(msg=missing_required_lib("hcloud-python"))
- self._build_client()
-
- def _build_client(self):
- self.client = hcloud.Client(
- token=self.module.params["api_token"],
- api_endpoint=self.module.params["endpoint"],
- application_name="ansible-module",
- application_version=__version__,
- )
-
- def _mark_as_changed(self):
- self.result["changed"] = True
-
- @staticmethod
- def base_module_arguments():
- return {
- "api_token": {
- "type": "str",
- "required": True,
- "fallback": (env_fallback, ["HCLOUD_TOKEN"]),
- "no_log": True,
- },
- "endpoint": {"type": "str", "default": "https://api.hetzner.cloud/v1"},
- }
-
- def _prepare_result(self):
- """Prepare the result for every module
-
- :return: dict
- """
- return {}
-
- def get_result(self):
- if getattr(self, self.represent) is not None:
- self.result[self.represent] = self._prepare_result()
- return self.result
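
A hypothetical sketch of how a module built on the removed Hcloud helper was wired together: merge base_module_arguments() into the argument spec, instantiate the helper, and return get_result(). The module options, the import path, and the 'server' attribute below are assumptions for illustration, not taken from the removed code:

    from ansible.module_utils.basic import AnsibleModule
    from ansible.module_utils.hcloud import Hcloud  # import path is assumed; this is the helper deleted above

    def main():
        argument_spec = dict(name=dict(type='str'))
        # base_module_arguments() supplies api_token (no_log, HCLOUD_TOKEN fallback) and endpoint
        argument_spec.update(Hcloud.base_module_arguments())
        module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
        helper = Hcloud(module, represent='server')  # builds an hcloud.Client from api_token/endpoint
        helper.server = None  # real modules populate this attribute from the API
        module.exit_json(**helper.get_result())

    if __name__ == '__main__':
        main()
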
diff --git a/test/support/integration/plugins/module_utils/net_tools/nios/__init__.py b/test/support/integration/plugins/module_utils/net_tools/nios/__init__.py
deleted file mode 100644
index e69de29bb2..0000000000
--- a/test/support/integration/plugins/module_utils/net_tools/nios/__init__.py
+++ /dev/null
diff --git a/test/support/integration/plugins/module_utils/net_tools/nios/api.py b/test/support/integration/plugins/module_utils/net_tools/nios/api.py
deleted file mode 100644
index 2a759033e2..0000000000
--- a/test/support/integration/plugins/module_utils/net_tools/nios/api.py
+++ /dev/null
@@ -1,601 +0,0 @@
-# This code is part of Ansible, but is an independent component.
-# This particular file snippet, and this file snippet only, is BSD licensed.
-# Modules you write using this snippet, which is embedded dynamically by Ansible
-# still belong to the author of the module, and may assign their own license
-# to the complete work.
-#
-# (c) 2018 Red Hat Inc.
-#
-# Redistribution and use in source and binary forms, with or without modification,
-# are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
-# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
-# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-
-import os
-from functools import partial
-from ansible.module_utils._text import to_native
-from ansible.module_utils.six import iteritems
-from ansible.module_utils._text import to_text
-from ansible.module_utils.basic import env_fallback
-
-try:
- from infoblox_client.connector import Connector
- from infoblox_client.exceptions import InfobloxException
- HAS_INFOBLOX_CLIENT = True
-except ImportError:
- HAS_INFOBLOX_CLIENT = False
-
-# defining nios constants
-NIOS_DNS_VIEW = 'view'
-NIOS_NETWORK_VIEW = 'networkview'
-NIOS_HOST_RECORD = 'record:host'
-NIOS_IPV4_NETWORK = 'network'
-NIOS_IPV6_NETWORK = 'ipv6network'
-NIOS_ZONE = 'zone_auth'
-NIOS_PTR_RECORD = 'record:ptr'
-NIOS_A_RECORD = 'record:a'
-NIOS_AAAA_RECORD = 'record:aaaa'
-NIOS_CNAME_RECORD = 'record:cname'
-NIOS_MX_RECORD = 'record:mx'
-NIOS_SRV_RECORD = 'record:srv'
-NIOS_NAPTR_RECORD = 'record:naptr'
-NIOS_TXT_RECORD = 'record:txt'
-NIOS_NSGROUP = 'nsgroup'
-NIOS_IPV4_FIXED_ADDRESS = 'fixedaddress'
-NIOS_IPV6_FIXED_ADDRESS = 'ipv6fixedaddress'
-NIOS_NEXT_AVAILABLE_IP = 'func:nextavailableip'
-NIOS_IPV4_NETWORK_CONTAINER = 'networkcontainer'
-NIOS_IPV6_NETWORK_CONTAINER = 'ipv6networkcontainer'
-NIOS_MEMBER = 'member'
-
-NIOS_PROVIDER_SPEC = {
- 'host': dict(fallback=(env_fallback, ['INFOBLOX_HOST'])),
- 'username': dict(fallback=(env_fallback, ['INFOBLOX_USERNAME'])),
- 'password': dict(fallback=(env_fallback, ['INFOBLOX_PASSWORD']), no_log=True),
- 'validate_certs': dict(type='bool', default=False, fallback=(env_fallback, ['INFOBLOX_SSL_VERIFY']), aliases=['ssl_verify']),
- 'silent_ssl_warnings': dict(type='bool', default=True),
- 'http_request_timeout': dict(type='int', default=10, fallback=(env_fallback, ['INFOBLOX_HTTP_REQUEST_TIMEOUT'])),
- 'http_pool_connections': dict(type='int', default=10),
- 'http_pool_maxsize': dict(type='int', default=10),
- 'max_retries': dict(type='int', default=3, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES'])),
- 'wapi_version': dict(default='2.1', fallback=(env_fallback, ['INFOBLOX_WAP_VERSION'])),
- 'max_results': dict(type='int', default=1000, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES']))
-}
-
-
-def get_connector(*args, **kwargs):
- ''' Returns an instance of infoblox_client.connector.Connector
- :params args: positional arguments are silently ignored
- :params kwargs: dict that is passed to Connector init
- :returns: Connector
- '''
- if not HAS_INFOBLOX_CLIENT:
- raise Exception('infoblox-client is required but does not appear '
- 'to be installed. It can be installed using the '
- 'command `pip install infoblox-client`')
-
- if not set(kwargs.keys()).issubset(list(NIOS_PROVIDER_SPEC.keys()) + ['ssl_verify']):
- raise Exception('invalid or unsupported keyword argument for connector')
- for key, value in iteritems(NIOS_PROVIDER_SPEC):
- if key not in kwargs:
- # apply default values from NIOS_PROVIDER_SPEC since we cannot just
- # assume the provider values are coming from AnsibleModule
- if 'default' in value:
- kwargs[key] = value['default']
-
- # override any values with env variables unless they were
- # explicitly set
- env = ('INFOBLOX_%s' % key).upper()
- if env in os.environ:
- kwargs[key] = os.environ.get(env)
-
- if 'validate_certs' in kwargs.keys():
- kwargs['ssl_verify'] = kwargs['validate_certs']
- kwargs.pop('validate_certs', None)
-
- return Connector(kwargs)
-
-
-def normalize_extattrs(value):
- ''' Normalize extattrs field to expected format
- The module accepts extattrs as key/value pairs. This method will
- transform the key/value pairs into a structure suitable for
- sending across WAPI in the format of:
- extattrs: {
- key: {
- value: <value>
- }
- }
- '''
- return dict([(k, {'value': v}) for k, v in iteritems(value)])
-
-
-def flatten_extattrs(value):
- ''' Flatten the key/value struct for extattrs
- WAPI returns extattrs field as a dict in form of:
- extattrs: {
- key: {
- value: <value>
- }
- }
- This method will flatten the structure to:
- extattrs: {
- key: value
- }
- '''
- return dict([(k, v['value']) for k, v in iteritems(value)])
-
-
-def member_normalize(member_spec):
- ''' Transforms the member module arguments into a valid WAPI struct
- This function will transform the arguments into a structure that
- is a valid WAPI structure in the format of:
- {
- key: <value>,
- }
- It will remove any arguments that are set to None since WAPI will error on
- that condition.
-    The remainder of the value validation is performed by WAPI.
-    Some parameters in ib_spec are passed as a list in order to pass the validation for elements.
-    In this function, they are converted to a dictionary.
- '''
- member_elements = ['vip_setting', 'ipv6_setting', 'lan2_port_setting', 'mgmt_port_setting',
- 'pre_provisioning', 'network_setting', 'v6_network_setting',
- 'ha_port_setting', 'lan_port_setting', 'lan2_physical_setting',
- 'lan_ha_port_setting', 'mgmt_network_setting', 'v6_mgmt_network_setting']
- for key in member_spec.keys():
- if key in member_elements and member_spec[key] is not None:
- member_spec[key] = member_spec[key][0]
- if isinstance(member_spec[key], dict):
- member_spec[key] = member_normalize(member_spec[key])
- elif isinstance(member_spec[key], list):
- for x in member_spec[key]:
- if isinstance(x, dict):
- x = member_normalize(x)
- elif member_spec[key] is None:
- del member_spec[key]
- return member_spec
-
-
-class WapiBase(object):
- ''' Base class for implementing Infoblox WAPI API '''
- provider_spec = {'provider': dict(type='dict', options=NIOS_PROVIDER_SPEC)}
-
- def __init__(self, provider):
- self.connector = get_connector(**provider)
-
- def __getattr__(self, name):
- try:
- return self.__dict__[name]
- except KeyError:
- if name.startswith('_'):
- raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
- return partial(self._invoke_method, name)
-
- def _invoke_method(self, name, *args, **kwargs):
- try:
- method = getattr(self.connector, name)
- return method(*args, **kwargs)
- except InfobloxException as exc:
- if hasattr(self, 'handle_exception'):
- self.handle_exception(name, exc)
- else:
- raise
-
-
-class WapiLookup(WapiBase):
- ''' Implements WapiBase for lookup plugins '''
- def handle_exception(self, method_name, exc):
- if ('text' in exc.response):
- raise Exception(exc.response['text'])
- else:
- raise Exception(exc)
-
-
-class WapiInventory(WapiBase):
- ''' Implements WapiBase for dynamic inventory script '''
- pass
-
-
-class WapiModule(WapiBase):
- ''' Implements WapiBase for executing a NIOS module '''
- def __init__(self, module):
- self.module = module
- provider = module.params['provider']
- try:
- super(WapiModule, self).__init__(provider)
- except Exception as exc:
- self.module.fail_json(msg=to_text(exc))
-
- def handle_exception(self, method_name, exc):
- ''' Handles any exceptions raised
- This method will be called if an InfobloxException is raised for
-        any call to the instance of Connector and also in case of a generic
- exception. This method will then gracefully fail the module.
- :args exc: instance of InfobloxException
- '''
- if ('text' in exc.response):
- self.module.fail_json(
- msg=exc.response['text'],
- type=exc.response['Error'].split(':')[0],
- code=exc.response.get('code'),
- operation=method_name
- )
- else:
- self.module.fail_json(msg=to_native(exc))
-
- def run(self, ib_obj_type, ib_spec):
-        ''' Runs the module and performs configuration tasks
- :args ib_obj_type: the WAPI object type to operate against
- :args ib_spec: the specification for the WAPI object as a dict
- :returns: a results dict
- '''
-
- update = new_name = None
- state = self.module.params['state']
- if state not in ('present', 'absent'):
- self.module.fail_json(msg='state must be one of `present`, `absent`, got `%s`' % state)
-
- result = {'changed': False}
-
- obj_filter = dict([(k, self.module.params[k]) for k, v in iteritems(ib_spec) if v.get('ib_req')])
-
- # get object reference
- ib_obj_ref, update, new_name = self.get_object_ref(self.module, ib_obj_type, obj_filter, ib_spec)
- proposed_object = {}
- for key, value in iteritems(ib_spec):
- if self.module.params[key] is not None:
- if 'transform' in value:
- proposed_object[key] = value['transform'](self.module)
- else:
- proposed_object[key] = self.module.params[key]
-
-        # If configure_for_dns is set to False, remove the default dns view from the params, otherwise fail for a non-default view
- if not proposed_object.get('configure_for_dns') and proposed_object.get('view') == 'default'\
- and ib_obj_type == NIOS_HOST_RECORD:
- del proposed_object['view']
- elif not proposed_object.get('configure_for_dns') and proposed_object.get('view') != 'default'\
- and ib_obj_type == NIOS_HOST_RECORD:
- self.module.fail_json(msg='DNS Bypass is not allowed if DNS view is set other than \'default\'')
-
- if ib_obj_ref:
- if len(ib_obj_ref) > 1:
- for each in ib_obj_ref:
-                    # Check for an existing A_record with the same name as the input A_record, matched by IP
- if each.get('ipv4addr') and each.get('ipv4addr') == proposed_object.get('ipv4addr'):
- current_object = each
-                    # Check for an existing Host_record with the same name as the input Host_record, matched by IP
- elif each.get('ipv4addrs')[0].get('ipv4addr') and each.get('ipv4addrs')[0].get('ipv4addr')\
- == proposed_object.get('ipv4addrs')[0].get('ipv4addr'):
- current_object = each
- # Else set the current_object with input value
- else:
- current_object = obj_filter
- ref = None
- else:
- current_object = ib_obj_ref[0]
- if 'extattrs' in current_object:
- current_object['extattrs'] = flatten_extattrs(current_object['extattrs'])
- if current_object.get('_ref'):
- ref = current_object.pop('_ref')
- else:
- current_object = obj_filter
- ref = None
- # checks if the object type is member to normalize the attributes being passed
- if (ib_obj_type == NIOS_MEMBER):
- proposed_object = member_normalize(proposed_object)
-
- # checks if the name's field has been updated
- if update and new_name:
- proposed_object['name'] = new_name
-
- check_remove = []
- if (ib_obj_type == NIOS_HOST_RECORD):
-            # this check is for idempotency: if the same ip address is passed,
-            # the add param is removed; the same holds true for the remove case as well.
- if 'ipv4addrs' in [current_object and proposed_object]:
- for each in current_object['ipv4addrs']:
- if each['ipv4addr'] == proposed_object['ipv4addrs'][0]['ipv4addr']:
- if 'add' in proposed_object['ipv4addrs'][0]:
- del proposed_object['ipv4addrs'][0]['add']
- break
- check_remove += each.values()
- if proposed_object['ipv4addrs'][0]['ipv4addr'] not in check_remove:
- if 'remove' in proposed_object['ipv4addrs'][0]:
- del proposed_object['ipv4addrs'][0]['remove']
-
- res = None
- modified = not self.compare_objects(current_object, proposed_object)
- if 'extattrs' in proposed_object:
- proposed_object['extattrs'] = normalize_extattrs(proposed_object['extattrs'])
-
- # Checks if nios_next_ip param is passed in ipv4addrs/ipv4addr args
- proposed_object = self.check_if_nios_next_ip_exists(proposed_object)
-
- if state == 'present':
- if ref is None:
- if not self.module.check_mode:
- self.create_object(ib_obj_type, proposed_object)
- result['changed'] = True
- # Check if NIOS_MEMBER and the flag to call function create_token is set
- elif (ib_obj_type == NIOS_MEMBER) and (proposed_object['create_token']):
- proposed_object = None
- # the function creates a token that can be used by a pre-provisioned member to join the grid
- result['api_results'] = self.call_func('create_token', ref, proposed_object)
- result['changed'] = True
- elif modified:
- if 'ipv4addrs' in proposed_object:
- if ('add' not in proposed_object['ipv4addrs'][0]) and ('remove' not in proposed_object['ipv4addrs'][0]):
- self.check_if_recordname_exists(obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object)
-
- if (ib_obj_type in (NIOS_HOST_RECORD, NIOS_NETWORK_VIEW, NIOS_DNS_VIEW)):
- run_update = True
- proposed_object = self.on_update(proposed_object, ib_spec)
- if 'ipv4addrs' in proposed_object:
- if ('add' or 'remove') in proposed_object['ipv4addrs'][0]:
- run_update, proposed_object = self.check_if_add_remove_ip_arg_exists(proposed_object)
- if run_update:
- res = self.update_object(ref, proposed_object)
- result['changed'] = True
- else:
- res = ref
- if (ib_obj_type in (NIOS_A_RECORD, NIOS_AAAA_RECORD, NIOS_PTR_RECORD, NIOS_SRV_RECORD)):
- # popping 'view' key as update of 'view' is not supported with respect to a:record/aaaa:record/srv:record/ptr:record
- proposed_object = self.on_update(proposed_object, ib_spec)
- del proposed_object['view']
- if not self.module.check_mode:
- res = self.update_object(ref, proposed_object)
- result['changed'] = True
- elif 'network_view' in proposed_object:
- proposed_object.pop('network_view')
- result['changed'] = True
- if not self.module.check_mode and res is None:
- proposed_object = self.on_update(proposed_object, ib_spec)
- self.update_object(ref, proposed_object)
- result['changed'] = True
-
- elif state == 'absent':
- if ref is not None:
- if 'ipv4addrs' in proposed_object:
- if 'remove' in proposed_object['ipv4addrs'][0]:
- self.check_if_add_remove_ip_arg_exists(proposed_object)
- self.update_object(ref, proposed_object)
- result['changed'] = True
- elif not self.module.check_mode:
- self.delete_object(ref)
- result['changed'] = True
-
- return result
-
- def check_if_recordname_exists(self, obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object):
-        ''' Send POST request if the host record input name and the retrieved ref name are the same,
-            but the input IP and the retrieved IP are different'''
-
- if 'name' in (obj_filter and ib_obj_ref[0]) and ib_obj_type == NIOS_HOST_RECORD:
- obj_host_name = obj_filter['name']
- ref_host_name = ib_obj_ref[0]['name']
- if 'ipv4addrs' in (current_object and proposed_object):
- current_ip_addr = current_object['ipv4addrs'][0]['ipv4addr']
- proposed_ip_addr = proposed_object['ipv4addrs'][0]['ipv4addr']
- elif 'ipv6addrs' in (current_object and proposed_object):
- current_ip_addr = current_object['ipv6addrs'][0]['ipv6addr']
- proposed_ip_addr = proposed_object['ipv6addrs'][0]['ipv6addr']
-
- if obj_host_name == ref_host_name and current_ip_addr != proposed_ip_addr:
- self.create_object(ib_obj_type, proposed_object)
-
- def check_if_nios_next_ip_exists(self, proposed_object):
-        ''' Check if the nios_next_ip argument is passed in ipaddr while creating
-            a host record. If so, format the proposed object ipv4addrs and pass
-            func:nextavailableip plus the ipaddr range so the host record is created
-            with the next available ip in one call, avoiding any race condition '''
-
- if 'ipv4addrs' in proposed_object:
- if 'nios_next_ip' in proposed_object['ipv4addrs'][0]['ipv4addr']:
- ip_range = self.module._check_type_dict(proposed_object['ipv4addrs'][0]['ipv4addr'])['nios_next_ip']
- proposed_object['ipv4addrs'][0]['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
- elif 'ipv4addr' in proposed_object:
- if 'nios_next_ip' in proposed_object['ipv4addr']:
- ip_range = self.module._check_type_dict(proposed_object['ipv4addr'])['nios_next_ip']
- proposed_object['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
-
- return proposed_object
-
- def check_if_add_remove_ip_arg_exists(self, proposed_object):
- '''
-        Check whether the add/remove param is set to true in the args; if so,
-        update the proposed dictionary to add/remove the IP on the existing
-        host_record. If the user passes false with the argument, nothing is done.
-        :returns: True if the param was changed based on add/remove, along with the
-        changed proposed_object.
- '''
- update = False
- if 'add' in proposed_object['ipv4addrs'][0]:
- if proposed_object['ipv4addrs'][0]['add']:
- proposed_object['ipv4addrs+'] = proposed_object['ipv4addrs']
- del proposed_object['ipv4addrs']
- del proposed_object['ipv4addrs+'][0]['add']
- update = True
- else:
- del proposed_object['ipv4addrs'][0]['add']
- elif 'remove' in proposed_object['ipv4addrs'][0]:
- if proposed_object['ipv4addrs'][0]['remove']:
- proposed_object['ipv4addrs-'] = proposed_object['ipv4addrs']
- del proposed_object['ipv4addrs']
- del proposed_object['ipv4addrs-'][0]['remove']
- update = True
- else:
- del proposed_object['ipv4addrs'][0]['remove']
- return update, proposed_object
-
- def issubset(self, item, objects):
- ''' Checks if item is a subset of objects
- :args item: the subset item to validate
- :args objects: superset list of objects to validate against
- :returns: True if item is a subset of one entry in objects otherwise
- this method will return None
- '''
- for obj in objects:
- if isinstance(item, dict):
- if all(entry in obj.items() for entry in item.items()):
- return True
- else:
- if item in obj:
- return True
-
- def compare_objects(self, current_object, proposed_object):
- for key, proposed_item in iteritems(proposed_object):
- current_item = current_object.get(key)
-
- # if proposed has a key that current doesn't then the objects are
- # not equal and False will be immediately returned
- if current_item is None:
- return False
-
- elif isinstance(proposed_item, list):
- for subitem in proposed_item:
- if not self.issubset(subitem, current_item):
- return False
-
- elif isinstance(proposed_item, dict):
- return self.compare_objects(current_item, proposed_item)
-
- else:
- if current_item != proposed_item:
- return False
-
- return True
-
- def get_object_ref(self, module, ib_obj_type, obj_filter, ib_spec):
- ''' this function gets the reference object of pre-existing nios objects '''
-
- update = False
- old_name = new_name = None
- if ('name' in obj_filter):
- # gets and returns the current object based on name/old_name passed
- try:
- name_obj = self.module._check_type_dict(obj_filter['name'])
- old_name = name_obj['old_name']
- new_name = name_obj['new_name']
- except TypeError:
- name = obj_filter['name']
-
- if old_name and new_name:
- if (ib_obj_type == NIOS_HOST_RECORD):
- test_obj_filter = dict([('name', old_name), ('view', obj_filter['view'])])
- elif (ib_obj_type in (NIOS_AAAA_RECORD, NIOS_A_RECORD)):
- test_obj_filter = obj_filter
- else:
- test_obj_filter = dict([('name', old_name)])
- # get the object reference
- ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
- if ib_obj:
- obj_filter['name'] = new_name
- else:
- test_obj_filter['name'] = new_name
- ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
- update = True
- return ib_obj, update, new_name
- if (ib_obj_type == NIOS_HOST_RECORD):
- # to check only by name if dns bypassing is set
- if not obj_filter['configure_for_dns']:
- test_obj_filter = dict([('name', name)])
- else:
- test_obj_filter = dict([('name', name), ('view', obj_filter['view'])])
- elif (ib_obj_type == NIOS_IPV4_FIXED_ADDRESS or ib_obj_type == NIOS_IPV6_FIXED_ADDRESS and 'mac' in obj_filter):
- test_obj_filter = dict([['mac', obj_filter['mac']]])
- elif (ib_obj_type == NIOS_A_RECORD):
- # resolves issue where a_record with uppercase name was returning null and was failing
- test_obj_filter = obj_filter
- test_obj_filter['name'] = test_obj_filter['name'].lower()
- # resolves issue where multiple a_records with same name and different IP address
- try:
- ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
- ipaddr = ipaddr_obj['old_ipv4addr']
- except TypeError:
- ipaddr = obj_filter['ipv4addr']
- test_obj_filter['ipv4addr'] = ipaddr
- elif (ib_obj_type == NIOS_TXT_RECORD):
- # resolves issue where multiple txt_records with same name and different text
- test_obj_filter = obj_filter
- try:
- text_obj = self.module._check_type_dict(obj_filter['text'])
- txt = text_obj['old_text']
- except TypeError:
- txt = obj_filter['text']
- test_obj_filter['text'] = txt
-            # if test_obj_filter is empty, copy the passed obj_filter
- else:
- test_obj_filter = obj_filter
- ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
- elif (ib_obj_type == NIOS_A_RECORD):
- # resolves issue where multiple a_records with same name and different IP address
- test_obj_filter = obj_filter
- try:
- ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
- ipaddr = ipaddr_obj['old_ipv4addr']
- except TypeError:
- ipaddr = obj_filter['ipv4addr']
- test_obj_filter['ipv4addr'] = ipaddr
- ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
- elif (ib_obj_type == NIOS_TXT_RECORD):
- # resolves issue where multiple txt_records with same name and different text
- test_obj_filter = obj_filter
- try:
- text_obj = self.module._check_type_dict(obj_filter['text'])
- txt = text_obj['old_text']
- except TypeError:
- txt = obj_filter['text']
- test_obj_filter['text'] = txt
- ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
- elif (ib_obj_type == NIOS_ZONE):
- # del key 'restart_if_needed' as nios_zone get_object fails with the key present
- temp = ib_spec['restart_if_needed']
- del ib_spec['restart_if_needed']
- ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
- # reinstate restart_if_needed if ib_obj is none, meaning there's no existing nios_zone ref
- if not ib_obj:
- ib_spec['restart_if_needed'] = temp
- elif (ib_obj_type == NIOS_MEMBER):
- # del key 'create_token' as nios_member get_object fails with the key present
- temp = ib_spec['create_token']
- del ib_spec['create_token']
- ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
- if temp:
- # reinstate 'create_token' key
- ib_spec['create_token'] = temp
- else:
- ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
- return ib_obj, update, new_name
-
- def on_update(self, proposed_object, ib_spec):
-        ''' Event called before the update is sent to the API endpoint
- This method will allow the final proposed object to be changed
- and/or keys filtered before it is sent to the API endpoint to
- be processed.
-        :args proposed_object: A dict item that will be encoded and sent to
- the API endpoint with the updated data structure
- :returns: updated object to be sent to API endpoint
- '''
- keys = set()
- for key, value in iteritems(proposed_object):
- update = ib_spec[key].get('update', True)
- if not update:
- keys.add(key)
- return dict([(k, v) for k, v in iteritems(proposed_object) if k not in keys])
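
The extattrs helpers above convert between the module's flat key/value form and the nested form WAPI expects. A standalone round-trip sketch of the same transform (the sample data is made up):

    def normalize_extattrs(value):
        # {'Site': 'HQ'} -> {'Site': {'value': 'HQ'}}
        return dict((k, {'value': v}) for k, v in value.items())

    def flatten_extattrs(value):
        # {'Site': {'value': 'HQ'}} -> {'Site': 'HQ'}
        return dict((k, v['value']) for k, v in value.items())

    attrs = {'Site': 'HQ', 'Owner': 'netops'}
    wapi_form = normalize_extattrs(attrs)
    assert flatten_extattrs(wapi_form) == attrs
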
diff --git a/test/support/integration/plugins/modules/aws_step_functions_state_machine.py b/test/support/integration/plugins/modules/aws_step_functions_state_machine.py
deleted file mode 100644
index 329ee4283d..0000000000
--- a/test/support/integration/plugins/modules/aws_step_functions_state_machine.py
+++ /dev/null
@@ -1,232 +0,0 @@
-#!/usr/bin/python
-# Copyright (c) 2019, Tom De Keyser (@tdekeyser)
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-
-__metaclass__ = type
-
-ANSIBLE_METADATA = {
- 'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'
-}
-
-DOCUMENTATION = '''
----
-module: aws_step_functions_state_machine
-
-short_description: Manage AWS Step Functions state machines
-
-version_added: "2.10"
-
-description:
- - Create, update and delete state machines in AWS Step Functions.
- - Calling the module in C(state=present) for an existing AWS Step Functions state machine
- will attempt to update the state machine definition, IAM Role, or tags with the provided data.
-
-options:
- name:
- description:
- - Name of the state machine
- required: true
- type: str
- definition:
- description:
- - The Amazon States Language definition of the state machine. See
- U(https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) for more
- information on the Amazon States Language.
- - "This parameter is required when C(state=present)."
- type: json
- role_arn:
- description:
- - The ARN of the IAM Role that will be used by the state machine for its executions.
- - "This parameter is required when C(state=present)."
- type: str
- state:
- description:
- - Desired state for the state machine
- default: present
- choices: [ present, absent ]
- type: str
- tags:
- description:
- - A hash/dictionary of tags to add to the new state machine or to add/remove from an existing one.
- type: dict
- purge_tags:
- description:
-      - If yes, existing tags will be purged from the resource to match exactly what is defined by the I(tags) parameter.
- If the I(tags) parameter is not set then tags will not be modified.
- default: yes
- type: bool
-
-extends_documentation_fragment:
- - aws
- - ec2
-
-author:
- - Tom De Keyser (@tdekeyser)
-'''
-
-EXAMPLES = '''
-# Create a new AWS Step Functions state machine
-- name: Setup HelloWorld state machine
- aws_step_functions_state_machine:
- name: "HelloWorldStateMachine"
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: arn:aws:iam::987654321012:role/service-role/invokeLambdaStepFunctionsRole
- tags:
- project: helloWorld
-
-# Update an existing state machine
-- name: Change IAM Role and tags of HelloWorld state machine
- aws_step_functions_state_machine:
- name: HelloWorldStateMachine
- definition: "{{ lookup('file','state_machine.json') }}"
- role_arn: arn:aws:iam::987654321012:role/service-role/anotherStepFunctionsRole
- tags:
- otherTag: aDifferentTag
-
-# Remove the AWS Step Functions state machine
-- name: Delete HelloWorld state machine
- aws_step_functions_state_machine:
- name: HelloWorldStateMachine
- state: absent
-'''
-
-RETURN = '''
-state_machine_arn:
- description: ARN of the AWS Step Functions state machine
- type: str
- returned: always
-'''
-
-from ansible.module_utils.aws.core import AnsibleAWSModule
-from ansible.module_utils.ec2 import ansible_dict_to_boto3_tag_list, AWSRetry, compare_aws_tags, boto3_tag_list_to_ansible_dict
-
-try:
- from botocore.exceptions import ClientError, BotoCoreError
-except ImportError:
- pass # caught by AnsibleAWSModule
-
-
-def manage_state_machine(state, sfn_client, module):
- state_machine_arn = get_state_machine_arn(sfn_client, module)
-
- if state == 'present':
- if state_machine_arn is None:
- create(sfn_client, module)
- else:
- update(state_machine_arn, sfn_client, module)
- elif state == 'absent':
- if state_machine_arn is not None:
- remove(state_machine_arn, sfn_client, module)
-
- check_mode(module, msg='State is up-to-date.')
- module.exit_json(changed=False)
-
-
-def create(sfn_client, module):
- check_mode(module, msg='State machine would be created.', changed=True)
-
- tags = module.params.get('tags')
- sfn_tags = ansible_dict_to_boto3_tag_list(tags, tag_name_key_name='key', tag_value_key_name='value') if tags else []
-
- state_machine = sfn_client.create_state_machine(
- name=module.params.get('name'),
- definition=module.params.get('definition'),
- roleArn=module.params.get('role_arn'),
- tags=sfn_tags
- )
- module.exit_json(changed=True, state_machine_arn=state_machine.get('stateMachineArn'))
-
-
-def remove(state_machine_arn, sfn_client, module):
- check_mode(module, msg='State machine would be deleted: {0}'.format(state_machine_arn), changed=True)
-
- sfn_client.delete_state_machine(stateMachineArn=state_machine_arn)
- module.exit_json(changed=True, state_machine_arn=state_machine_arn)
-
-
-def update(state_machine_arn, sfn_client, module):
- tags_to_add, tags_to_remove = compare_tags(state_machine_arn, sfn_client, module)
-
- if params_changed(state_machine_arn, sfn_client, module) or tags_to_add or tags_to_remove:
- check_mode(module, msg='State machine would be updated: {0}'.format(state_machine_arn), changed=True)
-
- sfn_client.update_state_machine(
- stateMachineArn=state_machine_arn,
- definition=module.params.get('definition'),
- roleArn=module.params.get('role_arn')
- )
- sfn_client.untag_resource(
- resourceArn=state_machine_arn,
- tagKeys=tags_to_remove
- )
- sfn_client.tag_resource(
- resourceArn=state_machine_arn,
- tags=ansible_dict_to_boto3_tag_list(tags_to_add, tag_name_key_name='key', tag_value_key_name='value')
- )
-
- module.exit_json(changed=True, state_machine_arn=state_machine_arn)
-
-
-def compare_tags(state_machine_arn, sfn_client, module):
- new_tags = module.params.get('tags')
- current_tags = sfn_client.list_tags_for_resource(resourceArn=state_machine_arn).get('tags')
- return compare_aws_tags(boto3_tag_list_to_ansible_dict(current_tags), new_tags if new_tags else {}, module.params.get('purge_tags'))
-
-
-def params_changed(state_machine_arn, sfn_client, module):
- """
- Check whether the state machine definition or IAM Role ARN is different
- from the existing state machine parameters.
- """
- current = sfn_client.describe_state_machine(stateMachineArn=state_machine_arn)
- return current.get('definition') != module.params.get('definition') or current.get('roleArn') != module.params.get('role_arn')
-
-
-def get_state_machine_arn(sfn_client, module):
- """
- Finds the state machine ARN based on the name parameter. Returns None if
- there is no state machine with this name.
- """
- target_name = module.params.get('name')
- all_state_machines = sfn_client.list_state_machines(aws_retry=True).get('stateMachines')
-
- for state_machine in all_state_machines:
- if state_machine.get('name') == target_name:
- return state_machine.get('stateMachineArn')
-
-
-def check_mode(module, msg='', changed=False):
- if module.check_mode:
- module.exit_json(changed=changed, output=msg)
-
-
-def main():
- module_args = dict(
- name=dict(type='str', required=True),
- definition=dict(type='json'),
- role_arn=dict(type='str'),
- state=dict(choices=['present', 'absent'], default='present'),
- tags=dict(default=None, type='dict'),
- purge_tags=dict(default=True, type='bool'),
- )
- module = AnsibleAWSModule(
- argument_spec=module_args,
- required_if=[('state', 'present', ['role_arn']), ('state', 'present', ['definition'])],
- supports_check_mode=True
- )
-
- sfn_client = module.client('stepfunctions', retry_decorator=AWSRetry.jittered_backoff(retries=5))
- state = module.params.get('state')
-
- try:
- manage_state_machine(state, sfn_client, module)
- except (BotoCoreError, ClientError) as e:
- module.fail_json_aws(e, msg='Failed to manage state machine')
-
-
-if __name__ == '__main__':
- main()
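
The main() above relies on required_if to make definition and role_arn mandatory only when state=present, which is the kind of argspec behavior the new intentional tests exercise. A minimal sketch of the same pattern with a plain AnsibleModule (option names are reused from the module above; the single combined tuple is an equivalent form of the two entries it used):

    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                state=dict(choices=['present', 'absent'], default='present'),
                definition=dict(type='json'),
                role_arn=dict(type='str'),
            ),
            # when state == 'present', both definition and role_arn must be supplied
            required_if=[('state', 'present', ['definition', 'role_arn'])],
            supports_check_mode=True,
        )
        module.exit_json(changed=False)

    if __name__ == '__main__':
        main()
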
diff --git a/test/support/integration/plugins/modules/aws_step_functions_state_machine_execution.py b/test/support/integration/plugins/modules/aws_step_functions_state_machine_execution.py
deleted file mode 100644
index a6e0d7182d..0000000000
--- a/test/support/integration/plugins/modules/aws_step_functions_state_machine_execution.py
+++ /dev/null
@@ -1,197 +0,0 @@
-#!/usr/bin/python
-# Copyright (c) 2019, Prasad Katti (@prasadkatti)
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-
-__metaclass__ = type
-
-ANSIBLE_METADATA = {
- 'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'
-}
-
-DOCUMENTATION = '''
----
-module: aws_step_functions_state_machine_execution
-
-short_description: Start or stop execution of an AWS Step Functions state machine.
-
-version_added: "2.10"
-
-description:
- - Start or stop execution of a state machine in AWS Step Functions.
-
-options:
- action:
- description: Desired action (start or stop) for a state machine execution.
- default: start
- choices: [ start, stop ]
- type: str
- name:
- description: Name of the execution.
- type: str
- execution_input:
- description: The JSON input data for the execution.
- type: json
- default: {}
- state_machine_arn:
- description: The ARN of the state machine that will be executed.
- type: str
- execution_arn:
- description: The ARN of the execution you wish to stop.
- type: str
- cause:
- description: A detailed explanation of the cause for stopping the execution.
- type: str
- default: ''
- error:
- description: The error code of the failure to pass in when stopping the execution.
- type: str
- default: ''
-
-extends_documentation_fragment:
- - aws
- - ec2
-
-author:
- - Prasad Katti (@prasadkatti)
-'''
-
-EXAMPLES = '''
-- name: Start an execution of a state machine
- aws_step_functions_state_machine_execution:
- name: an_execution_name
- execution_input: '{ "IsHelloWorldExample": true }'
- state_machine_arn: "arn:aws:states:us-west-2:682285639423:stateMachine:HelloWorldStateMachine"
-
-- name: Stop an execution of a state machine
- aws_step_functions_state_machine_execution:
- action: stop
- execution_arn: "arn:aws:states:us-west-2:682285639423:execution:HelloWorldStateMachineCopy:a1e8e2b5-5dfe-d40e-d9e3-6201061047c8"
- cause: "cause of task failure"
- error: "error code of the failure"
-'''
-
-RETURN = '''
-execution_arn:
- description: ARN of the AWS Step Functions state machine execution.
- type: str
- returned: if action == start and changed == True
- sample: "arn:aws:states:us-west-2:682285639423:execution:HelloWorldStateMachineCopy:a1e8e2b5-5dfe-d40e-d9e3-6201061047c8"
-start_date:
- description: The date the execution is started.
- type: str
- returned: if action == start and changed == True
- sample: "2019-11-02T22:39:49.071000-07:00"
-stop_date:
- description: The date the execution is stopped.
- type: str
- returned: if action == stop
- sample: "2019-11-02T22:39:49.071000-07:00"
-'''
-
-
-from ansible.module_utils.aws.core import AnsibleAWSModule
-from ansible.module_utils.ec2 import camel_dict_to_snake_dict
-
-try:
- from botocore.exceptions import ClientError, BotoCoreError
-except ImportError:
- pass # caught by AnsibleAWSModule
-
-
-def start_execution(module, sfn_client):
- '''
- start_execution uses execution name to determine if a previous execution already exists.
-    If an execution by the provided name exists, client.start_execution will not be called.
- '''
-
- state_machine_arn = module.params.get('state_machine_arn')
- name = module.params.get('name')
- execution_input = module.params.get('execution_input')
-
- try:
- # list_executions is eventually consistent
- page_iterators = sfn_client.get_paginator('list_executions').paginate(stateMachineArn=state_machine_arn)
-
- for execution in page_iterators.build_full_result()['executions']:
- if name == execution['name']:
- check_mode(module, msg='State machine execution already exists.', changed=False)
- module.exit_json(changed=False)
-
- check_mode(module, msg='State machine execution would be started.', changed=True)
- res_execution = sfn_client.start_execution(
- stateMachineArn=state_machine_arn,
- name=name,
- input=execution_input
- )
- except (ClientError, BotoCoreError) as e:
- if e.response['Error']['Code'] == 'ExecutionAlreadyExists':
- # this will never be executed anymore
- module.exit_json(changed=False)
- module.fail_json_aws(e, msg="Failed to start execution.")
-
- module.exit_json(changed=True, **camel_dict_to_snake_dict(res_execution))
-
-
-def stop_execution(module, sfn_client):
-
- cause = module.params.get('cause')
- error = module.params.get('error')
- execution_arn = module.params.get('execution_arn')
-
- try:
- # describe_execution is eventually consistent
- execution_status = sfn_client.describe_execution(executionArn=execution_arn)['status']
- if execution_status != 'RUNNING':
- check_mode(module, msg='State machine execution is not running.', changed=False)
- module.exit_json(changed=False)
-
- check_mode(module, msg='State machine execution would be stopped.', changed=True)
- res = sfn_client.stop_execution(
- executionArn=execution_arn,
- cause=cause,
- error=error
- )
- except (ClientError, BotoCoreError) as e:
- module.fail_json_aws(e, msg="Failed to stop execution.")
-
- module.exit_json(changed=True, **camel_dict_to_snake_dict(res))
-
-
-def check_mode(module, msg='', changed=False):
- if module.check_mode:
- module.exit_json(changed=changed, output=msg)
-
-
-def main():
- module_args = dict(
- action=dict(choices=['start', 'stop'], default='start'),
- name=dict(type='str'),
- execution_input=dict(type='json', default={}),
- state_machine_arn=dict(type='str'),
- cause=dict(type='str', default=''),
- error=dict(type='str', default=''),
- execution_arn=dict(type='str')
- )
- module = AnsibleAWSModule(
- argument_spec=module_args,
- required_if=[('action', 'start', ['name', 'state_machine_arn']),
- ('action', 'stop', ['execution_arn']),
- ],
- supports_check_mode=True
- )
-
- sfn_client = module.client('stepfunctions')
-
- action = module.params.get('action')
- if action == "start":
- start_execution(module, sfn_client)
- else:
- stop_execution(module, sfn_client)
-
-
-if __name__ == '__main__':
- main()
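
Both start_execution and stop_execution exit with camel_dict_to_snake_dict applied to the boto3 response, so callers see snake_case keys in the module results. A small illustration of that conversion (the payload below is made up):

    from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict

    response = {
        'executionArn': 'arn:aws:states:us-west-2:123456789012:execution:Example:abc123',
        'startDate': '2019-11-02T22:39:49.071000-07:00',
    }
    print(camel_dict_to_snake_dict(response))
    # {'execution_arn': 'arn:aws:states:...:abc123', 'start_date': '2019-11-02T22:39:49.071000-07:00'}
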
diff --git a/test/support/integration/plugins/modules/consul_session.py b/test/support/integration/plugins/modules/consul_session.py
deleted file mode 100644
index 6802ebe64e..0000000000
--- a/test/support/integration/plugins/modules/consul_session.py
+++ /dev/null
@@ -1,284 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2015, Steve Gargan <steve.gargan@gmail.com>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = """
-module: consul_session
-short_description: Manipulate consul sessions
-description:
- - Allows the addition, modification and deletion of sessions in a consul
- cluster. These sessions can then be used in conjunction with key value pairs
-    to implement distributed locks. In-depth documentation for working with
- sessions can be found at http://www.consul.io/docs/internals/sessions.html
-requirements:
- - python-consul
- - requests
-version_added: "2.0"
-author:
-- Steve Gargan (@sgargan)
-options:
- id:
- description:
- - ID of the session, required when I(state) is either C(info) or
- C(remove).
- type: str
- state:
- description:
-      - Whether the session should be present, i.e. created if it doesn't
-        exist, or absent, i.e. removed if present. If created, the I(id) for the
- session is returned in the output. If C(absent), I(id) is
- required to remove the session. Info for a single session, all the
- sessions for a node or all available sessions can be retrieved by
- specifying C(info), C(node) or C(list) for the I(state); for C(node)
- or C(info), the node I(name) or session I(id) is required as parameter.
- choices: [ absent, info, list, node, present ]
- type: str
- default: present
- name:
- description:
- - The name that should be associated with the session. Required when
- I(state=node) is used.
- type: str
- delay:
- description:
- - The optional lock delay that can be attached to the session when it
- is created. Locks for invalidated sessions ar blocked from being
-        is created. Locks for invalidated sessions are blocked from being
- type: int
- default: 15
- node:
- description:
-      - The name of the node with which the session will be associated.
-        By default this is the name of the agent.
- type: str
- datacenter:
- description:
- - The name of the datacenter in which the session exists or should be
- created.
- type: str
- checks:
- description:
- - Checks that will be used to verify the session health. If
- all the checks fail, the session will be invalidated and any locks
-        associated with the session will be released and can be acquired once
- the associated lock delay has expired.
- type: list
- host:
- description:
-      - The host of the consul agent, defaults to localhost.
- type: str
- default: localhost
- port:
- description:
- - The port on which the consul agent is running.
- type: int
- default: 8500
- scheme:
- description:
- - The protocol scheme on which the consul agent is running.
- type: str
- default: http
- version_added: "2.1"
- validate_certs:
- description:
- - Whether to verify the TLS certificate of the consul agent.
- type: bool
- default: True
- version_added: "2.1"
- behavior:
- description:
- - The optional behavior that can be attached to the session when it
- is created. This controls the behavior when a session is invalidated.
- choices: [ delete, release ]
- type: str
- default: release
- version_added: "2.2"
-"""
-
-EXAMPLES = '''
-- name: register basic session with consul
- consul_session:
- name: session1
-
-- name: register a session with an existing check
- consul_session:
- name: session_with_check
- checks:
- - existing_check_name
-
-- name: register a session with lock_delay
- consul_session:
- name: session_with_delay
-    delay: 20
-
-- name: retrieve info about session by id
- consul_session:
- id: session_id
- state: info
-
-- name: retrieve active sessions
- consul_session:
- state: list
-'''
-
-try:
- import consul
- from requests.exceptions import ConnectionError
- python_consul_installed = True
-except ImportError:
- python_consul_installed = False
-
-from ansible.module_utils.basic import AnsibleModule
-
-
-def execute(module):
-
- state = module.params.get('state')
-
- if state in ['info', 'list', 'node']:
- lookup_sessions(module)
- elif state == 'present':
- update_session(module)
- else:
- remove_session(module)
-
-
-def lookup_sessions(module):
-
- datacenter = module.params.get('datacenter')
-
- state = module.params.get('state')
- consul_client = get_consul_api(module)
- try:
- if state == 'list':
- sessions_list = consul_client.session.list(dc=datacenter)
- # Ditch the index, this can be grabbed from the results
- if sessions_list and len(sessions_list) >= 2:
- sessions_list = sessions_list[1]
- module.exit_json(changed=True,
- sessions=sessions_list)
- elif state == 'node':
- node = module.params.get('node')
- sessions = consul_client.session.node(node, dc=datacenter)
- module.exit_json(changed=True,
- node=node,
- sessions=sessions)
- elif state == 'info':
- session_id = module.params.get('id')
-
- session_by_id = consul_client.session.info(session_id, dc=datacenter)
- module.exit_json(changed=True,
- session_id=session_id,
- sessions=session_by_id)
-
- except Exception as e:
- module.fail_json(msg="Could not retrieve session info %s" % e)
-
-
-def update_session(module):
-
- name = module.params.get('name')
- delay = module.params.get('delay')
- checks = module.params.get('checks')
- datacenter = module.params.get('datacenter')
- node = module.params.get('node')
- behavior = module.params.get('behavior')
-
- consul_client = get_consul_api(module)
-
- try:
- session = consul_client.session.create(
- name=name,
- behavior=behavior,
- node=node,
- lock_delay=delay,
- dc=datacenter,
- checks=checks
- )
- module.exit_json(changed=True,
- session_id=session,
- name=name,
- behavior=behavior,
- delay=delay,
- checks=checks,
- node=node)
- except Exception as e:
- module.fail_json(msg="Could not create/update session %s" % e)
-
-
-def remove_session(module):
- session_id = module.params.get('id')
-
- consul_client = get_consul_api(module)
-
- try:
- consul_client.session.destroy(session_id)
-
- module.exit_json(changed=True,
- session_id=session_id)
- except Exception as e:
- module.fail_json(msg="Could not remove session with id '%s' %s" % (
- session_id, e))
-
-
-def get_consul_api(module):
- return consul.Consul(host=module.params.get('host'),
- port=module.params.get('port'),
- scheme=module.params.get('scheme'),
- verify=module.params.get('validate_certs'))
-
-
-def test_dependencies(module):
- if not python_consul_installed:
- module.fail_json(msg="python-consul required for this module. "
- "see https://python-consul.readthedocs.io/en/latest/#installation")
-
-
-def main():
- argument_spec = dict(
- checks=dict(type='list'),
- delay=dict(type='int', default='15'),
- behavior=dict(type='str', default='release', choices=['release', 'delete']),
- host=dict(type='str', default='localhost'),
- port=dict(type='int', default=8500),
- scheme=dict(type='str', default='http'),
- validate_certs=dict(type='bool', default=True),
- id=dict(type='str'),
- name=dict(type='str'),
- node=dict(type='str'),
- state=dict(type='str', default='present', choices=['absent', 'info', 'list', 'node', 'present']),
- datacenter=dict(type='str'),
- )
-
- module = AnsibleModule(
- argument_spec=argument_spec,
- required_if=[
- ('state', 'node', ['name']),
- ('state', 'info', ['id']),
- ('state', 'remove', ['id']),
- ],
- supports_check_mode=False
- )
-
- test_dependencies(module)
-
- try:
- execute(module)
- except ConnectionError as e:
- module.fail_json(msg='Could not connect to consul agent at %s:%s, error was %s' % (
- module.params.get('host'), module.params.get('port'), e))
- except Exception as e:
- module.fail_json(msg=str(e))
-
-
-if __name__ == '__main__':
- main()
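
For reference, the python-consul calls the module above wraps boil down to a handful of session operations. A standalone sketch against a local agent (host, port, and session name are illustrative, and python-consul must be installed):

    import consul

    client = consul.Consul(host='localhost', port=8500, scheme='http', verify=True)
    session_id = client.session.create(name='session1', behavior='release', lock_delay=15)
    index, sessions = client.session.list()  # returns (index, data); the module keeps only the data
    client.session.destroy(session_id)
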
diff --git a/test/support/integration/plugins/modules/cs_service_offering.py b/test/support/integration/plugins/modules/cs_service_offering.py
deleted file mode 100644
index 3b15fe7f1e..0000000000
--- a/test/support/integration/plugins/modules/cs_service_offering.py
+++ /dev/null
@@ -1,583 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-#
-# (c) 2017, René Moser <mail@renemoser.net>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = '''
----
-module: cs_service_offering
-description:
- - Create and delete service offerings for guest and system VMs.
- - Update display_text of existing service offering.
-short_description: Manages service offerings on Apache CloudStack based clouds.
-version_added: '2.5'
-author: René Moser (@resmo)
-options:
- disk_bytes_read_rate:
- description:
- - Bytes read rate of the disk offering.
- type: int
- aliases: [ bytes_read_rate ]
- disk_bytes_write_rate:
- description:
- - Bytes write rate of the disk offering.
- type: int
- aliases: [ bytes_write_rate ]
- cpu_number:
- description:
- - The number of CPUs of the service offering.
- type: int
- cpu_speed:
- description:
- - The CPU speed of the service offering in MHz.
- type: int
- limit_cpu_usage:
- description:
- - Restrict the CPU usage to committed service offering.
- type: bool
- deployment_planner:
- description:
- - The deployment planner heuristics used to deploy a VM of this offering.
- - If not set, the value of global config I(vm.deployment.planner) is used.
- type: str
- display_text:
- description:
- - Display text of the service offering.
- - If not set, I(name) will be used as I(display_text) while creating.
- type: str
- domain:
- description:
- - Domain the service offering is related to.
- - Public for all domains and subdomains if not set.
- type: str
- host_tags:
- description:
- - The host tags for this service offering.
- type: list
- aliases:
- - host_tag
- hypervisor_snapshot_reserve:
- description:
- - Hypervisor snapshot reserve space as a percent of a volume.
- - Only for managed storage using Xen or VMware.
- type: int
- is_iops_customized:
- description:
- - Whether compute offering iops is custom or not.
- type: bool
- aliases: [ disk_iops_customized ]
- disk_iops_read_rate:
- description:
- - IO requests read rate of the disk offering.
- type: int
- disk_iops_write_rate:
- description:
- - IO requests write rate of the disk offering.
- type: int
- disk_iops_max:
- description:
- - Max. iops of the compute offering.
- type: int
- disk_iops_min:
- description:
- - Min. iops of the compute offering.
- type: int
- is_system:
- description:
- - Whether it is a system VM offering or not.
- type: bool
- default: no
- is_volatile:
- description:
- - Whether the virtual machine needs to be volatile or not.
-      - On every reboot of the VM, the root disk is detached, then destroyed, and a fresh root disk is created and attached to the VM.
- type: bool
- memory:
- description:
- - The total memory of the service offering in MB.
- type: int
- name:
- description:
- - Name of the service offering.
- type: str
- required: true
- network_rate:
- description:
- - Data transfer rate in Mb/s allowed.
- - Supported only for non-system offering and system offerings having I(system_vm_type=domainrouter).
- type: int
- offer_ha:
- description:
- - Whether HA is set for the service offering.
- type: bool
- default: no
- provisioning_type:
- description:
- - Provisioning type used to create volumes.
- type: str
- choices:
- - thin
- - sparse
- - fat
- service_offering_details:
- description:
- - Details for planner, used to store specific parameters.
- - A list of dictionaries having keys C(key) and C(value).
- type: list
- state:
- description:
- - State of the service offering.
- type: str
- choices:
- - present
- - absent
- default: present
- storage_type:
- description:
- - The storage type of the service offering.
- type: str
- choices:
- - local
- - shared
- system_vm_type:
- description:
- - The system VM type.
- - Required if I(is_system=yes).
- type: str
- choices:
- - domainrouter
- - consoleproxy
- - secondarystoragevm
- storage_tags:
- description:
- - The storage tags for this service offering.
- type: list
- aliases:
- - storage_tag
- is_customized:
- description:
- - Whether the offering is customizable or not.
- type: bool
- version_added: '2.8'
-extends_documentation_fragment: cloudstack
-'''
-
-EXAMPLES = '''
-- name: Create a non-volatile compute service offering with local storage
- cs_service_offering:
- name: Micro
- display_text: Micro 512mb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 512
- host_tags: eco
- storage_type: local
- delegate_to: localhost
-
-- name: Create a volatile compute service offering with shared storage
- cs_service_offering:
- name: Tiny
- display_text: Tiny 1gb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 1024
- storage_type: shared
- is_volatile: yes
- host_tags: eco
- storage_tags: eco
- delegate_to: localhost
-
-- name: Create or update a volatile compute service offering with shared storage
- cs_service_offering:
- name: Tiny
- display_text: Tiny 1gb 1cpu
- cpu_number: 1
- cpu_speed: 2198
- memory: 1024
- storage_type: shared
- is_volatile: yes
- host_tags: eco
- storage_tags: eco
- delegate_to: localhost
-
-- name: Create or update a custom compute service offering
- cs_service_offering:
- name: custom
- display_text: custom compute offer
- is_customized: yes
- storage_type: shared
- host_tags: eco
- storage_tags: eco
- delegate_to: localhost
-
-- name: Remove a compute service offering
- cs_service_offering:
- name: Tiny
- state: absent
- delegate_to: localhost
-
-- name: Create or update a system offering for the console proxy
- cs_service_offering:
- name: System Offering for Console Proxy 2GB
- display_text: System Offering for Console Proxy 2GB RAM
- is_system: yes
- system_vm_type: consoleproxy
- cpu_number: 1
- cpu_speed: 2198
- memory: 2048
- storage_type: shared
- storage_tags: perf
- delegate_to: localhost
-
-- name: Remove a system offering
- cs_service_offering:
- name: System Offering for Console Proxy 2GB
- is_system: yes
- state: absent
- delegate_to: localhost
-'''
-
-RETURN = '''
----
-id:
- description: UUID of the service offering
- returned: success
- type: str
- sample: a6f7a5fc-43f8-11e5-a151-feff819cdc9f
-cpu_number:
- description: Number of CPUs in the service offering
- returned: success
- type: int
- sample: 4
-cpu_speed:
- description: Speed of CPUs in MHz in the service offering
- returned: success
- type: int
- sample: 2198
-disk_iops_max:
- description: Max iops of the disk offering
- returned: success
- type: int
- sample: 1000
-disk_iops_min:
- description: Min iops of the disk offering
- returned: success
- type: int
- sample: 500
-disk_bytes_read_rate:
- description: Bytes read rate of the service offering
- returned: success
- type: int
- sample: 1000
-disk_bytes_write_rate:
- description: Bytes write rate of the service offering
- returned: success
- type: int
- sample: 1000
-disk_iops_read_rate:
- description: IO requests per second read rate of the service offering
- returned: success
- type: int
- sample: 1000
-disk_iops_write_rate:
- description: IO requests per second write rate of the service offering
- returned: success
- type: int
- sample: 1000
-created:
- description: Date the offering was created
- returned: success
- type: str
- sample: 2017-11-19T10:48:59+0000
-display_text:
- description: Display text of the offering
- returned: success
- type: str
- sample: Micro 512mb 1cpu
-domain:
-  description: Domain the offering is in
- returned: success
- type: str
- sample: ROOT
-host_tags:
- description: List of host tags
- returned: success
- type: list
- sample: [ 'eco' ]
-storage_tags:
- description: List of storage tags
- returned: success
- type: list
- sample: [ 'eco' ]
-is_system:
- description: Whether the offering is for system VMs or not
- returned: success
- type: bool
- sample: false
-is_iops_customized:
- description: Whether the offering uses custom IOPS or not
- returned: success
- type: bool
- sample: false
-is_volatile:
- description: Whether the offering is volatile or not
- returned: success
- type: bool
- sample: false
-limit_cpu_usage:
-  description: Whether the CPU usage is restricted to the committed service offering
- returned: success
- type: bool
- sample: false
-memory:
- description: Memory of the system offering
- returned: success
- type: int
- sample: 512
-name:
- description: Name of the system offering
- returned: success
- type: str
- sample: Micro
-offer_ha:
- description: Whether HA support is enabled in the offering or not
- returned: success
- type: bool
- sample: false
-provisioning_type:
- description: Provisioning type used to create volumes
- returned: success
- type: str
- sample: thin
-storage_type:
- description: Storage type used to create volumes
- returned: success
- type: str
- sample: shared
-system_vm_type:
- description: System VM type of this offering
- returned: success
- type: str
- sample: consoleproxy
-service_offering_details:
-  description: Additional service offering details
- returned: success
- type: dict
- sample: "{'vgpuType': 'GRID K180Q','pciDevice':'Group of NVIDIA Corporation GK107GL [GRID K1] GPUs'}"
-network_rate:
- description: Data transfer rate in megabits per second allowed
- returned: success
- type: int
- sample: 1000
-is_customized:
- description: Whether the offering is customizable or not
- returned: success
- type: bool
- sample: false
- version_added: '2.8'
-'''
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.cloudstack import (
- AnsibleCloudStack,
- cs_argument_spec,
- cs_required_together,
-)
-
-
-class AnsibleCloudStackServiceOffering(AnsibleCloudStack):
-
- def __init__(self, module):
- super(AnsibleCloudStackServiceOffering, self).__init__(module)
- self.returns = {
- 'cpunumber': 'cpu_number',
- 'cpuspeed': 'cpu_speed',
- 'deploymentplanner': 'deployment_planner',
- 'diskBytesReadRate': 'disk_bytes_read_rate',
- 'diskBytesWriteRate': 'disk_bytes_write_rate',
- 'diskIopsReadRate': 'disk_iops_read_rate',
- 'diskIopsWriteRate': 'disk_iops_write_rate',
- 'maxiops': 'disk_iops_max',
- 'miniops': 'disk_iops_min',
- 'hypervisorsnapshotreserve': 'hypervisor_snapshot_reserve',
- 'iscustomized': 'is_customized',
- 'iscustomizediops': 'is_iops_customized',
- 'issystem': 'is_system',
- 'isvolatile': 'is_volatile',
- 'limitcpuuse': 'limit_cpu_usage',
- 'memory': 'memory',
- 'networkrate': 'network_rate',
- 'offerha': 'offer_ha',
- 'provisioningtype': 'provisioning_type',
- 'serviceofferingdetails': 'service_offering_details',
- 'storagetype': 'storage_type',
- 'systemvmtype': 'system_vm_type',
- 'tags': 'storage_tags',
- }
-
- def get_service_offering(self):
- args = {
- 'name': self.module.params.get('name'),
- 'domainid': self.get_domain(key='id'),
- 'issystem': self.module.params.get('is_system'),
- 'systemvmtype': self.module.params.get('system_vm_type'),
- }
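-        # The API call is filtered by name, domain and the system VM flags; only the first match is used.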
- service_offerings = self.query_api('listServiceOfferings', **args)
- if service_offerings:
- return service_offerings['serviceoffering'][0]
-
- def present_service_offering(self):
- service_offering = self.get_service_offering()
- if not service_offering:
- service_offering = self._create_offering(service_offering)
- else:
- service_offering = self._update_offering(service_offering)
-
- return service_offering
-
- def absent_service_offering(self):
- service_offering = self.get_service_offering()
- if service_offering:
- self.result['changed'] = True
- if not self.module.check_mode:
- args = {
- 'id': service_offering['id'],
- }
- self.query_api('deleteServiceOffering', **args)
- return service_offering
-
- def _create_offering(self, service_offering):
- self.result['changed'] = True
-
- system_vm_type = self.module.params.get('system_vm_type')
- is_system = self.module.params.get('is_system')
-
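-        # system_vm_type is only mandatory when creating a system VM offering.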
- required_params = []
- if is_system and not system_vm_type:
- required_params.append('system_vm_type')
- self.module.fail_on_missing_params(required_params=required_params)
-
- args = {
- 'name': self.module.params.get('name'),
- 'displaytext': self.get_or_fallback('display_text', 'name'),
- 'bytesreadrate': self.module.params.get('disk_bytes_read_rate'),
- 'byteswriterate': self.module.params.get('disk_bytes_write_rate'),
- 'cpunumber': self.module.params.get('cpu_number'),
- 'cpuspeed': self.module.params.get('cpu_speed'),
- 'customizediops': self.module.params.get('is_iops_customized'),
- 'deploymentplanner': self.module.params.get('deployment_planner'),
- 'domainid': self.get_domain(key='id'),
- 'hosttags': self.module.params.get('host_tags'),
- 'hypervisorsnapshotreserve': self.module.params.get('hypervisor_snapshot_reserve'),
- 'iopsreadrate': self.module.params.get('disk_iops_read_rate'),
- 'iopswriterate': self.module.params.get('disk_iops_write_rate'),
- 'maxiops': self.module.params.get('disk_iops_max'),
- 'miniops': self.module.params.get('disk_iops_min'),
- 'issystem': is_system,
- 'isvolatile': self.module.params.get('is_volatile'),
- 'memory': self.module.params.get('memory'),
- 'networkrate': self.module.params.get('network_rate'),
- 'offerha': self.module.params.get('offer_ha'),
- 'provisioningtype': self.module.params.get('provisioning_type'),
- 'serviceofferingdetails': self.module.params.get('service_offering_details'),
- 'storagetype': self.module.params.get('storage_type'),
- 'systemvmtype': system_vm_type,
- 'tags': self.module.params.get('storage_tags'),
- 'limitcpuuse': self.module.params.get('limit_cpu_usage'),
- 'customized': self.module.params.get('is_customized')
- }
- if not self.module.check_mode:
- res = self.query_api('createServiceOffering', **args)
- service_offering = res['serviceoffering']
- return service_offering
-
- def _update_offering(self, service_offering):
- args = {
- 'id': service_offering['id'],
- 'name': self.module.params.get('name'),
- 'displaytext': self.get_or_fallback('display_text', 'name'),
- }
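-        # Only the name and display text are compared and updated in place.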
- if self.has_changed(args, service_offering):
- self.result['changed'] = True
-
- if not self.module.check_mode:
- res = self.query_api('updateServiceOffering', **args)
- service_offering = res['serviceoffering']
- return service_offering
-
- def get_result(self, service_offering):
- super(AnsibleCloudStackServiceOffering, self).get_result(service_offering)
- if service_offering:
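-            # The API returns host tags as a comma separated string; expose them as a list.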
- if 'hosttags' in service_offering:
- self.result['host_tags'] = service_offering['hosttags'].split(',') or [service_offering['hosttags']]
-
- # Prevent confusion, the api returns a tags key for storage tags.
- if 'tags' in service_offering:
- self.result['storage_tags'] = service_offering['tags'].split(',') or [service_offering['tags']]
- if 'tags' in self.result:
- del self.result['tags']
-
- return self.result
-
-
-def main():
- argument_spec = cs_argument_spec()
- argument_spec.update(dict(
- name=dict(required=True),
- display_text=dict(),
- cpu_number=dict(type='int'),
- cpu_speed=dict(type='int'),
- limit_cpu_usage=dict(type='bool'),
- deployment_planner=dict(),
- domain=dict(),
- host_tags=dict(type='list', aliases=['host_tag']),
- hypervisor_snapshot_reserve=dict(type='int'),
- disk_bytes_read_rate=dict(type='int', aliases=['bytes_read_rate']),
- disk_bytes_write_rate=dict(type='int', aliases=['bytes_write_rate']),
- disk_iops_read_rate=dict(type='int'),
- disk_iops_write_rate=dict(type='int'),
- disk_iops_max=dict(type='int'),
- disk_iops_min=dict(type='int'),
- is_system=dict(type='bool', default=False),
- is_volatile=dict(type='bool'),
- is_iops_customized=dict(type='bool', aliases=['disk_iops_customized']),
- memory=dict(type='int'),
- network_rate=dict(type='int'),
- offer_ha=dict(type='bool'),
- provisioning_type=dict(choices=['thin', 'sparse', 'fat']),
- service_offering_details=dict(type='list'),
- storage_type=dict(choices=['local', 'shared']),
- system_vm_type=dict(choices=['domainrouter', 'consoleproxy', 'secondarystoragevm']),
- storage_tags=dict(type='list', aliases=['storage_tag']),
- state=dict(choices=['present', 'absent'], default='present'),
- is_customized=dict(type='bool'),
- ))
-
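-    # The CloudStack API credentials handled by the shared utils must be supplied together.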
- module = AnsibleModule(
- argument_spec=argument_spec,
- required_together=cs_required_together(),
- supports_check_mode=True
- )
-
- acs_so = AnsibleCloudStackServiceOffering(module)
-
- state = module.params.get('state')
- if state == "absent":
- service_offering = acs_so.absent_service_offering()
- else:
- service_offering = acs_so.present_service_offering()
-
- result = acs_so.get_result(service_offering)
- module.exit_json(**result)
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/hcloud_server.py b/test/support/integration/plugins/modules/hcloud_server.py
deleted file mode 100644
index 791c890a29..0000000000
--- a/test/support/integration/plugins/modules/hcloud_server.py
+++ /dev/null
@@ -1,555 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2019, Hetzner Cloud GmbH <info@hetzner-cloud.de>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-ANSIBLE_METADATA = {
- "metadata_version": "1.1",
- "status": ["preview"],
- "supported_by": "community",
-}
-
-DOCUMENTATION = """
----
-module: hcloud_server
-
-short_description: Create and manage cloud servers on the Hetzner Cloud.
-
-version_added: "2.8"
-
-description:
- - Create, update and manage cloud servers on the Hetzner Cloud.
-
-author:
- - Lukas Kaemmerling (@LKaemmerling)
-
-options:
- id:
- description:
- - The ID of the Hetzner Cloud server to manage.
-            - Only required if no server I(name) is given.
- type: int
- name:
- description:
- - The Name of the Hetzner Cloud server to manage.
-            - Only required if no server I(id) is given or the server does not exist.
- type: str
- server_type:
- description:
- - The Server Type of the Hetzner Cloud server to manage.
-            - Required if the server does not exist.
- type: str
- ssh_keys:
- description:
- - List of SSH key names
- - The key names correspond to the SSH keys configured for your
- Hetzner Cloud account access.
- type: list
- volumes:
- description:
- - List of Volumes IDs that should be attached to the server on server creation.
- type: list
- image:
- description:
- - Image the server should be created from.
-            - Required if the server does not exist.
- type: str
- location:
- description:
- - Location of Server.
-            - Required if no I(datacenter) is given and the server does not exist.
- type: str
- datacenter:
- description:
- - Datacenter of Server.
-            - Required if no I(location) is given and the server does not exist.
- type: str
- backups:
- description:
- - Enable or disable Backups for the given Server.
- type: bool
- default: no
- upgrade_disk:
- description:
- - Resize the disk size, when resizing a server.
- - If you want to downgrade the server later, this value should be False.
- type: bool
- default: no
- force_upgrade:
- description:
- - Force the upgrade of the server.
-            - Power off the server if it is running during the upgrade.
- type: bool
- default: no
- user_data:
- description:
- - User Data to be passed to the server on creation.
-            - Only used if the server does not exist.
- type: str
- rescue_mode:
- description:
- - Add the Hetzner rescue system type you want the server to be booted into.
- type: str
- version_added: 2.9
- labels:
- description:
- - User-defined labels (key-value pairs).
- type: dict
- delete_protection:
- description:
-            - Protect the server from deletion.
- - Needs to be the same as I(rebuild_protection).
- type: bool
- version_added: "2.10"
- rebuild_protection:
- description:
-            - Protect the server from rebuild.
- - Needs to be the same as I(delete_protection).
- type: bool
- version_added: "2.10"
- state:
- description:
- - State of the server.
- default: present
- choices: [ absent, present, restarted, started, stopped, rebuild ]
- type: str
-extends_documentation_fragment: hcloud
-"""
-
-EXAMPLES = """
-- name: Create a basic server
- hcloud_server:
- name: my-server
- server_type: cx11
- image: ubuntu-18.04
- state: present
-
-- name: Create a basic server with ssh key
- hcloud_server:
- name: my-server
- server_type: cx11
- image: ubuntu-18.04
- location: fsn1
- ssh_keys:
- - me@myorganisation
- state: present
-
-- name: Resize an existing server
- hcloud_server:
- name: my-server
- server_type: cx21
- upgrade_disk: yes
- state: present
-
-- name: Ensure the server is absent (remove if needed)
- hcloud_server:
- name: my-server
- state: absent
-
-- name: Ensure the server is started
- hcloud_server:
- name: my-server
- state: started
-
-- name: Ensure the server is stopped
- hcloud_server:
- name: my-server
- state: stopped
-
-- name: Ensure the server is restarted
- hcloud_server:
- name: my-server
- state: restarted
-
-- name: Ensure the server will be booted into rescue mode and therefore restarted
- hcloud_server:
- name: my-server
- rescue_mode: linux64
- state: restarted
-
-- name: Ensure the server is rebuilt
- hcloud_server:
- name: my-server
- image: ubuntu-18.04
- state: rebuild
-"""
-
-RETURN = """
-hcloud_server:
- description: The server instance
- returned: Always
- type: complex
- contains:
- id:
- description: Numeric identifier of the server
- returned: always
- type: int
- sample: 1937415
- name:
- description: Name of the server
- returned: always
- type: str
- sample: my-server
- status:
- description: Status of the server
- returned: always
- type: str
- sample: running
- server_type:
- description: Name of the server type of the server
- returned: always
- type: str
- sample: cx11
- ipv4_address:
- description: Public IPv4 address of the server
- returned: always
- type: str
- sample: 116.203.104.109
- ipv6:
- description: IPv6 network of the server
- returned: always
- type: str
- sample: 2a01:4f8:1c1c:c140::/64
- location:
- description: Name of the location of the server
- returned: always
- type: str
- sample: fsn1
- datacenter:
- description: Name of the datacenter of the server
- returned: always
- type: str
- sample: fsn1-dc14
- rescue_enabled:
-            description: True if rescue mode is enabled; the server will then boot into the rescue system on next reboot
- returned: always
- type: bool
- sample: false
- backup_window:
- description: Time window (UTC) in which the backup will run, or null if the backups are not enabled
- returned: always
-            type: str
- sample: 22-02
- labels:
- description: User-defined labels (key-value pairs)
- returned: always
- type: dict
- delete_protection:
-            description: True if the server is protected from deletion
- type: bool
- returned: always
- sample: false
- version_added: "2.10"
- rebuild_protection:
-            description: True if the server is protected from rebuild
- type: bool
- returned: always
- sample: false
- version_added: "2.10"
-"""
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils._text import to_native
-from ansible.module_utils.hcloud import Hcloud
-
-try:
- from hcloud.volumes.domain import Volume
- from hcloud.ssh_keys.domain import SSHKey
- from hcloud.servers.domain import Server
- from hcloud import APIException
-except ImportError:
- pass
-
-
-class AnsibleHcloudServer(Hcloud):
- def __init__(self, module):
- Hcloud.__init__(self, module, "hcloud_server")
- self.hcloud_server = None
-
- def _prepare_result(self):
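-        # Flatten the hcloud API objects into plain, JSON-serializable values for the module result.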
- image = None if self.hcloud_server.image is None else to_native(self.hcloud_server.image.name)
- return {
- "id": to_native(self.hcloud_server.id),
- "name": to_native(self.hcloud_server.name),
- "ipv4_address": to_native(self.hcloud_server.public_net.ipv4.ip),
- "ipv6": to_native(self.hcloud_server.public_net.ipv6.ip),
- "image": image,
- "server_type": to_native(self.hcloud_server.server_type.name),
- "datacenter": to_native(self.hcloud_server.datacenter.name),
- "location": to_native(self.hcloud_server.datacenter.location.name),
- "rescue_enabled": self.hcloud_server.rescue_enabled,
- "backup_window": to_native(self.hcloud_server.backup_window),
- "labels": self.hcloud_server.labels,
- "delete_protection": self.hcloud_server.protection["delete"],
- "rebuild_protection": self.hcloud_server.protection["rebuild"],
- "status": to_native(self.hcloud_server.status),
- }
-
- def _get_server(self):
- try:
- if self.module.params.get("id") is not None:
- self.hcloud_server = self.client.servers.get_by_id(
- self.module.params.get("id")
- )
- else:
- self.hcloud_server = self.client.servers.get_by_name(
- self.module.params.get("name")
- )
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- def _create_server(self):
-
- self.module.fail_on_missing_params(
- required_params=["name", "server_type", "image"]
- )
- params = {
- "name": self.module.params.get("name"),
- "server_type": self.client.server_types.get_by_name(
- self.module.params.get("server_type")
- ),
- "user_data": self.module.params.get("user_data"),
- "labels": self.module.params.get("labels"),
- }
- if self.client.images.get_by_name(self.module.params.get("image")) is not None:
- # When image name is not available look for id instead
- params["image"] = self.client.images.get_by_name(self.module.params.get("image"))
- else:
- params["image"] = self.client.images.get_by_id(self.module.params.get("image"))
-
- if self.module.params.get("ssh_keys") is not None:
- params["ssh_keys"] = [
- SSHKey(name=ssh_key_name)
- for ssh_key_name in self.module.params.get("ssh_keys")
- ]
-
- if self.module.params.get("volumes") is not None:
- params["volumes"] = [
- Volume(id=volume_id) for volume_id in self.module.params.get("volumes")
- ]
-
- if self.module.params.get("location") is None and self.module.params.get("datacenter") is None:
- # When not given, the API will choose the location.
- params["location"] = None
- params["datacenter"] = None
- elif self.module.params.get("location") is not None and self.module.params.get("datacenter") is None:
- params["location"] = self.client.locations.get_by_name(
- self.module.params.get("location")
- )
- elif self.module.params.get("location") is None and self.module.params.get("datacenter") is not None:
- params["datacenter"] = self.client.datacenters.get_by_name(
- self.module.params.get("datacenter")
- )
-
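-        # Create the server, record the generated root password and wait for all follow-up actions to finish.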
- if not self.module.check_mode:
- resp = self.client.servers.create(**params)
- self.result["root_password"] = resp.root_password
- resp.action.wait_until_finished(max_retries=1000)
- [action.wait_until_finished() for action in resp.next_actions]
-
- rescue_mode = self.module.params.get("rescue_mode")
- if rescue_mode:
- self._get_server()
- self._set_rescue_mode(rescue_mode)
-
- self._mark_as_changed()
- self._get_server()
-
- def _update_server(self):
- try:
- rescue_mode = self.module.params.get("rescue_mode")
- if rescue_mode and self.hcloud_server.rescue_enabled is False:
- if not self.module.check_mode:
- self._set_rescue_mode(rescue_mode)
- self._mark_as_changed()
- elif not rescue_mode and self.hcloud_server.rescue_enabled is True:
- if not self.module.check_mode:
- self.hcloud_server.disable_rescue().wait_until_finished()
- self._mark_as_changed()
-
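-            # Enable or disable automated backups to match the requested state.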
- if self.module.params.get("backups") and self.hcloud_server.backup_window is None:
- if not self.module.check_mode:
- self.hcloud_server.enable_backup().wait_until_finished()
- self._mark_as_changed()
- elif not self.module.params.get("backups") and self.hcloud_server.backup_window is not None:
- if not self.module.check_mode:
- self.hcloud_server.disable_backup().wait_until_finished()
- self._mark_as_changed()
-
- labels = self.module.params.get("labels")
- if labels is not None and labels != self.hcloud_server.labels:
- if not self.module.check_mode:
- self.hcloud_server.update(labels=labels)
- self._mark_as_changed()
-
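-            # Changing the server type requires the server to be powered off first.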
- server_type = self.module.params.get("server_type")
- if server_type is not None and self.hcloud_server.server_type.name != server_type:
- previous_server_status = self.hcloud_server.status
- state = self.module.params.get("state")
- if previous_server_status == Server.STATUS_RUNNING:
- if not self.module.check_mode:
- if self.module.params.get("force_upgrade") or state == "stopped":
- self.stop_server() # Only stopped server can be upgraded
- else:
- self.module.warn(
- "You can not upgrade a running instance %s. You need to stop the instance or use force_upgrade=yes."
- % self.hcloud_server.name
- )
- timeout = 100
- if self.module.params.get("upgrade_disk"):
-                    timeout = 1000  # Resizing the disk as well makes the change take considerably longer.
- if not self.module.check_mode:
- self.hcloud_server.change_type(
- server_type=self.client.server_types.get_by_name(server_type),
- upgrade_disk=self.module.params.get("upgrade_disk"),
- ).wait_until_finished(timeout)
- if state == "present" and previous_server_status == Server.STATUS_RUNNING or state == "started":
- self.start_server()
-
- self._mark_as_changed()
-
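-            # Delete and rebuild protection can only be changed together, matching the argument spec.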
- delete_protection = self.module.params.get("delete_protection")
- rebuild_protection = self.module.params.get("rebuild_protection")
- if (delete_protection is not None and rebuild_protection is not None) and (
- delete_protection != self.hcloud_server.protection["delete"] or rebuild_protection !=
- self.hcloud_server.protection["rebuild"]):
- if not self.module.check_mode:
- self.hcloud_server.change_protection(delete=delete_protection,
- rebuild=rebuild_protection).wait_until_finished()
- self._mark_as_changed()
- self._get_server()
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- def _set_rescue_mode(self, rescue_mode):
- if self.module.params.get("ssh_keys"):
- resp = self.hcloud_server.enable_rescue(type=rescue_mode,
- ssh_keys=[self.client.ssh_keys.get_by_name(ssh_key_name).id
- for
- ssh_key_name in
- self.module.params.get("ssh_keys")])
- else:
- resp = self.hcloud_server.enable_rescue(type=rescue_mode)
- resp.action.wait_until_finished()
- self.result["root_password"] = resp.root_password
-
- def start_server(self):
- try:
- if self.hcloud_server.status != Server.STATUS_RUNNING:
- if not self.module.check_mode:
- self.client.servers.power_on(self.hcloud_server).wait_until_finished()
- self._mark_as_changed()
- self._get_server()
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- def stop_server(self):
- try:
- if self.hcloud_server.status != Server.STATUS_OFF:
- if not self.module.check_mode:
- self.client.servers.power_off(self.hcloud_server).wait_until_finished()
- self._mark_as_changed()
- self._get_server()
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- def rebuild_server(self):
- self.module.fail_on_missing_params(
- required_params=["image"]
- )
- try:
- if not self.module.check_mode:
- self.client.servers.rebuild(self.hcloud_server, self.client.images.get_by_name(
- self.module.params.get("image"))).wait_until_finished()
- self._mark_as_changed()
-
- self._get_server()
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- def present_server(self):
- self._get_server()
- if self.hcloud_server is None:
- self._create_server()
- else:
- self._update_server()
-
- def delete_server(self):
- try:
- self._get_server()
- if self.hcloud_server is not None:
- if not self.module.check_mode:
- self.client.servers.delete(self.hcloud_server).wait_until_finished()
- self._mark_as_changed()
- self.hcloud_server = None
- except APIException as e:
- self.module.fail_json(msg=e.message)
-
- @staticmethod
- def define_module():
- return AnsibleModule(
- argument_spec=dict(
- id={"type": "int"},
- name={"type": "str"},
- image={"type": "str"},
- server_type={"type": "str"},
- location={"type": "str"},
- datacenter={"type": "str"},
- user_data={"type": "str"},
- ssh_keys={"type": "list"},
- volumes={"type": "list"},
- labels={"type": "dict"},
- backups={"type": "bool", "default": False},
- upgrade_disk={"type": "bool", "default": False},
- force_upgrade={"type": "bool", "default": False},
- rescue_mode={"type": "str"},
- delete_protection={"type": "bool"},
- rebuild_protection={"type": "bool"},
- state={
- "choices": ["absent", "present", "restarted", "started", "stopped", "rebuild"],
- "default": "present",
- },
- **Hcloud.base_module_arguments()
- ),
- required_one_of=[['id', 'name']],
- mutually_exclusive=[["location", "datacenter"]],
- required_together=[["delete_protection", "rebuild_protection"]],
- supports_check_mode=True,
- )
-
-
-def main():
- module = AnsibleHcloudServer.define_module()
-
- hcloud = AnsibleHcloudServer(module)
- state = module.params.get("state")
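-    # States other than absent first ensure the server exists, then adjust its power state as needed.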
- if state == "absent":
- hcloud.delete_server()
- elif state == "present":
- hcloud.present_server()
- elif state == "started":
- hcloud.present_server()
- hcloud.start_server()
- elif state == "stopped":
- hcloud.present_server()
- hcloud.stop_server()
- elif state == "restarted":
- hcloud.present_server()
- hcloud.stop_server()
- hcloud.start_server()
- elif state == "rebuild":
- hcloud.present_server()
- hcloud.rebuild_server()
-
- module.exit_json(**hcloud.get_result())
-
-
-if __name__ == "__main__":
- main()
diff --git a/test/support/integration/plugins/modules/nios_txt_record.py b/test/support/integration/plugins/modules/nios_txt_record.py
deleted file mode 100644
index b9e63dfc6e..0000000000
--- a/test/support/integration/plugins/modules/nios_txt_record.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/python
-# Copyright (c) 2018 Red Hat, Inc.
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'certified'}
-
-
-DOCUMENTATION = '''
----
-module: nios_txt_record
-version_added: "2.7"
-author: "Corey Wanless (@coreywan)"
-short_description: Configure Infoblox NIOS txt records
-description:
- - Adds and/or removes instances of txt record objects from
- Infoblox NIOS servers. This module manages NIOS C(record:txt) objects
- using the Infoblox WAPI interface over REST.
-requirements:
- - infoblox_client
-extends_documentation_fragment: nios
-options:
- name:
- description:
- - Specifies the fully qualified hostname to add or remove from
- the system
- required: true
- view:
- description:
-      - Sets the DNS view to associate this txt record with. The DNS
- view must already be configured on the system
- required: true
- default: default
- aliases:
- - dns_view
- text:
- description:
- - Text associated with the record. It can contain up to 255 bytes
- per substring, up to a total of 512 bytes. To enter leading,
- trailing, or embedded spaces in the text, add quotes around the
- text to preserve the spaces.
- required: true
- ttl:
- description:
-      - Configures the TTL to be associated with this txt record
- extattrs:
- description:
- - Allows for the configuration of Extensible Attributes on the
- instance of the object. This argument accepts a set of key / value
- pairs for configuration.
- comment:
- description:
- - Configures a text string comment to be associated with the instance
- of this object. The provided text string will be configured on the
- object instance.
- state:
- description:
- - Configures the intended state of the instance of the object on
- the NIOS server. When this value is set to C(present), the object
- is configured on the device and when this value is set to C(absent)
- the value is removed (if necessary) from the device.
- default: present
- choices:
- - present
- - absent
-'''
-
-EXAMPLES = '''
- - name: Ensure a text Record Exists
- nios_txt_record:
- name: fqdn.txt.record.com
- text: mytext
- state: present
- view: External
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
-
- - name: Ensure a text Record does not exist
- nios_txt_record:
- name: fqdn.txt.record.com
- text: mytext
- state: absent
- view: External
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
-'''
-
-RETURN = ''' # '''
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.six import iteritems
-from ansible.module_utils.net_tools.nios.api import WapiModule
-
-
-def main():
- ''' Main entry point for module execution
- '''
-
- ib_spec = dict(
- name=dict(required=True, ib_req=True),
- view=dict(default='default', aliases=['dns_view'], ib_req=True),
- text=dict(ib_req=True),
- ttl=dict(type='int'),
- extattrs=dict(type='dict'),
- comment=dict(),
- )
-
- argument_spec = dict(
- provider=dict(required=True),
- state=dict(default='present', choices=['present', 'absent'])
- )
-
- argument_spec.update(ib_spec)
- argument_spec.update(WapiModule.provider_spec)
-
- module = AnsibleModule(argument_spec=argument_spec,
- supports_check_mode=True)
-
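-    # The shared WAPI helper handles the idempotent create/update/delete of the record:txt object.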
- wapi = WapiModule(module)
- result = wapi.run('record:txt', ib_spec)
-
- module.exit_json(**result)
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/nios_zone.py b/test/support/integration/plugins/modules/nios_zone.py
deleted file mode 100644
index 0ffb2ff0a4..0000000000
--- a/test/support/integration/plugins/modules/nios_zone.py
+++ /dev/null
@@ -1,228 +0,0 @@
-#!/usr/bin/python
-# Copyright (c) 2018 Red Hat, Inc.
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'certified'}
-
-
-DOCUMENTATION = '''
----
-module: nios_zone
-version_added: "2.5"
-author: "Peter Sprygada (@privateip)"
-short_description: Configure Infoblox NIOS DNS zones
-description:
- - Adds and/or removes instances of DNS zone objects from
- Infoblox NIOS servers. This module manages NIOS C(zone_auth) objects
- using the Infoblox WAPI interface over REST.
-requirements:
- - infoblox-client
-extends_documentation_fragment: nios
-options:
- fqdn:
- description:
- - Specifies the qualified domain name to either add or remove from
- the NIOS instance based on the configured C(state) value.
- required: true
- aliases:
- - name
- view:
- description:
- - Configures the DNS view name for the configured resource. The
- specified DNS zone must already exist on the running NIOS instance
- prior to configuring zones.
- required: true
- default: default
- aliases:
- - dns_view
- grid_primary:
- description:
- - Configures the grid primary servers for this zone.
- suboptions:
- name:
- description:
- - The name of the grid primary server
- grid_secondaries:
- description:
- - Configures the grid secondary servers for this zone.
- suboptions:
- name:
- description:
- - The name of the grid secondary server
- ns_group:
- version_added: "2.6"
- description:
- - Configures the name server group for this zone. Name server group is
- mutually exclusive with grid primary and grid secondaries.
- restart_if_needed:
- version_added: "2.6"
- description:
- - If set to true, causes the NIOS DNS service to restart and load the
- new zone configuration
- type: bool
- zone_format:
- version_added: "2.7"
- description:
-      - Create an authoritative Reverse-Mapping Zone, which is an area of network
-        space for which one or more name servers (primary and secondary) have the
-        responsibility to respond to address-to-name queries. It supports
- reverse-mapping zones for both IPv4 and IPv6 addresses.
- default: FORWARD
- extattrs:
- description:
- - Allows for the configuration of Extensible Attributes on the
- instance of the object. This argument accepts a set of key / value
- pairs for configuration.
- comment:
- description:
- - Configures a text string comment to be associated with the instance
- of this object. The provided text string will be configured on the
- object instance.
- state:
- description:
- - Configures the intended state of the instance of the object on
- the NIOS server. When this value is set to C(present), the object
- is configured on the device and when this value is set to C(absent)
- the value is removed (if necessary) from the device.
- default: present
- choices:
- - present
- - absent
-'''
-
-EXAMPLES = '''
-- name: configure a zone on the system using grid primary and secondaries
- nios_zone:
- name: ansible.com
- grid_primary:
- - name: gridprimary.grid.com
- grid_secondaries:
- - name: gridsecondary1.grid.com
- - name: gridsecondary2.grid.com
- restart_if_needed: true
- state: present
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: configure a zone on the system using a name server group
- nios_zone:
- name: ansible.com
- ns_group: examplensg
- restart_if_needed: true
- state: present
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: configure a reverse mapping zone on the system using IPV4 zone format
- nios_zone:
- name: 10.10.10.0/24
- zone_format: IPV4
- state: present
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: configure a reverse mapping zone on the system using IPV6 zone format
- nios_zone:
- name: 100::1/128
- zone_format: IPV6
- state: present
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: update the comment and ext attributes for an existing zone
- nios_zone:
- name: ansible.com
- comment: this is an example comment
- extattrs:
- Site: west-dc
- state: present
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: remove the dns zone
- nios_zone:
- name: ansible.com
- state: absent
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-- name: remove the reverse mapping dns zone from the system with IPV4 zone format
- nios_zone:
- name: 10.10.10.0/24
- zone_format: IPV4
- state: absent
- provider:
- host: "{{ inventory_hostname_short }}"
- username: admin
- password: admin
- connection: local
-'''
-
-RETURN = ''' # '''
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.net_tools.nios.api import WapiModule
-from ansible.module_utils.net_tools.nios.api import NIOS_ZONE
-
-
-def main():
- ''' Main entry point for module execution
- '''
- grid_spec = dict(
- name=dict(required=True),
- )
-
- ib_spec = dict(
- fqdn=dict(required=True, aliases=['name'], ib_req=True, update=False),
- zone_format=dict(default='FORWARD', aliases=['zone_format'], ib_req=False),
- view=dict(default='default', aliases=['dns_view'], ib_req=True),
-
- grid_primary=dict(type='list', elements='dict', options=grid_spec),
- grid_secondaries=dict(type='list', elements='dict', options=grid_spec),
- ns_group=dict(),
- restart_if_needed=dict(type='bool'),
-
- extattrs=dict(type='dict'),
- comment=dict()
- )
-
- argument_spec = dict(
- provider=dict(required=True),
- state=dict(default='present', choices=['present', 'absent'])
- )
-
- argument_spec.update(ib_spec)
- argument_spec.update(WapiModule.provider_spec)
-
- module = AnsibleModule(argument_spec=argument_spec,
- supports_check_mode=True,
- mutually_exclusive=[
- ['ns_group', 'grid_primary'],
- ['ns_group', 'grid_secondaries']
- ])
-
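-    # NIOS_ZONE identifies the zone_auth WAPI object that the shared helper will manage.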
- wapi = WapiModule(module)
- result = wapi.run(NIOS_ZONE, ib_spec)
-
- module.exit_json(**result)
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/openssl_certificate.py b/test/support/integration/plugins/modules/openssl_certificate.py
deleted file mode 100644
index 28780bf22c..0000000000
--- a/test/support/integration/plugins/modules/openssl_certificate.py
+++ /dev/null
@@ -1,2757 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2016-2017, Yanis Guenane <yanis+ansible@guenane.org>
-# Copyright: (c) 2017, Markus Teufelberger <mteufelberger+ansible@mgit.at>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: openssl_certificate
-version_added: "2.4"
-short_description: Generate and/or check OpenSSL certificates
-description:
- - This module allows one to (re)generate OpenSSL certificates.
- - It implements a notion of provider (ie. C(selfsigned), C(ownca), C(acme), C(assertonly), C(entrust))
- for your certificate.
- - The C(assertonly) provider is intended for use cases where one is only interested in
- checking properties of a supplied certificate. Please note that this provider has been
- deprecated in Ansible 2.9 and will be removed in Ansible 2.13. See the examples on how
- to emulate C(assertonly) usage with M(openssl_certificate_info), M(openssl_csr_info),
- M(openssl_privatekey_info) and M(assert). This also allows more flexible checks than
- the ones offered by the C(assertonly) provider.
-    - The C(ownca) provider is intended for generating an OpenSSL certificate signed with your own
- CA (Certificate Authority) certificate (self-signed certificate).
- - Many properties that can be specified in this module are for validation of an
- existing or newly generated certificate. The proper place to specify them, if you
-      want to receive a certificate with these properties, is a CSR (Certificate Signing Request).
-    - "Please note that the module regenerates an existing certificate if it doesn't match the module's
- options, or if it seems to be corrupt. If you are concerned that this could overwrite
- your existing certificate, consider using the I(backup) option."
- - It uses the pyOpenSSL or cryptography python library to interact with OpenSSL.
- - If both the cryptography and PyOpenSSL libraries are available (and meet the minimum version requirements)
- cryptography will be preferred as a backend over PyOpenSSL (unless the backend is forced with C(select_crypto_backend)).
- Please note that the PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
-requirements:
- - PyOpenSSL >= 0.15 or cryptography >= 1.6 (if using C(selfsigned) or C(assertonly) provider)
- - acme-tiny >= 4.0.0 (if using the C(acme) provider)
-author:
- - Yanis Guenane (@Spredzy)
- - Markus Teufelberger (@MarkusTeufelberger)
-options:
- state:
- description:
- - Whether the certificate should exist or not, taking action if the state is different from what is stated.
- type: str
- default: present
- choices: [ absent, present ]
-
- path:
- description:
- - Remote absolute path where the generated certificate file should be created or is already located.
- type: path
- required: true
-
- provider:
- description:
- - Name of the provider to use to generate/retrieve the OpenSSL certificate.
- - The C(assertonly) provider will not generate files and fail if the certificate file is missing.
- - The C(assertonly) provider has been deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
- Please see the examples on how to emulate it with M(openssl_certificate_info), M(openssl_csr_info),
- M(openssl_privatekey_info) and M(assert).
- - "The C(entrust) provider was added for Ansible 2.9 and requires credentials for the
- L(Entrust Certificate Services,https://www.entrustdatacard.com/products/categories/ssl-certificates) (ECS) API."
- - Required if I(state) is C(present).
- type: str
- choices: [ acme, assertonly, entrust, ownca, selfsigned ]
-
- force:
- description:
- - Generate the certificate, even if it already exists.
- type: bool
- default: no
-
- csr_path:
- description:
- - Path to the Certificate Signing Request (CSR) used to generate this certificate.
- - This is not required in C(assertonly) mode.
- - This is mutually exclusive with I(csr_content).
- type: path
- csr_content:
- description:
- - Content of the Certificate Signing Request (CSR) used to generate this certificate.
- - This is not required in C(assertonly) mode.
- - This is mutually exclusive with I(csr_path).
- type: str
- version_added: "2.10"
-
- privatekey_path:
- description:
- - Path to the private key to use when signing the certificate.
- - This is mutually exclusive with I(privatekey_content).
- type: path
- privatekey_content:
- description:
- - Path to the private key to use when signing the certificate.
- - This is mutually exclusive with I(privatekey_path).
- type: str
- version_added: "2.10"
-
- privatekey_passphrase:
- description:
- - The passphrase for the I(privatekey_path) resp. I(privatekey_content).
- - This is required if the private key is password protected.
- type: str
-
- selfsigned_version:
- description:
- - Version of the C(selfsigned) certificate.
- - Nowadays it should almost always be C(3).
- - This is only used by the C(selfsigned) provider.
- type: int
- default: 3
- version_added: "2.5"
-
- selfsigned_digest:
- description:
- - Digest algorithm to be used when self-signing the certificate.
- - This is only used by the C(selfsigned) provider.
- type: str
- default: sha256
-
- selfsigned_not_before:
- description:
- - The point in time the certificate is valid from.
- - Time can be specified either as relative time or as absolute timestamp.
- - Time will always be interpreted as UTC.
- - Valid format is C([+-]timespec | ASN.1 TIME) where timespec can be an integer
-              + C([w | d | h | m | s]) (e.g. C(+32w1d2h)).
- - Note that if using relative time this module is NOT idempotent.
- - If this value is not specified, the certificate will start being valid from now.
- - This is only used by the C(selfsigned) provider.
- type: str
- default: +0s
- aliases: [ selfsigned_notBefore ]
-
- selfsigned_not_after:
- description:
- - The point in time at which the certificate stops being valid.
- - Time can be specified either as relative time or as absolute timestamp.
- - Time will always be interpreted as UTC.
- - Valid format is C([+-]timespec | ASN.1 TIME) where timespec can be an integer
-              + C([w | d | h | m | s]) (e.g. C(+32w1d2h)).
- - Note that if using relative time this module is NOT idempotent.
- - If this value is not specified, the certificate will stop being valid 10 years from now.
- - This is only used by the C(selfsigned) provider.
- type: str
- default: +3650d
- aliases: [ selfsigned_notAfter ]
-
- selfsigned_create_subject_key_identifier:
- description:
- - Whether to create the Subject Key Identifier (SKI) from the public key.
- - A value of C(create_if_not_provided) (default) only creates a SKI when the CSR does not
- provide one.
- - A value of C(always_create) always creates a SKI. If the CSR provides one, that one is
- ignored.
- - A value of C(never_create) never creates a SKI. If the CSR provides one, that one is used.
- - This is only used by the C(selfsigned) provider.
- - Note that this is only supported if the C(cryptography) backend is used!
- type: str
- choices: [create_if_not_provided, always_create, never_create]
- default: create_if_not_provided
- version_added: "2.9"
-
- ownca_path:
- description:
- - Remote absolute path of the CA (Certificate Authority) certificate.
- - This is only used by the C(ownca) provider.
- - This is mutually exclusive with I(ownca_content).
- type: path
- version_added: "2.7"
- ownca_content:
- description:
- - Content of the CA (Certificate Authority) certificate.
- - This is only used by the C(ownca) provider.
- - This is mutually exclusive with I(ownca_path).
- type: str
- version_added: "2.10"
-
- ownca_privatekey_path:
- description:
- - Path to the CA (Certificate Authority) private key to use when signing the certificate.
- - This is only used by the C(ownca) provider.
- - This is mutually exclusive with I(ownca_privatekey_content).
- type: path
- version_added: "2.7"
- ownca_privatekey_content:
- description:
- - Path to the CA (Certificate Authority) private key to use when signing the certificate.
- - This is only used by the C(ownca) provider.
- - This is mutually exclusive with I(ownca_privatekey_path).
- type: str
- version_added: "2.10"
-
- ownca_privatekey_passphrase:
- description:
- - The passphrase for the I(ownca_privatekey_path) resp. I(ownca_privatekey_content).
- - This is only used by the C(ownca) provider.
- type: str
- version_added: "2.7"
-
- ownca_digest:
- description:
- - The digest algorithm to be used for the C(ownca) certificate.
- - This is only used by the C(ownca) provider.
- type: str
- default: sha256
- version_added: "2.7"
-
- ownca_version:
- description:
- - The version of the C(ownca) certificate.
- - Nowadays it should almost always be C(3).
- - This is only used by the C(ownca) provider.
- type: int
- default: 3
- version_added: "2.7"
-
- ownca_not_before:
- description:
- - The point in time the certificate is valid from.
- - Time can be specified either as relative time or as absolute timestamp.
- - Time will always be interpreted as UTC.
- - Valid format is C([+-]timespec | ASN.1 TIME) where timespec can be an integer
-              + C([w | d | h | m | s]) (e.g. C(+32w1d2h)).
- - Note that if using relative time this module is NOT idempotent.
- - If this value is not specified, the certificate will start being valid from now.
- - This is only used by the C(ownca) provider.
- type: str
- default: +0s
- version_added: "2.7"
-
- ownca_not_after:
- description:
- - The point in time at which the certificate stops being valid.
- - Time can be specified either as relative time or as absolute timestamp.
- - Time will always be interpreted as UTC.
- - Valid format is C([+-]timespec | ASN.1 TIME) where timespec can be an integer
-              + C([w | d | h | m | s]) (e.g. C(+32w1d2h)).
- - Note that if using relative time this module is NOT idempotent.
- - If this value is not specified, the certificate will stop being valid 10 years from now.
- - This is only used by the C(ownca) provider.
- type: str
- default: +3650d
- version_added: "2.7"
-
- ownca_create_subject_key_identifier:
- description:
- - Whether to create the Subject Key Identifier (SKI) from the public key.
- - A value of C(create_if_not_provided) (default) only creates a SKI when the CSR does not
- provide one.
- - A value of C(always_create) always creates a SKI. If the CSR provides one, that one is
- ignored.
- - A value of C(never_create) never creates a SKI. If the CSR provides one, that one is used.
- - This is only used by the C(ownca) provider.
- - Note that this is only supported if the C(cryptography) backend is used!
- type: str
- choices: [create_if_not_provided, always_create, never_create]
- default: create_if_not_provided
- version_added: "2.9"
-
- ownca_create_authority_key_identifier:
- description:
-            - Create an Authority Key Identifier from the CA's certificate. If the CSR provided
-              an authority key identifier, it is ignored.
- - The Authority Key Identifier is generated from the CA certificate's Subject Key Identifier,
- if available. If it is not available, the CA certificate's public key will be used.
- - This is only used by the C(ownca) provider.
- - Note that this is only supported if the C(cryptography) backend is used!
- type: bool
- default: yes
- version_added: "2.9"
-
- acme_accountkey_path:
- description:
- - The path to the accountkey for the C(acme) provider.
- - This is only used by the C(acme) provider.
- type: path
-
- acme_challenge_path:
- description:
- - The path to the ACME challenge directory that is served on U(http://<HOST>:80/.well-known/acme-challenge/)
- - This is only used by the C(acme) provider.
- type: path
-
- acme_chain:
- description:
-            - Include the intermediate certificate in the generated certificate.
- - This is only used by the C(acme) provider.
- - Note that this is only available for older versions of C(acme-tiny).
- New versions include the chain automatically, and setting I(acme_chain) to C(yes) results in an error.
- type: bool
- default: no
- version_added: "2.5"
-
- acme_directory:
- description:
- - "The ACME directory to use. You can use any directory that supports the ACME protocol, such as Buypass or Let's Encrypt."
- - "Let's Encrypt recommends using their staging server while developing jobs. U(https://letsencrypt.org/docs/staging-environment/)."
- type: str
- default: https://acme-v02.api.letsencrypt.org/directory
- version_added: "2.10"
-
- signature_algorithms:
- description:
- - A list of algorithms that you would accept the certificate to be signed with
- (e.g. ['sha256WithRSAEncryption', 'sha512WithRSAEncryption']).
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: list
- elements: str
-
- issuer:
- description:
- - The key/value pairs that must be present in the issuer name field of the certificate.
- - If you need to specify more than one value with the same key, use a list as value.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: dict
-
- issuer_strict:
- description:
- - If set to C(yes), the I(issuer) field must contain only these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- version_added: "2.5"
-
- subject:
- description:
- - The key/value pairs that must be present in the subject name field of the certificate.
- - If you need to specify more than one value with the same key, use a list as value.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: dict
-
- subject_strict:
- description:
- - If set to C(yes), the I(subject) field must contain only these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- version_added: "2.5"
-
- has_expired:
- description:
- - Checks if the certificate is expired/not expired at the time the module is executed.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
-
- version:
- description:
- - The version of the certificate.
- - Nowadays it should almost always be 3.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: int
-
- valid_at:
- description:
- - The certificate must be valid at this point in time.
- - The timestamp is formatted as an ASN.1 TIME.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: str
-
- invalid_at:
- description:
- - The certificate must be invalid at this point in time.
- - The timestamp is formatted as an ASN.1 TIME.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: str
-
- not_before:
- description:
- - The certificate must start to become valid at this point in time.
- - The timestamp is formatted as an ASN.1 TIME.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: str
- aliases: [ notBefore ]
-
- not_after:
- description:
- - The certificate must expire at this point in time.
- - The timestamp is formatted as an ASN.1 TIME.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: str
- aliases: [ notAfter ]
-
- valid_in:
- description:
- - The certificate must still be valid at this relative time offset from now.
- - Valid format is C([+-]timespec | number_of_seconds) where timespec can be an integer
-              + C([w | d | h | m | s]) (e.g. C(+32w1d2h)).
- - Note that if using this parameter, this module is NOT idempotent.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: str
-
- key_usage:
- description:
- - The I(key_usage) extension field must contain all these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: list
- elements: str
- aliases: [ keyUsage ]
-
- key_usage_strict:
- description:
- - If set to C(yes), the I(key_usage) extension field must contain only these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- aliases: [ keyUsage_strict ]
-
- extended_key_usage:
- description:
- - The I(extended_key_usage) extension field must contain all these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: list
- elements: str
- aliases: [ extendedKeyUsage ]
-
- extended_key_usage_strict:
- description:
- - If set to C(yes), the I(extended_key_usage) extension field must contain only these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- aliases: [ extendedKeyUsage_strict ]
-
- subject_alt_name:
- description:
- - The I(subject_alt_name) extension field must contain these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: list
- elements: str
- aliases: [ subjectAltName ]
-
- subject_alt_name_strict:
- description:
- - If set to C(yes), the I(subject_alt_name) extension field must contain only these values.
- - This is only used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- aliases: [ subjectAltName_strict ]
-
- select_crypto_backend:
- description:
- - Determines which crypto backend to use.
- - The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- - If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- - If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- - Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
- From that point on, only the C(cryptography) backend will be available.
- type: str
- default: auto
- choices: [ auto, cryptography, pyopenssl ]
- version_added: "2.8"
-
- backup:
- description:
- - Create a backup file including a timestamp so you can get the original
- certificate back if you overwrote it with a new one by accident.
- - This is not used by the C(assertonly) provider.
- - This option is deprecated since Ansible 2.9 and will be removed with the C(assertonly) provider in Ansible 2.13.
- For alternatives, see the example on replacing C(assertonly).
- type: bool
- default: no
- version_added: "2.8"
-
- entrust_cert_type:
- description:
- - Specify the type of certificate requested.
- - This is only used by the C(entrust) provider.
- type: str
- default: STANDARD_SSL
- choices: [ 'STANDARD_SSL', 'ADVANTAGE_SSL', 'UC_SSL', 'EV_SSL', 'WILDCARD_SSL', 'PRIVATE_SSL', 'PD_SSL', 'CDS_ENT_LITE', 'CDS_ENT_PRO', 'SMIME_ENT' ]
- version_added: "2.9"
-
- entrust_requester_email:
- description:
- - The email of the requester of the certificate (for tracking purposes).
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: str
- version_added: "2.9"
-
- entrust_requester_name:
- description:
- - The name of the requester of the certificate (for tracking purposes).
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: str
- version_added: "2.9"
-
- entrust_requester_phone:
- description:
- - The phone number of the requester of the certificate (for tracking purposes).
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: str
- version_added: "2.9"
-
- entrust_api_user:
- description:
- - The username for authentication to the Entrust Certificate Services (ECS) API.
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: str
- version_added: "2.9"
-
- entrust_api_key:
- description:
- - The key (password) for authentication to the Entrust Certificate Services (ECS) API.
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: str
- version_added: "2.9"
-
- entrust_api_client_cert_path:
- description:
- - The path to the client certificate used to authenticate to the Entrust Certificate Services (ECS) API.
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: path
- version_added: "2.9"
-
- entrust_api_client_cert_key_path:
- description:
- - The path to the private key of the client certificate used to authenticate to the Entrust Certificate Services (ECS) API.
- - This is only used by the C(entrust) provider.
- - This is required if the provider is C(entrust).
- type: path
- version_added: "2.9"
-
- entrust_not_after:
- description:
- - The point in time at which the certificate stops being valid.
- - Time can be specified either as relative time or as an absolute timestamp.
- - A valid absolute time format is C(ASN.1 TIME) such as C(2019-06-18).
-    - A valid relative time format is C([+-]timespec) where timespec can be an integer + C([w | d | h | m | s]), such as C(+365d) or C(+32w1d2h).
- - Time will always be interpreted as UTC.
- - Note that only the date (day, month, year) is supported for specifying the expiry date of the issued certificate.
- - The full date-time is adjusted to EST (GMT -5:00) before issuance, which may result in a certificate with an expiration date one day
- earlier than expected if a relative time is used.
-    - The minimum certificate lifetime is 90 days, and the maximum is three years.
-    - If this value is not specified, the certificate will stop being valid 365 days from the date of issue.
- - This is only used by the C(entrust) provider.
- type: str
- default: +365d
- version_added: "2.9"
-
- entrust_api_specification_path:
- description:
- - The path to the specification file defining the Entrust Certificate Services (ECS) API configuration.
- - You can use this to keep a local copy of the specification to avoid downloading it every time the module is used.
- - This is only used by the C(entrust) provider.
- type: path
- default: https://cloud.entrust.net/EntrustCloud/documentation/cms-api-2.1.0.yaml
- version_added: "2.9"
-
- return_content:
- description:
- - If set to C(yes), will return the (current or generated) certificate's content as I(certificate).
- type: bool
- default: no
- version_added: "2.10"
-
-extends_documentation_fragment: files
-notes:
- - All ASN.1 TIME values should be specified following the YYYYMMDDHHMMSSZ pattern.
-  - The date specified should be UTC. Minutes and seconds are mandatory.
-  - For security reasons, when you use the C(ownca) provider, you should NOT run M(openssl_certificate) on
- a target machine, but on a dedicated CA machine. It is recommended not to store the CA private key
- on the target machine. Once signed, the certificate can be moved to the target machine.
-seealso:
-- module: openssl_csr
-- module: openssl_dhparam
-- module: openssl_pkcs12
-- module: openssl_privatekey
-- module: openssl_publickey
-'''
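Both timestamp notations used throughout the option documentation above can be illustrated with a short, self-contained Python sketch: absolute ASN.1 TIME values follow the YYYYMMDDHHMMSSZ pattern, while relative values such as C(+32w1d2h) or a bare number of seconds are offsets from now. The module itself relies on crypto_utils.get_relative_time_option() for this; the helpers below are only an approximation for illustration, and their names (to_asn1_time, parse_relative_timespec) are not part of the module.

    import datetime
    import re

    def to_asn1_time(dt):
        # Render a (UTC) datetime in the YYYYMMDDHHMMSSZ pattern the documentation expects.
        return dt.strftime('%Y%m%d%H%M%SZ')

    def parse_relative_timespec(value):
        # Accepts '+32w1d2h', '-30d', '+10' (plain seconds), etc. and returns a timedelta.
        match = re.match(r'^([+-])((\d+)[wW])?((\d+)[dD])?((\d+)[hH])?((\d+)[mM])?((\d+)[sS]?)?$', value)
        if match is None:
            raise ValueError('not a valid timespec: %r' % value)
        sign = -1 if match.group(1) == '-' else 1
        return sign * datetime.timedelta(
            weeks=int(match.group(3) or 0),
            days=int(match.group(5) or 0),
            hours=int(match.group(7) or 0),
            minutes=int(match.group(9) or 0),
            seconds=int(match.group(11) or 0),
        )

    # Example: the ASN.1 TIME 32 weeks, 1 day and 2 hours from now.
    print(to_asn1_time(datetime.datetime.utcnow() + parse_relative_timespec('+32w1d2h')))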
-
-EXAMPLES = r'''
-- name: Generate a Self Signed OpenSSL certificate
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- privatekey_path: /etc/ssl/private/ansible.com.pem
- csr_path: /etc/ssl/csr/ansible.com.csr
- provider: selfsigned
-
-- name: Generate an OpenSSL certificate signed with your own CA certificate
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- csr_path: /etc/ssl/csr/ansible.com.csr
- ownca_path: /etc/ssl/crt/ansible_CA.crt
- ownca_privatekey_path: /etc/ssl/private/ansible_CA.pem
- provider: ownca
-
-- name: Generate a Let's Encrypt Certificate
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- csr_path: /etc/ssl/csr/ansible.com.csr
- provider: acme
- acme_accountkey_path: /etc/ssl/private/ansible.com.pem
- acme_challenge_path: /etc/ssl/challenges/ansible.com/
-
-- name: Force (re-)generate a new Let's Encrypt Certificate
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- csr_path: /etc/ssl/csr/ansible.com.csr
- provider: acme
- acme_accountkey_path: /etc/ssl/private/ansible.com.pem
- acme_challenge_path: /etc/ssl/challenges/ansible.com/
- force: yes
-
-- name: Generate an Entrust certificate via the Entrust Certificate Services (ECS) API
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- csr_path: /etc/ssl/csr/ansible.com.csr
- provider: entrust
- entrust_requester_name: Jo Doe
- entrust_requester_email: jdoe@ansible.com
- entrust_requester_phone: 555-555-5555
- entrust_cert_type: STANDARD_SSL
- entrust_api_user: apiusername
- entrust_api_key: a^lv*32!cd9LnT
- entrust_api_client_cert_path: /etc/ssl/entrust/ecs-client.crt
- entrust_api_client_cert_key_path: /etc/ssl/entrust/ecs-key.crt
- entrust_api_specification_path: /etc/ssl/entrust/api-docs/cms-api-2.1.0.yaml
-
-# The following example shows a single assertonly task that uses all of its existing options,
-# and shows how to emulate the same behavior with the openssl_certificate_info,
-# openssl_csr_info, openssl_privatekey_info and assert modules:
-
-- openssl_certificate:
- provider: assertonly
- path: /etc/ssl/crt/ansible.com.crt
- csr_path: /etc/ssl/csr/ansible.com.csr
- privatekey_path: /etc/ssl/csr/ansible.com.key
- signature_algorithms:
- - sha256WithRSAEncryption
- - sha512WithRSAEncryption
- subject:
- commonName: ansible.com
- subject_strict: yes
- issuer:
- commonName: ansible.com
- issuer_strict: yes
- has_expired: no
- version: 3
- key_usage:
- - Data Encipherment
- key_usage_strict: yes
- extended_key_usage:
- - DVCS
- extended_key_usage_strict: yes
- subject_alt_name:
- - dns:ansible.com
- subject_alt_name_strict: yes
- not_before: 20190331202428Z
- not_after: 20190413202428Z
- valid_at: "+1d10h"
- invalid_at: 20200331202428Z
- valid_in: 10 # in ten seconds
-
-- openssl_certificate_info:
- path: /etc/ssl/crt/ansible.com.crt
- # for valid_at, invalid_at and valid_in
- valid_at:
- one_day_ten_hours: "+1d10h"
- fixed_timestamp: 20200331202428Z
- ten_seconds: "+10"
- register: result
-
-- openssl_csr_info:
- # Verifies that the CSR signature is valid; module will fail if not
- path: /etc/ssl/csr/ansible.com.csr
- register: result_csr
-
-- openssl_privatekey_info:
- path: /etc/ssl/csr/ansible.com.key
- register: result_privatekey
-
-- assert:
- that:
- # When private key is specified for assertonly, this will be checked:
- - result.public_key == result_privatekey.public_key
- # When CSR is specified for assertonly, this will be checked:
- - result.public_key == result_csr.public_key
- - result.subject_ordered == result_csr.subject_ordered
- - result.extensions_by_oid == result_csr.extensions_by_oid
- # signature_algorithms check
- - "result.signature_algorithm == 'sha256WithRSAEncryption' or result.signature_algorithm == 'sha512WithRSAEncryption'"
- # subject and subject_strict
- - "result.subject.commonName == 'ansible.com'"
- - "result.subject | length == 1" # the number must be the number of entries you check for
- # issuer and issuer_strict
- - "result.issuer.commonName == 'ansible.com'"
- - "result.issuer | length == 1" # the number must be the number of entries you check for
- # has_expired
- - not result.expired
- # version
- - result.version == 3
- # key_usage and key_usage_strict
- - "'Data Encipherment' in result.key_usage"
- - "result.key_usage | length == 1" # the number must be the number of entries you check for
- # extended_key_usage and extended_key_usage_strict
- - "'DVCS' in result.extended_key_usage"
- - "result.extended_key_usage | length == 1" # the number must be the number of entries you check for
- # subject_alt_name and subject_alt_name_strict
- - "'dns:ansible.com' in result.subject_alt_name"
- - "result.subject_alt_name | length == 1" # the number must be the number of entries you check for
- # not_before and not_after
- - "result.not_before == '20190331202428Z'"
- - "result.not_after == '20190413202428Z'"
- # valid_at, invalid_at and valid_in
- - "result.valid_at.one_day_ten_hours" # for valid_at
- - "not result.valid_at.fixed_timestamp" # for invalid_at
- - "result.valid_at.ten_seconds" # for valid_in
-
-# Examples for some checks one could use the assertonly provider for:
-# (Please note that assertonly has been deprecated!)
-
-# How to use the assertonly provider to implement and trigger your own custom certificate generation workflow:
-- name: Check if a certificate is currently still valid, ignoring failures
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- has_expired: no
- ignore_errors: yes
- register: validity_check
-
-- name: Run custom task(s) to get a new, valid certificate in case the initial check failed
- command: superspecialSSL recreate /etc/ssl/crt/example.com.crt
- when: validity_check.failed
-
-- name: Check the new certificate again for validity with the same parameters, this time failing the play if it is still invalid
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- has_expired: no
- when: validity_check.failed
-
-# Some other checks that assertonly could be used for:
-- name: Verify that an existing certificate was issued by the Let's Encrypt CA and is currently still valid
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- issuer:
- O: Let's Encrypt
- has_expired: no
-
-- name: Ensure that a certificate uses a modern signature algorithm (no SHA1, MD5 or DSA)
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- signature_algorithms:
- - sha224WithRSAEncryption
- - sha256WithRSAEncryption
- - sha384WithRSAEncryption
- - sha512WithRSAEncryption
- - sha224WithECDSAEncryption
- - sha256WithECDSAEncryption
- - sha384WithECDSAEncryption
- - sha512WithECDSAEncryption
-
-- name: Ensure that the existing certificate belongs to the specified private key
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- privatekey_path: /etc/ssl/private/example.com.pem
- provider: assertonly
-
-- name: Ensure that the existing certificate is still valid at the winter solstice 2017
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- valid_at: 20171221162800Z
-
-- name: Ensure that the existing certificate is still valid 2 weeks (1209600 seconds) from now
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- valid_in: 1209600
-
-- name: Ensure that the existing certificate is only used for digital signatures and encrypting other keys
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- key_usage:
- - digitalSignature
- - keyEncipherment
- key_usage_strict: true
-
-- name: Ensure that the existing certificate can be used for client authentication
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- extended_key_usage:
- - clientAuth
-
-- name: Ensure that the existing certificate can only be used for client authentication and time stamping
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- extended_key_usage:
- - clientAuth
- - 1.3.6.1.5.5.7.3.8
- extended_key_usage_strict: true
-
-- name: Ensure that the existing certificate has a certain domain in its subjectAltName
- openssl_certificate:
- path: /etc/ssl/crt/example.com.crt
- provider: assertonly
- subject_alt_name:
- - www.example.com
- - test.example.com
-'''
-
-RETURN = r'''
-filename:
- description: Path to the generated certificate.
- returned: changed or success
- type: str
- sample: /etc/ssl/crt/www.ansible.com.crt
-backup_file:
- description: Name of backup file created.
- returned: changed and if I(backup) is C(yes)
- type: str
- sample: /path/to/www.ansible.com.crt.2019-03-09@11:22~
-certificate:
- description: The (current or generated) certificate's content.
- returned: if I(state) is C(present) and I(return_content) is C(yes)
- type: str
- version_added: "2.10"
-'''
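The I(certificate) return value documented above is plain PEM text, so it can be inspected further with the same cryptography library the module can use. A minimal sketch, assuming the task result was registered and its certificate field is passed in as pem_text (the function name expires_at is illustrative, not part of the module):

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    def expires_at(pem_text):
        # Parse the PEM text returned as `certificate` and report its notAfter timestamp.
        cert = x509.load_pem_x509_certificate(pem_text.encode('utf-8'), default_backend())
        return cert.not_valid_after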
-
-
-from random import randint
-import abc
-import datetime
-import time
-import os
-import tempfile
-import traceback
-from distutils.version import LooseVersion
-
-from ansible.module_utils import crypto as crypto_utils
-from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-from ansible.module_utils._text import to_native, to_bytes, to_text
-from ansible.module_utils.compat import ipaddress as compat_ipaddress
-from ansible.module_utils.ecs.api import ECSClient, RestOperationException, SessionConfigurationException
-
-MINIMAL_CRYPTOGRAPHY_VERSION = '1.6'
-MINIMAL_PYOPENSSL_VERSION = '0.15'
-
-PYOPENSSL_IMP_ERR = None
-try:
- import OpenSSL
- from OpenSSL import crypto
- PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
-except ImportError:
- PYOPENSSL_IMP_ERR = traceback.format_exc()
- PYOPENSSL_FOUND = False
-else:
- PYOPENSSL_FOUND = True
-
-CRYPTOGRAPHY_IMP_ERR = None
-try:
- import cryptography
- from cryptography import x509
- from cryptography.hazmat.backends import default_backend
- from cryptography.hazmat.primitives.serialization import Encoding
- from cryptography.x509 import NameAttribute, Name
- from cryptography.x509.oid import NameOID
- CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
-except ImportError:
- CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
- CRYPTOGRAPHY_FOUND = False
-else:
- CRYPTOGRAPHY_FOUND = True
-
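The guarded imports above only record which backend libraries can be loaded and at which version. Per the select_crypto_backend documentation, C(auto) prefers cryptography and falls back to pyOpenSSL; a rough, self-contained sketch of that documented rule follows (illustration only, not the module's actual selection code, which also enforces the MINIMAL_* versions and reports missing libraries through the module result):

    def pick_backend(requested, cryptography_found, pyopenssl_found):
        # Resolve the documented 'auto' choice from availability flags such as
        # CRYPTOGRAPHY_FOUND / PYOPENSSL_FOUND (passed in to keep the sketch standalone).
        if requested != 'auto':
            return requested
        if cryptography_found:
            return 'cryptography'
        if pyopenssl_found:
            return 'pyopenssl'
        raise RuntimeError('Neither the cryptography nor the pyOpenSSL library is available')

    # pick_backend('auto', True, True)   -> 'cryptography'
    # pick_backend('auto', False, True)  -> 'pyopenssl'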
-
-class CertificateError(crypto_utils.OpenSSLObjectError):
- pass
-
-
-class Certificate(crypto_utils.OpenSSLObject):
-
- def __init__(self, module, backend):
- super(Certificate, self).__init__(
- module.params['path'],
- module.params['state'],
- module.params['force'],
- module.check_mode
- )
-
- self.provider = module.params['provider']
- self.privatekey_path = module.params['privatekey_path']
- self.privatekey_content = module.params['privatekey_content']
- if self.privatekey_content is not None:
- self.privatekey_content = self.privatekey_content.encode('utf-8')
- self.privatekey_passphrase = module.params['privatekey_passphrase']
- self.csr_path = module.params['csr_path']
- self.csr_content = module.params['csr_content']
- if self.csr_content is not None:
- self.csr_content = self.csr_content.encode('utf-8')
- self.cert = None
- self.privatekey = None
- self.csr = None
- self.backend = backend
- self.module = module
- self.return_content = module.params['return_content']
-
- # The following are default values which make sure check() works as
- # before if providers do not explicitly change these properties.
- self.create_subject_key_identifier = 'never_create'
- self.create_authority_key_identifier = False
-
- self.backup = module.params['backup']
- self.backup_file = None
-
- def _validate_privatekey(self):
- if self.backend == 'pyopenssl':
- ctx = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_2_METHOD)
- ctx.use_privatekey(self.privatekey)
- ctx.use_certificate(self.cert)
- try:
- ctx.check_privatekey()
- return True
- except OpenSSL.SSL.Error:
- return False
- elif self.backend == 'cryptography':
- return crypto_utils.cryptography_compare_public_keys(self.cert.public_key(), self.privatekey.public_key())
-
- def _validate_csr(self):
- if self.backend == 'pyopenssl':
- # Verify that CSR is signed by certificate's private key
- try:
- self.csr.verify(self.cert.get_pubkey())
- except OpenSSL.crypto.Error:
- return False
- # Check subject
- if self.csr.get_subject() != self.cert.get_subject():
- return False
- # Check extensions
- csr_extensions = self.csr.get_extensions()
- cert_extension_count = self.cert.get_extension_count()
- if len(csr_extensions) != cert_extension_count:
- return False
- for extension_number in range(0, cert_extension_count):
- cert_extension = self.cert.get_extension(extension_number)
- csr_extension = filter(lambda extension: extension.get_short_name() == cert_extension.get_short_name(), csr_extensions)
- if cert_extension.get_data() != list(csr_extension)[0].get_data():
- return False
- return True
- elif self.backend == 'cryptography':
- # Verify that CSR is signed by certificate's private key
- if not self.csr.is_signature_valid:
- return False
- if not crypto_utils.cryptography_compare_public_keys(self.csr.public_key(), self.cert.public_key()):
- return False
- # Check subject
- if self.csr.subject != self.cert.subject:
- return False
- # Check extensions
- cert_exts = list(self.cert.extensions)
- csr_exts = list(self.csr.extensions)
- if self.create_subject_key_identifier != 'never_create':
- # Filter out SubjectKeyIdentifier extension before comparison
- cert_exts = list(filter(lambda x: not isinstance(x.value, x509.SubjectKeyIdentifier), cert_exts))
- csr_exts = list(filter(lambda x: not isinstance(x.value, x509.SubjectKeyIdentifier), csr_exts))
- if self.create_authority_key_identifier:
- # Filter out AuthorityKeyIdentifier extension before comparison
- cert_exts = list(filter(lambda x: not isinstance(x.value, x509.AuthorityKeyIdentifier), cert_exts))
- csr_exts = list(filter(lambda x: not isinstance(x.value, x509.AuthorityKeyIdentifier), csr_exts))
- if len(cert_exts) != len(csr_exts):
- return False
- for cert_ext in cert_exts:
- try:
- csr_ext = self.csr.extensions.get_extension_for_oid(cert_ext.oid)
- if cert_ext != csr_ext:
- return False
- except cryptography.x509.ExtensionNotFound as dummy:
- return False
- return True
-
- def remove(self, module):
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- super(Certificate, self).remove(module)
-
- def check(self, module, perms_required=True):
- """Ensure the resource is in its desired state."""
-
- state_and_perms = super(Certificate, self).check(module, perms_required)
-
- if not state_and_perms:
- return False
-
- try:
- self.cert = crypto_utils.load_certificate(self.path, backend=self.backend)
- except Exception as dummy:
- return False
-
- if self.privatekey_path or self.privatekey_content:
- try:
- self.privatekey = crypto_utils.load_privatekey(
- path=self.privatekey_path,
- content=self.privatekey_content,
- passphrase=self.privatekey_passphrase,
- backend=self.backend
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- raise CertificateError(exc)
- if not self._validate_privatekey():
- return False
-
- if self.csr_path or self.csr_content:
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- backend=self.backend
- )
- if not self._validate_csr():
- return False
-
- # Check SubjectKeyIdentifier
- if self.backend == 'cryptography' and self.create_subject_key_identifier != 'never_create':
- # Get hold of certificate's SKI
- try:
- ext = self.cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
- except cryptography.x509.ExtensionNotFound as dummy:
- return False
- # Get hold of CSR's SKI for 'create_if_not_provided'
- csr_ext = None
- if self.create_subject_key_identifier == 'create_if_not_provided':
- try:
- csr_ext = self.csr.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
- except cryptography.x509.ExtensionNotFound as dummy:
- pass
- if csr_ext is None:
- # If CSR had no SKI, or we chose to ignore it ('always_create'), compare with created SKI
- if ext.value.digest != x509.SubjectKeyIdentifier.from_public_key(self.cert.public_key()).digest:
- return False
- else:
- # If CSR had SKI and we didn't ignore it ('create_if_not_provided'), compare SKIs
- if ext.value.digest != csr_ext.value.digest:
- return False
-
- return True
-
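Each provider class below follows the same idempotence pattern: check() compares the file on disk (and, when given, the private key and CSR) against the request, and generate() only writes a new certificate when that check fails or force is set. A compressed sketch of the driving flow, with the surrounding module plumbing simplified and assumed:

    def ensure_present(provider, module, force=False):
        # `provider` stands for any concrete Certificate subclass defined below.
        if provider.check(module, perms_required=False) and not force:
            return provider.dump()      # already in the desired state; nothing to write
        provider.generate(module)       # (re)create the certificate and write it to disk
        return provider.dump()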
-
-class CertificateAbsent(Certificate):
- def __init__(self, module):
- super(CertificateAbsent, self).__init__(module, 'cryptography') # backend doesn't matter
-
- def generate(self, module):
- pass
-
- def dump(self, check_mode=False):
- # Use only for absent
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- result['certificate'] = None
-
- return result
-
-
-class SelfSignedCertificateCryptography(Certificate):
- """Generate the self-signed certificate, using the cryptography backend"""
- def __init__(self, module):
- super(SelfSignedCertificateCryptography, self).__init__(module, 'cryptography')
- self.create_subject_key_identifier = module.params['selfsigned_create_subject_key_identifier']
- self.notBefore = crypto_utils.get_relative_time_option(module.params['selfsigned_not_before'], 'selfsigned_not_before', backend=self.backend)
- self.notAfter = crypto_utils.get_relative_time_option(module.params['selfsigned_not_after'], 'selfsigned_not_after', backend=self.backend)
- self.digest = crypto_utils.select_message_digest(module.params['selfsigned_digest'])
- self.version = module.params['selfsigned_version']
- self.serial_number = x509.random_serial_number()
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file {0} does not exist'.format(self.csr_path)
- )
- if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
- raise CertificateError(
- 'The private key file {0} does not exist'.format(self.privatekey_path)
- )
-
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- backend=self.backend
- )
- self._module = module
-
- try:
- self.privatekey = crypto_utils.load_privatekey(
- path=self.privatekey_path,
- content=self.privatekey_content,
- passphrase=self.privatekey_passphrase,
- backend=self.backend
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- module.fail_json(msg=to_native(exc))
-
- if crypto_utils.cryptography_key_needs_digest_for_signing(self.privatekey):
- if self.digest is None:
- raise CertificateError(
- 'The digest %s is not supported with the cryptography backend' % module.params['selfsigned_digest']
- )
- else:
- self.digest = None
-
- def generate(self, module):
- if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
- raise CertificateError(
- 'The private key %s does not exist' % self.privatekey_path
- )
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file %s does not exist' % self.csr_path
- )
- if not self.check(module, perms_required=False) or self.force:
- try:
- cert_builder = x509.CertificateBuilder()
- cert_builder = cert_builder.subject_name(self.csr.subject)
- cert_builder = cert_builder.issuer_name(self.csr.subject)
- cert_builder = cert_builder.serial_number(self.serial_number)
- cert_builder = cert_builder.not_valid_before(self.notBefore)
- cert_builder = cert_builder.not_valid_after(self.notAfter)
- cert_builder = cert_builder.public_key(self.privatekey.public_key())
- has_ski = False
- for extension in self.csr.extensions:
- if isinstance(extension.value, x509.SubjectKeyIdentifier):
- if self.create_subject_key_identifier == 'always_create':
- continue
- has_ski = True
- cert_builder = cert_builder.add_extension(extension.value, critical=extension.critical)
- if not has_ski and self.create_subject_key_identifier != 'never_create':
- cert_builder = cert_builder.add_extension(
- x509.SubjectKeyIdentifier.from_public_key(self.privatekey.public_key()),
- critical=False
- )
- except ValueError as e:
- raise CertificateError(str(e))
-
- try:
- certificate = cert_builder.sign(
- private_key=self.privatekey, algorithm=self.digest,
- backend=default_backend()
- )
- except TypeError as e:
- if str(e) == 'Algorithm must be a registered hash algorithm.' and self.digest is None:
- module.fail_json(msg='Signing with Ed25519 and Ed448 keys requires cryptography 2.8 or newer.')
- raise
-
- self.cert = certificate
-
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, certificate.public_bytes(Encoding.PEM))
- self.changed = True
- else:
- self.cert = crypto_utils.load_certificate(self.path, backend=self.backend)
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- if check_mode:
- result.update({
- 'notBefore': self.notBefore.strftime("%Y%m%d%H%M%SZ"),
- 'notAfter': self.notAfter.strftime("%Y%m%d%H%M%SZ"),
- 'serial_number': self.serial_number,
- })
- else:
- result.update({
- 'notBefore': self.cert.not_valid_before.strftime("%Y%m%d%H%M%SZ"),
- 'notAfter': self.cert.not_valid_after.strftime("%Y%m%d%H%M%SZ"),
- 'serial_number': self.cert.serial_number,
- })
-
- return result
-
-
-class SelfSignedCertificate(Certificate):
- """Generate the self-signed certificate."""
-
- def __init__(self, module):
- super(SelfSignedCertificate, self).__init__(module, 'pyopenssl')
- if module.params['selfsigned_create_subject_key_identifier'] != 'create_if_not_provided':
- module.fail_json(msg='selfsigned_create_subject_key_identifier cannot be used with the pyOpenSSL backend!')
- self.notBefore = crypto_utils.get_relative_time_option(module.params['selfsigned_not_before'], 'selfsigned_not_before', backend=self.backend)
- self.notAfter = crypto_utils.get_relative_time_option(module.params['selfsigned_not_after'], 'selfsigned_not_after', backend=self.backend)
- self.digest = module.params['selfsigned_digest']
- self.version = module.params['selfsigned_version']
- self.serial_number = randint(1000, 99999)
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file {0} does not exist'.format(self.csr_path)
- )
- if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
- raise CertificateError(
- 'The private key file {0} does not exist'.format(self.privatekey_path)
- )
-
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- )
- try:
- self.privatekey = crypto_utils.load_privatekey(
- path=self.privatekey_path,
- content=self.privatekey_content,
- passphrase=self.privatekey_passphrase,
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- module.fail_json(msg=str(exc))
-
- def generate(self, module):
-
- if self.privatekey_content is None and not os.path.exists(self.privatekey_path):
- raise CertificateError(
- 'The private key %s does not exist' % self.privatekey_path
- )
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file %s does not exist' % self.csr_path
- )
-
- if not self.check(module, perms_required=False) or self.force:
- cert = crypto.X509()
- cert.set_serial_number(self.serial_number)
- cert.set_notBefore(to_bytes(self.notBefore))
- cert.set_notAfter(to_bytes(self.notAfter))
- cert.set_subject(self.csr.get_subject())
- cert.set_issuer(self.csr.get_subject())
- cert.set_version(self.version - 1)
- cert.set_pubkey(self.csr.get_pubkey())
- cert.add_extensions(self.csr.get_extensions())
-
- cert.sign(self.privatekey, self.digest)
- self.cert = cert
-
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, crypto.dump_certificate(crypto.FILETYPE_PEM, self.cert))
- self.changed = True
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- if check_mode:
- result.update({
- 'notBefore': self.notBefore,
- 'notAfter': self.notAfter,
- 'serial_number': self.serial_number,
- })
- else:
- result.update({
- 'notBefore': self.cert.get_notBefore(),
- 'notAfter': self.cert.get_notAfter(),
- 'serial_number': self.cert.get_serial_number(),
- })
-
- return result
-
-
-class OwnCACertificateCryptography(Certificate):
-    """Generate the own CA certificate, using the cryptography backend"""
- def __init__(self, module):
- super(OwnCACertificateCryptography, self).__init__(module, 'cryptography')
- self.create_subject_key_identifier = module.params['ownca_create_subject_key_identifier']
- self.create_authority_key_identifier = module.params['ownca_create_authority_key_identifier']
- self.notBefore = crypto_utils.get_relative_time_option(module.params['ownca_not_before'], 'ownca_not_before', backend=self.backend)
- self.notAfter = crypto_utils.get_relative_time_option(module.params['ownca_not_after'], 'ownca_not_after', backend=self.backend)
- self.digest = crypto_utils.select_message_digest(module.params['ownca_digest'])
- self.version = module.params['ownca_version']
- self.serial_number = x509.random_serial_number()
- self.ca_cert_path = module.params['ownca_path']
- self.ca_cert_content = module.params['ownca_content']
- if self.ca_cert_content is not None:
- self.ca_cert_content = self.ca_cert_content.encode('utf-8')
- self.ca_privatekey_path = module.params['ownca_privatekey_path']
- self.ca_privatekey_content = module.params['ownca_privatekey_content']
- if self.ca_privatekey_content is not None:
- self.ca_privatekey_content = self.ca_privatekey_content.encode('utf-8')
- self.ca_privatekey_passphrase = module.params['ownca_privatekey_passphrase']
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file {0} does not exist'.format(self.csr_path)
- )
- if self.ca_cert_content is None and not os.path.exists(self.ca_cert_path):
- raise CertificateError(
- 'The CA certificate file {0} does not exist'.format(self.ca_cert_path)
- )
- if self.ca_privatekey_content is None and not os.path.exists(self.ca_privatekey_path):
- raise CertificateError(
- 'The CA private key file {0} does not exist'.format(self.ca_privatekey_path)
- )
-
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- backend=self.backend
- )
- self.ca_cert = crypto_utils.load_certificate(
- path=self.ca_cert_path,
- content=self.ca_cert_content,
- backend=self.backend
- )
- try:
- self.ca_private_key = crypto_utils.load_privatekey(
- path=self.ca_privatekey_path,
- content=self.ca_privatekey_content,
- passphrase=self.ca_privatekey_passphrase,
- backend=self.backend
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- module.fail_json(msg=str(exc))
-
- if crypto_utils.cryptography_key_needs_digest_for_signing(self.ca_private_key):
- if self.digest is None:
- raise CertificateError(
- 'The digest %s is not supported with the cryptography backend' % module.params['ownca_digest']
- )
- else:
- self.digest = None
-
- def generate(self, module):
-
- if self.ca_cert_content is None and not os.path.exists(self.ca_cert_path):
- raise CertificateError(
- 'The CA certificate %s does not exist' % self.ca_cert_path
- )
-
- if self.ca_privatekey_content is None and not os.path.exists(self.ca_privatekey_path):
- raise CertificateError(
- 'The CA private key %s does not exist' % self.ca_privatekey_path
- )
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file %s does not exist' % self.csr_path
- )
-
- if not self.check(module, perms_required=False) or self.force:
- cert_builder = x509.CertificateBuilder()
- cert_builder = cert_builder.subject_name(self.csr.subject)
- cert_builder = cert_builder.issuer_name(self.ca_cert.subject)
- cert_builder = cert_builder.serial_number(self.serial_number)
- cert_builder = cert_builder.not_valid_before(self.notBefore)
- cert_builder = cert_builder.not_valid_after(self.notAfter)
- cert_builder = cert_builder.public_key(self.csr.public_key())
- has_ski = False
- for extension in self.csr.extensions:
- if isinstance(extension.value, x509.SubjectKeyIdentifier):
- if self.create_subject_key_identifier == 'always_create':
- continue
- has_ski = True
- if self.create_authority_key_identifier and isinstance(extension.value, x509.AuthorityKeyIdentifier):
- continue
- cert_builder = cert_builder.add_extension(extension.value, critical=extension.critical)
- if not has_ski and self.create_subject_key_identifier != 'never_create':
- cert_builder = cert_builder.add_extension(
- x509.SubjectKeyIdentifier.from_public_key(self.csr.public_key()),
- critical=False
- )
- if self.create_authority_key_identifier:
- try:
- ext = self.ca_cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
- cert_builder = cert_builder.add_extension(
- x509.AuthorityKeyIdentifier.from_issuer_subject_key_identifier(ext.value)
- if CRYPTOGRAPHY_VERSION >= LooseVersion('2.7') else
- x509.AuthorityKeyIdentifier.from_issuer_subject_key_identifier(ext),
- critical=False
- )
- except cryptography.x509.ExtensionNotFound:
- cert_builder = cert_builder.add_extension(
- x509.AuthorityKeyIdentifier.from_issuer_public_key(self.ca_cert.public_key()),
- critical=False
- )
-
- try:
- certificate = cert_builder.sign(
- private_key=self.ca_private_key, algorithm=self.digest,
- backend=default_backend()
- )
- except TypeError as e:
- if str(e) == 'Algorithm must be a registered hash algorithm.' and self.digest is None:
- module.fail_json(msg='Signing with Ed25519 and Ed448 keys requires cryptography 2.8 or newer.')
- raise
-
- self.cert = certificate
-
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, certificate.public_bytes(Encoding.PEM))
- self.changed = True
- else:
- self.cert = crypto_utils.load_certificate(self.path, backend=self.backend)
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def check(self, module, perms_required=True):
- """Ensure the resource is in its desired state."""
-
- if not super(OwnCACertificateCryptography, self).check(module, perms_required):
- return False
-
- # Check AuthorityKeyIdentifier
- if self.create_authority_key_identifier:
- try:
- ext = self.ca_cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
- expected_ext = (
- x509.AuthorityKeyIdentifier.from_issuer_subject_key_identifier(ext.value)
- if CRYPTOGRAPHY_VERSION >= LooseVersion('2.7') else
- x509.AuthorityKeyIdentifier.from_issuer_subject_key_identifier(ext)
- )
- except cryptography.x509.ExtensionNotFound:
- expected_ext = x509.AuthorityKeyIdentifier.from_issuer_public_key(self.ca_cert.public_key())
- try:
- ext = self.cert.extensions.get_extension_for_class(x509.AuthorityKeyIdentifier)
- if ext.value != expected_ext:
- return False
- except cryptography.x509.ExtensionNotFound as dummy:
- return False
-
- return True
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path,
- 'ca_cert': self.ca_cert_path,
- 'ca_privatekey': self.ca_privatekey_path
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- if check_mode:
- result.update({
- 'notBefore': self.notBefore.strftime("%Y%m%d%H%M%SZ"),
- 'notAfter': self.notAfter.strftime("%Y%m%d%H%M%SZ"),
- 'serial_number': self.serial_number,
- })
- else:
- result.update({
- 'notBefore': self.cert.not_valid_before.strftime("%Y%m%d%H%M%SZ"),
- 'notAfter': self.cert.not_valid_after.strftime("%Y%m%d%H%M%SZ"),
- 'serial_number': self.cert.serial_number,
- })
-
- return result
-
-
-class OwnCACertificate(Certificate):
- """Generate the own CA certificate."""
-
- def __init__(self, module):
- super(OwnCACertificate, self).__init__(module, 'pyopenssl')
- self.notBefore = crypto_utils.get_relative_time_option(module.params['ownca_not_before'], 'ownca_not_before', backend=self.backend)
- self.notAfter = crypto_utils.get_relative_time_option(module.params['ownca_not_after'], 'ownca_not_after', backend=self.backend)
- self.digest = module.params['ownca_digest']
- self.version = module.params['ownca_version']
- self.serial_number = randint(1000, 99999)
- if module.params['ownca_create_subject_key_identifier'] != 'create_if_not_provided':
- module.fail_json(msg='ownca_create_subject_key_identifier cannot be used with the pyOpenSSL backend!')
- if module.params['ownca_create_authority_key_identifier']:
- module.warn('ownca_create_authority_key_identifier is ignored by the pyOpenSSL backend!')
- self.ca_cert_path = module.params['ownca_path']
- self.ca_cert_content = module.params['ownca_content']
- if self.ca_cert_content is not None:
- self.ca_cert_content = self.ca_cert_content.encode('utf-8')
- self.ca_privatekey_path = module.params['ownca_privatekey_path']
- self.ca_privatekey_content = module.params['ownca_privatekey_content']
- if self.ca_privatekey_content is not None:
- self.ca_privatekey_content = self.ca_privatekey_content.encode('utf-8')
- self.ca_privatekey_passphrase = module.params['ownca_privatekey_passphrase']
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file {0} does not exist'.format(self.csr_path)
- )
- if self.ca_cert_content is None and not os.path.exists(self.ca_cert_path):
- raise CertificateError(
- 'The CA certificate file {0} does not exist'.format(self.ca_cert_path)
- )
- if self.ca_privatekey_content is None and not os.path.exists(self.ca_privatekey_path):
- raise CertificateError(
- 'The CA private key file {0} does not exist'.format(self.ca_privatekey_path)
- )
-
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- )
- self.ca_cert = crypto_utils.load_certificate(
- path=self.ca_cert_path,
- content=self.ca_cert_content,
- )
- try:
- self.ca_privatekey = crypto_utils.load_privatekey(
- path=self.ca_privatekey_path,
- content=self.ca_privatekey_content,
- passphrase=self.ca_privatekey_passphrase
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- module.fail_json(msg=str(exc))
-
- def generate(self, module):
-
- if self.ca_cert_content is None and not os.path.exists(self.ca_cert_path):
- raise CertificateError(
- 'The CA certificate %s does not exist' % self.ca_cert_path
- )
-
- if self.ca_privatekey_content is None and not os.path.exists(self.ca_privatekey_path):
- raise CertificateError(
- 'The CA private key %s does not exist' % self.ca_privatekey_path
- )
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file %s does not exist' % self.csr_path
- )
-
- if not self.check(module, perms_required=False) or self.force:
- cert = crypto.X509()
- cert.set_serial_number(self.serial_number)
- cert.set_notBefore(to_bytes(self.notBefore))
- cert.set_notAfter(to_bytes(self.notAfter))
- cert.set_subject(self.csr.get_subject())
- cert.set_issuer(self.ca_cert.get_subject())
- cert.set_version(self.version - 1)
- cert.set_pubkey(self.csr.get_pubkey())
- cert.add_extensions(self.csr.get_extensions())
-
- cert.sign(self.ca_privatekey, self.digest)
- self.cert = cert
-
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, crypto.dump_certificate(crypto.FILETYPE_PEM, self.cert))
- self.changed = True
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path,
- 'ca_cert': self.ca_cert_path,
- 'ca_privatekey': self.ca_privatekey_path
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- if check_mode:
- result.update({
- 'notBefore': self.notBefore,
- 'notAfter': self.notAfter,
- 'serial_number': self.serial_number,
- })
- else:
- result.update({
- 'notBefore': self.cert.get_notBefore(),
- 'notAfter': self.cert.get_notAfter(),
- 'serial_number': self.cert.get_serial_number(),
- })
-
- return result
-
-
-def compare_sets(subset, superset, equality=False):
- if equality:
- return set(subset) == set(superset)
- else:
- return all(x in superset for x in subset)
-
-
-def compare_dicts(subset, superset, equality=False):
- if equality:
- return subset == superset
- else:
- return all(superset.get(x) == v for x, v in subset.items())
-
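compare_sets() and compare_dicts() encode the distinction behind the various *_strict assertonly options: without equality only the presence of the expected values is required, with equality the sets or dicts must match exactly. Illustrative calls (values are arbitrary):

    # compare_sets(['clientAuth'], ['clientAuth', 'serverAuth'])                 -> True  (subset is enough)
    # compare_sets(['clientAuth'], ['clientAuth', 'serverAuth'], equality=True)  -> False (strict match)
    # compare_dicts({'a': 1}, {'a': 1, 'b': 2})                                  -> True
    # compare_dicts({'a': 1}, {'a': 1, 'b': 2}, equality=True)                   -> False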
-
-NO_EXTENSION = 'no extension'
-
-
-class AssertOnlyCertificateBase(Certificate):
-
- def __init__(self, module, backend):
- super(AssertOnlyCertificateBase, self).__init__(module, backend)
-
- self.signature_algorithms = module.params['signature_algorithms']
- if module.params['subject']:
- self.subject = crypto_utils.parse_name_field(module.params['subject'])
- else:
- self.subject = []
- self.subject_strict = module.params['subject_strict']
- if module.params['issuer']:
- self.issuer = crypto_utils.parse_name_field(module.params['issuer'])
- else:
- self.issuer = []
- self.issuer_strict = module.params['issuer_strict']
- self.has_expired = module.params['has_expired']
- self.version = module.params['version']
- self.key_usage = module.params['key_usage']
- self.key_usage_strict = module.params['key_usage_strict']
- self.extended_key_usage = module.params['extended_key_usage']
- self.extended_key_usage_strict = module.params['extended_key_usage_strict']
- self.subject_alt_name = module.params['subject_alt_name']
- self.subject_alt_name_strict = module.params['subject_alt_name_strict']
- self.not_before = module.params['not_before']
- self.not_after = module.params['not_after']
- self.valid_at = module.params['valid_at']
- self.invalid_at = module.params['invalid_at']
- self.valid_in = module.params['valid_in']
- if self.valid_in and not self.valid_in.startswith("+") and not self.valid_in.startswith("-"):
- try:
- int(self.valid_in)
- except ValueError:
- module.fail_json(msg='The supplied value for "valid_in" (%s) is not an integer or a valid timespec' % self.valid_in)
- self.valid_in = "+" + self.valid_in + "s"
-
- # Load objects
- self.cert = crypto_utils.load_certificate(self.path, backend=self.backend)
- if self.privatekey_path is not None or self.privatekey_content is not None:
- try:
- self.privatekey = crypto_utils.load_privatekey(
- path=self.privatekey_path,
- content=self.privatekey_content,
- passphrase=self.privatekey_passphrase,
- backend=self.backend
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- raise CertificateError(exc)
- if self.csr_path is not None or self.csr_content is not None:
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- backend=self.backend
- )
-
- @abc.abstractmethod
- def _validate_privatekey(self):
- pass
-
- @abc.abstractmethod
- def _validate_csr_signature(self):
- pass
-
- @abc.abstractmethod
- def _validate_csr_subject(self):
- pass
-
- @abc.abstractmethod
- def _validate_csr_extensions(self):
- pass
-
- @abc.abstractmethod
- def _validate_signature_algorithms(self):
- pass
-
- @abc.abstractmethod
- def _validate_subject(self):
- pass
-
- @abc.abstractmethod
- def _validate_issuer(self):
- pass
-
- @abc.abstractmethod
- def _validate_has_expired(self):
- pass
-
- @abc.abstractmethod
- def _validate_version(self):
- pass
-
- @abc.abstractmethod
- def _validate_key_usage(self):
- pass
-
- @abc.abstractmethod
- def _validate_extended_key_usage(self):
- pass
-
- @abc.abstractmethod
- def _validate_subject_alt_name(self):
- pass
-
- @abc.abstractmethod
- def _validate_not_before(self):
- pass
-
- @abc.abstractmethod
- def _validate_not_after(self):
- pass
-
- @abc.abstractmethod
- def _validate_valid_at(self):
- pass
-
- @abc.abstractmethod
- def _validate_invalid_at(self):
- pass
-
- @abc.abstractmethod
- def _validate_valid_in(self):
- pass
-
- def assertonly(self, module):
- messages = []
- if self.privatekey_path is not None or self.privatekey_content is not None:
- if not self._validate_privatekey():
- messages.append(
- 'Certificate %s and private key %s do not match' %
- (self.path, self.privatekey_path or '(provided in module options)')
- )
-
- if self.csr_path is not None or self.csr_content is not None:
- if not self._validate_csr_signature():
- messages.append(
- 'Certificate %s and CSR %s do not match: private key mismatch' %
- (self.path, self.csr_path or '(provided in module options)')
- )
- if not self._validate_csr_subject():
- messages.append(
- 'Certificate %s and CSR %s do not match: subject mismatch' %
- (self.path, self.csr_path or '(provided in module options)')
- )
- if not self._validate_csr_extensions():
- messages.append(
- 'Certificate %s and CSR %s do not match: extensions mismatch' %
- (self.path, self.csr_path or '(provided in module options)')
- )
-
- if self.signature_algorithms is not None:
- wrong_alg = self._validate_signature_algorithms()
- if wrong_alg:
- messages.append(
- 'Invalid signature algorithm (got %s, expected one of %s)' %
- (wrong_alg, self.signature_algorithms)
- )
-
- if self.subject is not None:
- failure = self._validate_subject()
- if failure:
- dummy, cert_subject = failure
- messages.append(
- 'Invalid subject component (got %s, expected all of %s to be present)' %
- (cert_subject, self.subject)
- )
-
- if self.issuer is not None:
- failure = self._validate_issuer()
- if failure:
- dummy, cert_issuer = failure
- messages.append(
- 'Invalid issuer component (got %s, expected all of %s to be present)' % (cert_issuer, self.issuer)
- )
-
- if self.has_expired is not None:
- cert_expired = self._validate_has_expired()
- if cert_expired != self.has_expired:
- messages.append(
- 'Certificate expiration check failed (certificate expiration is %s, expected %s)' %
- (cert_expired, self.has_expired)
- )
-
- if self.version is not None:
- cert_version = self._validate_version()
- if cert_version != self.version:
- messages.append(
- 'Invalid certificate version number (got %s, expected %s)' %
- (cert_version, self.version)
- )
-
- if self.key_usage is not None:
- failure = self._validate_key_usage()
- if failure == NO_EXTENSION:
- messages.append('Found no keyUsage extension')
- elif failure:
- dummy, cert_key_usage = failure
- messages.append(
- 'Invalid keyUsage components (got %s, expected all of %s to be present)' %
- (cert_key_usage, self.key_usage)
- )
-
- if self.extended_key_usage is not None:
- failure = self._validate_extended_key_usage()
- if failure == NO_EXTENSION:
- messages.append('Found no extendedKeyUsage extension')
- elif failure:
- dummy, ext_cert_key_usage = failure
- messages.append(
- 'Invalid extendedKeyUsage component (got %s, expected all of %s to be present)' % (ext_cert_key_usage, self.extended_key_usage)
- )
-
- if self.subject_alt_name is not None:
- failure = self._validate_subject_alt_name()
- if failure == NO_EXTENSION:
- messages.append('Found no subjectAltName extension')
- elif failure:
- dummy, cert_san = failure
- messages.append(
- 'Invalid subjectAltName component (got %s, expected all of %s to be present)' %
- (cert_san, self.subject_alt_name)
- )
-
- if self.not_before is not None:
- cert_not_valid_before = self._validate_not_before()
- if cert_not_valid_before != crypto_utils.get_relative_time_option(self.not_before, 'not_before', backend=self.backend):
- messages.append(
- 'Invalid not_before component (got %s, expected %s to be present)' %
- (cert_not_valid_before, self.not_before)
- )
-
- if self.not_after is not None:
- cert_not_valid_after = self._validate_not_after()
- if cert_not_valid_after != crypto_utils.get_relative_time_option(self.not_after, 'not_after', backend=self.backend):
- messages.append(
- 'Invalid not_after component (got %s, expected %s to be present)' %
- (cert_not_valid_after, self.not_after)
- )
-
- if self.valid_at is not None:
- not_before, valid_at, not_after = self._validate_valid_at()
- if not (not_before <= valid_at <= not_after):
- messages.append(
- 'Certificate is not valid for the specified date (%s) - not_before: %s - not_after: %s' %
- (self.valid_at, not_before, not_after)
- )
-
- if self.invalid_at is not None:
- not_before, invalid_at, not_after = self._validate_invalid_at()
- if not_before <= invalid_at <= not_after:
- messages.append(
- 'Certificate is not invalid for the specified date (%s) - not_before: %s - not_after: %s' %
- (self.invalid_at, not_before, not_after)
- )
-
- if self.valid_in is not None:
- not_before, valid_in, not_after = self._validate_valid_in()
- if not not_before <= valid_in <= not_after:
- messages.append(
- 'Certificate is not valid in %s from now (that would be %s) - not_before: %s - not_after: %s' %
- (self.valid_in, valid_in, not_before, not_after)
- )
- return messages
-
- def generate(self, module):
- """Don't generate anything - only assert"""
- messages = self.assertonly(module)
- if messages:
- module.fail_json(msg=' | '.join(messages))
-
- def check(self, module, perms_required=False):
- """Ensure the resource is in its desired state."""
- messages = self.assertonly(module)
- return len(messages) == 0
-
- def dump(self, check_mode=False):
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path,
- }
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
- return result
-
-
-class AssertOnlyCertificateCryptography(AssertOnlyCertificateBase):
- """Validate the supplied cert, using the cryptography backend"""
- def __init__(self, module):
- super(AssertOnlyCertificateCryptography, self).__init__(module, 'cryptography')
-
- def _validate_privatekey(self):
- return crypto_utils.cryptography_compare_public_keys(self.cert.public_key(), self.privatekey.public_key())
-
- def _validate_csr_signature(self):
- if not self.csr.is_signature_valid:
- return False
- return crypto_utils.cryptography_compare_public_keys(self.csr.public_key(), self.cert.public_key())
-
- def _validate_csr_subject(self):
- return self.csr.subject == self.cert.subject
-
- def _validate_csr_extensions(self):
- cert_exts = self.cert.extensions
- csr_exts = self.csr.extensions
- if len(cert_exts) != len(csr_exts):
- return False
- for cert_ext in cert_exts:
- try:
- csr_ext = csr_exts.get_extension_for_oid(cert_ext.oid)
- if cert_ext != csr_ext:
- return False
- except cryptography.x509.ExtensionNotFound as dummy:
- return False
- return True
-
- def _validate_signature_algorithms(self):
- if self.cert.signature_algorithm_oid._name not in self.signature_algorithms:
- return self.cert.signature_algorithm_oid._name
-
- def _validate_subject(self):
- expected_subject = Name([NameAttribute(oid=crypto_utils.cryptography_name_to_oid(sub[0]), value=to_text(sub[1]))
- for sub in self.subject])
- cert_subject = self.cert.subject
- if not compare_sets(expected_subject, cert_subject, self.subject_strict):
- return expected_subject, cert_subject
-
- def _validate_issuer(self):
- expected_issuer = Name([NameAttribute(oid=crypto_utils.cryptography_name_to_oid(iss[0]), value=to_text(iss[1]))
- for iss in self.issuer])
- cert_issuer = self.cert.issuer
- if not compare_sets(expected_issuer, cert_issuer, self.issuer_strict):
- return self.issuer, cert_issuer
-
- def _validate_has_expired(self):
- cert_not_after = self.cert.not_valid_after
- cert_expired = cert_not_after < datetime.datetime.utcnow()
- return cert_expired
-
- def _validate_version(self):
- if self.cert.version == x509.Version.v1:
- return 1
- if self.cert.version == x509.Version.v3:
- return 3
- return "unknown"
-
- def _validate_key_usage(self):
- try:
- current_key_usage = self.cert.extensions.get_extension_for_class(x509.KeyUsage).value
- test_key_usage = dict(
- digital_signature=current_key_usage.digital_signature,
- content_commitment=current_key_usage.content_commitment,
- key_encipherment=current_key_usage.key_encipherment,
- data_encipherment=current_key_usage.data_encipherment,
- key_agreement=current_key_usage.key_agreement,
- key_cert_sign=current_key_usage.key_cert_sign,
- crl_sign=current_key_usage.crl_sign,
- encipher_only=False,
- decipher_only=False
- )
- if test_key_usage['key_agreement']:
- test_key_usage.update(dict(
- encipher_only=current_key_usage.encipher_only,
- decipher_only=current_key_usage.decipher_only
- ))
-
- key_usages = crypto_utils.cryptography_parse_key_usage_params(self.key_usage)
- if not compare_dicts(key_usages, test_key_usage, self.key_usage_strict):
- return self.key_usage, [k for k, v in test_key_usage.items() if v is True]
-
- except cryptography.x509.ExtensionNotFound:
- # This is only bad if the user specified a non-empty list
- if self.key_usage:
- return NO_EXTENSION
-
- def _validate_extended_key_usage(self):
- try:
- current_ext_keyusage = self.cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
- usages = [crypto_utils.cryptography_name_to_oid(usage) for usage in self.extended_key_usage]
- expected_ext_keyusage = x509.ExtendedKeyUsage(usages)
- if not compare_sets(expected_ext_keyusage, current_ext_keyusage, self.extended_key_usage_strict):
- return [eku.value for eku in expected_ext_keyusage], [eku.value for eku in current_ext_keyusage]
-
- except cryptography.x509.ExtensionNotFound:
- # This is only bad if the user specified a non-empty list
- if self.extended_key_usage:
- return NO_EXTENSION
-
- def _validate_subject_alt_name(self):
- try:
- current_san = self.cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
- expected_san = [crypto_utils.cryptography_get_name(san) for san in self.subject_alt_name]
- if not compare_sets(expected_san, current_san, self.subject_alt_name_strict):
- return self.subject_alt_name, current_san
- except cryptography.x509.ExtensionNotFound:
- # This is only bad if the user specified a non-empty list
- if self.subject_alt_name:
- return NO_EXTENSION
-
- def _validate_not_before(self):
- return self.cert.not_valid_before
-
- def _validate_not_after(self):
- return self.cert.not_valid_after
-
- def _validate_valid_at(self):
- rt = crypto_utils.get_relative_time_option(self.valid_at, 'valid_at', backend=self.backend)
- return self.cert.not_valid_before, rt, self.cert.not_valid_after
-
- def _validate_invalid_at(self):
- rt = crypto_utils.get_relative_time_option(self.invalid_at, 'invalid_at', backend=self.backend)
- return self.cert.not_valid_before, rt, self.cert.not_valid_after
-
- def _validate_valid_in(self):
- valid_in_date = crypto_utils.get_relative_time_option(self.valid_in, "valid_in", backend=self.backend)
- return self.cert.not_valid_before, valid_in_date, self.cert.not_valid_after
-
-
-class AssertOnlyCertificate(AssertOnlyCertificateBase):
- """validate the supplied certificate."""
-
- def __init__(self, module):
- super(AssertOnlyCertificate, self).__init__(module, 'pyopenssl')
-
- # Ensure inputs are properly sanitized before comparison.
- for param in ['signature_algorithms', 'key_usage', 'extended_key_usage',
- 'subject_alt_name', 'subject', 'issuer', 'not_before',
- 'not_after', 'valid_at', 'invalid_at']:
- attr = getattr(self, param)
- if isinstance(attr, list) and attr:
- if isinstance(attr[0], str):
- setattr(self, param, [to_bytes(item) for item in attr])
- elif isinstance(attr[0], tuple):
- setattr(self, param, [(to_bytes(item[0]), to_bytes(item[1])) for item in attr])
- elif isinstance(attr, tuple):
- setattr(self, param, dict((to_bytes(k), to_bytes(v)) for (k, v) in attr.items()))
- elif isinstance(attr, dict):
- setattr(self, param, dict((to_bytes(k), to_bytes(v)) for (k, v) in attr.items()))
- elif isinstance(attr, str):
- setattr(self, param, to_bytes(attr))
-
- def _validate_privatekey(self):
- ctx = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_2_METHOD)
- ctx.use_privatekey(self.privatekey)
- ctx.use_certificate(self.cert)
- try:
- ctx.check_privatekey()
- return True
- except OpenSSL.SSL.Error:
- return False
-
- def _validate_csr_signature(self):
- try:
- self.csr.verify(self.cert.get_pubkey())
- except OpenSSL.crypto.Error:
- return False
-
- def _validate_csr_subject(self):
- if self.csr.get_subject() != self.cert.get_subject():
- return False
-
- def _validate_csr_extensions(self):
- csr_extensions = self.csr.get_extensions()
- cert_extension_count = self.cert.get_extension_count()
- if len(csr_extensions) != cert_extension_count:
- return False
- for extension_number in range(0, cert_extension_count):
- cert_extension = self.cert.get_extension(extension_number)
- csr_extension = filter(lambda extension: extension.get_short_name() == cert_extension.get_short_name(), csr_extensions)
- if cert_extension.get_data() != list(csr_extension)[0].get_data():
- return False
- return True
-
- def _validate_signature_algorithms(self):
- if self.cert.get_signature_algorithm() not in self.signature_algorithms:
- return self.cert.get_signature_algorithm()
-
- def _validate_subject(self):
- expected_subject = [(OpenSSL._util.lib.OBJ_txt2nid(sub[0]), sub[1]) for sub in self.subject]
- cert_subject = self.cert.get_subject().get_components()
- current_subject = [(OpenSSL._util.lib.OBJ_txt2nid(sub[0]), sub[1]) for sub in cert_subject]
- if not compare_sets(expected_subject, current_subject, self.subject_strict):
- return expected_subject, current_subject
-
- def _validate_issuer(self):
- expected_issuer = [(OpenSSL._util.lib.OBJ_txt2nid(iss[0]), iss[1]) for iss in self.issuer]
- cert_issuer = self.cert.get_issuer().get_components()
- current_issuer = [(OpenSSL._util.lib.OBJ_txt2nid(iss[0]), iss[1]) for iss in cert_issuer]
- if not compare_sets(expected_issuer, current_issuer, self.issuer_strict):
- return self.issuer, cert_issuer
-
- def _validate_has_expired(self):
- # The following 3 lines are the same as the current PyOpenSSL code for cert.has_expired().
-        # Older versions of PyOpenSSL have a buggy implementation;
-        # to avoid issues with those, we added the code from a more recent release here.
-
- time_string = to_native(self.cert.get_notAfter())
- not_after = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
- cert_expired = not_after < datetime.datetime.utcnow()
- return cert_expired
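
The expiry check above parses pyOpenSSL's ASN.1 TIME string (for example b'20190413202428Z', always UTC) with strptime and compares it against the current UTC time. A minimal standalone sketch of the same comparison; the function name and the timestamp are illustrative, not part of the module:

    import datetime

    def has_expired(not_after_asn1):
        # not_after_asn1 is an ASN.1 TIME string such as '20190413202428Z' (UTC).
        not_after = datetime.datetime.strptime(not_after_asn1, "%Y%m%d%H%M%SZ")
        return not_after < datetime.datetime.utcnow()

    print(has_expired('20190413202428Z'))  # True once that moment (UTC) has passed
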
-
- def _validate_version(self):
- # Version numbers in certs are off by one:
- # v1: 0, v2: 1, v3: 2 ...
- return self.cert.get_version() + 1
-
- def _validate_key_usage(self):
- found = False
- for extension_idx in range(0, self.cert.get_extension_count()):
- extension = self.cert.get_extension(extension_idx)
- if extension.get_short_name() == b'keyUsage':
- found = True
- expected_extension = crypto.X509Extension(b"keyUsage", False, b', '.join(self.key_usage))
- key_usage = [usage.strip() for usage in to_text(expected_extension, errors='surrogate_or_strict').split(',')]
- current_ku = [usage.strip() for usage in to_text(extension, errors='surrogate_or_strict').split(',')]
- if not compare_sets(key_usage, current_ku, self.key_usage_strict):
- return self.key_usage, str(extension).split(', ')
- if not found:
- # This is only bad if the user specified a non-empty list
- if self.key_usage:
- return NO_EXTENSION
-
- def _validate_extended_key_usage(self):
- found = False
- for extension_idx in range(0, self.cert.get_extension_count()):
- extension = self.cert.get_extension(extension_idx)
- if extension.get_short_name() == b'extendedKeyUsage':
- found = True
- extKeyUsage = [OpenSSL._util.lib.OBJ_txt2nid(keyUsage) for keyUsage in self.extended_key_usage]
- current_xku = [OpenSSL._util.lib.OBJ_txt2nid(usage.strip()) for usage in
- to_bytes(extension, errors='surrogate_or_strict').split(b',')]
- if not compare_sets(extKeyUsage, current_xku, self.extended_key_usage_strict):
- return self.extended_key_usage, str(extension).split(', ')
- if not found:
- # This is only bad if the user specified a non-empty list
- if self.extended_key_usage:
- return NO_EXTENSION
-
- def _normalize_san(self, san):
-        # OpenSSL returns 'IP Address', not 'IP', as the specifier when converting the subjectAltName to a string,
-        # although it won't accept this specifier when generating the CSR. (https://github.com/openssl/openssl/issues/4004)
- if san.startswith('IP Address:'):
- san = 'IP:' + san[len('IP Address:'):]
- if san.startswith('IP:'):
- ip = compat_ipaddress.ip_address(san[3:])
- san = 'IP:{0}'.format(ip.compressed)
- return san
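
The normalization above exists so that string comparison against user-supplied 'IP:' entries works and so that differently written IPv6 addresses compare equal. A minimal standalone sketch of the same idea, using the stdlib ipaddress module directly instead of the module_utils compat wrapper:

    import ipaddress

    def normalize_san(san):
        # OpenSSL renders subjectAltName IP entries as 'IP Address:...',
        # while user input (and CSR generation) uses 'IP:...'.
        if san.startswith('IP Address:'):
            san = 'IP:' + san[len('IP Address:'):]
        if san.startswith('IP:'):
            # Compress the address so '2001:0db8::0001' and '2001:db8::1' compare equal.
            san = 'IP:' + ipaddress.ip_address(san[3:]).compressed
        return san

    print(normalize_san('IP Address:2001:0db8::0001'))  # -> IP:2001:db8::1
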
-
- def _validate_subject_alt_name(self):
- found = False
- for extension_idx in range(0, self.cert.get_extension_count()):
- extension = self.cert.get_extension(extension_idx)
- if extension.get_short_name() == b'subjectAltName':
- found = True
- l_altnames = [self._normalize_san(altname.strip()) for altname in
- to_text(extension, errors='surrogate_or_strict').split(', ')]
- sans = [self._normalize_san(to_text(san, errors='surrogate_or_strict')) for san in self.subject_alt_name]
- if not compare_sets(sans, l_altnames, self.subject_alt_name_strict):
- return self.subject_alt_name, l_altnames
- if not found:
- # This is only bad if the user specified a non-empty list
- if self.subject_alt_name:
- return NO_EXTENSION
-
- def _validate_not_before(self):
- return self.cert.get_notBefore()
-
- def _validate_not_after(self):
- return self.cert.get_notAfter()
-
- def _validate_valid_at(self):
- rt = crypto_utils.get_relative_time_option(self.valid_at, "valid_at", backend=self.backend)
- rt = to_bytes(rt, errors='surrogate_or_strict')
- return self.cert.get_notBefore(), rt, self.cert.get_notAfter()
-
- def _validate_invalid_at(self):
- rt = crypto_utils.get_relative_time_option(self.invalid_at, "invalid_at", backend=self.backend)
- rt = to_bytes(rt, errors='surrogate_or_strict')
- return self.cert.get_notBefore(), rt, self.cert.get_notAfter()
-
- def _validate_valid_in(self):
- valid_in_asn1 = crypto_utils.get_relative_time_option(self.valid_in, "valid_in", backend=self.backend)
- valid_in_date = to_bytes(valid_in_asn1, errors='surrogate_or_strict')
- return self.cert.get_notBefore(), valid_in_date, self.cert.get_notAfter()
-
-
-class EntrustCertificate(Certificate):
- """Retrieve a certificate using Entrust (ECS)."""
-
- def __init__(self, module, backend):
- super(EntrustCertificate, self).__init__(module, backend)
- self.trackingId = None
- self.notAfter = crypto_utils.get_relative_time_option(module.params['entrust_not_after'], 'entrust_not_after', backend=self.backend)
-
- if self.csr_content is None or not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file {0} does not exist'.format(self.csr_path)
- )
-
- self.csr = crypto_utils.load_certificate_request(
- path=self.csr_path,
- content=self.csr_content,
- backend=self.backend,
- )
-
-        # The ECS API defaults to using the validated organization tied to the account.
-        # We always want to use the organization provided in the CSR instead,
-        # so we need to parse the organization out of the CSR.
- self.csr_org = None
- if self.backend == 'pyopenssl':
- csr_subject = self.csr.get_subject()
- csr_subject_components = csr_subject.get_components()
- for k, v in csr_subject_components:
- if k.upper() == 'O':
- # Entrust does not support multiple validated organizations in a single certificate
- if self.csr_org is not None:
- module.fail_json(msg=("Entrust provider does not currently support multiple validated organizations. Multiple organizations found in "
- "Subject DN: '{0}'. ".format(csr_subject)))
- else:
- self.csr_org = v
- elif self.backend == 'cryptography':
- csr_subject_orgs = self.csr.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
- if len(csr_subject_orgs) == 1:
- self.csr_org = csr_subject_orgs[0].value
- elif len(csr_subject_orgs) > 1:
- module.fail_json(msg=("Entrust provider does not currently support multiple validated organizations. Multiple organizations found in "
- "Subject DN: '{0}'. ".format(self.csr.subject)))
- # If no organization in the CSR, explicitly tell ECS that it should be blank in issued cert, not defaulted to
- # organization tied to the account.
- if self.csr_org is None:
- self.csr_org = ''
-
- try:
- self.ecs_client = ECSClient(
- entrust_api_user=module.params.get('entrust_api_user'),
- entrust_api_key=module.params.get('entrust_api_key'),
- entrust_api_cert=module.params.get('entrust_api_client_cert_path'),
- entrust_api_cert_key=module.params.get('entrust_api_client_cert_key_path'),
- entrust_api_specification_path=module.params.get('entrust_api_specification_path')
- )
- except SessionConfigurationException as e:
- module.fail_json(msg='Failed to initialize Entrust Provider: {0}'.format(to_native(e.message)))
-
- def generate(self, module):
-
- if not self.check(module, perms_required=False) or self.force:
- # Read the CSR that was generated for us
- body = {}
- if self.csr_content is not None:
- body['csr'] = self.csr_content
- else:
- with open(self.csr_path, 'r') as csr_file:
- body['csr'] = csr_file.read()
-
- body['certType'] = module.params['entrust_cert_type']
-
-            # Handle expiration (365 days from now if not specified)
- expiry = self.notAfter
- if not expiry:
- gmt_now = datetime.datetime.fromtimestamp(time.mktime(time.gmtime()))
- expiry = gmt_now + datetime.timedelta(days=365)
-
- expiry_iso3339 = expiry.strftime("%Y-%m-%dT%H:%M:%S.00Z")
- body['certExpiryDate'] = expiry_iso3339
- body['org'] = self.csr_org
- body['tracking'] = {
- 'requesterName': module.params['entrust_requester_name'],
- 'requesterEmail': module.params['entrust_requester_email'],
- 'requesterPhone': module.params['entrust_requester_phone'],
- }
-
- try:
- result = self.ecs_client.NewCertRequest(Body=body)
- self.trackingId = result.get('trackingId')
- except RestOperationException as e:
- module.fail_json(msg='Failed to request new certificate from Entrust Certificate Services (ECS): {0}'.format(to_native(e.message)))
-
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, to_bytes(result.get('endEntityCert')))
- self.cert = crypto_utils.load_certificate(self.path, backend=self.backend)
- self.changed = True
-
- def check(self, module, perms_required=True):
- """Ensure the resource is in its desired state."""
-
- parent_check = super(EntrustCertificate, self).check(module, perms_required)
-
- try:
- cert_details = self._get_cert_details()
- except RestOperationException as e:
- module.fail_json(msg='Failed to get status of existing certificate from Entrust Certificate Services (ECS): {0}.'.format(to_native(e.message)))
-
- # Always issue a new certificate if the certificate is expired, suspended or revoked
- status = cert_details.get('status', False)
- if status == 'EXPIRED' or status == 'SUSPENDED' or status == 'REVOKED':
- return False
-
- # If the requested cert type was specified and it is for a different certificate type than the initial certificate, a new one is needed
- if module.params['entrust_cert_type'] and cert_details.get('certType') and module.params['entrust_cert_type'] != cert_details.get('certType'):
- return False
-
- return parent_check
-
- def _get_cert_details(self):
- cert_details = {}
- if self.cert:
- serial_number = None
- expiry = None
- if self.backend == 'pyopenssl':
- serial_number = "{0:X}".format(self.cert.get_serial_number())
- time_string = to_native(self.cert.get_notAfter())
- expiry = datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
- elif self.backend == 'cryptography':
- serial_number = "{0:X}".format(self.cert.serial_number)
- expiry = self.cert.not_valid_after
-
- # get some information about the expiry of this certificate
- expiry_iso3339 = expiry.strftime("%Y-%m-%dT%H:%M:%S.00Z")
- cert_details['expiresAfter'] = expiry_iso3339
-
-            # If a trackingId is not already defined (from the result of a generate),
-            # use the serial number to identify the trackingId.
- if self.trackingId is None and serial_number is not None:
- cert_results = self.ecs_client.GetCertificates(serialNumber=serial_number).get('certificates', {})
-
-                # Finding 0 or more than 1 result is a very unlikely use case; it simply means we cannot perform additional checks
- # on the 'state' as returned by Entrust Certificate Services (ECS). The general certificate validity is
- # still checked as it is in the rest of the module.
- if len(cert_results) == 1:
- self.trackingId = cert_results[0].get('trackingId')
-
- if self.trackingId is not None:
- cert_details.update(self.ecs_client.GetCertificate(trackingId=self.trackingId))
-
- return cert_details
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'csr': self.csr_path,
- }
-
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- result.update(self._get_cert_details())
-
- return result
-
-
-class AcmeCertificate(Certificate):
- """Retrieve a certificate using the ACME protocol."""
-
- # Since there's no real use of the backend,
- # other than the 'self.check' function, we just pass the backend to the constructor
-
- def __init__(self, module, backend):
- super(AcmeCertificate, self).__init__(module, backend)
- self.accountkey_path = module.params['acme_accountkey_path']
- self.challenge_path = module.params['acme_challenge_path']
- self.use_chain = module.params['acme_chain']
- self.acme_directory = module.params['acme_directory']
-
- def generate(self, module):
-
- if self.csr_content is None and not os.path.exists(self.csr_path):
- raise CertificateError(
- 'The certificate signing request file %s does not exist' % self.csr_path
- )
-
- if not os.path.exists(self.accountkey_path):
- raise CertificateError(
- 'The account key %s does not exist' % self.accountkey_path
- )
-
- if not os.path.exists(self.challenge_path):
- raise CertificateError(
- 'The challenge path %s does not exist' % self.challenge_path
- )
-
- if not self.check(module, perms_required=False) or self.force:
- acme_tiny_path = self.module.get_bin_path('acme-tiny', required=True)
- command = [acme_tiny_path]
- if self.use_chain:
- command.append('--chain')
- command.extend(['--account-key', self.accountkey_path])
- if self.csr_content is not None:
- # We need to temporarily write the CSR to disk
- fd, tmpsrc = tempfile.mkstemp()
- module.add_cleanup_file(tmpsrc) # Ansible will delete the file on exit
- f = os.fdopen(fd, 'wb')
- try:
- f.write(self.csr_content)
- except Exception as err:
- try:
- f.close()
- except Exception as dummy:
- pass
- module.fail_json(
- msg="failed to create temporary CSR file: %s" % to_native(err),
- exception=traceback.format_exc()
- )
- f.close()
- command.extend(['--csr', tmpsrc])
- else:
- command.extend(['--csr', self.csr_path])
- command.extend(['--acme-dir', self.challenge_path])
- command.extend(['--directory-url', self.acme_directory])
-
- try:
- crt = module.run_command(command, check_rc=True)[1]
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- crypto_utils.write_file(module, to_bytes(crt))
- self.changed = True
- except OSError as exc:
- raise CertificateError(exc)
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def dump(self, check_mode=False):
-
- result = {
- 'changed': self.changed,
- 'filename': self.path,
- 'privatekey': self.privatekey_path,
- 'accountkey': self.accountkey_path,
- 'csr': self.csr_path,
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- content = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['certificate'] = content.decode('utf-8') if content else None
-
- return result
-
-
-def main():
- module = AnsibleModule(
- argument_spec=dict(
- state=dict(type='str', default='present', choices=['present', 'absent']),
- path=dict(type='path', required=True),
- provider=dict(type='str', choices=['acme', 'assertonly', 'entrust', 'ownca', 'selfsigned']),
- force=dict(type='bool', default=False,),
- csr_path=dict(type='path'),
- csr_content=dict(type='str'),
- backup=dict(type='bool', default=False),
- select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'cryptography', 'pyopenssl']),
- return_content=dict(type='bool', default=False),
-
- # General properties of a certificate
- privatekey_path=dict(type='path'),
- privatekey_content=dict(type='str'),
- privatekey_passphrase=dict(type='str', no_log=True),
-
- # provider: assertonly
- signature_algorithms=dict(type='list', elements='str', removed_in_version='2.13'),
- subject=dict(type='dict', removed_in_version='2.13'),
- subject_strict=dict(type='bool', default=False, removed_in_version='2.13'),
- issuer=dict(type='dict', removed_in_version='2.13'),
- issuer_strict=dict(type='bool', default=False, removed_in_version='2.13'),
- has_expired=dict(type='bool', default=False, removed_in_version='2.13'),
- version=dict(type='int', removed_in_version='2.13'),
- key_usage=dict(type='list', elements='str', aliases=['keyUsage'], removed_in_version='2.13'),
- key_usage_strict=dict(type='bool', default=False, aliases=['keyUsage_strict'], removed_in_version='2.13'),
- extended_key_usage=dict(type='list', elements='str', aliases=['extendedKeyUsage'], removed_in_version='2.13'),
- extended_key_usage_strict=dict(type='bool', default=False, aliases=['extendedKeyUsage_strict'], removed_in_version='2.13'),
- subject_alt_name=dict(type='list', elements='str', aliases=['subjectAltName'], removed_in_version='2.13'),
- subject_alt_name_strict=dict(type='bool', default=False, aliases=['subjectAltName_strict'], removed_in_version='2.13'),
- not_before=dict(type='str', aliases=['notBefore'], removed_in_version='2.13'),
- not_after=dict(type='str', aliases=['notAfter'], removed_in_version='2.13'),
- valid_at=dict(type='str', removed_in_version='2.13'),
- invalid_at=dict(type='str', removed_in_version='2.13'),
- valid_in=dict(type='str', removed_in_version='2.13'),
-
- # provider: selfsigned
- selfsigned_version=dict(type='int', default=3),
- selfsigned_digest=dict(type='str', default='sha256'),
- selfsigned_not_before=dict(type='str', default='+0s', aliases=['selfsigned_notBefore']),
- selfsigned_not_after=dict(type='str', default='+3650d', aliases=['selfsigned_notAfter']),
- selfsigned_create_subject_key_identifier=dict(
- type='str',
- default='create_if_not_provided',
- choices=['create_if_not_provided', 'always_create', 'never_create']
- ),
-
- # provider: ownca
- ownca_path=dict(type='path'),
- ownca_content=dict(type='str'),
- ownca_privatekey_path=dict(type='path'),
- ownca_privatekey_content=dict(type='str'),
- ownca_privatekey_passphrase=dict(type='str', no_log=True),
- ownca_digest=dict(type='str', default='sha256'),
- ownca_version=dict(type='int', default=3),
- ownca_not_before=dict(type='str', default='+0s'),
- ownca_not_after=dict(type='str', default='+3650d'),
- ownca_create_subject_key_identifier=dict(
- type='str',
- default='create_if_not_provided',
- choices=['create_if_not_provided', 'always_create', 'never_create']
- ),
- ownca_create_authority_key_identifier=dict(type='bool', default=True),
-
- # provider: acme
- acme_accountkey_path=dict(type='path'),
- acme_challenge_path=dict(type='path'),
- acme_chain=dict(type='bool', default=False),
- acme_directory=dict(type='str', default="https://acme-v02.api.letsencrypt.org/directory"),
-
- # provider: entrust
- entrust_cert_type=dict(type='str', default='STANDARD_SSL',
- choices=['STANDARD_SSL', 'ADVANTAGE_SSL', 'UC_SSL', 'EV_SSL', 'WILDCARD_SSL',
- 'PRIVATE_SSL', 'PD_SSL', 'CDS_ENT_LITE', 'CDS_ENT_PRO', 'SMIME_ENT']),
- entrust_requester_email=dict(type='str'),
- entrust_requester_name=dict(type='str'),
- entrust_requester_phone=dict(type='str'),
- entrust_api_user=dict(type='str'),
- entrust_api_key=dict(type='str', no_log=True),
- entrust_api_client_cert_path=dict(type='path'),
- entrust_api_client_cert_key_path=dict(type='path', no_log=True),
- entrust_api_specification_path=dict(type='path', default='https://cloud.entrust.net/EntrustCloud/documentation/cms-api-2.1.0.yaml'),
- entrust_not_after=dict(type='str', default='+365d'),
- ),
- supports_check_mode=True,
- add_file_common_args=True,
- required_if=[
- ['state', 'present', ['provider']],
- ['provider', 'entrust', ['entrust_requester_email', 'entrust_requester_name', 'entrust_requester_phone',
- 'entrust_api_user', 'entrust_api_key', 'entrust_api_client_cert_path',
- 'entrust_api_client_cert_key_path']],
- ],
- mutually_exclusive=[
- ['csr_path', 'csr_content'],
- ['privatekey_path', 'privatekey_content'],
- ['ownca_path', 'ownca_content'],
- ['ownca_privatekey_path', 'ownca_privatekey_content'],
- ],
- )
-
- try:
- if module.params['state'] == 'absent':
- certificate = CertificateAbsent(module)
-
- else:
- if module.params['provider'] != 'assertonly' and module.params['csr_path'] is None and module.params['csr_content'] is None:
- module.fail_json(msg='csr_path or csr_content is required when provider is not assertonly')
-
- base_dir = os.path.dirname(module.params['path']) or '.'
- if not os.path.isdir(base_dir):
- module.fail_json(
- name=base_dir,
-                    msg='The directory %s does not exist or is not a directory' % base_dir
- )
-
- provider = module.params['provider']
- if provider == 'assertonly':
- module.deprecate("The 'assertonly' provider is deprecated; please see the examples of "
- "the 'openssl_certificate' module on how to replace it with other modules",
- version='2.13', collection_name='ansible.builtin')
- elif provider == 'selfsigned':
- if module.params['privatekey_path'] is None and module.params['privatekey_content'] is None:
- module.fail_json(msg='One of privatekey_path and privatekey_content must be specified for the selfsigned provider.')
- elif provider == 'acme':
- if module.params['acme_accountkey_path'] is None:
- module.fail_json(msg='The acme_accountkey_path option must be specified for the acme provider.')
- if module.params['acme_challenge_path'] is None:
- module.fail_json(msg='The acme_challenge_path option must be specified for the acme provider.')
- elif provider == 'ownca':
- if module.params['ownca_path'] is None and module.params['ownca_content'] is None:
- module.fail_json(msg='One of ownca_path and ownca_content must be specified for the ownca provider.')
- if module.params['ownca_privatekey_path'] is None and module.params['ownca_privatekey_content'] is None:
- module.fail_json(msg='One of ownca_privatekey_path and ownca_privatekey_content must be specified for the ownca provider.')
-
- backend = module.params['select_crypto_backend']
- if backend == 'auto':
- # Detect what backend we can use
- can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
- can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
-
- # If cryptography is available we'll use it
- if can_use_cryptography:
- backend = 'cryptography'
- elif can_use_pyopenssl:
- backend = 'pyopenssl'
-
- if module.params['selfsigned_version'] == 2 or module.params['ownca_version'] == 2:
- module.warn('crypto backend forced to pyopenssl. The cryptography library does not support v2 certificates')
- backend = 'pyopenssl'
-
- # Fail if no backend has been found
- if backend == 'auto':
- module.fail_json(msg=("Can't detect any of the required Python libraries "
- "cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
- MINIMAL_CRYPTOGRAPHY_VERSION,
- MINIMAL_PYOPENSSL_VERSION))
-
- if backend == 'pyopenssl':
- if not PYOPENSSL_FOUND:
- module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
- exception=PYOPENSSL_IMP_ERR)
- if module.params['provider'] in ['selfsigned', 'ownca', 'assertonly']:
- try:
- getattr(crypto.X509Req, 'get_extensions')
- except AttributeError:
- module.fail_json(msg='You need to have PyOpenSSL>=0.15')
-
- module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated',
- version='2.13', collection_name='ansible.builtin')
- if provider == 'selfsigned':
- certificate = SelfSignedCertificate(module)
- elif provider == 'acme':
- certificate = AcmeCertificate(module, 'pyopenssl')
- elif provider == 'ownca':
- certificate = OwnCACertificate(module)
- elif provider == 'entrust':
- certificate = EntrustCertificate(module, 'pyopenssl')
- else:
- certificate = AssertOnlyCertificate(module)
- elif backend == 'cryptography':
- if not CRYPTOGRAPHY_FOUND:
- module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
- exception=CRYPTOGRAPHY_IMP_ERR)
- if module.params['selfsigned_version'] == 2 or module.params['ownca_version'] == 2:
- module.fail_json(msg='The cryptography backend does not support v2 certificates, '
- 'use select_crypto_backend=pyopenssl for v2 certificates')
- if provider == 'selfsigned':
- certificate = SelfSignedCertificateCryptography(module)
- elif provider == 'acme':
- certificate = AcmeCertificate(module, 'cryptography')
- elif provider == 'ownca':
- certificate = OwnCACertificateCryptography(module)
- elif provider == 'entrust':
- certificate = EntrustCertificate(module, 'cryptography')
- else:
- certificate = AssertOnlyCertificateCryptography(module)
-
- if module.params['state'] == 'present':
- if module.check_mode:
- result = certificate.dump(check_mode=True)
- result['changed'] = module.params['force'] or not certificate.check(module)
- module.exit_json(**result)
-
- certificate.generate(module)
- else:
- if module.check_mode:
- result = certificate.dump(check_mode=True)
- result['changed'] = os.path.exists(module.params['path'])
- module.exit_json(**result)
-
- certificate.remove(module)
-
- result = certificate.dump()
- module.exit_json(**result)
- except crypto_utils.OpenSSLObjectError as exc:
- module.fail_json(msg=to_native(exc))
-
-
-if __name__ == "__main__":
- main()
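
The backend selection in main() above prefers cryptography when it meets the minimum version, falls back to pyOpenSSL otherwise, and forces pyOpenSSL when a v2 certificate is requested (the cryptography backend cannot create those). A minimal sketch of that selection logic; the function name and its arguments are illustrative stand-ins, not part of the module:

    from distutils.version import LooseVersion

    MINIMAL_CRYPTOGRAPHY_VERSION = '1.6'
    MINIMAL_PYOPENSSL_VERSION = '0.15'

    def pick_backend(requested, cryptography_version=None, pyopenssl_version=None, want_v2_cert=False):
        if requested != 'auto':
            return requested
        can_use_cryptography = (cryptography_version is not None and
                                LooseVersion(cryptography_version) >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION))
        can_use_pyopenssl = (pyopenssl_version is not None and
                             LooseVersion(pyopenssl_version) >= LooseVersion(MINIMAL_PYOPENSSL_VERSION))
        backend = 'cryptography' if can_use_cryptography else ('pyopenssl' if can_use_pyopenssl else None)
        if want_v2_cert:
            # The cryptography backend does not support v2 certificates; force pyOpenSSL.
            backend = 'pyopenssl'
        return backend

    print(pick_backend('auto', cryptography_version='2.9', pyopenssl_version='19.1.0'))  # -> cryptography

Here a None result stands in for the module's failure path when neither library is usable.
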
diff --git a/test/support/integration/plugins/modules/openssl_certificate_info.py b/test/support/integration/plugins/modules/openssl_certificate_info.py
deleted file mode 100644
index 27e65153ea..0000000000
--- a/test/support/integration/plugins/modules/openssl_certificate_info.py
+++ /dev/null
@@ -1,864 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2016-2017, Yanis Guenane <yanis+ansible@guenane.org>
-# Copyright: (c) 2017, Markus Teufelberger <mteufelberger+ansible@mgit.at>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: openssl_certificate_info
-version_added: '2.8'
-short_description: Provide information of OpenSSL X.509 certificates
-description:
- - This module allows one to query information on OpenSSL certificates.
- - It uses the pyOpenSSL or cryptography python library to interact with OpenSSL. If both the
- cryptography and PyOpenSSL libraries are available (and meet the minimum version requirements)
- cryptography will be preferred as a backend over PyOpenSSL (unless the backend is forced with
- C(select_crypto_backend)). Please note that the PyOpenSSL backend was deprecated in Ansible 2.9
- and will be removed in Ansible 2.13.
-requirements:
- - PyOpenSSL >= 0.15 or cryptography >= 1.6
-author:
- - Felix Fontein (@felixfontein)
- - Yanis Guenane (@Spredzy)
- - Markus Teufelberger (@MarkusTeufelberger)
-options:
- path:
- description:
- - Remote absolute path where the certificate file is loaded from.
- - Either I(path) or I(content) must be specified, but not both.
- type: path
- content:
- description:
- - Content of the X.509 certificate in PEM format.
- - Either I(path) or I(content) must be specified, but not both.
- type: str
- version_added: "2.10"
- valid_at:
- description:
-      - A dict of names mapping to time specifications. For every time specified here,
-        it will be checked whether the certificate is valid at that point. See the
-        C(valid_at) return value for information on the result.
- - Time can be specified either as relative time or as absolute timestamp.
- - Time will always be interpreted as UTC.
- - Valid format is C([+-]timespec | ASN.1 TIME) where timespec can be an integer
-        + C([w | d | h | m | s]) (e.g. C(+32w1d2h)), and ASN.1 TIME (i.e. pattern C(YYYYMMDDHHMMSSZ)).
- Note that all timestamps will be treated as being in UTC.
- type: dict
- select_crypto_backend:
- description:
- - Determines which crypto backend to use.
- - The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- - If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- - If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- - Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
- From that point on, only the C(cryptography) backend will be available.
- type: str
- default: auto
- choices: [ auto, cryptography, pyopenssl ]
-
-notes:
- - All timestamp values are provided in ASN.1 TIME format, i.e. following the C(YYYYMMDDHHMMSSZ) pattern.
- They are all in UTC.
-seealso:
-- module: openssl_certificate
-'''
-
-EXAMPLES = r'''
-- name: Generate a Self Signed OpenSSL certificate
- openssl_certificate:
- path: /etc/ssl/crt/ansible.com.crt
- privatekey_path: /etc/ssl/private/ansible.com.pem
- csr_path: /etc/ssl/csr/ansible.com.csr
- provider: selfsigned
-
-
-# Get information on the certificate
-
-- name: Get information on generated certificate
- openssl_certificate_info:
- path: /etc/ssl/crt/ansible.com.crt
- register: result
-
-- name: Dump information
- debug:
- var: result
-
-
-# Check whether the certificate is valid or not valid at certain times, fail
-# if this is not the case. The first task (openssl_certificate_info) collects
-# the information, and the second task (assert) validates the result and
-# makes the playbook fail in case something is not as expected.
-
-- name: Test whether that certificate is valid tomorrow and/or in three weeks
- openssl_certificate_info:
- path: /etc/ssl/crt/ansible.com.crt
- valid_at:
- point_1: "+1d"
- point_2: "+3w"
- register: result
-
-- name: Validate that certificate is valid tomorrow, but not in three weeks
- assert:
- that:
- - result.valid_at.point_1 # valid in one day
- - not result.valid_at.point_2 # not valid in three weeks
-'''
-
-RETURN = r'''
-expired:
- description: Whether the certificate is expired (i.e. C(notAfter) is in the past)
- returned: success
- type: bool
-basic_constraints:
- description: Entries in the C(basic_constraints) extension, or C(none) if extension is not present.
- returned: success
- type: list
- elements: str
- sample: "[CA:TRUE, pathlen:1]"
-basic_constraints_critical:
- description: Whether the C(basic_constraints) extension is critical.
- returned: success
- type: bool
-extended_key_usage:
- description: Entries in the C(extended_key_usage) extension, or C(none) if extension is not present.
- returned: success
- type: list
- elements: str
- sample: "[Biometric Info, DVCS, Time Stamping]"
-extended_key_usage_critical:
- description: Whether the C(extended_key_usage) extension is critical.
- returned: success
- type: bool
-extensions_by_oid:
- description: Returns a dictionary for every extension OID
- returned: success
- type: dict
- contains:
- critical:
- description: Whether the extension is critical.
- returned: success
- type: bool
- value:
- description: The Base64 encoded value (in DER format) of the extension
- returned: success
- type: str
- sample: "MAMCAQU="
- sample: '{"1.3.6.1.5.5.7.1.24": { "critical": false, "value": "MAMCAQU="}}'
-key_usage:
- description: Entries in the C(key_usage) extension, or C(none) if extension is not present.
- returned: success
- type: str
- sample: "[Key Agreement, Data Encipherment]"
-key_usage_critical:
- description: Whether the C(key_usage) extension is critical.
- returned: success
- type: bool
-subject_alt_name:
- description: Entries in the C(subject_alt_name) extension, or C(none) if extension is not present.
- returned: success
- type: list
- elements: str
- sample: "[DNS:www.ansible.com, IP:1.2.3.4]"
-subject_alt_name_critical:
- description: Whether the C(subject_alt_name) extension is critical.
- returned: success
- type: bool
-ocsp_must_staple:
- description: C(yes) if the OCSP Must Staple extension is present, C(none) otherwise.
- returned: success
- type: bool
-ocsp_must_staple_critical:
- description: Whether the C(ocsp_must_staple) extension is critical.
- returned: success
- type: bool
-issuer:
- description:
- - The certificate's issuer.
- - Note that for repeated values, only the last one will be returned.
- returned: success
- type: dict
- sample: '{"organizationName": "Ansible", "commonName": "ca.example.com"}'
-issuer_ordered:
- description: The certificate's issuer as an ordered list of tuples.
- returned: success
- type: list
- elements: list
-  sample: '[["organizationName", "Ansible"], ["commonName", "ca.example.com"]]'
- version_added: "2.9"
-subject:
- description:
- - The certificate's subject as a dictionary.
- - Note that for repeated values, only the last one will be returned.
- returned: success
- type: dict
- sample: '{"commonName": "www.example.com", "emailAddress": "test@example.com"}'
-subject_ordered:
- description: The certificate's subject as an ordered list of tuples.
- returned: success
- type: list
- elements: list
-  sample: '[["commonName", "www.example.com"], ["emailAddress", "test@example.com"]]'
- version_added: "2.9"
-not_after:
- description: C(notAfter) date as ASN.1 TIME
- returned: success
- type: str
- sample: 20190413202428Z
-not_before:
- description: C(notBefore) date as ASN.1 TIME
- returned: success
- type: str
- sample: 20190331202428Z
-public_key:
- description: Certificate's public key in PEM format
- returned: success
- type: str
- sample: "-----BEGIN PUBLIC KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A..."
-public_key_fingerprints:
- description:
- - Fingerprints of certificate's public key.
- - For every hash algorithm available, the fingerprint is computed.
- returned: success
- type: dict
- sample: "{'sha256': 'd4:b3:aa:6d:c8:04:ce:4e:ba:f6:29:4d:92:a3:94:b0:c2:ff:bd:bf:33:63:11:43:34:0f:51:b0:95:09:2f:63',
- 'sha512': 'f7:07:4a:f0:b0:f0:e6:8b:95:5f:f9:e6:61:0a:32:68:f1..."
-signature_algorithm:
- description: The signature algorithm used to sign the certificate.
- returned: success
- type: str
- sample: sha256WithRSAEncryption
-serial_number:
- description: The certificate's serial number.
- returned: success
- type: int
- sample: 1234
-version:
- description: The certificate version.
- returned: success
- type: int
- sample: 3
-valid_at:
-  description: For every time stamp provided in the I(valid_at) option, a
-    boolean indicating whether the certificate is valid at that point in
-    time or not.
- returned: success
- type: dict
-subject_key_identifier:
- description:
- - The certificate's subject key identifier.
- - The identifier is returned in hexadecimal, with C(:) used to separate bytes.
- - Is C(none) if the C(SubjectKeyIdentifier) extension is not present.
- returned: success and if the pyOpenSSL backend is I(not) used
- type: str
- sample: '00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33'
- version_added: "2.9"
-authority_key_identifier:
- description:
- - The certificate's authority key identifier.
- - The identifier is returned in hexadecimal, with C(:) used to separate bytes.
- - Is C(none) if the C(AuthorityKeyIdentifier) extension is not present.
- returned: success and if the pyOpenSSL backend is I(not) used
- type: str
- sample: '00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33'
- version_added: "2.9"
-authority_cert_issuer:
- description:
- - The certificate's authority cert issuer as a list of general names.
- - Is C(none) if the C(AuthorityKeyIdentifier) extension is not present.
- returned: success and if the pyOpenSSL backend is I(not) used
- type: list
- elements: str
- sample: "[DNS:www.ansible.com, IP:1.2.3.4]"
- version_added: "2.9"
-authority_cert_serial_number:
- description:
- - The certificate's authority cert serial number.
- - Is C(none) if the C(AuthorityKeyIdentifier) extension is not present.
- returned: success and if the pyOpenSSL backend is I(not) used
- type: int
- sample: '12345'
- version_added: "2.9"
-ocsp_uri:
- description: The OCSP responder URI, if included in the certificate. Will be
- C(none) if no OCSP responder URI is included.
- returned: success
- type: str
- version_added: "2.9"
-'''
-
-
-import abc
-import binascii
-import datetime
-import os
-import re
-import traceback
-from distutils.version import LooseVersion
-
-from ansible.module_utils import crypto as crypto_utils
-from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-from ansible.module_utils.six import string_types
-from ansible.module_utils._text import to_native, to_text, to_bytes
-from ansible.module_utils.compat import ipaddress as compat_ipaddress
-
-MINIMAL_CRYPTOGRAPHY_VERSION = '1.6'
-MINIMAL_PYOPENSSL_VERSION = '0.15'
-
-PYOPENSSL_IMP_ERR = None
-try:
- import OpenSSL
- from OpenSSL import crypto
- PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
- if OpenSSL.SSL.OPENSSL_VERSION_NUMBER >= 0x10100000:
- # OpenSSL 1.1.0 or newer
- OPENSSL_MUST_STAPLE_NAME = b"tlsfeature"
- OPENSSL_MUST_STAPLE_VALUE = b"status_request"
- else:
- # OpenSSL 1.0.x or older
- OPENSSL_MUST_STAPLE_NAME = b"1.3.6.1.5.5.7.1.24"
- OPENSSL_MUST_STAPLE_VALUE = b"DER:30:03:02:01:05"
-except ImportError:
- PYOPENSSL_IMP_ERR = traceback.format_exc()
- PYOPENSSL_FOUND = False
-else:
- PYOPENSSL_FOUND = True
-
-CRYPTOGRAPHY_IMP_ERR = None
-try:
- import cryptography
- from cryptography import x509
- from cryptography.hazmat.primitives import serialization
- CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
-except ImportError:
- CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
- CRYPTOGRAPHY_FOUND = False
-else:
- CRYPTOGRAPHY_FOUND = True
-
-
-TIMESTAMP_FORMAT = "%Y%m%d%H%M%SZ"
-
-
-class CertificateInfo(crypto_utils.OpenSSLObject):
- def __init__(self, module, backend):
- super(CertificateInfo, self).__init__(
- module.params['path'] or '',
- 'present',
- False,
- module.check_mode,
- )
- self.backend = backend
- self.module = module
- self.content = module.params['content']
- if self.content is not None:
- self.content = self.content.encode('utf-8')
-
- self.valid_at = module.params['valid_at']
- if self.valid_at:
- for k, v in self.valid_at.items():
- if not isinstance(v, string_types):
- self.module.fail_json(
- msg='The value for valid_at.{0} must be of type string (got {1})'.format(k, type(v))
- )
- self.valid_at[k] = crypto_utils.get_relative_time_option(v, 'valid_at.{0}'.format(k))
-
- def generate(self):
- # Empty method because crypto_utils.OpenSSLObject wants this
- pass
-
- def dump(self):
- # Empty method because crypto_utils.OpenSSLObject wants this
- pass
-
- @abc.abstractmethod
- def _get_signature_algorithm(self):
- pass
-
- @abc.abstractmethod
- def _get_subject_ordered(self):
- pass
-
- @abc.abstractmethod
- def _get_issuer_ordered(self):
- pass
-
- @abc.abstractmethod
- def _get_version(self):
- pass
-
- @abc.abstractmethod
- def _get_key_usage(self):
- pass
-
- @abc.abstractmethod
- def _get_extended_key_usage(self):
- pass
-
- @abc.abstractmethod
- def _get_basic_constraints(self):
- pass
-
- @abc.abstractmethod
- def _get_ocsp_must_staple(self):
- pass
-
- @abc.abstractmethod
- def _get_subject_alt_name(self):
- pass
-
- @abc.abstractmethod
- def _get_not_before(self):
- pass
-
- @abc.abstractmethod
- def _get_not_after(self):
- pass
-
- @abc.abstractmethod
- def _get_public_key(self, binary):
- pass
-
- @abc.abstractmethod
- def _get_subject_key_identifier(self):
- pass
-
- @abc.abstractmethod
- def _get_authority_key_identifier(self):
- pass
-
- @abc.abstractmethod
- def _get_serial_number(self):
- pass
-
- @abc.abstractmethod
- def _get_all_extensions(self):
- pass
-
- @abc.abstractmethod
- def _get_ocsp_uri(self):
- pass
-
- def get_info(self):
- result = dict()
- self.cert = crypto_utils.load_certificate(self.path, content=self.content, backend=self.backend)
-
- result['signature_algorithm'] = self._get_signature_algorithm()
- subject = self._get_subject_ordered()
- issuer = self._get_issuer_ordered()
- result['subject'] = dict()
- for k, v in subject:
- result['subject'][k] = v
- result['subject_ordered'] = subject
- result['issuer'] = dict()
- for k, v in issuer:
- result['issuer'][k] = v
- result['issuer_ordered'] = issuer
- result['version'] = self._get_version()
- result['key_usage'], result['key_usage_critical'] = self._get_key_usage()
- result['extended_key_usage'], result['extended_key_usage_critical'] = self._get_extended_key_usage()
- result['basic_constraints'], result['basic_constraints_critical'] = self._get_basic_constraints()
- result['ocsp_must_staple'], result['ocsp_must_staple_critical'] = self._get_ocsp_must_staple()
- result['subject_alt_name'], result['subject_alt_name_critical'] = self._get_subject_alt_name()
-
- not_before = self._get_not_before()
- not_after = self._get_not_after()
- result['not_before'] = not_before.strftime(TIMESTAMP_FORMAT)
- result['not_after'] = not_after.strftime(TIMESTAMP_FORMAT)
- result['expired'] = not_after < datetime.datetime.utcnow()
-
- result['valid_at'] = dict()
- if self.valid_at:
- for k, v in self.valid_at.items():
- result['valid_at'][k] = not_before <= v <= not_after
-
- result['public_key'] = self._get_public_key(binary=False)
- pk = self._get_public_key(binary=True)
- result['public_key_fingerprints'] = crypto_utils.get_fingerprint_of_bytes(pk) if pk is not None else dict()
-
- if self.backend != 'pyopenssl':
- ski = self._get_subject_key_identifier()
- if ski is not None:
- ski = to_native(binascii.hexlify(ski))
- ski = ':'.join([ski[i:i + 2] for i in range(0, len(ski), 2)])
- result['subject_key_identifier'] = ski
-
- aki, aci, acsn = self._get_authority_key_identifier()
- if aki is not None:
- aki = to_native(binascii.hexlify(aki))
- aki = ':'.join([aki[i:i + 2] for i in range(0, len(aki), 2)])
- result['authority_key_identifier'] = aki
- result['authority_cert_issuer'] = aci
- result['authority_cert_serial_number'] = acsn
-
- result['serial_number'] = self._get_serial_number()
- result['extensions_by_oid'] = self._get_all_extensions()
- result['ocsp_uri'] = self._get_ocsp_uri()
-
- return result
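
Each valid_at entry handed to get_info() is converted to a datetime (via get_relative_time_option) and then checked with a chained comparison against the certificate's validity window. A minimal sketch of that window check, assuming absolute ASN.1 TIME inputs only (the real option also accepts relative specs such as '+1d'); the helper name is illustrative:

    import datetime

    TIMESTAMP_FORMAT = "%Y%m%d%H%M%SZ"

    def check_valid_at(not_before, not_after, points):
        # points maps names to ASN.1 TIME strings; returns {name: bool}.
        result = {}
        for name, asn1_time in points.items():
            when = datetime.datetime.strptime(asn1_time, TIMESTAMP_FORMAT)
            result[name] = not_before <= when <= not_after
        return result

    nb = datetime.datetime(2019, 3, 31, 20, 24, 28)
    na = datetime.datetime(2019, 4, 13, 20, 24, 28)
    print(check_valid_at(nb, na, {'point_1': '20190401000000Z', 'point_2': '20190501000000Z'}))
    # -> {'point_1': True, 'point_2': False}
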
-
-
-class CertificateInfoCryptography(CertificateInfo):
- """Validate the supplied cert, using the cryptography backend"""
- def __init__(self, module):
- super(CertificateInfoCryptography, self).__init__(module, 'cryptography')
-
- def _get_signature_algorithm(self):
- return crypto_utils.cryptography_oid_to_name(self.cert.signature_algorithm_oid)
-
- def _get_subject_ordered(self):
- result = []
- for attribute in self.cert.subject:
- result.append([crypto_utils.cryptography_oid_to_name(attribute.oid), attribute.value])
- return result
-
- def _get_issuer_ordered(self):
- result = []
- for attribute in self.cert.issuer:
- result.append([crypto_utils.cryptography_oid_to_name(attribute.oid), attribute.value])
- return result
-
- def _get_version(self):
- if self.cert.version == x509.Version.v1:
- return 1
- if self.cert.version == x509.Version.v3:
- return 3
- return "unknown"
-
- def _get_key_usage(self):
- try:
- current_key_ext = self.cert.extensions.get_extension_for_class(x509.KeyUsage)
- current_key_usage = current_key_ext.value
- key_usage = dict(
- digital_signature=current_key_usage.digital_signature,
- content_commitment=current_key_usage.content_commitment,
- key_encipherment=current_key_usage.key_encipherment,
- data_encipherment=current_key_usage.data_encipherment,
- key_agreement=current_key_usage.key_agreement,
- key_cert_sign=current_key_usage.key_cert_sign,
- crl_sign=current_key_usage.crl_sign,
- encipher_only=False,
- decipher_only=False,
- )
- if key_usage['key_agreement']:
- key_usage.update(dict(
- encipher_only=current_key_usage.encipher_only,
- decipher_only=current_key_usage.decipher_only
- ))
-
- key_usage_names = dict(
- digital_signature='Digital Signature',
- content_commitment='Non Repudiation',
- key_encipherment='Key Encipherment',
- data_encipherment='Data Encipherment',
- key_agreement='Key Agreement',
- key_cert_sign='Certificate Sign',
- crl_sign='CRL Sign',
- encipher_only='Encipher Only',
- decipher_only='Decipher Only',
- )
- return sorted([
- key_usage_names[name] for name, value in key_usage.items() if value
- ]), current_key_ext.critical
- except cryptography.x509.ExtensionNotFound:
- return None, False
-
- def _get_extended_key_usage(self):
- try:
- ext_keyusage_ext = self.cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
- return sorted([
- crypto_utils.cryptography_oid_to_name(eku) for eku in ext_keyusage_ext.value
- ]), ext_keyusage_ext.critical
- except cryptography.x509.ExtensionNotFound:
- return None, False
-
- def _get_basic_constraints(self):
- try:
- ext_keyusage_ext = self.cert.extensions.get_extension_for_class(x509.BasicConstraints)
- result = []
- result.append('CA:{0}'.format('TRUE' if ext_keyusage_ext.value.ca else 'FALSE'))
- if ext_keyusage_ext.value.path_length is not None:
- result.append('pathlen:{0}'.format(ext_keyusage_ext.value.path_length))
- return sorted(result), ext_keyusage_ext.critical
- except cryptography.x509.ExtensionNotFound:
- return None, False
-
- def _get_ocsp_must_staple(self):
- try:
- try:
- # This only works with cryptography >= 2.1
- tlsfeature_ext = self.cert.extensions.get_extension_for_class(x509.TLSFeature)
- value = cryptography.x509.TLSFeatureType.status_request in tlsfeature_ext.value
- except AttributeError as dummy:
- # Fallback for cryptography < 2.1
- oid = x509.oid.ObjectIdentifier("1.3.6.1.5.5.7.1.24")
- tlsfeature_ext = self.cert.extensions.get_extension_for_oid(oid)
- value = tlsfeature_ext.value.value == b"\x30\x03\x02\x01\x05"
- return value, tlsfeature_ext.critical
- except cryptography.x509.ExtensionNotFound:
- return None, False
-
- def _get_subject_alt_name(self):
- try:
- san_ext = self.cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
- result = [crypto_utils.cryptography_decode_name(san) for san in san_ext.value]
- return result, san_ext.critical
- except cryptography.x509.ExtensionNotFound:
- return None, False
-
- def _get_not_before(self):
- return self.cert.not_valid_before
-
- def _get_not_after(self):
- return self.cert.not_valid_after
-
- def _get_public_key(self, binary):
- return self.cert.public_key().public_bytes(
- serialization.Encoding.DER if binary else serialization.Encoding.PEM,
- serialization.PublicFormat.SubjectPublicKeyInfo
- )
-
- def _get_subject_key_identifier(self):
- try:
- ext = self.cert.extensions.get_extension_for_class(x509.SubjectKeyIdentifier)
- return ext.value.digest
- except cryptography.x509.ExtensionNotFound:
- return None
-
- def _get_authority_key_identifier(self):
- try:
- ext = self.cert.extensions.get_extension_for_class(x509.AuthorityKeyIdentifier)
- issuer = None
- if ext.value.authority_cert_issuer is not None:
- issuer = [crypto_utils.cryptography_decode_name(san) for san in ext.value.authority_cert_issuer]
- return ext.value.key_identifier, issuer, ext.value.authority_cert_serial_number
- except cryptography.x509.ExtensionNotFound:
- return None, None, None
-
- def _get_serial_number(self):
- return self.cert.serial_number
-
- def _get_all_extensions(self):
- return crypto_utils.cryptography_get_extensions_from_cert(self.cert)
-
- def _get_ocsp_uri(self):
- try:
- ext = self.cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
- for desc in ext.value:
- if desc.access_method == x509.oid.AuthorityInformationAccessOID.OCSP:
- if isinstance(desc.access_location, x509.UniformResourceIdentifier):
- return desc.access_location.value
- except x509.ExtensionNotFound as dummy:
- pass
- return None
-
-
-class CertificateInfoPyOpenSSL(CertificateInfo):
- """validate the supplied certificate."""
-
- def __init__(self, module):
- super(CertificateInfoPyOpenSSL, self).__init__(module, 'pyopenssl')
-
- def _get_signature_algorithm(self):
- return to_text(self.cert.get_signature_algorithm())
-
- def __get_name(self, name):
- result = []
- for sub in name.get_components():
- result.append([crypto_utils.pyopenssl_normalize_name(sub[0]), to_text(sub[1])])
- return result
-
- def _get_subject_ordered(self):
- return self.__get_name(self.cert.get_subject())
-
- def _get_issuer_ordered(self):
- return self.__get_name(self.cert.get_issuer())
-
- def _get_version(self):
- # Version numbers in certs are off by one:
- # v1: 0, v2: 1, v3: 2 ...
- return self.cert.get_version() + 1
-
- def _get_extension(self, short_name):
- for extension_idx in range(0, self.cert.get_extension_count()):
- extension = self.cert.get_extension(extension_idx)
- if extension.get_short_name() == short_name:
- result = [
- crypto_utils.pyopenssl_normalize_name(usage.strip()) for usage in to_text(extension, errors='surrogate_or_strict').split(',')
- ]
- return sorted(result), bool(extension.get_critical())
- return None, False
-
- def _get_key_usage(self):
- return self._get_extension(b'keyUsage')
-
- def _get_extended_key_usage(self):
- return self._get_extension(b'extendedKeyUsage')
-
- def _get_basic_constraints(self):
- return self._get_extension(b'basicConstraints')
-
- def _get_ocsp_must_staple(self):
- extensions = [self.cert.get_extension(i) for i in range(0, self.cert.get_extension_count())]
- oms_ext = [
- ext for ext in extensions
- if to_bytes(ext.get_short_name()) == OPENSSL_MUST_STAPLE_NAME and to_bytes(ext) == OPENSSL_MUST_STAPLE_VALUE
- ]
- if OpenSSL.SSL.OPENSSL_VERSION_NUMBER < 0x10100000:
- # Older versions of libssl don't know about OCSP Must Staple
- oms_ext.extend([ext for ext in extensions if ext.get_short_name() == b'UNDEF' and ext.get_data() == b'\x30\x03\x02\x01\x05'])
- if oms_ext:
- return True, bool(oms_ext[0].get_critical())
- else:
- return None, False
-
- def _normalize_san(self, san):
- if san.startswith('IP Address:'):
- san = 'IP:' + san[len('IP Address:'):]
- if san.startswith('IP:'):
- ip = compat_ipaddress.ip_address(san[3:])
- san = 'IP:{0}'.format(ip.compressed)
- return san
-
- def _get_subject_alt_name(self):
- for extension_idx in range(0, self.cert.get_extension_count()):
- extension = self.cert.get_extension(extension_idx)
- if extension.get_short_name() == b'subjectAltName':
- result = [self._normalize_san(altname.strip()) for altname in
- to_text(extension, errors='surrogate_or_strict').split(', ')]
- return result, bool(extension.get_critical())
- return None, False
-
- def _get_not_before(self):
- time_string = to_native(self.cert.get_notBefore())
- return datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
-
- def _get_not_after(self):
- time_string = to_native(self.cert.get_notAfter())
- return datetime.datetime.strptime(time_string, "%Y%m%d%H%M%SZ")
-
- def _get_public_key(self, binary):
- try:
- return crypto.dump_publickey(
- crypto.FILETYPE_ASN1 if binary else crypto.FILETYPE_PEM,
- self.cert.get_pubkey()
- )
- except AttributeError:
- try:
- # pyOpenSSL < 16.0:
- bio = crypto._new_mem_buf()
- if binary:
- rc = crypto._lib.i2d_PUBKEY_bio(bio, self.cert.get_pubkey()._pkey)
- else:
- rc = crypto._lib.PEM_write_bio_PUBKEY(bio, self.cert.get_pubkey()._pkey)
- if rc != 1:
- crypto._raise_current_error()
- return crypto._bio_to_string(bio)
- except AttributeError:
- self.module.warn('Your pyOpenSSL version does not support dumping public keys. '
- 'Please upgrade to version 16.0 or newer, or use the cryptography backend.')
-
- def _get_subject_key_identifier(self):
- # Won't be implemented
- return None
-
- def _get_authority_key_identifier(self):
- # Won't be implemented
- return None, None, None
-
- def _get_serial_number(self):
- return self.cert.get_serial_number()
-
- def _get_all_extensions(self):
- return crypto_utils.pyopenssl_get_extensions_from_cert(self.cert)
-
- def _get_ocsp_uri(self):
- for i in range(self.cert.get_extension_count()):
- ext = self.cert.get_extension(i)
- if ext.get_short_name() == b'authorityInfoAccess':
- v = str(ext)
- m = re.search('^OCSP - URI:(.*)$', v, flags=re.MULTILINE)
- if m:
- return m.group(1)
- return None
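
The pyOpenSSL fallback above recovers the OCSP responder URI by rendering the authorityInfoAccess extension as text and matching its 'OCSP - URI:' line. A minimal sketch of that text matching; the sample extension dump is illustrative, not taken from a real certificate:

    import re

    def extract_ocsp_uri(authority_info_access_text):
        # pyOpenSSL renders the extension roughly one access description per line,
        # e.g. 'OCSP - URI:...' and 'CA Issuers - URI:...'.
        m = re.search('^OCSP - URI:(.*)$', authority_info_access_text, flags=re.MULTILINE)
        return m.group(1) if m else None

    sample = "OCSP - URI:http://ocsp.example.com\nCA Issuers - URI:http://ca.example.com/ca.crt"
    print(extract_ocsp_uri(sample))  # -> http://ocsp.example.com
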
-
-
-def main():
- module = AnsibleModule(
- argument_spec=dict(
- path=dict(type='path'),
- content=dict(type='str'),
- valid_at=dict(type='dict'),
- select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'cryptography', 'pyopenssl']),
- ),
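- # Together, required_one_of and mutually_exclusive ensure that exactly one of path and content is specified.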
- required_one_of=(
- ['path', 'content'],
- ),
- mutually_exclusive=(
- ['path', 'content'],
- ),
- supports_check_mode=True,
- )
-
- try:
- if module.params['path'] is not None:
- base_dir = os.path.dirname(module.params['path']) or '.'
- if not os.path.isdir(base_dir):
- module.fail_json(
- name=base_dir,
- msg='The directory %s does not exist or is not a directory' % base_dir
- )
-
- backend = module.params['select_crypto_backend']
- if backend == 'auto':
- # Detect what backend we can use
- can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
- can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
-
- # If cryptography is available we'll use it
- if can_use_cryptography:
- backend = 'cryptography'
- elif can_use_pyopenssl:
- backend = 'pyopenssl'
-
- # Fail if no backend has been found
- if backend == 'auto':
- module.fail_json(msg=("Can't detect any of the required Python libraries "
- "cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
- MINIMAL_CRYPTOGRAPHY_VERSION,
- MINIMAL_PYOPENSSL_VERSION))
-
- if backend == 'pyopenssl':
- if not PYOPENSSL_FOUND:
- module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
- exception=PYOPENSSL_IMP_ERR)
- try:
- getattr(crypto.X509Req, 'get_extensions')
- except AttributeError:
- module.fail_json(msg='You need to have PyOpenSSL>=0.15')
-
- module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated',
- version='2.13', collection_name='ansible.builtin')
- certificate = CertificateInfoPyOpenSSL(module)
- elif backend == 'cryptography':
- if not CRYPTOGRAPHY_FOUND:
- module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
- exception=CRYPTOGRAPHY_IMP_ERR)
- certificate = CertificateInfoCryptography(module)
-
- result = certificate.get_info()
- module.exit_json(**result)
- except crypto_utils.OpenSSLObjectError as exc:
- module.fail_json(msg=to_native(exc))
-
-
-if __name__ == "__main__":
- main()
diff --git a/test/support/integration/plugins/modules/openssl_csr.py b/test/support/integration/plugins/modules/openssl_csr.py
deleted file mode 100644
index 2d831f35bf..0000000000
--- a/test/support/integration/plugins/modules/openssl_csr.py
+++ /dev/null
@@ -1,1161 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2017, Yanis Guenane <yanis+ansible@guenane.org>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: openssl_csr
-version_added: '2.4'
-short_description: Generate OpenSSL Certificate Signing Request (CSR)
-description:
- - This module allows one to (re)generate OpenSSL certificate signing requests.
- - It uses the pyOpenSSL python library to interact with openssl. This module supports
- the subjectAltName, keyUsage, extendedKeyUsage, basicConstraints and OCSP Must Staple
- extensions.
- - "Please note that the module regenerates existing CSR if it doesn't match the module's
- options, or if it seems to be corrupt. If you are concerned that this could overwrite
- your existing CSR, consider using the I(backup) option."
- - The module can use the cryptography Python library, or the pyOpenSSL Python
- library. By default, it tries to detect which one is available. This can be
- overridden with the I(select_crypto_backend) option. Please note that the
- PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
-requirements:
- - Either cryptography >= 1.3
- - Or pyOpenSSL >= 0.15
-author:
-- Yanis Guenane (@Spredzy)
-options:
- state:
- description:
- - Whether the certificate signing request should exist or not, taking action if the state is different from what is stated.
- type: str
- default: present
- choices: [ absent, present ]
- digest:
- description:
- - The digest used when signing the certificate signing request with the private key.
- type: str
- default: sha256
- privatekey_path:
- description:
- - The path to the private key to use when signing the certificate signing request.
- - Either I(privatekey_path) or I(privatekey_content) must be specified if I(state) is C(present), but not both.
- type: path
- privatekey_content:
- description:
- - The content of the private key to use when signing the certificate signing request.
- - Either I(privatekey_path) or I(privatekey_content) must be specified if I(state) is C(present), but not both.
- type: str
- version_added: "2.10"
- privatekey_passphrase:
- description:
- - The passphrase for the private key.
- - This is required if the private key is password protected.
- type: str
- version:
- description:
- - The version of the certificate signing request.
- - "The only allowed value according to L(RFC 2986,https://tools.ietf.org/html/rfc2986#section-4.1)
- is 1."
- - This option will no longer accept unsupported values from Ansible 2.14 on.
- type: int
- default: 1
- force:
- description:
- - Should the certificate signing request be forcefully regenerated by this Ansible module.
- type: bool
- default: no
- path:
- description:
- - The name of the file into which the generated OpenSSL certificate signing request will be written.
- type: path
- required: true
- subject:
- description:
- - Key/value pairs that will be present in the subject name field of the certificate signing request.
- - If you need to specify more than one value with the same key, use a list as value.
- type: dict
- version_added: '2.5'
- country_name:
- description:
- - The countryName field of the certificate signing request subject.
- type: str
- aliases: [ C, countryName ]
- state_or_province_name:
- description:
- - The stateOrProvinceName field of the certificate signing request subject.
- type: str
- aliases: [ ST, stateOrProvinceName ]
- locality_name:
- description:
- - The localityName field of the certificate signing request subject.
- type: str
- aliases: [ L, localityName ]
- organization_name:
- description:
- - The organizationName field of the certificate signing request subject.
- type: str
- aliases: [ O, organizationName ]
- organizational_unit_name:
- description:
- - The organizationalUnitName field of the certificate signing request subject.
- type: str
- aliases: [ OU, organizationalUnitName ]
- common_name:
- description:
- - The commonName field of the certificate signing request subject.
- type: str
- aliases: [ CN, commonName ]
- email_address:
- description:
- - The emailAddress field of the certificate signing request subject.
- type: str
- aliases: [ E, emailAddress ]
- subject_alt_name:
- description:
- - SAN extension to attach to the certificate signing request.
- - This can either be a 'comma separated string' or a YAML list.
- - Values must be prefixed by their type, i.e. C(email), C(URI), C(DNS), C(RID), C(IP), C(dirName),
- C(otherName), or the ones specific to your CA.
- - Note that if no SAN is specified, but a common name is, the common
- name will be added as a SAN, unless C(useCommonNameForSAN) is
- set to C(no).
- - More at U(https://tools.ietf.org/html/rfc5280#section-4.2.1.6).
- type: list
- elements: str
- aliases: [ subjectAltName ]
- subject_alt_name_critical:
- description:
- - Should the subjectAltName extension be considered as critical.
- type: bool
- aliases: [ subjectAltName_critical ]
- use_common_name_for_san:
- description:
- - If set to C(yes), the module will fill the common name in for
- C(subject_alt_name) with C(DNS:) prefix if no SAN is specified.
- type: bool
- default: yes
- version_added: '2.8'
- aliases: [ useCommonNameForSAN ]
- key_usage:
- description:
- - This defines the purpose (e.g. encipherment, signature, certificate signing)
- of the key contained in the certificate.
- type: list
- elements: str
- aliases: [ keyUsage ]
- key_usage_critical:
- description:
- - Should the keyUsage extension be considered as critical.
- type: bool
- aliases: [ keyUsage_critical ]
- extended_key_usage:
- description:
- - Additional restrictions (e.g. client authentication, server authentication)
- on the allowed purposes for which the public key may be used.
- type: list
- elements: str
- aliases: [ extKeyUsage, extendedKeyUsage ]
- extended_key_usage_critical:
- description:
- - Should the extendedKeyUsage extension be considered as critical.
- type: bool
- aliases: [ extKeyUsage_critical, extendedKeyUsage_critical ]
- basic_constraints:
- description:
- - Indicates basic constraints, such as if the certificate is a CA.
- type: list
- elements: str
- version_added: '2.5'
- aliases: [ basicConstraints ]
- basic_constraints_critical:
- description:
- - Should the basicConstraints extension be considered as critical.
- type: bool
- version_added: '2.5'
- aliases: [ basicConstraints_critical ]
- ocsp_must_staple:
- description:
- - Indicates that the certificate should contain the OCSP Must Staple
- extension (U(https://tools.ietf.org/html/rfc7633)).
- type: bool
- version_added: '2.5'
- aliases: [ ocspMustStaple ]
- ocsp_must_staple_critical:
- description:
- - Should the OCSP Must Staple extension be considered as critical
- - Note that according to the RFC, this extension should not be marked
- as critical, as old clients not knowing about OCSP Must Staple
- are required to reject such certificates
- (see U(https://tools.ietf.org/html/rfc7633#section-4)).
- type: bool
- version_added: '2.5'
- aliases: [ ocspMustStaple_critical ]
- select_crypto_backend:
- description:
- - Determines which crypto backend to use.
- - The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- - If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- - If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- - Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
- From that point on, only the C(cryptography) backend will be available.
- type: str
- default: auto
- choices: [ auto, cryptography, pyopenssl ]
- version_added: '2.8'
- backup:
- description:
- - Create a backup file including a timestamp so you can get the original
- CSR back if you overwrote it with a new one by accident.
- type: bool
- default: no
- version_added: "2.8"
- create_subject_key_identifier:
- description:
- - Create the Subject Key Identifier from the public key.
- - "Please note that commercial CAs can ignore the value, respectively use a value of
- their own choice instead. Specifying this option is mostly useful for self-signed
- certificates or for own CAs."
- - Note that this is only supported if the C(cryptography) backend is used!
- type: bool
- default: no
- version_added: "2.9"
- subject_key_identifier:
- description:
- - The subject key identifier as a hex string, where two bytes are separated by colons.
- - "Example: C(00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33)"
- - "Please note that commercial CAs ignore this value, respectively use a value of their
- own choice. Specifying this option is mostly useful for self-signed certificates
- or for own CAs."
- - Note that this option can only be used if I(create_subject_key_identifier) is C(no).
- - Note that this is only supported if the C(cryptography) backend is used!
- type: str
- version_added: "2.9"
- authority_key_identifier:
- description:
- - The authority key identifier as a hex string, where two bytes are separated by colons.
- - "Example: C(00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33)"
- - If specified, I(authority_cert_issuer) must also be specified.
- - "Please note that commercial CAs ignore this value, respectively use a value of their
- own choice. Specifying this option is mostly useful for self-signed certificates
- or for own CAs."
- - Note that this is only supported if the C(cryptography) backend is used!
- - The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
- I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
- type: str
- version_added: "2.9"
- authority_cert_issuer:
- description:
- - Names that will be present in the authority cert issuer field of the certificate signing request.
- - Values must be prefixed by their type, i.e. C(email), C(URI), C(DNS), C(RID), C(IP), C(dirName),
- C(otherName), or the ones specific to your CA.
- - "Example: C(DNS:ca.example.org)"
- - If specified, I(authority_key_identifier) must also be specified.
- - "Please note that commercial CAs ignore this value, respectively use a value of their
- own choice. Specifying this option is mostly useful for self-signed certificates
- or for own CAs."
- - Note that this is only supported if the C(cryptography) backend is used!
- - The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
- I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
- type: list
- elements: str
- version_added: "2.9"
- authority_cert_serial_number:
- description:
- - The authority cert serial number.
- - Note that this is only supported if the C(cryptography) backend is used!
- - "Please note that commercial CAs ignore this value, respectively use a value of their
- own choice. Specifying this option is mostly useful for self-signed certificates
- or for own CAs."
- - The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
- I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
- type: int
- version_added: "2.9"
- return_content:
- description:
- - If set to C(yes), will return the (current or generated) CSR's content as I(csr).
- type: bool
- default: no
- version_added: "2.10"
-extends_documentation_fragment:
-- files
-notes:
- - If the certificate signing request already exists, the module checks whether subjectAltName,
- keyUsage, extendedKeyUsage and basicConstraints only contain the requested values, whether
- OCSP Must Staple is as requested, and whether the request was signed by the given private key.
-seealso:
-- module: openssl_certificate
-- module: openssl_dhparam
-- module: openssl_pkcs12
-- module: openssl_privatekey
-- module: openssl_publickey
-'''
-
-EXAMPLES = r'''
-- name: Generate an OpenSSL Certificate Signing Request
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- common_name: www.ansible.com
-
-- name: Generate an OpenSSL Certificate Signing Request with an inline key
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_content: "{{ private_key_content }}"
- common_name: www.ansible.com
-
-- name: Generate an OpenSSL Certificate Signing Request with a passphrase protected private key
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- privatekey_passphrase: ansible
- common_name: www.ansible.com
-
-- name: Generate an OpenSSL Certificate Signing Request with Subject information
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- country_name: FR
- organization_name: Ansible
- email_address: jdoe@ansible.com
- common_name: www.ansible.com
-
-- name: Generate an OpenSSL Certificate Signing Request with subjectAltName extension
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- subject_alt_name: 'DNS:www.ansible.com,DNS:m.ansible.com'
-
-- name: Generate an OpenSSL CSR with subjectAltName extension with dynamic list
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- subject_alt_name: "{{ item.value | map('regex_replace', '^', 'DNS:') | list }}"
- with_dict:
- dns_server:
- - www.ansible.com
- - m.ansible.com
-
-- name: Force regenerate an OpenSSL Certificate Signing Request
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- force: yes
- common_name: www.ansible.com
-
-- name: Generate an OpenSSL Certificate Signing Request with special key usages
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- common_name: www.ansible.com
- key_usage:
- - digitalSignature
- - keyAgreement
- extended_key_usage:
- - clientAuth
-
-- name: Generate an OpenSSL Certificate Signing Request with OCSP Must Staple
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- common_name: www.ansible.com
- ocsp_must_staple: yes
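-
-# Illustrative sketch: attach subject and authority key identifier information
-# (cryptography backend only); the authority values below are placeholders.
-- name: Generate an OpenSSL Certificate Signing Request with key identifiers
- openssl_csr:
- path: /etc/ssl/csr/www.ansible.com.csr
- privatekey_path: /etc/ssl/private/ansible.com.pem
- common_name: www.ansible.com
- create_subject_key_identifier: yes
- authority_key_identifier: "00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33"
- authority_cert_issuer:
- - "DNS:ca.example.org"
- authority_cert_serial_number: 12345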
-'''
-
-RETURN = r'''
-privatekey:
- description:
- - Path to the TLS/SSL private key the CSR was generated for
- - Will be C(none) if the private key has been provided in I(privatekey_content).
- returned: changed or success
- type: str
- sample: /etc/ssl/private/ansible.com.pem
-filename:
- description: Path to the generated Certificate Signing Request
- returned: changed or success
- type: str
- sample: /etc/ssl/csr/www.ansible.com.csr
-subject:
- description: A list of the subject tuples attached to the CSR
- returned: changed or success
- type: list
- elements: list
- sample: "[('CN', 'www.ansible.com'), ('O', 'Ansible')]"
-subjectAltName:
- description: The alternative names this CSR is valid for
- returned: changed or success
- type: list
- elements: str
- sample: [ 'DNS:www.ansible.com', 'DNS:m.ansible.com' ]
-keyUsage:
- description: Purpose for which the public key may be used
- returned: changed or success
- type: list
- elements: str
- sample: [ 'digitalSignature', 'keyAgreement' ]
-extendedKeyUsage:
- description: Additional restriction on the public key purposes
- returned: changed or success
- type: list
- elements: str
- sample: [ 'clientAuth' ]
-basicConstraints:
- description: Indicates if the certificate belongs to a CA
- returned: changed or success
- type: list
- elements: str
- sample: ['CA:TRUE', 'pathLenConstraint:0']
-ocsp_must_staple:
- description: Indicates whether the certificate has the OCSP
- Must Staple feature enabled
- returned: changed or success
- type: bool
- sample: false
-backup_file:
- description: Name of backup file created.
- returned: changed and if I(backup) is C(yes)
- type: str
- sample: /path/to/www.ansible.com.csr.2019-03-09@11:22~
-csr:
- description: The (current or generated) CSR's content.
- returned: if I(state) is C(present) and I(return_content) is C(yes)
- type: str
- version_added: "2.10"
-'''
-
-import abc
-import binascii
-import os
-import traceback
-from distutils.version import LooseVersion
-
-from ansible.module_utils import crypto as crypto_utils
-from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-from ansible.module_utils._text import to_native, to_bytes, to_text
-from ansible.module_utils.compat import ipaddress as compat_ipaddress
-
-MINIMAL_PYOPENSSL_VERSION = '0.15'
-MINIMAL_CRYPTOGRAPHY_VERSION = '1.3'
-
-PYOPENSSL_IMP_ERR = None
-try:
- import OpenSSL
- from OpenSSL import crypto
- PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
-except ImportError:
- PYOPENSSL_IMP_ERR = traceback.format_exc()
- PYOPENSSL_FOUND = False
-else:
- PYOPENSSL_FOUND = True
- if OpenSSL.SSL.OPENSSL_VERSION_NUMBER >= 0x10100000:
- # OpenSSL 1.1.0 or newer
- OPENSSL_MUST_STAPLE_NAME = b"tlsfeature"
- OPENSSL_MUST_STAPLE_VALUE = b"status_request"
- else:
- # OpenSSL 1.0.x or older
- OPENSSL_MUST_STAPLE_NAME = b"1.3.6.1.5.5.7.1.24"
- OPENSSL_MUST_STAPLE_VALUE = b"DER:30:03:02:01:05"
-
-CRYPTOGRAPHY_IMP_ERR = None
-try:
- import cryptography
- import cryptography.x509
- import cryptography.x509.oid
- import cryptography.exceptions
- import cryptography.hazmat.backends
- import cryptography.hazmat.primitives.serialization
- import cryptography.hazmat.primitives.hashes
- CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
-except ImportError:
- CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
- CRYPTOGRAPHY_FOUND = False
-else:
- CRYPTOGRAPHY_FOUND = True
- CRYPTOGRAPHY_MUST_STAPLE_NAME = cryptography.x509.oid.ObjectIdentifier("1.3.6.1.5.5.7.1.24")
- CRYPTOGRAPHY_MUST_STAPLE_VALUE = b"\x30\x03\x02\x01\x05"
-
-
-class CertificateSigningRequestError(crypto_utils.OpenSSLObjectError):
- pass
-
-
-class CertificateSigningRequestBase(crypto_utils.OpenSSLObject):
-
- def __init__(self, module):
- super(CertificateSigningRequestBase, self).__init__(
- module.params['path'],
- module.params['state'],
- module.params['force'],
- module.check_mode
- )
- self.digest = module.params['digest']
- self.privatekey_path = module.params['privatekey_path']
- self.privatekey_content = module.params['privatekey_content']
- if self.privatekey_content is not None:
- self.privatekey_content = self.privatekey_content.encode('utf-8')
- self.privatekey_passphrase = module.params['privatekey_passphrase']
- self.version = module.params['version']
- self.subjectAltName = module.params['subject_alt_name']
- self.subjectAltName_critical = module.params['subject_alt_name_critical']
- self.keyUsage = module.params['key_usage']
- self.keyUsage_critical = module.params['key_usage_critical']
- self.extendedKeyUsage = module.params['extended_key_usage']
- self.extendedKeyUsage_critical = module.params['extended_key_usage_critical']
- self.basicConstraints = module.params['basic_constraints']
- self.basicConstraints_critical = module.params['basic_constraints_critical']
- self.ocspMustStaple = module.params['ocsp_must_staple']
- self.ocspMustStaple_critical = module.params['ocsp_must_staple_critical']
- self.create_subject_key_identifier = module.params['create_subject_key_identifier']
- self.subject_key_identifier = module.params['subject_key_identifier']
- self.authority_key_identifier = module.params['authority_key_identifier']
- self.authority_cert_issuer = module.params['authority_cert_issuer']
- self.authority_cert_serial_number = module.params['authority_cert_serial_number']
- self.request = None
- self.privatekey = None
- self.csr_bytes = None
- self.return_content = module.params['return_content']
-
- if self.create_subject_key_identifier and self.subject_key_identifier is not None:
- module.fail_json(msg='subject_key_identifier cannot be specified if create_subject_key_identifier is true')
-
- self.backup = module.params['backup']
- self.backup_file = None
-
- self.subject = [
- ('C', module.params['country_name']),
- ('ST', module.params['state_or_province_name']),
- ('L', module.params['locality_name']),
- ('O', module.params['organization_name']),
- ('OU', module.params['organizational_unit_name']),
- ('CN', module.params['common_name']),
- ('emailAddress', module.params['email_address']),
- ]
-
- if module.params['subject']:
- self.subject = self.subject + crypto_utils.parse_name_field(module.params['subject'])
- self.subject = [(entry[0], entry[1]) for entry in self.subject if entry[1]]
-
- if not self.subjectAltName and module.params['use_common_name_for_san']:
- for sub in self.subject:
- if sub[0] in ('commonName', 'CN'):
- self.subjectAltName = ['DNS:%s' % sub[1]]
- break
-
- if self.subject_key_identifier is not None:
- try:
- self.subject_key_identifier = binascii.unhexlify(self.subject_key_identifier.replace(':', ''))
- except Exception as e:
- raise CertificateSigningRequestError('Cannot parse subject_key_identifier: {0}'.format(e))
-
- if self.authority_key_identifier is not None:
- try:
- self.authority_key_identifier = binascii.unhexlify(self.authority_key_identifier.replace(':', ''))
- except Exception as e:
- raise CertificateSigningRequestError('Cannot parse authority_key_identifier: {0}'.format(e))
-
- @abc.abstractmethod
- def _generate_csr(self):
- pass
-
- def generate(self, module):
- '''Generate the certificate signing request.'''
- if not self.check(module, perms_required=False) or self.force:
- result = self._generate_csr()
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- if self.return_content:
- self.csr_bytes = result
- crypto_utils.write_file(module, result)
- self.changed = True
-
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- @abc.abstractmethod
- def _load_private_key(self):
- pass
-
- @abc.abstractmethod
- def _check_csr(self):
- pass
-
- def check(self, module, perms_required=True):
- """Ensure the resource is in its desired state."""
- state_and_perms = super(CertificateSigningRequestBase, self).check(module, perms_required)
-
- self._load_private_key()
-
- if not state_and_perms:
- return False
-
- return self._check_csr()
-
- def remove(self, module):
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- super(CertificateSigningRequestBase, self).remove(module)
-
- def dump(self):
- '''Serialize the object into a dictionary.'''
-
- result = {
- 'privatekey': self.privatekey_path,
- 'filename': self.path,
- 'subject': self.subject,
- 'subjectAltName': self.subjectAltName,
- 'keyUsage': self.keyUsage,
- 'extendedKeyUsage': self.extendedKeyUsage,
- 'basicConstraints': self.basicConstraints,
- 'ocspMustStaple': self.ocspMustStaple,
- 'changed': self.changed
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- if self.csr_bytes is None:
- self.csr_bytes = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- result['csr'] = self.csr_bytes.decode('utf-8') if self.csr_bytes else None
-
- return result
-
-
-class CertificateSigningRequestPyOpenSSL(CertificateSigningRequestBase):
-
- def __init__(self, module):
- if module.params['create_subject_key_identifier']:
- module.fail_json(msg='You cannot use create_subject_key_identifier with the pyOpenSSL backend!')
- for o in ('subject_key_identifier', 'authority_key_identifier', 'authority_cert_issuer', 'authority_cert_serial_number'):
- if module.params[o] is not None:
- module.fail_json(msg='You cannot use {0} with the pyOpenSSL backend!'.format(o))
- super(CertificateSigningRequestPyOpenSSL, self).__init__(module)
-
- def _generate_csr(self):
- req = crypto.X509Req()
- req.set_version(self.version - 1)
- subject = req.get_subject()
- for entry in self.subject:
- if entry[1] is not None:
- # Workaround for https://github.com/pyca/pyopenssl/issues/165
- nid = OpenSSL._util.lib.OBJ_txt2nid(to_bytes(entry[0]))
- if nid == 0:
- raise CertificateSigningRequestError('Unknown subject field identifier "{0}"'.format(entry[0]))
- res = OpenSSL._util.lib.X509_NAME_add_entry_by_NID(subject._name, nid, OpenSSL._util.lib.MBSTRING_UTF8, to_bytes(entry[1]), -1, -1, 0)
- if res == 0:
- raise CertificateSigningRequestError('Invalid value for subject field identifier "{0}": {1}'.format(entry[0], entry[1]))
-
- extensions = []
- if self.subjectAltName:
- altnames = ', '.join(self.subjectAltName)
- try:
- extensions.append(crypto.X509Extension(b"subjectAltName", self.subjectAltName_critical, altnames.encode('ascii')))
- except OpenSSL.crypto.Error as e:
- raise CertificateSigningRequestError(
- 'Error while parsing Subject Alternative Names {0} (check for missing type prefix, such as "DNS:"!): {1}'.format(
- ', '.join(["{0}".format(san) for san in self.subjectAltName]), str(e)
- )
- )
-
- if self.keyUsage:
- usages = ', '.join(self.keyUsage)
- extensions.append(crypto.X509Extension(b"keyUsage", self.keyUsage_critical, usages.encode('ascii')))
-
- if self.extendedKeyUsage:
- usages = ', '.join(self.extendedKeyUsage)
- extensions.append(crypto.X509Extension(b"extendedKeyUsage", self.extendedKeyUsage_critical, usages.encode('ascii')))
-
- if self.basicConstraints:
- usages = ', '.join(self.basicConstraints)
- extensions.append(crypto.X509Extension(b"basicConstraints", self.basicConstraints_critical, usages.encode('ascii')))
-
- if self.ocspMustStaple:
- extensions.append(crypto.X509Extension(OPENSSL_MUST_STAPLE_NAME, self.ocspMustStaple_critical, OPENSSL_MUST_STAPLE_VALUE))
-
- if extensions:
- req.add_extensions(extensions)
-
- req.set_pubkey(self.privatekey)
- req.sign(self.privatekey, self.digest)
- self.request = req
-
- return crypto.dump_certificate_request(crypto.FILETYPE_PEM, self.request)
-
- def _load_private_key(self):
- try:
- self.privatekey = crypto_utils.load_privatekey(
- path=self.privatekey_path,
- content=self.privatekey_content,
- passphrase=self.privatekey_passphrase
- )
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- raise CertificateSigningRequestError(exc)
-
- def _normalize_san(self, san):
- # Apparently OpenSSL returns 'IP Address' not 'IP' as specifier when converting the subjectAltName to string
- # although it won't accept this specifier when generating the CSR. (https://github.com/openssl/openssl/issues/4004)
- if san.startswith('IP Address:'):
- san = 'IP:' + san[len('IP Address:'):]
- if san.startswith('IP:'):
- ip = compat_ipaddress.ip_address(san[3:])
- san = 'IP:{0}'.format(ip.compressed)
- return san
-
- def _check_csr(self):
- def _check_subject(csr):
- subject = [(OpenSSL._util.lib.OBJ_txt2nid(to_bytes(sub[0])), to_bytes(sub[1])) for sub in self.subject]
- current_subject = [(OpenSSL._util.lib.OBJ_txt2nid(to_bytes(sub[0])), to_bytes(sub[1])) for sub in csr.get_subject().get_components()]
- if not set(subject) == set(current_subject):
- return False
-
- return True
-
- def _check_subjectAltName(extensions):
- altnames_ext = next((ext for ext in extensions if ext.get_short_name() == b'subjectAltName'), '')
- altnames = [self._normalize_san(altname.strip()) for altname in
- to_text(altnames_ext, errors='surrogate_or_strict').split(',') if altname.strip()]
- if self.subjectAltName:
- if (set(altnames) != set([self._normalize_san(to_text(name)) for name in self.subjectAltName]) or
- altnames_ext.get_critical() != self.subjectAltName_critical):
- return False
- else:
- if altnames:
- return False
-
- return True
-
- def _check_keyUsage_(extensions, extName, expected, critical):
- usages_ext = [ext for ext in extensions if ext.get_short_name() == extName]
- if (not usages_ext and expected) or (usages_ext and not expected):
- return False
- elif not usages_ext and not expected:
- return True
- else:
- current = [OpenSSL._util.lib.OBJ_txt2nid(to_bytes(usage.strip())) for usage in str(usages_ext[0]).split(',')]
- expected = [OpenSSL._util.lib.OBJ_txt2nid(to_bytes(usage)) for usage in expected]
- return set(current) == set(expected) and usages_ext[0].get_critical() == critical
-
- def _check_keyUsage(extensions):
- usages_ext = [ext for ext in extensions if ext.get_short_name() == b'keyUsage']
- if (not usages_ext and self.keyUsage) or (usages_ext and not self.keyUsage):
- return False
- elif not usages_ext and not self.keyUsage:
- return True
- else:
- # OpenSSL._util.lib.OBJ_txt2nid() always returns 0 for all keyUsage values
- # (since keyUsage has a fixed bitfield for these values and is not extensible).
- # Therefore, we create an extension for the wanted values, and compare the
- # data of the extensions (which is the serialized bitfield).
- expected_ext = crypto.X509Extension(b"keyUsage", False, ', '.join(self.keyUsage).encode('ascii'))
- return usages_ext[0].get_data() == expected_ext.get_data() and usages_ext[0].get_critical() == self.keyUsage_critical
-
- def _check_extendedKeyUsage(extensions):
- return _check_keyUsage_(extensions, b'extendedKeyUsage', self.extendedKeyUsage, self.extendedKeyUsage_critical)
-
- def _check_basicConstraints(extensions):
- return _check_keyUsage_(extensions, b'basicConstraints', self.basicConstraints, self.basicConstraints_critical)
-
- def _check_ocspMustStaple(extensions):
- oms_ext = [ext for ext in extensions if to_bytes(ext.get_short_name()) == OPENSSL_MUST_STAPLE_NAME and to_bytes(ext) == OPENSSL_MUST_STAPLE_VALUE]
- if OpenSSL.SSL.OPENSSL_VERSION_NUMBER < 0x10100000:
- # Older versions of libssl don't know about OCSP Must Staple
- oms_ext.extend([ext for ext in extensions if ext.get_short_name() == b'UNDEF' and ext.get_data() == b'\x30\x03\x02\x01\x05'])
- if self.ocspMustStaple:
- return len(oms_ext) > 0 and oms_ext[0].get_critical() == self.ocspMustStaple_critical
- else:
- return len(oms_ext) == 0
-
- def _check_extensions(csr):
- extensions = csr.get_extensions()
- return (_check_subjectAltName(extensions) and _check_keyUsage(extensions) and
- _check_extendedKeyUsage(extensions) and _check_basicConstraints(extensions) and
- _check_ocspMustStaple(extensions))
-
- def _check_signature(csr):
- try:
- return csr.verify(self.privatekey)
- except crypto.Error:
- return False
-
- try:
- csr = crypto_utils.load_certificate_request(self.path, backend='pyopenssl')
- except Exception as dummy:
- return False
-
- return _check_subject(csr) and _check_extensions(csr) and _check_signature(csr)
-
-
-class CertificateSigningRequestCryptography(CertificateSigningRequestBase):
-
- def __init__(self, module):
- super(CertificateSigningRequestCryptography, self).__init__(module)
- self.cryptography_backend = cryptography.hazmat.backends.default_backend()
- self.module = module
- if self.version != 1:
- module.warn('The cryptography backend only supports version 1. (The only valid value according to RFC 2986.)')
-
- def _generate_csr(self):
- csr = cryptography.x509.CertificateSigningRequestBuilder()
- try:
- csr = csr.subject_name(cryptography.x509.Name([
- cryptography.x509.NameAttribute(crypto_utils.cryptography_name_to_oid(entry[0]), to_text(entry[1])) for entry in self.subject
- ]))
- except ValueError as e:
- raise CertificateSigningRequestError(e)
-
- if self.subjectAltName:
- csr = csr.add_extension(cryptography.x509.SubjectAlternativeName([
- crypto_utils.cryptography_get_name(name) for name in self.subjectAltName
- ]), critical=self.subjectAltName_critical)
-
- if self.keyUsage:
- params = crypto_utils.cryptography_parse_key_usage_params(self.keyUsage)
- csr = csr.add_extension(cryptography.x509.KeyUsage(**params), critical=self.keyUsage_critical)
-
- if self.extendedKeyUsage:
- usages = [crypto_utils.cryptography_name_to_oid(usage) for usage in self.extendedKeyUsage]
- csr = csr.add_extension(cryptography.x509.ExtendedKeyUsage(usages), critical=self.extendedKeyUsage_critical)
-
- if self.basicConstraints:
- ca, path_length = crypto_utils.cryptography_get_basic_constraints(self.basicConstraints)
- csr = csr.add_extension(cryptography.x509.BasicConstraints(ca, path_length), critical=self.basicConstraints_critical)
-
- if self.ocspMustStaple:
- try:
- # This only works with cryptography >= 2.1
- csr = csr.add_extension(cryptography.x509.TLSFeature([cryptography.x509.TLSFeatureType.status_request]), critical=self.ocspMustStaple_critical)
- except AttributeError as dummy:
- csr = csr.add_extension(
- cryptography.x509.UnrecognizedExtension(CRYPTOGRAPHY_MUST_STAPLE_NAME, CRYPTOGRAPHY_MUST_STAPLE_VALUE),
- critical=self.ocspMustStaple_critical
- )
-
- if self.create_subject_key_identifier:
- csr = csr.add_extension(
- cryptography.x509.SubjectKeyIdentifier.from_public_key(self.privatekey.public_key()),
- critical=False
- )
- elif self.subject_key_identifier is not None:
- csr = csr.add_extension(cryptography.x509.SubjectKeyIdentifier(self.subject_key_identifier), critical=False)
-
- if self.authority_key_identifier is not None or self.authority_cert_issuer is not None or self.authority_cert_serial_number is not None:
- issuers = None
- if self.authority_cert_issuer is not None:
- issuers = [crypto_utils.cryptography_get_name(n) for n in self.authority_cert_issuer]
- csr = csr.add_extension(
- cryptography.x509.AuthorityKeyIdentifier(self.authority_key_identifier, issuers, self.authority_cert_serial_number),
- critical=False
- )
-
- digest = None
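- # Keys that sign without a pre-hash digest (e.g. Ed25519/Ed448) keep digest as None.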
- if crypto_utils.cryptography_key_needs_digest_for_signing(self.privatekey):
- if self.digest == 'sha256':
- digest = cryptography.hazmat.primitives.hashes.SHA256()
- elif self.digest == 'sha384':
- digest = cryptography.hazmat.primitives.hashes.SHA384()
- elif self.digest == 'sha512':
- digest = cryptography.hazmat.primitives.hashes.SHA512()
- elif self.digest == 'sha1':
- digest = cryptography.hazmat.primitives.hashes.SHA1()
- elif self.digest == 'md5':
- digest = cryptography.hazmat.primitives.hashes.MD5()
- # FIXME
- else:
- raise CertificateSigningRequestError('Unsupported digest "{0}"'.format(self.digest))
- try:
- self.request = csr.sign(self.privatekey, digest, self.cryptography_backend)
- except TypeError as e:
- if str(e) == 'Algorithm must be a registered hash algorithm.' and digest is None:
- self.module.fail_json(msg='Signing with Ed25519 and Ed448 keys requires cryptography 2.8 or newer.')
- raise
-
- return self.request.public_bytes(cryptography.hazmat.primitives.serialization.Encoding.PEM)
-
- def _load_private_key(self):
- try:
- if self.privatekey_content is not None:
- content = self.privatekey_content
- else:
- with open(self.privatekey_path, 'rb') as f:
- content = f.read()
- self.privatekey = cryptography.hazmat.primitives.serialization.load_pem_private_key(
- content,
- None if self.privatekey_passphrase is None else to_bytes(self.privatekey_passphrase),
- backend=self.cryptography_backend
- )
- except Exception as e:
- raise CertificateSigningRequestError(e)
-
- def _check_csr(self):
- def _check_subject(csr):
- subject = [(crypto_utils.cryptography_name_to_oid(entry[0]), entry[1]) for entry in self.subject]
- current_subject = [(sub.oid, sub.value) for sub in csr.subject]
- return set(subject) == set(current_subject)
-
- def _find_extension(extensions, exttype):
- return next(
- (ext for ext in extensions if isinstance(ext.value, exttype)),
- None
- )
-
- def _check_subjectAltName(extensions):
- current_altnames_ext = _find_extension(extensions, cryptography.x509.SubjectAlternativeName)
- current_altnames = [str(altname) for altname in current_altnames_ext.value] if current_altnames_ext else []
- altnames = [str(crypto_utils.cryptography_get_name(altname)) for altname in self.subjectAltName] if self.subjectAltName else []
- if set(altnames) != set(current_altnames):
- return False
- if altnames:
- if current_altnames_ext.critical != self.subjectAltName_critical:
- return False
- return True
-
- def _check_keyUsage(extensions):
- current_keyusage_ext = _find_extension(extensions, cryptography.x509.KeyUsage)
- if not self.keyUsage:
- return current_keyusage_ext is None
- elif current_keyusage_ext is None:
- return False
- params = crypto_utils.cryptography_parse_key_usage_params(self.keyUsage)
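- # Compare each requested flag against the private attribute ('_' + name) backing it on the KeyUsage extension value.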
- for param in params:
- if getattr(current_keyusage_ext.value, '_' + param) != params[param]:
- return False
- if current_keyusage_ext.critical != self.keyUsage_critical:
- return False
- return True
-
- def _check_extendedKeyUsage(extensions):
- current_usages_ext = _find_extension(extensions, cryptography.x509.ExtendedKeyUsage)
- current_usages = [str(usage) for usage in current_usages_ext.value] if current_usages_ext else []
- usages = [str(crypto_utils.cryptography_name_to_oid(usage)) for usage in self.extendedKeyUsage] if self.extendedKeyUsage else []
- if set(current_usages) != set(usages):
- return False
- if usages:
- if current_usages_ext.critical != self.extendedKeyUsage_critical:
- return False
- return True
-
- def _check_basicConstraints(extensions):
- bc_ext = _find_extension(extensions, cryptography.x509.BasicConstraints)
- current_ca = bc_ext.value.ca if bc_ext else False
- current_path_length = bc_ext.value.path_length if bc_ext else None
- ca, path_length = crypto_utils.cryptography_get_basic_constraints(self.basicConstraints)
- # Check CA flag
- if ca != current_ca:
- return False
- # Check path length
- if path_length != current_path_length:
- return False
- # Check criticality
- if self.basicConstraints:
- if bc_ext.critical != self.basicConstraints_critical:
- return False
- return True
-
- def _check_ocspMustStaple(extensions):
- try:
- # This only works with cryptography >= 2.1
- tlsfeature_ext = _find_extension(extensions, cryptography.x509.TLSFeature)
- has_tlsfeature = True
- except AttributeError as dummy:
- tlsfeature_ext = next(
- (ext for ext in extensions if ext.value.oid == CRYPTOGRAPHY_MUST_STAPLE_NAME),
- None
- )
- has_tlsfeature = False
- if self.ocspMustStaple:
- if not tlsfeature_ext or tlsfeature_ext.critical != self.ocspMustStaple_critical:
- return False
- if has_tlsfeature:
- return cryptography.x509.TLSFeatureType.status_request in tlsfeature_ext.value
- else:
- return tlsfeature_ext.value.value == CRYPTOGRAPHY_MUST_STAPLE_VALUE
- else:
- return tlsfeature_ext is None
-
- def _check_subject_key_identifier(extensions):
- ext = _find_extension(extensions, cryptography.x509.SubjectKeyIdentifier)
- if self.create_subject_key_identifier or self.subject_key_identifier is not None:
- if not ext or ext.critical:
- return False
- if self.create_subject_key_identifier:
- digest = cryptography.x509.SubjectKeyIdentifier.from_public_key(self.privatekey.public_key()).digest
- return ext.value.digest == digest
- else:
- return ext.value.digest == self.subject_key_identifier
- else:
- return ext is None
-
- def _check_authority_key_identifier(extensions):
- ext = _find_extension(extensions, cryptography.x509.AuthorityKeyIdentifier)
- if self.authority_key_identifier is not None or self.authority_cert_issuer is not None or self.authority_cert_serial_number is not None:
- if not ext or ext.critical:
- return False
- aci = None
- csr_aci = None
- if self.authority_cert_issuer is not None:
- aci = [str(crypto_utils.cryptography_get_name(n)) for n in self.authority_cert_issuer]
- if ext.value.authority_cert_issuer is not None:
- csr_aci = [str(n) for n in ext.value.authority_cert_issuer]
- return (ext.value.key_identifier == self.authority_key_identifier
- and csr_aci == aci
- and ext.value.authority_cert_serial_number == self.authority_cert_serial_number)
- else:
- return ext is None
-
- def _check_extensions(csr):
- extensions = csr.extensions
- return (_check_subjectAltName(extensions) and _check_keyUsage(extensions) and
- _check_extendedKeyUsage(extensions) and _check_basicConstraints(extensions) and
- _check_ocspMustStaple(extensions) and _check_subject_key_identifier(extensions) and
- _check_authority_key_identifier(extensions))
-
- def _check_signature(csr):
- if not csr.is_signature_valid:
- return False
- # To check whether public key of CSR belongs to private key,
- # encode both public keys and compare PEMs.
- key_a = csr.public_key().public_bytes(
- cryptography.hazmat.primitives.serialization.Encoding.PEM,
- cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
- )
- key_b = self.privatekey.public_key().public_bytes(
- cryptography.hazmat.primitives.serialization.Encoding.PEM,
- cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
- )
- return key_a == key_b
-
- try:
- csr = crypto_utils.load_certificate_request(self.path, backend='cryptography')
- except Exception as dummy:
- return False
-
- return _check_subject(csr) and _check_extensions(csr) and _check_signature(csr)
-
-
-def main():
- module = AnsibleModule(
- argument_spec=dict(
- state=dict(type='str', default='present', choices=['absent', 'present']),
- digest=dict(type='str', default='sha256'),
- privatekey_path=dict(type='path'),
- privatekey_content=dict(type='str'),
- privatekey_passphrase=dict(type='str', no_log=True),
- version=dict(type='int', default=1),
- force=dict(type='bool', default=False),
- path=dict(type='path', required=True),
- subject=dict(type='dict'),
- country_name=dict(type='str', aliases=['C', 'countryName']),
- state_or_province_name=dict(type='str', aliases=['ST', 'stateOrProvinceName']),
- locality_name=dict(type='str', aliases=['L', 'localityName']),
- organization_name=dict(type='str', aliases=['O', 'organizationName']),
- organizational_unit_name=dict(type='str', aliases=['OU', 'organizationalUnitName']),
- common_name=dict(type='str', aliases=['CN', 'commonName']),
- email_address=dict(type='str', aliases=['E', 'emailAddress']),
- subject_alt_name=dict(type='list', elements='str', aliases=['subjectAltName']),
- subject_alt_name_critical=dict(type='bool', default=False, aliases=['subjectAltName_critical']),
- use_common_name_for_san=dict(type='bool', default=True, aliases=['useCommonNameForSAN']),
- key_usage=dict(type='list', elements='str', aliases=['keyUsage']),
- key_usage_critical=dict(type='bool', default=False, aliases=['keyUsage_critical']),
- extended_key_usage=dict(type='list', elements='str', aliases=['extKeyUsage', 'extendedKeyUsage']),
- extended_key_usage_critical=dict(type='bool', default=False, aliases=['extKeyUsage_critical', 'extendedKeyUsage_critical']),
- basic_constraints=dict(type='list', elements='str', aliases=['basicConstraints']),
- basic_constraints_critical=dict(type='bool', default=False, aliases=['basicConstraints_critical']),
- ocsp_must_staple=dict(type='bool', default=False, aliases=['ocspMustStaple']),
- ocsp_must_staple_critical=dict(type='bool', default=False, aliases=['ocspMustStaple_critical']),
- backup=dict(type='bool', default=False),
- create_subject_key_identifier=dict(type='bool', default=False),
- subject_key_identifier=dict(type='str'),
- authority_key_identifier=dict(type='str'),
- authority_cert_issuer=dict(type='list', elements='str'),
- authority_cert_serial_number=dict(type='int'),
- select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'cryptography', 'pyopenssl']),
- return_content=dict(type='bool', default=False),
- ),
- required_together=[('authority_cert_issuer', 'authority_cert_serial_number')],
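- # The trailing True makes required_if an "any of" check: with state=present, at least one of privatekey_path and privatekey_content must be provided.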
- required_if=[('state', 'present', ['privatekey_path', 'privatekey_content'], True)],
- mutually_exclusive=(
- ['privatekey_path', 'privatekey_content'],
- ),
- add_file_common_args=True,
- supports_check_mode=True,
- )
-
- if module.params['version'] != 1:
- module.deprecate('The version option will only support allowed values from Ansible 2.14 on. '
- 'Currently, only the value 1 is allowed by RFC 2986',
- version='2.14', collection_name='ansible.builtin')
-
- base_dir = os.path.dirname(module.params['path']) or '.'
- if not os.path.isdir(base_dir):
- module.fail_json(name=base_dir, msg='The directory %s does not exist or is not a directory' % base_dir)
-
- backend = module.params['select_crypto_backend']
- if backend == 'auto':
- # Detect which backends can be used
- can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
- can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
-
- # First try cryptography, then pyOpenSSL
- if can_use_cryptography:
- backend = 'cryptography'
- elif can_use_pyopenssl:
- backend = 'pyopenssl'
-
- # Success?
- if backend == 'auto':
- module.fail_json(msg=("Can't detect any of the required Python libraries "
- "cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
- MINIMAL_CRYPTOGRAPHY_VERSION,
- MINIMAL_PYOPENSSL_VERSION))
- try:
- if backend == 'pyopenssl':
- if not PYOPENSSL_FOUND:
- module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
- exception=PYOPENSSL_IMP_ERR)
- try:
- getattr(crypto.X509Req, 'get_extensions')
- except AttributeError:
- module.fail_json(msg='You need to have PyOpenSSL>=0.15 to generate CSRs')
-
- module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated',
- version='2.13', collection_name='ansible.builtin')
- csr = CertificateSigningRequestPyOpenSSL(module)
- elif backend == 'cryptography':
- if not CRYPTOGRAPHY_FOUND:
- module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
- exception=CRYPTOGRAPHY_IMP_ERR)
- csr = CertificateSigningRequestCryptography(module)
-
- if module.params['state'] == 'present':
- if module.check_mode:
- result = csr.dump()
- result['changed'] = module.params['force'] or not csr.check(module)
- module.exit_json(**result)
-
- csr.generate(module)
-
- else:
- if module.check_mode:
- result = csr.dump()
- result['changed'] = os.path.exists(module.params['path'])
- module.exit_json(**result)
-
- csr.remove(module)
-
- result = csr.dump()
- module.exit_json(**result)
- except crypto_utils.OpenSSLObjectError as exc:
- module.fail_json(msg=to_native(exc))
-
-
-if __name__ == "__main__":
- main()
diff --git a/test/support/integration/plugins/modules/openssl_privatekey.py b/test/support/integration/plugins/modules/openssl_privatekey.py
deleted file mode 100644
index 9c247a3942..0000000000
--- a/test/support/integration/plugins/modules/openssl_privatekey.py
+++ /dev/null
@@ -1,944 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2016, Yanis Guenane <yanis+ansible@guenane.org>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: openssl_privatekey
-version_added: "2.3"
-short_description: Generate OpenSSL private keys
-description:
- - This module allows one to (re)generate OpenSSL private keys.
- - One can generate L(RSA,https://en.wikipedia.org/wiki/RSA_%28cryptosystem%29),
- L(DSA,https://en.wikipedia.org/wiki/Digital_Signature_Algorithm),
- L(ECC,https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) or
- L(EdDSA,https://en.wikipedia.org/wiki/EdDSA) private keys.
- - Keys are generated in PEM format.
- - "Please note that the module regenerates private keys if they don't match
- the module's options. In particular, if you provide another passphrase
- (or specify none), change the keysize, etc., the private key will be
- regenerated. If you are concerned that this could **overwrite your private key**,
- consider using the I(backup) option."
- - The module can use the cryptography Python library, or the pyOpenSSL Python
- library. By default, it tries to detect which one is available. This can be
- overridden with the I(select_crypto_backend) option. Please note that the
- PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
-requirements:
- - Either cryptography >= 1.2.3 (older versions might work as well)
- - Or pyOpenSSL
-author:
- - Yanis Guenane (@Spredzy)
- - Felix Fontein (@felixfontein)
-options:
- state:
- description:
- - Whether the private key should exist or not, taking action if the state is different from what is stated.
- type: str
- default: present
- choices: [ absent, present ]
- size:
- description:
- - Size (in bits) of the TLS/SSL key to generate.
- type: int
- default: 4096
- type:
- description:
- - The algorithm used to generate the TLS/SSL private key.
- - Note that C(ECC), C(X25519), C(X448), C(Ed25519) and C(Ed448) require the C(cryptography) backend.
- C(X25519) needs cryptography 2.5 or newer, while C(X448), C(Ed25519) and C(Ed448) require
- cryptography 2.6 or newer. For C(ECC), the minimal cryptography version required depends on the
- I(curve) option.
- type: str
- default: RSA
- choices: [ DSA, ECC, Ed25519, Ed448, RSA, X25519, X448 ]
- curve:
- description:
- - Note that not all curves are supported by all versions of C(cryptography).
- - For maximal interoperability, C(secp384r1) or C(secp256r1) should be used.
- - We use the curve names as defined in the
- L(IANA registry for TLS,https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8).
- type: str
- choices:
- - secp384r1
- - secp521r1
- - secp224r1
- - secp192r1
- - secp256r1
- - secp256k1
- - brainpoolP256r1
- - brainpoolP384r1
- - brainpoolP512r1
- - sect571k1
- - sect409k1
- - sect283k1
- - sect233k1
- - sect163k1
- - sect571r1
- - sect409r1
- - sect283r1
- - sect233r1
- - sect163r2
- version_added: "2.8"
- force:
- description:
- - Should the key be regenerated even if it already exists.
- type: bool
- default: no
- path:
- description:
- - Name of the file in which the generated TLS/SSL private key will be written. It will have 0600 mode.
- type: path
- required: true
- passphrase:
- description:
- - The passphrase for the private key.
- type: str
- version_added: "2.4"
- cipher:
- description:
- - The cipher to encrypt the private key. (Valid values can be found by
- running `openssl list -cipher-algorithms` or `openssl list-cipher-algorithms`,
- depending on your OpenSSL version.)
- - When using the C(cryptography) backend, use C(auto).
- type: str
- version_added: "2.4"
- select_crypto_backend:
- description:
- - Determines which crypto backend to use.
- - The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- - If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- - If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- - Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
- From that point on, only the C(cryptography) backend will be available.
- type: str
- default: auto
- choices: [ auto, cryptography, pyopenssl ]
- version_added: "2.8"
- format:
- description:
- - Determines which format the private key is written in. By default, PKCS1 (traditional OpenSSL format)
- is used for all keys which support it. Please note that not every key can be exported in any format.
- - The value C(auto) selects a format based on the key format. The value C(auto_ignore) does the same,
- but for existing private key files, it will not force a regeneration when their format is not the automatically
- selected one for generation.
- - Note that if the format for an existing private key mismatches, the key is *regenerated* by default.
- To change this behavior, use the I(format_mismatch) option.
- - The I(format) option is only supported by the C(cryptography) backend. The C(pyopenssl) backend will
- fail if a value different from C(auto_ignore) is used.
- type: str
- default: auto_ignore
- choices: [ pkcs1, pkcs8, raw, auto, auto_ignore ]
- version_added: "2.10"
- format_mismatch:
- description:
- - Determines behavior of the module if the format of a private key does not match the expected format, but all
- other parameters are as expected.
- - If set to C(regenerate) (default), generates a new private key.
- - If set to C(convert), the key will be converted to the new format instead.
- - Only supported by the C(cryptography) backend.
- type: str
- default: regenerate
- choices: [ regenerate, convert ]
- version_added: "2.10"
- backup:
- description:
- - Create a backup file including a timestamp so you can get
- the original private key back if you overwrote it with a new one by accident.
- type: bool
- default: no
- version_added: "2.8"
- return_content:
- description:
- - If set to C(yes), will return the (current or generated) private key's content as I(privatekey).
- - Note that especially if the private key is not encrypted, you have to make sure that the returned
- value is treated appropriately and not accidentally written to logs etc.! Use with care!
- type: bool
- default: no
- version_added: "2.10"
- regenerate:
- description:
- - Allows configuring in which situations the module is allowed to regenerate private keys.
- The module will always generate a new key if the destination file does not exist.
- - By default, the key will be regenerated when it doesn't match the module's options,
- except when the key cannot be read or the passphrase does not match. Please note that
- this B(changed) for Ansible 2.10. For Ansible 2.9, the behavior was as if C(full_idempotence)
- is specified.
- - If set to C(never), the module will fail if the key cannot be read or the passphrase
- isn't matching, and will never regenerate an existing key.
- - If set to C(fail), the module will fail if the key does not correspond to the module's
- options.
- - If set to C(partial_idempotence), the key will be regenerated if it does not conform to
- the module's options. The key is B(not) regenerated if it cannot be read (broken file),
- the key is protected by an unknown passphrase, or when the key is not protected by a
- passphrase, but a passphrase is specified.
- - If set to C(full_idempotence), the key will be regenerated if it does not conform to the
- module's options. This is also the case if the key cannot be read (broken file), the key
- is protected by an unknown passphrase, or when the key is not protected by a passphrase,
- but a passphrase is specified. Make sure you have a B(backup) when using this option!
- - If set to C(always), the module will always regenerate the key. This is equivalent to
- setting I(force) to C(yes).
- - Note that if I(format_mismatch) is set to C(convert) and everything matches except the
- format, the key will always be converted, except if I(regenerate) is set to C(always).
- type: str
- choices:
- - never
- - fail
- - partial_idempotence
- - full_idempotence
- - always
- default: full_idempotence
- version_added: '2.10'
-extends_documentation_fragment:
-- files
-seealso:
-- module: openssl_certificate
-- module: openssl_csr
-- module: openssl_dhparam
-- module: openssl_pkcs12
-- module: openssl_publickey
-'''
-
-EXAMPLES = r'''
-- name: Generate an OpenSSL private key with the default values (4096 bits, RSA)
- openssl_privatekey:
- path: /etc/ssl/private/ansible.com.pem
-
-- name: Generate an OpenSSL private key with the default values (4096 bits, RSA) and a passphrase
- openssl_privatekey:
- path: /etc/ssl/private/ansible.com.pem
- passphrase: ansible
- cipher: aes256
-
-- name: Generate an OpenSSL private key with a different size (2048 bits)
- openssl_privatekey:
- path: /etc/ssl/private/ansible.com.pem
- size: 2048
-
-- name: Force regenerate an OpenSSL private key if it already exists
- openssl_privatekey:
- path: /etc/ssl/private/ansible.com.pem
- force: yes
-
-- name: Generate an OpenSSL private key with a different algorithm (DSA)
- openssl_privatekey:
- path: /etc/ssl/private/ansible.com.pem
- type: DSA
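-
-# The following tasks are illustrative sketches of the format, format_mismatch,
-# backup and regenerate options documented above; paths and values are examples only.
-- name: Generate an OpenSSL private key in PKCS#8 format, converting an existing key if necessary
-  openssl_privatekey:
-    path: /etc/ssl/private/ansible.com.pem
-    format: pkcs8
-    format_mismatch: convert
-
-- name: Generate an OpenSSL private key, keeping a backup and never regenerating an existing key
-  openssl_privatekey:
-    path: /etc/ssl/private/ansible.com.pem
-    backup: yes
-    regenerate: never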
-'''
-
-RETURN = r'''
-size:
- description: Size (in bits) of the TLS/SSL private key.
- returned: changed or success
- type: int
- sample: 4096
-type:
- description: Algorithm used to generate the TLS/SSL private key.
- returned: changed or success
- type: str
- sample: RSA
-curve:
- description: Elliptic curve used to generate the TLS/SSL private key.
- returned: changed or success, and I(type) is C(ECC)
- type: str
- sample: secp256r1
-filename:
- description: Path to the generated TLS/SSL private key file.
- returned: changed or success
- type: str
- sample: /etc/ssl/private/ansible.com.pem
-fingerprint:
- description:
-    - The fingerprint of the public key. A fingerprint is generated for each hash algorithm available in C(hashlib.algorithms).
- - The PyOpenSSL backend requires PyOpenSSL >= 16.0 for meaningful output.
- returned: changed or success
- type: dict
- sample:
- md5: "84:75:71:72:8d:04:b5:6c:4d:37:6d:66:83:f5:4c:29"
- sha1: "51:cc:7c:68:5d:eb:41:43:88:7e:1a:ae:c7:f8:24:72:ee:71:f6:10"
- sha224: "b1:19:a6:6c:14:ac:33:1d:ed:18:50:d3:06:5c:b2:32:91:f1:f1:52:8c:cb:d5:75:e9:f5:9b:46"
- sha256: "41:ab:c7:cb:d5:5f:30:60:46:99:ac:d4:00:70:cf:a1:76:4f:24:5d:10:24:57:5d:51:6e:09:97:df:2f:de:c7"
- sha384: "85:39:50:4e:de:d9:19:33:40:70:ae:10:ab:59:24:19:51:c3:a2:e4:0b:1c:b1:6e:dd:b3:0c:d9:9e:6a:46:af:da:18:f8:ef:ae:2e:c0:9a:75:2c:9b:b3:0f:3a:5f:3d"
- sha512: "fd:ed:5e:39:48:5f:9f:fe:7f:25:06:3f:79:08:cd:ee:a5:e7:b3:3d:13:82:87:1f:84:e1:f5:c7:28:77:53:94:86:56:38:69:f0:d9:35:22:01:1e:a6:60:...:0f:9b"
-backup_file:
- description: Name of backup file created.
- returned: changed and if I(backup) is C(yes)
- type: str
- sample: /path/to/privatekey.pem.2019-03-09@11:22~
-privatekey:
- description:
- - The (current or generated) private key's content.
- - Will be Base64-encoded if the key is in raw format.
- returned: if I(state) is C(present) and I(return_content) is C(yes)
- type: str
- version_added: "2.10"
-'''
-
-import abc
-import base64
-import os
-import traceback
-from distutils.version import LooseVersion
-
-MINIMAL_PYOPENSSL_VERSION = '0.6'
-MINIMAL_CRYPTOGRAPHY_VERSION = '1.2.3'
-
-PYOPENSSL_IMP_ERR = None
-try:
- import OpenSSL
- from OpenSSL import crypto
- PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
-except ImportError:
- PYOPENSSL_IMP_ERR = traceback.format_exc()
- PYOPENSSL_FOUND = False
-else:
- PYOPENSSL_FOUND = True
-
-CRYPTOGRAPHY_IMP_ERR = None
-try:
- import cryptography
- import cryptography.exceptions
- import cryptography.hazmat.backends
- import cryptography.hazmat.primitives.serialization
- import cryptography.hazmat.primitives.asymmetric.rsa
- import cryptography.hazmat.primitives.asymmetric.dsa
- import cryptography.hazmat.primitives.asymmetric.ec
- import cryptography.hazmat.primitives.asymmetric.utils
- CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
-except ImportError:
- CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
- CRYPTOGRAPHY_FOUND = False
-else:
- CRYPTOGRAPHY_FOUND = True
-
-from ansible.module_utils.crypto import (
- CRYPTOGRAPHY_HAS_X25519,
- CRYPTOGRAPHY_HAS_X25519_FULL,
- CRYPTOGRAPHY_HAS_X448,
- CRYPTOGRAPHY_HAS_ED25519,
- CRYPTOGRAPHY_HAS_ED448,
-)
-
-from ansible.module_utils import crypto as crypto_utils
-from ansible.module_utils._text import to_native, to_bytes
-from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-
-
-class PrivateKeyError(crypto_utils.OpenSSLObjectError):
- pass
-
-
-class PrivateKeyBase(crypto_utils.OpenSSLObject):
-
- def __init__(self, module):
- super(PrivateKeyBase, self).__init__(
- module.params['path'],
- module.params['state'],
- module.params['force'],
- module.check_mode
- )
- self.size = module.params['size']
- self.passphrase = module.params['passphrase']
- self.cipher = module.params['cipher']
- self.privatekey = None
- self.fingerprint = {}
- self.format = module.params['format']
- self.format_mismatch = module.params['format_mismatch']
- self.privatekey_bytes = None
- self.return_content = module.params['return_content']
- self.regenerate = module.params['regenerate']
- if self.regenerate == 'always':
- self.force = True
-
- self.backup = module.params['backup']
- self.backup_file = None
-
- if module.params['mode'] is None:
- module.params['mode'] = '0600'
-
- @abc.abstractmethod
- def _generate_private_key(self):
- """(Re-)Generate private key."""
- pass
-
- @abc.abstractmethod
- def _ensure_private_key_loaded(self):
- """Make sure that the private key has been loaded."""
- pass
-
- @abc.abstractmethod
- def _get_private_key_data(self):
- """Return bytes for self.privatekey"""
- pass
-
- @abc.abstractmethod
- def _get_fingerprint(self):
- pass
-
- def generate(self, module):
- """Generate a keypair."""
-
- if not self.check(module, perms_required=False, ignore_conversion=True) or self.force:
- # Regenerate
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- self._generate_private_key()
- privatekey_data = self._get_private_key_data()
- if self.return_content:
- self.privatekey_bytes = privatekey_data
- crypto_utils.write_file(module, privatekey_data, 0o600)
- self.changed = True
- elif not self.check(module, perms_required=False, ignore_conversion=False):
- # Convert
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- self._ensure_private_key_loaded()
- privatekey_data = self._get_private_key_data()
- if self.return_content:
- self.privatekey_bytes = privatekey_data
- crypto_utils.write_file(module, privatekey_data, 0o600)
- self.changed = True
-
- self.fingerprint = self._get_fingerprint()
- file_args = module.load_file_common_arguments(module.params)
- if module.set_fs_attributes_if_different(file_args, False):
- self.changed = True
-
- def remove(self, module):
- if self.backup:
- self.backup_file = module.backup_local(self.path)
- super(PrivateKeyBase, self).remove(module)
-
- @abc.abstractmethod
- def _check_passphrase(self):
- pass
-
- @abc.abstractmethod
- def _check_size_and_type(self):
- pass
-
- @abc.abstractmethod
- def _check_format(self):
- pass
-
- def check(self, module, perms_required=True, ignore_conversion=True):
- """Ensure the resource is in its desired state."""
-
- state_and_perms = super(PrivateKeyBase, self).check(module, perms_required=False)
-
- if not state_and_perms:
- # key does not exist
- return False
-
- if not self._check_passphrase():
- if self.regenerate in ('full_idempotence', 'always'):
- return False
-            module.fail_json(msg='Unable to read the key. The key is protected with another passphrase, has no passphrase, or is broken.'
-                                 ' Will not proceed. To force regeneration, call the module with `regenerate`'
-                                 ' set to `full_idempotence` or `always`, or with `force=yes`.')
-
- if self.regenerate != 'never':
- if not self._check_size_and_type():
- if self.regenerate in ('partial_idempotence', 'full_idempotence', 'always'):
- return False
- module.fail_json(msg='Key has wrong type and/or size.'
-                                     ' Will not proceed. To force regeneration, call the module with `regenerate`'
- ' set to `partial_idempotence`, `full_idempotence` or `always`, or with `force=yes`.')
-
- if not self._check_format():
- # During conversion step, convert if format does not match and format_mismatch == 'convert'
- if not ignore_conversion and self.format_mismatch == 'convert':
- return False
- # During generation step, regenerate if format does not match and format_mismatch == 'regenerate'
- if ignore_conversion and self.format_mismatch == 'regenerate' and self.regenerate != 'never':
- if not ignore_conversion or self.regenerate in ('partial_idempotence', 'full_idempotence', 'always'):
- return False
- module.fail_json(msg='Key has wrong format.'
-                                     ' Will not proceed. To force regeneration, call the module with `regenerate`'
- ' set to `partial_idempotence`, `full_idempotence` or `always`, or with `force=yes`.'
- ' To convert the key, set `format_mismatch` to `convert`.')
-
- # check whether permissions are correct (in case that needs to be checked)
- return not perms_required or super(PrivateKeyBase, self).check(module, perms_required=perms_required)
-
- def dump(self):
- """Serialize the object into a dictionary."""
-
- result = {
- 'size': self.size,
- 'filename': self.path,
- 'changed': self.changed,
- 'fingerprint': self.fingerprint,
- }
- if self.backup_file:
- result['backup_file'] = self.backup_file
- if self.return_content:
- if self.privatekey_bytes is None:
- self.privatekey_bytes = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
- if self.privatekey_bytes:
- if crypto_utils.identify_private_key_format(self.privatekey_bytes) == 'raw':
- result['privatekey'] = base64.b64encode(self.privatekey_bytes)
- else:
- result['privatekey'] = self.privatekey_bytes.decode('utf-8')
- else:
- result['privatekey'] = None
-
- return result
-
-
-# Implementation using pyOpenSSL
-class PrivateKeyPyOpenSSL(PrivateKeyBase):
-
- def __init__(self, module):
- super(PrivateKeyPyOpenSSL, self).__init__(module)
-
- if module.params['type'] == 'RSA':
- self.type = crypto.TYPE_RSA
- elif module.params['type'] == 'DSA':
- self.type = crypto.TYPE_DSA
- else:
- module.fail_json(msg="PyOpenSSL backend only supports RSA and DSA keys.")
-
- if self.format != 'auto_ignore':
- module.fail_json(msg="PyOpenSSL backend only supports auto_ignore format.")
-
- def _generate_private_key(self):
- """(Re-)Generate private key."""
- self.privatekey = crypto.PKey()
- try:
- self.privatekey.generate_key(self.type, self.size)
- except (TypeError, ValueError) as exc:
- raise PrivateKeyError(exc)
-
- def _ensure_private_key_loaded(self):
- """Make sure that the private key has been loaded."""
- if self.privatekey is None:
- try:
-                self.privatekey = crypto_utils.load_privatekey(self.path, self.passphrase)
- except crypto_utils.OpenSSLBadPassphraseError as exc:
- raise PrivateKeyError(exc)
-
- def _get_private_key_data(self):
- """Return bytes for self.privatekey"""
- if self.cipher and self.passphrase:
- return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey,
- self.cipher, to_bytes(self.passphrase))
- else:
- return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey)
-
- def _get_fingerprint(self):
- return crypto_utils.get_fingerprint(self.path, self.passphrase)
-
- def _check_passphrase(self):
- try:
- crypto_utils.load_privatekey(self.path, self.passphrase)
- return True
- except Exception as dummy:
- return False
-
- def _check_size_and_type(self):
- def _check_size(privatekey):
- return self.size == privatekey.bits()
-
- def _check_type(privatekey):
- return self.type == privatekey.type()
-
- self._ensure_private_key_loaded()
- return _check_size(self.privatekey) and _check_type(self.privatekey)
-
- def _check_format(self):
- # Not supported by this backend
- return True
-
- def dump(self):
- """Serialize the object into a dictionary."""
-
- result = super(PrivateKeyPyOpenSSL, self).dump()
-
- if self.type == crypto.TYPE_RSA:
- result['type'] = 'RSA'
- else:
- result['type'] = 'DSA'
-
- return result
-
-
-# Implementation using cryptography
-class PrivateKeyCryptography(PrivateKeyBase):
-
- def _get_ec_class(self, ectype):
- ecclass = cryptography.hazmat.primitives.asymmetric.ec.__dict__.get(ectype)
- if ecclass is None:
- self.module.fail_json(msg='Your cryptography version does not support {0}'.format(ectype))
- return ecclass
-
- def _add_curve(self, name, ectype, deprecated=False):
- def create(size):
- ecclass = self._get_ec_class(ectype)
- return ecclass()
-
- def verify(privatekey):
- ecclass = self._get_ec_class(ectype)
- return isinstance(privatekey.private_numbers().public_numbers.curve, ecclass)
-
- self.curves[name] = {
- 'create': create,
- 'verify': verify,
- 'deprecated': deprecated,
- }
-
- def __init__(self, module):
- super(PrivateKeyCryptography, self).__init__(module)
-
- self.curves = dict()
- self._add_curve('secp384r1', 'SECP384R1')
- self._add_curve('secp521r1', 'SECP521R1')
- self._add_curve('secp224r1', 'SECP224R1')
- self._add_curve('secp192r1', 'SECP192R1')
- self._add_curve('secp256r1', 'SECP256R1')
- self._add_curve('secp256k1', 'SECP256K1')
- self._add_curve('brainpoolP256r1', 'BrainpoolP256R1', deprecated=True)
- self._add_curve('brainpoolP384r1', 'BrainpoolP384R1', deprecated=True)
- self._add_curve('brainpoolP512r1', 'BrainpoolP512R1', deprecated=True)
- self._add_curve('sect571k1', 'SECT571K1', deprecated=True)
- self._add_curve('sect409k1', 'SECT409K1', deprecated=True)
- self._add_curve('sect283k1', 'SECT283K1', deprecated=True)
- self._add_curve('sect233k1', 'SECT233K1', deprecated=True)
- self._add_curve('sect163k1', 'SECT163K1', deprecated=True)
- self._add_curve('sect571r1', 'SECT571R1', deprecated=True)
- self._add_curve('sect409r1', 'SECT409R1', deprecated=True)
- self._add_curve('sect283r1', 'SECT283R1', deprecated=True)
- self._add_curve('sect233r1', 'SECT233R1', deprecated=True)
- self._add_curve('sect163r2', 'SECT163R2', deprecated=True)
-
- self.module = module
- self.cryptography_backend = cryptography.hazmat.backends.default_backend()
-
- self.type = module.params['type']
- self.curve = module.params['curve']
- if not CRYPTOGRAPHY_HAS_X25519 and self.type == 'X25519':
- self.module.fail_json(msg='Your cryptography version does not support X25519')
- if not CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
- self.module.fail_json(msg='Your cryptography version does not support X25519 serialization')
- if not CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
- self.module.fail_json(msg='Your cryptography version does not support X448')
- if not CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
- self.module.fail_json(msg='Your cryptography version does not support Ed25519')
- if not CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
- self.module.fail_json(msg='Your cryptography version does not support Ed448')
-
- def _get_wanted_format(self):
- if self.format not in ('auto', 'auto_ignore'):
- return self.format
- if self.type in ('X25519', 'X448', 'Ed25519', 'Ed448'):
- return 'pkcs8'
- else:
- return 'pkcs1'
-
- def _generate_private_key(self):
- """(Re-)Generate private key."""
- try:
- if self.type == 'RSA':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key(
- public_exponent=65537, # OpenSSL always uses this
- key_size=self.size,
- backend=self.cryptography_backend
- )
- if self.type == 'DSA':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.dsa.generate_private_key(
- key_size=self.size,
- backend=self.cryptography_backend
- )
- if CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.generate()
- if CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.generate()
- if CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.generate()
- if CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
- self.privatekey = cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.generate()
- if self.type == 'ECC' and self.curve in self.curves:
- if self.curves[self.curve]['deprecated']:
- self.module.warn('Elliptic curves of type {0} should not be used for new keys!'.format(self.curve))
- self.privatekey = cryptography.hazmat.primitives.asymmetric.ec.generate_private_key(
- curve=self.curves[self.curve]['create'](self.size),
- backend=self.cryptography_backend
- )
- except cryptography.exceptions.UnsupportedAlgorithm as dummy:
- self.module.fail_json(msg='Cryptography backend does not support the algorithm required for {0}'.format(self.type))
-
- def _ensure_private_key_loaded(self):
- """Make sure that the private key has been loaded."""
- if self.privatekey is None:
- self.privatekey = self._load_privatekey()
-
- def _get_private_key_data(self):
- """Return bytes for self.privatekey"""
- # Select export format and encoding
- try:
- export_format = self._get_wanted_format()
- export_encoding = cryptography.hazmat.primitives.serialization.Encoding.PEM
- if export_format == 'pkcs1':
- # "TraditionalOpenSSL" format is PKCS1
- export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.TraditionalOpenSSL
- elif export_format == 'pkcs8':
- export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
- elif export_format == 'raw':
- export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.Raw
- export_encoding = cryptography.hazmat.primitives.serialization.Encoding.Raw
- except AttributeError:
- self.module.fail_json(msg='Cryptography backend does not support the selected output format "{0}"'.format(self.format))
-
- # Select key encryption
- encryption_algorithm = cryptography.hazmat.primitives.serialization.NoEncryption()
- if self.cipher and self.passphrase:
- if self.cipher == 'auto':
- encryption_algorithm = cryptography.hazmat.primitives.serialization.BestAvailableEncryption(to_bytes(self.passphrase))
- else:
- self.module.fail_json(msg='Cryptography backend can only use "auto" for cipher option.')
-
- # Serialize key
- try:
- return self.privatekey.private_bytes(
- encoding=export_encoding,
- format=export_format,
- encryption_algorithm=encryption_algorithm
- )
- except ValueError as dummy:
- self.module.fail_json(
- msg='Cryptography backend cannot serialize the private key in the required format "{0}"'.format(self.format)
- )
- except Exception as dummy:
- self.module.fail_json(
- msg='Error while serializing the private key in the required format "{0}"'.format(self.format),
- exception=traceback.format_exc()
- )
-
- def _load_privatekey(self):
- try:
- # Read bytes
- with open(self.path, 'rb') as f:
- data = f.read()
- # Interpret bytes depending on format.
- format = crypto_utils.identify_private_key_format(data)
- if format == 'raw':
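-                # A raw key carries no metadata, so the key type is inferred from its length:
-                # 56 bytes -> X448, 57 bytes -> Ed448, 32 bytes -> X25519 or Ed25519
-                # (disambiguated by the requested type, or by trying both).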
- if len(data) == 56 and CRYPTOGRAPHY_HAS_X448:
- return cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.from_private_bytes(data)
- if len(data) == 57 and CRYPTOGRAPHY_HAS_ED448:
- return cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.from_private_bytes(data)
- if len(data) == 32:
- if CRYPTOGRAPHY_HAS_X25519 and (self.type == 'X25519' or not CRYPTOGRAPHY_HAS_ED25519):
- return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
- if CRYPTOGRAPHY_HAS_ED25519 and (self.type == 'Ed25519' or not CRYPTOGRAPHY_HAS_X25519):
- return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
- if CRYPTOGRAPHY_HAS_X25519 and CRYPTOGRAPHY_HAS_ED25519:
- try:
- return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
- except Exception:
- return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
- raise PrivateKeyError('Cannot load raw key')
- else:
- return cryptography.hazmat.primitives.serialization.load_pem_private_key(
- data,
- None if self.passphrase is None else to_bytes(self.passphrase),
- backend=self.cryptography_backend
- )
- except Exception as e:
- raise PrivateKeyError(e)
-
- def _get_fingerprint(self):
- # Get bytes of public key
- private_key = self._load_privatekey()
- public_key = private_key.public_key()
- public_key_bytes = public_key.public_bytes(
- cryptography.hazmat.primitives.serialization.Encoding.DER,
- cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
- )
- # Get fingerprints of public_key_bytes
- return crypto_utils.get_fingerprint_of_bytes(public_key_bytes)
-
- def _check_passphrase(self):
- try:
- with open(self.path, 'rb') as f:
- data = f.read()
- format = crypto_utils.identify_private_key_format(data)
- if format == 'raw':
- # Raw keys cannot be encrypted. To avoid incompatibilities, we try to
- # actually load the key (and return False when this fails).
- self._load_privatekey()
- # Loading the key succeeded. Only return True when no passphrase was
- # provided.
- return self.passphrase is None
- else:
- return cryptography.hazmat.primitives.serialization.load_pem_private_key(
- data,
- None if self.passphrase is None else to_bytes(self.passphrase),
- backend=self.cryptography_backend
- )
- except Exception as dummy:
- return False
-
- def _check_size_and_type(self):
- self._ensure_private_key_loaded()
-
- if isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):
- return self.type == 'RSA' and self.size == self.privatekey.key_size
- if isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.dsa.DSAPrivateKey):
- return self.type == 'DSA' and self.size == self.privatekey.key_size
- if CRYPTOGRAPHY_HAS_X25519 and isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey):
- return self.type == 'X25519'
- if CRYPTOGRAPHY_HAS_X448 and isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey):
- return self.type == 'X448'
- if CRYPTOGRAPHY_HAS_ED25519 and isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
- return self.type == 'Ed25519'
- if CRYPTOGRAPHY_HAS_ED448 and isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
- return self.type == 'Ed448'
- if isinstance(self.privatekey, cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey):
- if self.type != 'ECC':
- return False
- if self.curve not in self.curves:
- return False
- return self.curves[self.curve]['verify'](self.privatekey)
-
- return False
-
- def _check_format(self):
- if self.format == 'auto_ignore':
- return True
- try:
- with open(self.path, 'rb') as f:
- content = f.read()
- format = crypto_utils.identify_private_key_format(content)
- return format == self._get_wanted_format()
- except Exception as dummy:
- return False
-
- def dump(self):
- """Serialize the object into a dictionary."""
- result = super(PrivateKeyCryptography, self).dump()
- result['type'] = self.type
- if self.type == 'ECC':
- result['curve'] = self.curve
- return result
-
-
-def main():
-
- module = AnsibleModule(
- argument_spec=dict(
- state=dict(type='str', default='present', choices=['present', 'absent']),
- size=dict(type='int', default=4096),
- type=dict(type='str', default='RSA', choices=[
- 'DSA', 'ECC', 'Ed25519', 'Ed448', 'RSA', 'X25519', 'X448'
- ]),
- curve=dict(type='str', choices=[
- 'secp384r1', 'secp521r1', 'secp224r1', 'secp192r1', 'secp256r1',
- 'secp256k1', 'brainpoolP256r1', 'brainpoolP384r1', 'brainpoolP512r1',
- 'sect571k1', 'sect409k1', 'sect283k1', 'sect233k1', 'sect163k1',
- 'sect571r1', 'sect409r1', 'sect283r1', 'sect233r1', 'sect163r2',
- ]),
- force=dict(type='bool', default=False),
- path=dict(type='path', required=True),
- passphrase=dict(type='str', no_log=True),
- cipher=dict(type='str'),
- backup=dict(type='bool', default=False),
- format=dict(type='str', default='auto_ignore', choices=['pkcs1', 'pkcs8', 'raw', 'auto', 'auto_ignore']),
- format_mismatch=dict(type='str', default='regenerate', choices=['regenerate', 'convert']),
- select_crypto_backend=dict(type='str', choices=['auto', 'pyopenssl', 'cryptography'], default='auto'),
- return_content=dict(type='bool', default=False),
- regenerate=dict(
- type='str',
- default='full_idempotence',
- choices=['never', 'fail', 'partial_idempotence', 'full_idempotence', 'always']
- ),
- ),
- supports_check_mode=True,
- add_file_common_args=True,
- required_together=[
- ['cipher', 'passphrase']
- ],
- required_if=[
- ['type', 'ECC', ['curve']],
- ],
- )
-
- base_dir = os.path.dirname(module.params['path']) or '.'
- if not os.path.isdir(base_dir):
- module.fail_json(
- name=base_dir,
-            msg='The directory %s does not exist or is not a directory' % base_dir
- )
-
- backend = module.params['select_crypto_backend']
- if backend == 'auto':
-        # Detect which backends can be used
- can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
- can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
-
- # Decision
- if module.params['cipher'] and module.params['passphrase'] and module.params['cipher'] != 'auto':
- # First try pyOpenSSL, then cryptography
- if can_use_pyopenssl:
- backend = 'pyopenssl'
- elif can_use_cryptography:
- backend = 'cryptography'
- else:
- # First try cryptography, then pyOpenSSL
- if can_use_cryptography:
- backend = 'cryptography'
- elif can_use_pyopenssl:
- backend = 'pyopenssl'
-
- # Success?
- if backend == 'auto':
- module.fail_json(msg=("Can't detect any of the required Python libraries "
- "cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
- MINIMAL_CRYPTOGRAPHY_VERSION,
- MINIMAL_PYOPENSSL_VERSION))
- try:
- if backend == 'pyopenssl':
- if not PYOPENSSL_FOUND:
- module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
- exception=PYOPENSSL_IMP_ERR)
- module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated',
- version='2.13', collection_name='ansible.builtin')
- private_key = PrivateKeyPyOpenSSL(module)
- elif backend == 'cryptography':
- if not CRYPTOGRAPHY_FOUND:
- module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
- exception=CRYPTOGRAPHY_IMP_ERR)
- private_key = PrivateKeyCryptography(module)
-
- if private_key.state == 'present':
- if module.check_mode:
- result = private_key.dump()
- result['changed'] = private_key.force \
- or not private_key.check(module, ignore_conversion=True) \
- or not private_key.check(module, ignore_conversion=False)
- module.exit_json(**result)
-
- private_key.generate(module)
- else:
- if module.check_mode:
- result = private_key.dump()
- result['changed'] = os.path.exists(module.params['path'])
- module.exit_json(**result)
-
- private_key.remove(module)
-
- result = private_key.dump()
- module.exit_json(**result)
- except crypto_utils.OpenSSLObjectError as exc:
- module.fail_json(msg=to_native(exc))
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/selinux.py b/test/support/integration/plugins/modules/selinux.py
deleted file mode 100644
index 775c87104b..0000000000
--- a/test/support/integration/plugins/modules/selinux.py
+++ /dev/null
@@ -1,266 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2012, Derek Carter<goozbach@friocorte.com>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {
- 'metadata_version': '1.1',
- 'status': ['stableinterface'],
- 'supported_by': 'core'
-}
-
-DOCUMENTATION = r'''
----
-module: selinux
-short_description: Change policy and state of SELinux
-description:
- - Configures the SELinux mode and policy.
- - A reboot may be required after usage.
- - Ansible will not issue this reboot but will let you know when it is required.
-version_added: "0.7"
-options:
- policy:
- description:
- - The name of the SELinux policy to use (e.g. C(targeted)) will be required if state is not C(disabled).
- state:
- description:
- - The SELinux mode.
- required: true
- choices: [ disabled, enforcing, permissive ]
- configfile:
- description:
- - The path to the SELinux configuration file, if non-standard.
- default: /etc/selinux/config
- aliases: [ conf, file ]
-requirements: [ libselinux-python ]
-author:
-- Derek Carter (@goozbach) <goozbach@friocorte.com>
-'''
-
-EXAMPLES = r'''
-- name: Enable SELinux
- selinux:
- policy: targeted
- state: enforcing
-
-- name: Put SELinux in permissive mode, logging actions that would be blocked.
- selinux:
- policy: targeted
- state: permissive
-
-- name: Disable SELinux
- selinux:
- state: disabled
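-
-# Illustrative sketch of the configfile option; the path below is hypothetical
-# and should be adjusted to the target system.
-- name: Enable SELinux using a non-standard configuration file
-  selinux:
-    policy: targeted
-    state: enforcing
-    configfile: /etc/selinux/custom-config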
-'''
-
-RETURN = r'''
-msg:
- description: Messages that describe changes that were made.
- returned: always
- type: str
- sample: Config SELinux state changed from 'disabled' to 'permissive'
-configfile:
- description: Path to SELinux configuration file.
- returned: always
- type: str
- sample: /etc/selinux/config
-policy:
- description: Name of the SELinux policy.
- returned: always
- type: str
- sample: targeted
-state:
- description: SELinux mode.
- returned: always
- type: str
- sample: enforcing
-reboot_required:
-    description: Whether or not a reboot is required for the changes to take effect.
- returned: always
- type: bool
- sample: true
-'''
-
-import os
-import re
-import tempfile
-import traceback
-
-SELINUX_IMP_ERR = None
-try:
- import selinux
- HAS_SELINUX = True
-except ImportError:
- SELINUX_IMP_ERR = traceback.format_exc()
- HAS_SELINUX = False
-
-from ansible.module_utils.basic import AnsibleModule, missing_required_lib
-from ansible.module_utils.facts.utils import get_file_lines
-
-
-# getter subroutines
-def get_config_state(configfile):
- lines = get_file_lines(configfile, strip=False)
-
- for line in lines:
- stateline = re.match(r'^SELINUX=.*$', line)
- if stateline:
- return line.split('=')[1].strip()
-
-
-def get_config_policy(configfile):
- lines = get_file_lines(configfile, strip=False)
-
- for line in lines:
- stateline = re.match(r'^SELINUXTYPE=.*$', line)
- if stateline:
- return line.split('=')[1].strip()
-
-
-# setter subroutines
-def set_config_state(module, state, configfile):
- # SELINUX=permissive
- # edit config file with state value
- stateline = 'SELINUX=%s' % state
- lines = get_file_lines(configfile, strip=False)
-
- tmpfd, tmpfile = tempfile.mkstemp()
-
- with open(tmpfile, "w") as write_file:
- for line in lines:
- write_file.write(re.sub(r'^SELINUX=.*', stateline, line) + '\n')
-
- module.atomic_move(tmpfile, configfile)
-
-
-def set_state(module, state):
- if state == 'enforcing':
- selinux.security_setenforce(1)
- elif state == 'permissive':
- selinux.security_setenforce(0)
- elif state == 'disabled':
- pass
- else:
- msg = 'trying to set invalid runtime state %s' % state
- module.fail_json(msg=msg)
-
-
-def set_config_policy(module, policy, configfile):
- if not os.path.exists('/etc/selinux/%s/policy' % policy):
- module.fail_json(msg='Policy %s does not exist in /etc/selinux/' % policy)
-
- # edit config file with state value
- # SELINUXTYPE=targeted
- policyline = 'SELINUXTYPE=%s' % policy
- lines = get_file_lines(configfile, strip=False)
-
- tmpfd, tmpfile = tempfile.mkstemp()
-
- with open(tmpfile, "w") as write_file:
- for line in lines:
- write_file.write(re.sub(r'^SELINUXTYPE=.*', policyline, line) + '\n')
-
- module.atomic_move(tmpfile, configfile)
-
-
-def main():
- module = AnsibleModule(
- argument_spec=dict(
- policy=dict(type='str'),
-            state=dict(type='str', required=True, choices=['enforcing', 'permissive', 'disabled']),
- configfile=dict(type='str', default='/etc/selinux/config', aliases=['conf', 'file']),
- ),
- supports_check_mode=True,
- )
-
- if not HAS_SELINUX:
- module.fail_json(msg=missing_required_lib('libselinux-python'), exception=SELINUX_IMP_ERR)
-
- # global vars
- changed = False
- msgs = []
- configfile = module.params['configfile']
- policy = module.params['policy']
- state = module.params['state']
- runtime_enabled = selinux.is_selinux_enabled()
- runtime_policy = selinux.selinux_getpolicytype()[1]
- runtime_state = 'disabled'
- reboot_required = False
-
- if runtime_enabled:
- # enabled means 'enforcing' or 'permissive'
- if selinux.security_getenforce():
- runtime_state = 'enforcing'
- else:
- runtime_state = 'permissive'
-
- if not os.path.isfile(configfile):
- module.fail_json(msg="Unable to find file {0}".format(configfile),
- details="Please install SELinux-policy package, "
- "if this package is not installed previously.")
-
- config_policy = get_config_policy(configfile)
- config_state = get_config_state(configfile)
-
- # check to see if policy is set if state is not 'disabled'
- if state != 'disabled':
- if not policy:
- module.fail_json(msg="Policy is required if state is not 'disabled'")
- else:
- if not policy:
- policy = config_policy
-
- # check changed values and run changes
- if policy != runtime_policy:
- if module.check_mode:
- module.exit_json(changed=True)
- # cannot change runtime policy
- msgs.append("Running SELinux policy changed from '%s' to '%s'" % (runtime_policy, policy))
- changed = True
-
- if policy != config_policy:
- if module.check_mode:
- module.exit_json(changed=True)
- set_config_policy(module, policy, configfile)
- msgs.append("SELinux policy configuration in '%s' changed from '%s' to '%s'" % (configfile, config_policy, policy))
- changed = True
-
- if state != runtime_state:
- if runtime_enabled:
- if state == 'disabled':
- if runtime_state != 'permissive':
- # Temporarily set state to permissive
- if not module.check_mode:
- set_state(module, 'permissive')
- module.warn("SELinux state temporarily changed from '%s' to 'permissive'. State change will take effect next reboot." % (runtime_state))
- changed = True
- else:
- module.warn('SELinux state change will take effect next reboot')
- reboot_required = True
- else:
- if not module.check_mode:
- set_state(module, state)
- msgs.append("SELinux state changed from '%s' to '%s'" % (runtime_state, state))
-
- # Only report changes if the file is changed.
- # This prevents the task from reporting changes every time the task is run.
- changed = True
- else:
- module.warn("Reboot is required to set SELinux state to '%s'" % state)
- reboot_required = True
-
- if state != config_state:
- if not module.check_mode:
- set_config_state(module, state, configfile)
- msgs.append("Config SELinux state changed from '%s' to '%s'" % (config_state, state))
- changed = True
-
- module.exit_json(changed=changed, msg=', '.join(msgs), configfile=configfile, policy=policy, state=state, reboot_required=reboot_required)
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/ufw.py b/test/support/integration/plugins/modules/ufw.py
deleted file mode 100644
index 6452f7c910..0000000000
--- a/test/support/integration/plugins/modules/ufw.py
+++ /dev/null
@@ -1,598 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2014, Ahti Kitsik <ak@ahtik.com>
-# Copyright: (c) 2014, Jarno Keskikangas <jarno.keskikangas@gmail.com>
-# Copyright: (c) 2013, Aleksey Ovcharenko <aleksey.ovcharenko@gmail.com>
-# Copyright: (c) 2013, James Martin <jmartin@basho.com>
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: ufw
-short_description: Manage firewall with UFW
-description:
- - Manage firewall with UFW.
-version_added: 1.6
-author:
- - Aleksey Ovcharenko (@ovcharenko)
- - Jarno Keskikangas (@pyykkis)
- - Ahti Kitsik (@ahtik)
-notes:
- - See C(man ufw) for more examples.
-requirements:
- - C(ufw) package
-options:
- state:
- description:
- - C(enabled) reloads firewall and enables firewall on boot.
- - C(disabled) unloads firewall and disables firewall on boot.
- - C(reloaded) reloads firewall.
- - C(reset) disables and resets firewall to installation defaults.
- type: str
- choices: [ disabled, enabled, reloaded, reset ]
- default:
- description:
- - Change the default policy for incoming or outgoing traffic.
- type: str
- choices: [ allow, deny, reject ]
- aliases: [ policy ]
- direction:
- description:
- - Select direction for a rule or default policy command. Mutually
- exclusive with I(interface_in) and I(interface_out).
- type: str
- choices: [ in, incoming, out, outgoing, routed ]
- logging:
- description:
- - Toggles logging. Logged packets use the LOG_KERN syslog facility.
- type: str
- choices: [ 'on', 'off', low, medium, high, full ]
- insert:
- description:
- - Insert the corresponding rule as rule number NUM.
- - Note that ufw numbers rules starting with 1.
- type: int
- insert_relative_to:
- description:
-      - Allows the index in I(insert) to be interpreted relative to a position.
- - C(zero) interprets the rule number as an absolute index (i.e. 1 is
- the first rule).
- - C(first-ipv4) interprets the rule number relative to the index of the
- first IPv4 rule, or relative to the position where the first IPv4 rule
- would be if there is currently none.
- - C(last-ipv4) interprets the rule number relative to the index of the
- last IPv4 rule, or relative to the position where the last IPv4 rule
- would be if there is currently none.
- - C(first-ipv6) interprets the rule number relative to the index of the
- first IPv6 rule, or relative to the position where the first IPv6 rule
- would be if there is currently none.
- - C(last-ipv6) interprets the rule number relative to the index of the
- last IPv6 rule, or relative to the position where the last IPv6 rule
- would be if there is currently none.
- type: str
- choices: [ first-ipv4, first-ipv6, last-ipv4, last-ipv6, zero ]
- default: zero
- version_added: "2.8"
- rule:
- description:
-      - Add firewall rule.
- type: str
- choices: [ allow, deny, limit, reject ]
- log:
- description:
-      - Log new connections matched to this rule.
- type: bool
- from_ip:
- description:
- - Source IP address.
- type: str
- default: any
- aliases: [ from, src ]
- from_port:
- description:
- - Source port.
- type: str
- to_ip:
- description:
- - Destination IP address.
- type: str
- default: any
-    aliases: [ dest, to ]
- to_port:
- description:
- - Destination port.
- type: str
- aliases: [ port ]
- proto:
- description:
- - TCP/IP protocol.
- type: str
- choices: [ any, tcp, udp, ipv6, esp, ah, gre, igmp ]
- aliases: [ protocol ]
- name:
- description:
- - Use profile located in C(/etc/ufw/applications.d).
- type: str
- aliases: [ app ]
- delete:
- description:
- - Delete rule.
- type: bool
- interface:
- description:
-      - Specify interface for the rule. The direction (in or out) used
-        for the interface depends on the value of I(direction). See
-        I(interface_in) and I(interface_out) for routed rules that need
-        to supply both an input and an output interface. Mutually
-        exclusive with I(interface_in) and I(interface_out).
- type: str
- aliases: [ if ]
- interface_in:
- description:
- - Specify input interface for the rule. This is mutually
- exclusive with I(direction) and I(interface). However, it is
- compatible with I(interface_out) for routed rules.
- type: str
- aliases: [ if_in ]
- version_added: "2.10"
- interface_out:
- description:
- - Specify output interface for the rule. This is mutually
- exclusive with I(direction) and I(interface). However, it is
- compatible with I(interface_in) for routed rules.
- type: str
- aliases: [ if_out ]
- version_added: "2.10"
- route:
- description:
- - Apply the rule to routed/forwarded packets.
- type: bool
- comment:
- description:
- - Add a comment to the rule. Requires UFW version >=0.35.
- type: str
- version_added: "2.4"
-'''
-
-EXAMPLES = r'''
-- name: Allow everything and enable UFW
- ufw:
- state: enabled
- policy: allow
-
-- name: Set logging
- ufw:
- logging: 'on'
-
-# Sometimes it is desirable to let the sender know when traffic is
-# being denied, rather than simply ignoring it. In these cases, use
-# reject instead of deny. In addition, log rejected connections:
-- ufw:
- rule: reject
- port: auth
- log: yes
-
-# ufw supports connection rate limiting, which is useful for protecting
-# against brute-force login attacks. ufw will deny connections if an IP
-# address has attempted to initiate 6 or more connections in the last
-# 30 seconds. See http://www.debian-administration.org/articles/187
-# for details. Typical usage is:
-- ufw:
- rule: limit
- port: ssh
- proto: tcp
-
-# Allow OpenSSH. (Note that as ufw manages its own state, simply removing
-# a rule=allow task can leave those ports exposed. Either use delete=yes
-# or a separate state=reset task)
-- ufw:
- rule: allow
- name: OpenSSH
-
-- name: Delete OpenSSH rule
- ufw:
- rule: allow
- name: OpenSSH
- delete: yes
-
-- name: Deny all access to port 53
- ufw:
- rule: deny
- port: '53'
-
-- name: Allow port range 60000-61000
- ufw:
- rule: allow
- port: 60000:61000
- proto: tcp
-
-- name: Allow all access to tcp port 80
- ufw:
- rule: allow
- port: '80'
- proto: tcp
-
-- name: Allow all access from RFC1918 networks to this host
- ufw:
- rule: allow
- src: '{{ item }}'
- loop:
- - 10.0.0.0/8
- - 172.16.0.0/12
- - 192.168.0.0/16
-
-- name: Deny access to udp port 514 from host 1.2.3.4 and include a comment
- ufw:
- rule: deny
- proto: udp
- src: 1.2.3.4
- port: '514'
- comment: Block syslog
-
-- name: Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469
- ufw:
- rule: allow
- interface: eth0
- direction: in
- proto: udp
- src: 1.2.3.5
- from_port: '5469'
- dest: 1.2.3.4
- to_port: '5469'
-
-# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work.
-- name: Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host
- ufw:
- rule: deny
- proto: tcp
- src: 2001:db8::/32
- port: '25'
-
-- name: Deny all IPv6 traffic to tcp port 20 on this host
- # this should be the first IPv6 rule
- ufw:
- rule: deny
- proto: tcp
- port: '20'
- to_ip: "::"
- insert: 0
- insert_relative_to: first-ipv6
-
-- name: Deny all IPv4 traffic to tcp port 20 on this host
- # This should be the third to last IPv4 rule
- # (insert: -1 addresses the second to last IPv4 rule;
- # so the new rule will be inserted before the second
-  # to last IPv4 rule, and will become the third to last
- # IPv4 rule.)
- ufw:
- rule: deny
- proto: tcp
- port: '20'
- to_ip: "::"
- insert: -1
- insert_relative_to: last-ipv4
-
-# Can be used to further restrict a global FORWARD policy set to allow
-- name: Deny forwarded/routed traffic from subnet 1.2.3.0/24 to subnet 4.5.6.0/24
- ufw:
- rule: deny
- route: yes
- src: 1.2.3.0/24
- dest: 4.5.6.0/24
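-
-# Illustrative sketch of a routed rule that combines interface_in and interface_out;
-# the interface names are examples only.
-- name: Allow forwarded traffic entering on eth0 and leaving on eth1
-  ufw:
-    rule: allow
-    route: yes
-    interface_in: eth0
-    interface_out: eth1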
-'''
-
-import re
-
-from operator import itemgetter
-
-from ansible.module_utils.basic import AnsibleModule
-
-
-def compile_ipv4_regexp():
- r = r"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}"
- r += r"(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])"
- return re.compile(r)
-
-
-def compile_ipv6_regexp():
- """
-    validation pattern provided by:
- https://stackoverflow.com/questions/53497/regular-expression-that-matches-
- valid-ipv6-addresses#answer-17871737
- """
- r = r"(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:"
- r += r"|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}"
- r += r"(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4})"
- r += r"{1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]"
- r += r"{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]"
- r += r"{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4})"
- r += r"{0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]"
- r += r"|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}"
- r += r"[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}"
- r += r"[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))"
- return re.compile(r)
-
-
-def main():
- command_keys = ['state', 'default', 'rule', 'logging']
-
- module = AnsibleModule(
- argument_spec=dict(
- state=dict(type='str', choices=['enabled', 'disabled', 'reloaded', 'reset']),
- default=dict(type='str', aliases=['policy'], choices=['allow', 'deny', 'reject']),
- logging=dict(type='str', choices=['full', 'high', 'low', 'medium', 'off', 'on']),
- direction=dict(type='str', choices=['in', 'incoming', 'out', 'outgoing', 'routed']),
- delete=dict(type='bool', default=False),
- route=dict(type='bool', default=False),
- insert=dict(type='int'),
- insert_relative_to=dict(choices=['zero', 'first-ipv4', 'last-ipv4', 'first-ipv6', 'last-ipv6'], default='zero'),
- rule=dict(type='str', choices=['allow', 'deny', 'limit', 'reject']),
- interface=dict(type='str', aliases=['if']),
- interface_in=dict(type='str', aliases=['if_in']),
- interface_out=dict(type='str', aliases=['if_out']),
- log=dict(type='bool', default=False),
- from_ip=dict(type='str', default='any', aliases=['from', 'src']),
- from_port=dict(type='str'),
- to_ip=dict(type='str', default='any', aliases=['dest', 'to']),
- to_port=dict(type='str', aliases=['port']),
- proto=dict(type='str', aliases=['protocol'], choices=['ah', 'any', 'esp', 'ipv6', 'tcp', 'udp', 'gre', 'igmp']),
- name=dict(type='str', aliases=['app']),
- comment=dict(type='str'),
- ),
- supports_check_mode=True,
- mutually_exclusive=[
- ['name', 'proto', 'logging'],
- # Mutual exclusivity with `interface` implied by `required_by`.
- ['direction', 'interface_in'],
- ['direction', 'interface_out'],
- ],
- required_one_of=([command_keys]),
- required_by=dict(
- interface=('direction', ),
- ),
- )
-
- cmds = []
-
- ipv4_regexp = compile_ipv4_regexp()
- ipv6_regexp = compile_ipv6_regexp()
-
- def filter_line_that_not_start_with(pattern, content):
- return ''.join([line for line in content.splitlines(True) if line.startswith(pattern)])
-
- def filter_line_that_contains(pattern, content):
- return [line for line in content.splitlines(True) if pattern in line]
-
- def filter_line_that_not_contains(pattern, content):
-        return ''.join([line for line in content.splitlines(True) if pattern not in line])
-
- def filter_line_that_match_func(match_func, content):
- return ''.join([line for line in content.splitlines(True) if match_func(line) is not None])
-
- def filter_line_that_contains_ipv4(content):
- return filter_line_that_match_func(ipv4_regexp.search, content)
-
- def filter_line_that_contains_ipv6(content):
- return filter_line_that_match_func(ipv6_regexp.search, content)
-
- def is_starting_by_ipv4(ip):
- return ipv4_regexp.match(ip) is not None
-
- def is_starting_by_ipv6(ip):
- return ipv6_regexp.match(ip) is not None
-
- def execute(cmd, ignore_error=False):
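-        # Each entry of cmd is a list whose last element is a command fragment;
-        # for two-element entries the first element is a flag that controls whether
-        # the fragment is included. Falsy-flagged entries are dropped before joining.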
- cmd = ' '.join(map(itemgetter(-1), filter(itemgetter(0), cmd)))
-
- cmds.append(cmd)
- (rc, out, err) = module.run_command(cmd, environ_update={"LANG": "C"})
-
- if rc != 0 and not ignore_error:
- module.fail_json(msg=err or out, commands=cmds)
-
- return out
-
- def get_current_rules():
- user_rules_files = ["/lib/ufw/user.rules",
- "/lib/ufw/user6.rules",
- "/etc/ufw/user.rules",
- "/etc/ufw/user6.rules",
- "/var/lib/ufw/user.rules",
- "/var/lib/ufw/user6.rules"]
-
- cmd = [[grep_bin], ["-h"], ["'^### tuple'"]]
-
- cmd.extend([[f] for f in user_rules_files])
- return execute(cmd, ignore_error=True)
-
- def ufw_version():
- """
- Returns the major and minor version of ufw installed on the system.
- """
- out = execute([[ufw_bin], ["--version"]])
-
- lines = [x for x in out.split('\n') if x.strip() != '']
- if len(lines) == 0:
- module.fail_json(msg="Failed to get ufw version.", rc=0, out=out)
-
- matches = re.search(r'^ufw.+(\d+)\.(\d+)(?:\.(\d+))?.*$', lines[0])
- if matches is None:
- module.fail_json(msg="Failed to get ufw version.", rc=0, out=out)
-
- # Convert version to numbers
- major = int(matches.group(1))
- minor = int(matches.group(2))
- rev = 0
- if matches.group(3) is not None:
- rev = int(matches.group(3))
-
- return major, minor, rev
-
- params = module.params
-
- commands = dict((key, params[key]) for key in command_keys if params[key])
-
- # Ensure ufw is available
- ufw_bin = module.get_bin_path('ufw', True)
- grep_bin = module.get_bin_path('grep', True)
-
- # Save the pre state and rules in order to recognize changes
- pre_state = execute([[ufw_bin], ['status verbose']])
- pre_rules = get_current_rules()
-
- changed = False
-
- # Execute filter
- for (command, value) in commands.items():
-
- cmd = [[ufw_bin], [module.check_mode, '--dry-run']]
-
- if command == 'state':
- states = {'enabled': 'enable', 'disabled': 'disable',
- 'reloaded': 'reload', 'reset': 'reset'}
-
- if value in ['reloaded', 'reset']:
- changed = True
-
- if module.check_mode:
- # "active" would also match "inactive", hence the space
- ufw_enabled = pre_state.find(" active") != -1
- if (value == 'disabled' and ufw_enabled) or (value == 'enabled' and not ufw_enabled):
- changed = True
- else:
- execute(cmd + [['-f'], [states[value]]])
-
- elif command == 'logging':
- extract = re.search(r'Logging: (on|off)(?: \(([a-z]+)\))?', pre_state)
- if extract:
- current_level = extract.group(2)
- current_on_off_value = extract.group(1)
- if value != "off":
- if current_on_off_value == "off":
- changed = True
- elif value != "on" and value != current_level:
- changed = True
- elif current_on_off_value != "off":
- changed = True
- else:
- changed = True
-
- if not module.check_mode:
- execute(cmd + [[command], [value]])
-
- elif command == 'default':
- if params['direction'] not in ['outgoing', 'incoming', 'routed', None]:
- module.fail_json(msg='For default, direction must be one of "outgoing", "incoming" and "routed", or direction must not be specified.')
- if module.check_mode:
- regexp = r'Default: (deny|allow|reject) \(incoming\), (deny|allow|reject) \(outgoing\), (deny|allow|reject|disabled) \(routed\)'
- extract = re.search(regexp, pre_state)
- if extract is not None:
- current_default_values = {}
- current_default_values["incoming"] = extract.group(1)
- current_default_values["outgoing"] = extract.group(2)
- current_default_values["routed"] = extract.group(3)
- v = current_default_values[params['direction'] or 'incoming']
- if v not in (value, 'disabled'):
- changed = True
- else:
- changed = True
- else:
- execute(cmd + [[command], [value], [params['direction']]])
-
- elif command == 'rule':
- if params['direction'] not in ['in', 'out', None]:
- module.fail_json(msg='For rules, direction must be one of "in" and "out", or direction must not be specified.')
- if not params['route'] and params['interface_in'] and params['interface_out']:
- module.fail_json(msg='Only route rules can combine '
- 'interface_in and interface_out')
- # Rules are constructed according to the long format
- #
- # ufw [--dry-run] [route] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \
- # [from ADDRESS [port PORT]] [to ADDRESS [port PORT]] \
- # [proto protocol] [app application] [comment COMMENT]
- cmd.append([module.boolean(params['route']), 'route'])
- cmd.append([module.boolean(params['delete']), 'delete'])
- if params['insert'] is not None:
- relative_to_cmd = params['insert_relative_to']
- if relative_to_cmd == 'zero':
- insert_to = params['insert']
- else:
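-                    # Parse `ufw status numbered` to locate the existing IPv4 and IPv6
-                    # rules, then turn the relative index into an absolute rule number
-                    # by offsetting it from the chosen anchor (first/last IPv4/IPv6 rule).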
- (dummy, numbered_state, dummy) = module.run_command([ufw_bin, 'status', 'numbered'])
- numbered_line_re = re.compile(R'^\[ *([0-9]+)\] ')
- lines = [(numbered_line_re.match(line), '(v6)' in line) for line in numbered_state.splitlines()]
- lines = [(int(matcher.group(1)), ipv6) for (matcher, ipv6) in lines if matcher]
- last_number = max([no for (no, ipv6) in lines]) if lines else 0
- has_ipv4 = any([not ipv6 for (no, ipv6) in lines])
- has_ipv6 = any([ipv6 for (no, ipv6) in lines])
- if relative_to_cmd == 'first-ipv4':
- relative_to = 1
- elif relative_to_cmd == 'last-ipv4':
- relative_to = max([no for (no, ipv6) in lines if not ipv6]) if has_ipv4 else 1
- elif relative_to_cmd == 'first-ipv6':
- relative_to = max([no for (no, ipv6) in lines if not ipv6]) + 1 if has_ipv4 else 1
- elif relative_to_cmd == 'last-ipv6':
- relative_to = last_number if has_ipv6 else last_number + 1
- insert_to = params['insert'] + relative_to
- if insert_to > last_number:
- # ufw does not like it when the insert number is larger than the
- # maximal rule number for IPv4/IPv6.
- insert_to = None
- cmd.append([insert_to is not None, "insert %s" % insert_to])
- cmd.append([value])
- cmd.append([params['direction'], "%s" % params['direction']])
- cmd.append([params['interface'], "on %s" % params['interface']])
- cmd.append([params['interface_in'], "in on %s" % params['interface_in']])
- cmd.append([params['interface_out'], "out on %s" % params['interface_out']])
- cmd.append([module.boolean(params['log']), 'log'])
-
- for (key, template) in [('from_ip', "from %s"), ('from_port', "port %s"),
- ('to_ip', "to %s"), ('to_port', "port %s"),
- ('proto', "proto %s"), ('name', "app '%s'")]:
- value = params[key]
- cmd.append([value, template % (value)])
-
- ufw_major, ufw_minor, dummy = ufw_version()
- # comment is supported only in ufw version after 0.35
- if (ufw_major == 0 and ufw_minor >= 35) or ufw_major > 0:
- cmd.append([params['comment'], "comment '%s'" % params['comment']])
-
- rules_dry = execute(cmd)
-
- if module.check_mode:
-
- nb_skipping_line = len(filter_line_that_contains("Skipping", rules_dry))
-
- if not (nb_skipping_line > 0 and nb_skipping_line == len(rules_dry.splitlines(True))):
-
- rules_dry = filter_line_that_not_start_with("### tuple", rules_dry)
- # ufw dry-run doesn't send all rules so have to compare ipv4 or ipv6 rules
- if is_starting_by_ipv4(params['from_ip']) or is_starting_by_ipv4(params['to_ip']):
- if filter_line_that_contains_ipv4(pre_rules) != filter_line_that_contains_ipv4(rules_dry):
- changed = True
- elif is_starting_by_ipv6(params['from_ip']) or is_starting_by_ipv6(params['to_ip']):
- if filter_line_that_contains_ipv6(pre_rules) != filter_line_that_contains_ipv6(rules_dry):
- changed = True
- elif pre_rules != rules_dry:
- changed = True
-
- # Get the new state
- if module.check_mode:
- return module.exit_json(changed=changed, commands=cmds)
- else:
- post_state = execute([[ufw_bin], ['status'], ['verbose']])
- if not changed:
- post_rules = get_current_rules()
- changed = (pre_state != post_state) or (pre_rules != post_rules)
- return module.exit_json(changed=changed, commands=cmds, msg=post_state.rstrip())
-
-
-if __name__ == '__main__':
- main()
diff --git a/test/support/integration/plugins/modules/vmware_guest_custom_attributes.py b/test/support/integration/plugins/modules/vmware_guest_custom_attributes.py
deleted file mode 100644
index e55a3ad754..0000000000
--- a/test/support/integration/plugins/modules/vmware_guest_custom_attributes.py
+++ /dev/null
@@ -1,259 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright, (c) 2018, Ansible Project
-# Copyright, (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
-#
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-
-ANSIBLE_METADATA = {
- 'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'
-}
-
-
-DOCUMENTATION = '''
----
-module: vmware_guest_custom_attributes
-short_description: Manage custom attributes from VMware for the given virtual machine
-description:
- - This module can be used to add, remove and update custom attributes for the given virtual machine.
-version_added: 2.7
-author:
- - Jimmy Conner (@cigamit)
- - Abhijeet Kasurde (@Akasurde)
-notes:
- - Tested on vSphere 6.5
-requirements:
- - "python >= 2.6"
- - PyVmomi
-options:
- name:
- description:
- - Name of the virtual machine to work with.
-    - This is a required parameter if C(uuid) or C(moid) is not supplied.
- type: str
- state:
- description:
- - The action to take.
- - If set to C(present), then custom attribute is added or updated.
- - If set to C(absent), then custom attribute is removed.
- default: 'present'
- choices: ['present', 'absent']
- type: str
- uuid:
- description:
- - UUID of the virtual machine to manage if known. This is VMware's unique identifier.
-    - This is a required parameter if C(name) or C(moid) is not supplied.
- type: str
- moid:
- description:
-    - Managed Object ID of the instance to manage if known. This is a unique identifier only within a single vCenter instance.
- - This is required if C(name) or C(uuid) is not supplied.
- version_added: '2.9'
- type: str
- use_instance_uuid:
- description:
- - Whether to use the VMware instance UUID rather than the BIOS UUID.
- default: no
- type: bool
- version_added: '2.8'
- folder:
- description:
- - Absolute path to find an existing guest.
-    - This is a required parameter if C(name) is supplied and multiple virtual machines with the same name are found.
- type: str
- datacenter:
- description:
-    - Name of the datacenter where the virtual machine is located.
- required: True
- type: str
- attributes:
- description:
-    - A list of names and values of custom attributes to manage.
-    - The value of a custom attribute is not required and will be ignored if C(state) is set to C(absent).
- default: []
- type: list
-extends_documentation_fragment: vmware.documentation
-'''
-
-EXAMPLES = '''
-- name: Add virtual machine custom attributes
- vmware_guest_custom_attributes:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- uuid: 421e4592-c069-924d-ce20-7e7533fab926
- state: present
- attributes:
- - name: MyAttribute
- value: MyValue
- delegate_to: localhost
- register: attributes
-
-- name: Add multiple virtual machine custom attributes
- vmware_guest_custom_attributes:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- uuid: 421e4592-c069-924d-ce20-7e7533fab926
- state: present
- attributes:
- - name: MyAttribute
- value: MyValue
- - name: MyAttribute2
- value: MyValue2
- delegate_to: localhost
- register: attributes
-
-- name: Remove virtual machine Attribute
- vmware_guest_custom_attributes:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- uuid: 421e4592-c069-924d-ce20-7e7533fab926
- state: absent
- attributes:
- - name: MyAttribute
- delegate_to: localhost
- register: attributes
-
-- name: Remove virtual machine Attribute using Virtual Machine MoID
- vmware_guest_custom_attributes:
- hostname: "{{ vcenter_hostname }}"
- username: "{{ vcenter_username }}"
- password: "{{ vcenter_password }}"
- moid: vm-42
- state: absent
- attributes:
- - name: MyAttribute
- delegate_to: localhost
- register: attributes
-'''
-
-RETURN = """
-custom_attributes:
- description: metadata about the virtual machine attributes
- returned: always
- type: dict
- sample: {
- "mycustom": "my_custom_value",
- "mycustom_2": "my_custom_value_2",
- "sample_1": "sample_1_value",
- "sample_2": "sample_2_value",
- "sample_3": "sample_3_value"
- }
-"""
-
-try:
- from pyVmomi import vim
-except ImportError:
- pass
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec
-
-
-class VmAttributeManager(PyVmomi):
- def __init__(self, module):
- super(VmAttributeManager, self).__init__(module)
-
- def set_custom_field(self, vm, user_fields):
- result_fields = dict()
- change_list = list()
- changed = False
-
- for field in user_fields:
- field_key = self.check_exists(field['name'])
- found = False
- field_value = field.get('value', '')
-
- for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:
- if k == field['name']:
- found = True
- if v != field_value:
- if not self.module.check_mode:
- self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
- result_fields[k] = field_value
- change_list.append(True)
- if not found and field_value != "":
- if not field_key and not self.module.check_mode:
- field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)
- change_list.append(True)
- if not self.module.check_mode:
- self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
- result_fields[field['name']] = field_value
-
- if any(change_list):
- changed = True
-
- return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}
-
- def check_exists(self, field):
- for x in self.custom_field_mgr:
- if x.name == field:
- return x
- return False
-
-
-def main():
- argument_spec = vmware_argument_spec()
- argument_spec.update(
- datacenter=dict(type='str'),
- name=dict(type='str'),
- folder=dict(type='str'),
- uuid=dict(type='str'),
- moid=dict(type='str'),
- use_instance_uuid=dict(type='bool', default=False),
- state=dict(type='str', default='present',
- choices=['absent', 'present']),
- attributes=dict(
- type='list',
- default=[],
- options=dict(
- name=dict(type='str', required=True),
- value=dict(type='str'),
- )
- ),
- )
-
- module = AnsibleModule(
- argument_spec=argument_spec,
- supports_check_mode=True,
- required_one_of=[
- ['name', 'uuid', 'moid']
- ],
- )
-
- if module.params.get('folder'):
-        # FindByInventoryPath() does not require an absolute path,
-        # so only strip a trailing slash from the input folder path
- module.params['folder'] = module.params['folder'].rstrip('/')
-
- pyv = VmAttributeManager(module)
- results = {'changed': False, 'failed': False, 'instance': dict()}
-
- # Check if the virtual machine exists before continuing
- vm = pyv.get_vm()
-
- if vm:
- # virtual machine already exists
- if module.params['state'] == "present":
- results = pyv.set_custom_field(vm, module.params['attributes'])
- elif module.params['state'] == "absent":
- results = pyv.set_custom_field(vm, module.params['attributes'])
- module.exit_json(**results)
- else:
-        # virtual machine does not exist
- vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))
- module.fail_json(msg="Unable to manage custom attributes for non-existing"
- " virtual machine %s" % vm_id)
-
-
-if __name__ == '__main__':
- main()
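The heart of the removed vmware_guest_custom_attributes module is a compare-then-write loop over the VM's existing custom values. A standalone sketch of that idea with plain dictionaries standing in for the pyVmomi objects (an illustration only, not the module's API):

def plan_attribute_changes(current, desired, state='present'):
    # current: existing attribute name -> value on the VM
    # desired: list of {'name': ..., 'value': ...} entries from the task
    changes = {}
    for field in desired:
        value = '' if state == 'absent' else field.get('value', '')
        if current.get(field['name'], '') != value:
            changes[field['name']] = value
    return changes

# Only MyAttribute2 needs to be written in this example
print(plan_attribute_changes({'MyAttribute': 'MyValue'},
                             [{'name': 'MyAttribute', 'value': 'MyValue'},
                              {'name': 'MyAttribute2', 'value': 'MyValue2'}]))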
diff --git a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_logging.py b/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_logging.py
deleted file mode 100644
index acb6513462..0000000000
--- a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_logging.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# (c) 2017, Ansible Inc,
-#
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-from ansible_collections.ansible.netcommon.plugins.action.net_base import (
- ActionModule as _ActionModule,
-)
-
-
-class ActionModule(_ActionModule):
- def run(self, tmp=None, task_vars=None):
- result = super(ActionModule, self).run(tmp, task_vars)
- del tmp # tmp no longer has any effect
- return result
diff --git a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_static_route.py b/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_static_route.py
deleted file mode 100644
index 308bddbc60..0000000000
--- a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/action/net_static_route.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# (c) 2017, Ansible Inc,
-#
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-from ansible_collections.ansible.netcommon.plugins.action.net_base import (
- ActionModule as _ActionModule,
-)
-
-
-class ActionModule(_ActionModule):
- def run(self, tmp=None, task_vars=None):
- result = super(ActionModule, self).run(tmp, task_vars)
- del tmp # tmp no longer has any effect
-
- return result
diff --git a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_logging.py b/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_logging.py
deleted file mode 100644
index 44412ea6cb..0000000000
--- a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_logging.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# (c) 2017, Ansible by Red Hat, inc
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-
-ANSIBLE_METADATA = {
- "metadata_version": "1.1",
- "status": ["deprecated"],
- "supported_by": "network",
-}
-
-
-DOCUMENTATION = """module: net_logging
-author: Ganesh Nalawade (@ganeshrn)
-short_description: Manage logging on network devices
-description:
-- This module provides declarative management of logging on network devices.
-deprecated:
- removed_in: '2.13'
- alternative: Use platform-specific "[netos]_logging" module
- why: Updated modules released with more functionality
-extends_documentation_fragment:
-- ansible.netcommon.network_agnostic
-options:
- dest:
- description:
- - Destination of the logs.
- choices:
- - console
- - host
- name:
- description:
-    - If the value of C(dest) is I(host), it indicates the host name to be notified.
- facility:
- description:
- - Set logging facility.
- level:
- description:
- - Set logging severity levels.
- aggregate:
- description: List of logging definitions.
- purge:
- description:
- - Purge logging not defined in the I(aggregate) parameter.
- default: false
- state:
- description:
- - State of the logging configuration.
- default: present
- choices:
- - present
- - absent
-"""
-
-EXAMPLES = """
-- name: configure console logging
- net_logging:
- dest: console
- facility: any
- level: critical
-
-- name: remove console logging configuration
- net_logging:
- dest: console
- state: absent
-
-- name: configure host logging
- net_logging:
- dest: host
- name: 192.0.2.1
- facility: kernel
- level: critical
-
-- name: Configure file logging using aggregate
- net_logging:
- dest: file
- aggregate:
- - name: test-1
- facility: pfe
- level: critical
- - name: test-2
- facility: kernel
- level: emergency
-- name: Delete file logging using aggregate
- net_logging:
- dest: file
- aggregate:
- - name: test-1
- facility: pfe
- level: critical
- - name: test-2
- facility: kernel
- level: emergency
- state: absent
-"""
-
-RETURN = """
-commands:
- description: The list of configuration mode commands to send to the device
- returned: always, except for the platforms that use Netconf transport to manage the device.
- type: list
- sample:
- - logging console critical
-"""
diff --git a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_static_route.py b/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_static_route.py
deleted file mode 100644
index 7ab2ccbc5c..0000000000
--- a/test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/modules/net_static_route.py
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# (c) 2017, Ansible by Red Hat, inc
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-
-__metaclass__ = type
-
-
-ANSIBLE_METADATA = {
- "metadata_version": "1.1",
- "status": ["deprecated"],
- "supported_by": "network",
-}
-
-
-DOCUMENTATION = """module: net_static_route
-author: Ricardo Carrillo Cruz (@rcarrillocruz)
-short_description: Manage static IP routes on network appliances (routers, switches
-  et al.)
-description:
-- This module provides declarative management of static IP routes on network appliances
-  (routers, switches et al.).
-deprecated:
- removed_in: '2.13'
- alternative: Use platform-specific "[netos]_static_route" module
- why: Updated modules released with more functionality
-extends_documentation_fragment:
-- ansible.netcommon.network_agnostic
-options:
- prefix:
- description:
- - Network prefix of the static route.
- required: true
- mask:
- description:
- - Network prefix mask of the static route.
- required: true
- next_hop:
- description:
- - Next hop IP of the static route.
- required: true
- admin_distance:
- description:
- - Admin distance of the static route.
- aggregate:
- description: List of static route definitions
- purge:
- description:
- - Purge static routes not defined in the I(aggregate) parameter.
- default: false
- state:
- description:
- - State of the static route configuration.
- default: present
- choices:
- - present
- - absent
-"""
-
-EXAMPLES = """
-- name: configure static route
- net_static_route:
- prefix: 192.168.2.0
- mask: 255.255.255.0
- next_hop: 10.0.0.1
-
-- name: remove configuration
- net_static_route:
- prefix: 192.168.2.0
- mask: 255.255.255.0
- next_hop: 10.0.0.1
- state: absent
-
-- name: configure aggregates of static routes
- net_static_route:
- aggregate:
- - { prefix: 192.168.2.0, mask: 255.255.255.0, next_hop: 10.0.0.1 }
- - { prefix: 192.168.3.0, mask: 255.255.255.0, next_hop: 10.0.2.1 }
-
-- name: Remove static route collections
- net_static_route:
- aggregate:
- - { prefix: 172.24.1.0/24, next_hop: 192.168.42.64 }
- - { prefix: 172.24.3.0/24, next_hop: 192.168.42.64 }
- state: absent
-"""
-
-RETURN = """
-commands:
- description: The list of configuration mode commands to send to the device
- returned: always
- type: list
- sample:
- - ip route 192.168.2.0/24 10.0.0.1
-"""
diff --git a/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py b/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py
deleted file mode 100644
index 9f81eb9e5c..0000000000
--- a/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_logging.py
+++ /dev/null
@@ -1,300 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# (c) 2017, Ansible by Red Hat, inc
-#
-# This file is part of Ansible by Red Hat
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-#
-
-ANSIBLE_METADATA = {
- "metadata_version": "1.1",
- "status": ["preview"],
- "supported_by": "network",
-}
-
-DOCUMENTATION = """module: vyos_logging
-author: Trishna Guha (@trishnaguha)
-short_description: Manage logging on network devices
-description:
-- This module provides declarative management of logging on Vyatta Vyos devices.
-notes:
-- Tested against VyOS 1.1.8 (helium).
-- This module works with connection C(network_cli). See L(the VyOS OS Platform Options,../network/user_guide/platform_vyos.html).
-options:
- dest:
- description:
- - Destination of the logs.
- choices:
- - console
- - file
- - global
- - host
- - user
- name:
- description:
-    - If the value of C(dest) is I(file) it indicates the file name, for I(user) it indicates
-      the username, and for I(host) it indicates the host name to be notified.
- facility:
- description:
- - Set logging facility.
- level:
- description:
- - Set logging severity levels.
- aggregate:
- description: List of logging definitions.
- state:
- description:
- - State of the logging configuration.
- default: present
- choices:
- - present
- - absent
-extends_documentation_fragment:
-- vyos.vyos.vyos
-"""
-
-EXAMPLES = """
-- name: configure console logging
- vyos_logging:
- dest: console
- facility: all
- level: crit
-
-- name: remove console logging configuration
- vyos_logging:
- dest: console
- state: absent
-
-- name: configure file logging
- vyos_logging:
- dest: file
- name: test
- facility: local3
- level: err
-
-- name: Add logging aggregate
- vyos_logging:
- aggregate:
- - { dest: file, name: test1, facility: all, level: info }
- - { dest: file, name: test2, facility: news, level: debug }
- state: present
-
-- name: Remove logging aggregate
- vyos_logging:
- aggregate:
- - { dest: console, facility: all, level: info }
- - { dest: console, facility: daemon, level: warning }
- - { dest: file, name: test2, facility: news, level: debug }
- state: absent
-"""
-
-RETURN = """
-commands:
- description: The list of configuration mode commands to send to the device
- returned: always
- type: list
- sample:
- - set system syslog global facility all level notice
-"""
-
-import re
-
-from copy import deepcopy
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible_collections.ansible.netcommon.plugins.module_utils.network.common.utils import (
- remove_default_spec,
-)
-from ansible_collections.vyos.vyos.plugins.module_utils.network.vyos.vyos import (
- get_config,
- load_config,
-)
-from ansible_collections.vyos.vyos.plugins.module_utils.network.vyos.vyos import (
- vyos_argument_spec,
-)
-
-
-def spec_to_commands(updates, module):
- commands = list()
- want, have = updates
-
- for w in want:
- dest = w["dest"]
- name = w["name"]
- facility = w["facility"]
- level = w["level"]
- state = w["state"]
- del w["state"]
-
- if state == "absent" and w in have:
- if w["name"]:
- commands.append(
- "delete system syslog {0} {1} facility {2} level {3}".format(
- dest, name, facility, level
- )
- )
- else:
- commands.append(
- "delete system syslog {0} facility {1} level {2}".format(
- dest, facility, level
- )
- )
- elif state == "present" and w not in have:
- if w["name"]:
- commands.append(
- "set system syslog {0} {1} facility {2} level {3}".format(
- dest, name, facility, level
- )
- )
- else:
- commands.append(
- "set system syslog {0} facility {1} level {2}".format(
- dest, facility, level
- )
- )
-
- return commands
-
-
-def config_to_dict(module):
- data = get_config(module)
- obj = []
-
- for line in data.split("\n"):
- if line.startswith("set system syslog"):
- match = re.search(r"set system syslog (\S+)", line, re.M)
- dest = match.group(1)
- if dest == "host":
- match = re.search(r"host (\S+)", line, re.M)
- name = match.group(1)
- elif dest == "file":
- match = re.search(r"file (\S+)", line, re.M)
- name = match.group(1)
- elif dest == "user":
- match = re.search(r"user (\S+)", line, re.M)
- name = match.group(1)
- else:
- name = None
-
- if "facility" in line:
- match = re.search(r"facility (\S+)", line, re.M)
- facility = match.group(1)
- if "level" in line:
- match = re.search(r"level (\S+)", line, re.M)
- level = match.group(1).strip("'")
-
- obj.append(
- {
- "dest": dest,
- "name": name,
- "facility": facility,
- "level": level,
- }
- )
-
- return obj
-
-
-def map_params_to_obj(module, required_if=None):
- obj = []
-
- aggregate = module.params.get("aggregate")
- if aggregate:
- for item in aggregate:
- for key in item:
- if item.get(key) is None:
- item[key] = module.params[key]
-
- module._check_required_if(required_if, item)
- obj.append(item.copy())
-
- else:
- if module.params["dest"] not in ("host", "file", "user"):
- module.params["name"] = None
-
- obj.append(
- {
- "dest": module.params["dest"],
- "name": module.params["name"],
- "facility": module.params["facility"],
- "level": module.params["level"],
- "state": module.params["state"],
- }
- )
-
- return obj
-
-
-def main():
- """ main entry point for module execution
- """
- element_spec = dict(
- dest=dict(
- type="str", choices=["console", "file", "global", "host", "user"]
- ),
- name=dict(type="str"),
- facility=dict(type="str"),
- level=dict(type="str"),
- state=dict(default="present", choices=["present", "absent"]),
- )
-
- aggregate_spec = deepcopy(element_spec)
-
- # remove default in aggregate spec, to handle common arguments
- remove_default_spec(aggregate_spec)
-
- argument_spec = dict(
- aggregate=dict(type="list", elements="dict", options=aggregate_spec),
- )
-
- argument_spec.update(element_spec)
-
- argument_spec.update(vyos_argument_spec)
- required_if = [
- ("dest", "host", ["name", "facility", "level"]),
- ("dest", "file", ["name", "facility", "level"]),
- ("dest", "user", ["name", "facility", "level"]),
- ("dest", "console", ["facility", "level"]),
- ("dest", "global", ["facility", "level"]),
- ]
-
- module = AnsibleModule(
- argument_spec=argument_spec,
- required_if=required_if,
- supports_check_mode=True,
- )
-
- warnings = list()
-
- result = {"changed": False}
- if warnings:
- result["warnings"] = warnings
- want = map_params_to_obj(module, required_if=required_if)
- have = config_to_dict(module)
-
- commands = spec_to_commands((want, have), module)
- result["commands"] = commands
-
- if commands:
- commit = not module.check_mode
- load_config(module, commands, commit=commit)
- result["changed"] = True
-
- module.exit_json(**result)
-
-
-if __name__ == "__main__":
- main()
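The aggregate handling in the module above relies on a common netcommon pattern: remove_default_spec() clears the defaults from the per-item spec, and each aggregate entry then inherits any unset key from the top-level parameters. A minimal standalone sketch of that fill-in step, using plain dicts instead of an AnsibleModule:

def fill_aggregate_items(aggregate, params,
                         keys=('dest', 'name', 'facility', 'level', 'state')):
    # copy each aggregate entry, filling unset keys from the module-level params
    filled = []
    for item in aggregate:
        entry = dict(item)
        for key in keys:
            if entry.get(key) is None:
                entry[key] = params.get(key)
        filled.append(entry)
    return filled

# 'state' is inherited from the top level when an item omits it
print(fill_aggregate_items([{'dest': 'file', 'name': 'test1',
                             'facility': 'all', 'level': 'info', 'state': None}],
                           {'state': 'present'}))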
diff --git a/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py b/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py
deleted file mode 100644
index af9a1e3fef..0000000000
--- a/test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_static_route.py
+++ /dev/null
@@ -1,302 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# (c) 2017, Ansible by Red Hat, inc
-#
-# This file is part of Ansible by Red Hat
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
-#
-
-ANSIBLE_METADATA = {
- "metadata_version": "1.1",
- "status": ["deprecated"],
- "supported_by": "network",
-}
-
-
-DOCUMENTATION = """module: vyos_static_route
-author: Trishna Guha (@trishnaguha)
-short_description: Manage static IP routes on Vyatta VyOS network devices
-description:
-- This module provides declarative management of static IP routes on Vyatta VyOS network
- devices.
-deprecated:
- removed_in: '2.13'
- alternative: vyos_static_routes
- why: Updated modules released with more functionality.
-notes:
-- Tested against VyOS 1.1.8 (helium).
-- This module works with connection C(network_cli). See L(the VyOS OS Platform Options,../network/user_guide/platform_vyos.html).
-options:
- prefix:
- description:
-    - Network prefix of the static route. The C(mask) parameter is ignored if C(prefix)
-      is provided in the C(prefix/mask) format.
- type: str
- mask:
- description:
- - Network prefix mask of the static route.
- type: str
- next_hop:
- description:
- - Next hop IP of the static route.
- type: str
- admin_distance:
- description:
- - Admin distance of the static route.
- type: int
- aggregate:
- description: List of static route definitions
- type: list
- state:
- description:
- - State of the static route configuration.
- default: present
- choices:
- - present
- - absent
- type: str
-extends_documentation_fragment:
-- vyos.vyos.vyos
-"""
-
-EXAMPLES = """
-- name: configure static route
- vyos_static_route:
- prefix: 192.168.2.0
- mask: 24
- next_hop: 10.0.0.1
-
-- name: configure static route prefix/mask
- vyos_static_route:
- prefix: 192.168.2.0/16
- next_hop: 10.0.0.1
-
-- name: remove configuration
- vyos_static_route:
- prefix: 192.168.2.0
- mask: 16
- next_hop: 10.0.0.1
- state: absent
-
-- name: configure aggregates of static routes
- vyos_static_route:
- aggregate:
- - { prefix: 192.168.2.0, mask: 24, next_hop: 10.0.0.1 }
- - { prefix: 192.168.3.0, mask: 16, next_hop: 10.0.2.1 }
- - { prefix: 192.168.3.0/16, next_hop: 10.0.2.1 }
-
-- name: Remove static route collections
- vyos_static_route:
- aggregate:
- - { prefix: 172.24.1.0/24, next_hop: 192.168.42.64 }
- - { prefix: 172.24.3.0/24, next_hop: 192.168.42.64 }
- state: absent
-"""
-
-RETURN = """
-commands:
- description: The list of configuration mode commands to send to the device
- returned: always
- type: list
- sample:
- - set protocols static route 192.168.2.0/16 next-hop 10.0.0.1
-"""
-import re
-
-from copy import deepcopy
-
-from ansible.module_utils.basic import AnsibleModule
-from ansible_collections.ansible.netcommon.plugins.module_utils.network.common.utils import (
- remove_default_spec,
-)
-from ansible_collections.vyos.vyos.plugins.module_utils.network.vyos.vyos import (
- get_config,
- load_config,
-)
-from ansible_collections.vyos.vyos.plugins.module_utils.network.vyos.vyos import (
- vyos_argument_spec,
-)
-
-
-def spec_to_commands(updates, module):
- commands = list()
- want, have = updates
- for w in want:
- prefix = w["prefix"]
- mask = w["mask"]
- next_hop = w["next_hop"]
- admin_distance = w["admin_distance"]
- state = w["state"]
- del w["state"]
-
- if state == "absent" and w in have:
- commands.append(
- "delete protocols static route %s/%s" % (prefix, mask)
- )
- elif state == "present" and w not in have:
- cmd = "set protocols static route %s/%s next-hop %s" % (
- prefix,
- mask,
- next_hop,
- )
- if admin_distance != "None":
- cmd += " distance %s" % (admin_distance)
- commands.append(cmd)
-
- return commands
-
-
-def config_to_dict(module):
- data = get_config(module)
- obj = []
-
- for line in data.split("\n"):
- if line.startswith("set protocols static route"):
- match = re.search(r"static route (\S+)", line, re.M)
- prefix = match.group(1).split("/")[0]
- mask = match.group(1).split("/")[1]
- if "next-hop" in line:
- match_hop = re.search(r"next-hop (\S+)", line, re.M)
- next_hop = match_hop.group(1).strip("'")
-
- match_distance = re.search(r"distance (\S+)", line, re.M)
- if match_distance is not None:
- admin_distance = match_distance.group(1)[1:-1]
- else:
- admin_distance = None
-
- if admin_distance is not None:
- obj.append(
- {
- "prefix": prefix,
- "mask": mask,
- "next_hop": next_hop,
- "admin_distance": admin_distance,
- }
- )
- else:
- obj.append(
- {
- "prefix": prefix,
- "mask": mask,
- "next_hop": next_hop,
- "admin_distance": "None",
- }
- )
-
- return obj
-
-
-def map_params_to_obj(module, required_together=None):
- obj = []
- aggregate = module.params.get("aggregate")
- if aggregate:
- for item in aggregate:
- for key in item:
- if item.get(key) is None:
- item[key] = module.params[key]
-
- module._check_required_together(required_together, item)
- d = item.copy()
- if "/" in d["prefix"]:
- d["mask"] = d["prefix"].split("/")[1]
- d["prefix"] = d["prefix"].split("/")[0]
-
- if "admin_distance" in d:
- d["admin_distance"] = str(d["admin_distance"])
-
- obj.append(d)
- else:
- prefix = module.params["prefix"].strip()
- if "/" in prefix:
- mask = prefix.split("/")[1]
- prefix = prefix.split("/")[0]
- else:
- mask = module.params["mask"].strip()
- next_hop = module.params["next_hop"].strip()
- admin_distance = str(module.params["admin_distance"])
- state = module.params["state"]
-
- obj.append(
- {
- "prefix": prefix,
- "mask": mask,
- "next_hop": next_hop,
- "admin_distance": admin_distance,
- "state": state,
- }
- )
-
- return obj
-
-
-def main():
- """ main entry point for module execution
- """
- element_spec = dict(
- prefix=dict(type="str"),
- mask=dict(type="str"),
- next_hop=dict(type="str"),
- admin_distance=dict(type="int"),
- state=dict(default="present", choices=["present", "absent"]),
- )
-
- aggregate_spec = deepcopy(element_spec)
- aggregate_spec["prefix"] = dict(required=True)
-
- # remove default in aggregate spec, to handle common arguments
- remove_default_spec(aggregate_spec)
-
- argument_spec = dict(
- aggregate=dict(type="list", elements="dict", options=aggregate_spec),
- )
-
- argument_spec.update(element_spec)
- argument_spec.update(vyos_argument_spec)
-
- required_one_of = [["aggregate", "prefix"]]
- required_together = [["prefix", "next_hop"]]
- mutually_exclusive = [["aggregate", "prefix"]]
-
- module = AnsibleModule(
- argument_spec=argument_spec,
- required_one_of=required_one_of,
- required_together=required_together,
- mutually_exclusive=mutually_exclusive,
- supports_check_mode=True,
- )
-
- warnings = list()
-
- result = {"changed": False}
- if warnings:
- result["warnings"] = warnings
- want = map_params_to_obj(module, required_together=required_together)
- have = config_to_dict(module)
-
- commands = spec_to_commands((want, have), module)
- result["commands"] = commands
-
- if commands:
- commit = not module.check_mode
- load_config(module, commands, commit=commit)
- result["changed"] = True
-
- module.exit_json(**result)
-
-
-if __name__ == "__main__":
- main()
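One detail in the removed vyos_static_route module worth keeping in mind: a route may be given either as a prefix plus a separate mask or as a single prefix/mask string, and map_params_to_obj() normalises both forms before diffing against the device config. A small sketch of that normalisation:

def normalize_route(prefix, mask=None):
    # accept '192.168.2.0/24' or ('192.168.2.0', '24') and return (prefix, mask)
    prefix = prefix.strip()
    if '/' in prefix:
        prefix, mask = prefix.split('/', 1)
    return prefix, mask

assert normalize_route('192.168.2.0/24') == ('192.168.2.0', '24')
assert normalize_route('192.168.2.0', '24') == ('192.168.2.0', '24')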
diff --git a/test/support/windows-integration/plugins/modules/win_hosts.ps1 b/test/support/windows-integration/plugins/modules/win_hosts.ps1
deleted file mode 100644
index 9e617c6664..0000000000
--- a/test/support/windows-integration/plugins/modules/win_hosts.ps1
+++ /dev/null
@@ -1,257 +0,0 @@
-#!powershell
-
-# Copyright: (c) 2018, Micah Hunsberger (@mhunsber)
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-#AnsibleRequires -CSharpUtil Ansible.Basic
-
-Set-StrictMode -Version 2
-$ErrorActionPreference = "Stop"
-
-$spec = @{
- options = @{
- state = @{ type = "str"; choices = "absent", "present"; default = "present" }
- aliases = @{ type = "list"; elements = "str" }
- canonical_name = @{ type = "str" }
- ip_address = @{ type = "str" }
- action = @{ type = "str"; choices = "add", "remove", "set"; default = "set" }
- }
- required_if = @(,@( "state", "present", @("canonical_name", "ip_address")))
- supports_check_mode = $true
-}
-
-$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
-
-$state = $module.Params.state
-$aliases = $module.Params.aliases
-$canonical_name = $module.Params.canonical_name
-$ip_address = $module.Params.ip_address
-$action = $module.Params.action
-
-$tmp = [ipaddress]::None
-if($ip_address -and -not [ipaddress]::TryParse($ip_address, [ref]$tmp)){
- $module.FailJson("win_hosts: Argument ip_address needs to be a valid ip address, but was $ip_address")
-}
-$ip_address_type = $tmp.AddressFamily
-
-$hosts_file = Get-Item -LiteralPath "$env:SystemRoot\System32\drivers\etc\hosts"
-
-Function Get-CommentIndex($line) {
- $c_index = $line.IndexOf('#')
- if($c_index -lt 0) {
- $c_index = $line.Length
- }
- return $c_index
-}
-
-Function Get-HostEntryParts($line) {
- $success = $true
- $c_index = Get-CommentIndex -line $line
- $pure_line = $line.Substring(0,$c_index).Trim()
- $bits = $pure_line -split "\s+"
- if($bits.Length -lt 2){
- return @{
- success = $false
- ip_address = ""
- ip_type = ""
- canonical_name = ""
- aliases = @()
- }
- }
- $ip_obj = [ipaddress]::None
- if(-not [ipaddress]::TryParse($bits[0], [ref]$ip_obj) ){
- $success = $false
- }
- $cname = $bits[1]
- $als = New-Object string[] ($bits.Length - 2)
- [array]::Copy($bits, 2, $als, 0, $als.Length)
- return @{
- success = $success
- ip_address = $ip_obj.IPAddressToString
- ip_type = $ip_obj.AddressFamily
- canonical_name = $cname
- aliases = $als
- }
-}
-
-Function Find-HostName($line, $name) {
- $c_idx = Get-CommentIndex -line $line
- $re = New-Object regex ("\s+$($name.Replace('.',"\."))(\s|$)", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase)
- $match = $re.Match($line, 0, $c_idx)
- return $match
-}
-
-Function Remove-HostEntry($list, $idx) {
- $module.Result.changed = $true
- $list.RemoveAt($idx)
-}
-
-Function Add-HostEntry($list, $cname, $aliases, $ip) {
- $module.Result.changed = $true
- $line = "$ip $cname $($aliases -join ' ')"
- $list.Add($line) | Out-Null
-}
-
-Function Remove-HostnamesFromEntry($list, $idx, $aliases) {
- $line = $list[$idx]
- $line_removed = $false
-
- foreach($name in $aliases){
- $match = Find-HostName -line $line -name $name
- if($match.Success){
- $line = $line.Remove($match.Index + 1, $match.Length -1)
- # was this the last alias? (check for space characters after trimming)
- if($line.Substring(0,(Get-CommentIndex -line $line)).Trim() -inotmatch "\s") {
- $list.RemoveAt($idx)
- $line_removed = $true
- # we're done
- return @{
- line_removed = $line_removed
- }
- }
- }
- }
- if($line -ne $list[$idx]){
- $module.Result.changed = $true
- $list[$idx] = $line
- }
- return @{
- line_removed = $line_removed
- }
-}
-
-Function Add-AliasesToEntry($list, $idx, $aliases) {
- $line = $list[$idx]
- foreach($name in $aliases){
- $match = Find-HostName -line $line -name $name
- if(-not $match.Success) {
- # just add the alias before the comment
- $line = $line.Insert((Get-CommentIndex -line $line), " $name ")
- }
- }
- if($line -ne $list[$idx]){
- $module.Result.changed = $true
- $list[$idx] = $line
- }
-}
-
-$hosts_lines = New-Object System.Collections.ArrayList
-
-Get-Content -LiteralPath $hosts_file.FullName | ForEach-Object { $hosts_lines.Add($_) } | Out-Null
-$module.Diff.before = ($hosts_lines -join "`n") + "`n"
-
-if ($state -eq 'absent') {
- # go through and remove canonical_name and ip
- for($idx = 0; $idx -lt $hosts_lines.Count; $idx++) {
- $entry = $hosts_lines[$idx]
- # skip comment lines
- if(-not $entry.Trim().StartsWith('#')) {
- $entry_parts = Get-HostEntryParts -line $entry
- if($entry_parts.success) {
- if(-not $ip_address -or $entry_parts.ip_address -eq $ip_address) {
- if(-not $canonical_name -or $entry_parts.canonical_name -eq $canonical_name) {
- if(Remove-HostEntry -list $hosts_lines -idx $idx){
- # keep index correct if we removed the line
- $idx = $idx - 1
- }
- }
- }
- }
- }
- }
-}
-if($state -eq 'present') {
- $entry_idx = -1
- $aliases_to_keep = @()
- # go through lines, find the entry and determine what to remove based on action
- for($idx = 0; $idx -lt $hosts_lines.Count; $idx++) {
- $entry = $hosts_lines[$idx]
- # skip comment lines
- if(-not $entry.Trim().StartsWith('#')) {
- $entry_parts = Get-HostEntryParts -line $entry
- if($entry_parts.success) {
- $aliases_to_remove = @()
- if($entry_parts.ip_address -eq $ip_address) {
- if($entry_parts.canonical_name -eq $canonical_name) {
- $entry_idx = $idx
-
- if($action -eq 'set') {
- $aliases_to_remove = $entry_parts.aliases | Where-Object { $aliases -notcontains $_ }
- } elseif($action -eq 'remove') {
- $aliases_to_remove = $aliases
- }
- } else {
- # this is the right ip_address, but not the cname we were looking for.
- # we need to make sure none of aliases or canonical_name exist for this entry
- # since the given canonical_name should be an A/AAAA record,
- # and aliases should be cname records for the canonical_name.
- $aliases_to_remove = $aliases + $canonical_name
- }
- } else {
- # this is not the ip_address we are looking for
- if ($ip_address_type -eq $entry_parts.ip_type) {
- if ($entry_parts.canonical_name -eq $canonical_name) {
- Remove-HostEntry -list $hosts_lines -idx $idx
- $idx = $idx - 1
- if ($action -ne "set") {
- # keep old aliases intact
- $aliases_to_keep += $entry_parts.aliases | Where-Object { ($aliases + $aliases_to_keep + $canonical_name) -notcontains $_ }
- }
- } elseif ($action -eq "remove") {
- $aliases_to_remove = $canonical_name
- } elseif ($aliases -contains $entry_parts.canonical_name) {
- Remove-HostEntry -list $hosts_lines -idx $idx
- $idx = $idx - 1
- if ($action -eq "add") {
- # keep old aliases intact
- $aliases_to_keep += $entry_parts.aliases | Where-Object { ($aliases + $aliases_to_keep + $canonical_name) -notcontains $_ }
- }
- } else {
- $aliases_to_remove = $aliases + $canonical_name
- }
- } else {
-                    # TODO: Better ipv6 support. There is odd behavior when an alias can be used for both ipv6 and ipv4
- }
- }
-
- if($aliases_to_remove) {
- if((Remove-HostnamesFromEntry -list $hosts_lines -idx $idx -aliases $aliases_to_remove).line_removed) {
- $idx = $idx - 1
- }
- }
- }
- }
- }
-
- if($entry_idx -ge 0) {
- $aliases_to_add = @()
- $entry_parts = Get-HostEntryParts -line $hosts_lines[$entry_idx]
- if($action -eq 'remove') {
- $aliases_to_add = $aliases_to_keep | Where-Object { $entry_parts.aliases -notcontains $_ }
- } else {
- $aliases_to_add = ($aliases + $aliases_to_keep) | Where-Object { $entry_parts.aliases -notcontains $_ }
- }
-
- if($aliases_to_add) {
- Add-AliasesToEntry -list $hosts_lines -idx $entry_idx -aliases $aliases_to_add
- }
- } else {
- # add the entry at the end
- if($action -eq 'remove') {
- if($aliases_to_keep) {
- Add-HostEntry -list $hosts_lines -ip $ip_address -cname $canonical_name -aliases $aliases_to_keep
- } else {
- Add-HostEntry -list $hosts_lines -ip $ip_address -cname $canonical_name
- }
- } else {
- Add-HostEntry -list $hosts_lines -ip $ip_address -cname $canonical_name -aliases ($aliases + $aliases_to_keep)
- }
- }
-}
-
-$module.Diff.after = ($hosts_lines -join "`n") + "`n"
-if( $module.Result.changed -and -not $module.CheckMode ) {
- Set-Content -LiteralPath $hosts_file.FullName -Value $hosts_lines
-}
-
-$module.ExitJson()
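The PowerShell above splits every hosts-file line into an address, a canonical name, and aliases, ignoring everything after a '#'. For comparison, the same parsing fits in a few lines of Python (purely illustrative, not part of the module):

def parse_hosts_line(line):
    # drop the comment, then split on whitespace: ip, canonical name, aliases...
    pure = line.split('#', 1)[0].strip()
    bits = pure.split()
    if len(bits) < 2:
        return None
    return {'ip_address': bits[0], 'canonical_name': bits[1], 'aliases': bits[2:]}

print(parse_hosts_line('127.0.0.1 localhost loopback  # local'))
# {'ip_address': '127.0.0.1', 'canonical_name': 'localhost', 'aliases': ['loopback']}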
diff --git a/test/support/windows-integration/plugins/modules/win_hosts.py b/test/support/windows-integration/plugins/modules/win_hosts.py
deleted file mode 100644
index 9fd2d1d10d..0000000000
--- a/test/support/windows-integration/plugins/modules/win_hosts.py
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-
-# Copyright: (c) 2018, Micah Hunsberger (@mhunsber)
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-# this is a windows documentation stub. actual code lives in the .ps1
-# file of the same name
-
-ANSIBLE_METADATA = {'metadata_version': '1.1',
- 'status': ['preview'],
- 'supported_by': 'community'}
-
-DOCUMENTATION = r'''
----
-module: win_hosts
-version_added: '2.8'
-short_description: Manages hosts file entries on Windows.
-description:
- - Manages hosts file entries on Windows.
- - Maps IPv4 or IPv6 addresses to canonical names.
- - Adds, removes, or sets cname records for ip and hostname pairs.
- - Modifies %windir%\\system32\\drivers\\etc\\hosts.
-options:
- state:
- description:
- - Whether the entry should be present or absent.
- - If only I(canonical_name) is provided when C(state=absent), then
- all hosts entries with the canonical name of I(canonical_name)
- will be removed.
- - If only I(ip_address) is provided when C(state=absent), then all
- hosts entries with the ip address of I(ip_address) will be removed.
- - If I(ip_address) and I(canonical_name) are both omitted when
- C(state=absent), then all hosts entries will be removed.
- choices:
- - absent
- - present
- default: present
- type: str
- canonical_name:
- description:
- - A canonical name for the host entry.
-      - Required for C(state=present).
- type: str
- ip_address:
- description:
- - The ip address for the host entry.
- - Can be either IPv4 (A record) or IPv6 (AAAA record).
- - Required for C(state=present).
- type: str
- aliases:
- description:
- - A list of additional names (cname records) for the host entry.
- - Only applicable when C(state=present).
- type: list
- action:
- choices:
- - add
- - remove
- - set
- description:
- - Controls the behavior of I(aliases).
- - Only applicable when C(state=present).
- - If C(add), each alias in I(aliases) will be added to the host entry.
- - If C(set), each alias in I(aliases) will be added to the host entry,
- and other aliases will be removed from the entry.
- default: set
- type: str
-author:
- - Micah Hunsberger (@mhunsber)
-notes:
- - Each canonical name can only be mapped to one IPv4 and one IPv6 address.
- If I(canonical_name) is provided with C(state=present) and is found
- to be mapped to another IP address that is the same type as, but unique
- from I(ip_address), then I(canonical_name) and all I(aliases) will
- be removed from the entry and added to an entry with the provided IP address.
- - Each alias can only be mapped to one canonical name. If I(aliases) is provided
- with C(state=present) and an alias is found to be mapped to another canonical
- name, then the alias will be removed from the entry and either added to or removed
- from (depending on I(action)) an entry with the provided canonical name.
-seealso:
- - module: win_template
- - module: win_file
- - module: win_copy
-'''
-
-EXAMPLES = r'''
-- name: Add 127.0.0.1 as an A record for localhost
- win_hosts:
- state: present
- canonical_name: localhost
- ip_address: 127.0.0.1
-
-- name: Add ::1 as an AAAA record for localhost
- win_hosts:
- state: present
- canonical_name: localhost
- ip_address: '::1'
-
-- name: Remove 'bar' and 'zed' from the list of aliases for foo (192.168.1.100)
- win_hosts:
- state: present
-    canonical_name: foo
- ip_address: 192.168.1.100
- action: remove
- aliases:
- - bar
- - zed
-
-- name: Remove hosts entries with canonical name 'bar'
- win_hosts:
- state: absent
- canonical_name: bar
-
-- name: Remove 10.2.0.1 from the list of hosts
- win_hosts:
- state: absent
- ip_address: 10.2.0.1
-
-- name: Ensure all name resolution is handled by DNS
- win_hosts:
- state: absent
-'''
-
-RETURN = r'''
-'''