Diffstat (limited to 'doc')
-rw-r--r--  doc/README | 4
-rw-r--r--  doc/examples/cloud-config-add-apt-repos.txt | 34
-rw-r--r--  doc/examples/cloud-config-archive-launch-index.txt | 30
-rw-r--r--  doc/examples/cloud-config-archive.txt | 16
-rw-r--r--  doc/examples/cloud-config-boot-cmds.txt | 15
-rw-r--r--  doc/examples/cloud-config-ca-certs.txt | 31
-rw-r--r--  doc/examples/cloud-config-chef-oneiric.txt | 90
-rw-r--r--  doc/examples/cloud-config-chef.txt | 95
-rw-r--r--  doc/examples/cloud-config-datasources.txt | 73
-rw-r--r--  doc/examples/cloud-config-disk-setup.txt | 251
-rw-r--r--  doc/examples/cloud-config-final-message.txt | 7
-rw-r--r--  doc/examples/cloud-config-gluster.txt | 18
-rw-r--r--  doc/examples/cloud-config-growpart.txt | 31
-rw-r--r--  doc/examples/cloud-config-install-packages.txt | 15
-rw-r--r--  doc/examples/cloud-config-landscape.txt | 22
-rw-r--r--  doc/examples/cloud-config-launch-index.txt | 23
-rw-r--r--  doc/examples/cloud-config-lxd.txt | 55
-rw-r--r--  doc/examples/cloud-config-mcollective.txt | 49
-rw-r--r--  doc/examples/cloud-config-mount-points.txt | 46
-rw-r--r--  doc/examples/cloud-config-phone-home.txt | 14
-rw-r--r--  doc/examples/cloud-config-power-state.txt | 40
-rw-r--r--  doc/examples/cloud-config-puppet.txt | 51
-rw-r--r--  doc/examples/cloud-config-reporting.txt | 17
-rw-r--r--  doc/examples/cloud-config-resolv-conf.txt | 20
-rw-r--r--  doc/examples/cloud-config-rh_subscription.txt | 49
-rw-r--r--  doc/examples/cloud-config-rsyslog.txt | 46
-rw-r--r--  doc/examples/cloud-config-run-cmds.txt | 22
-rw-r--r--  doc/examples/cloud-config-salt-minion.txt | 53
-rw-r--r--  doc/examples/cloud-config-seed-random.txt | 32
-rw-r--r--  doc/examples/cloud-config-ssh-keys.txt | 46
-rw-r--r--  doc/examples/cloud-config-update-apt.txt | 7
-rw-r--r--  doc/examples/cloud-config-update-packages.txt | 8
-rw-r--r--  doc/examples/cloud-config-user-groups.txt | 109
-rw-r--r--  doc/examples/cloud-config-vendor-data.txt | 16
-rw-r--r--  doc/examples/cloud-config-write-files.txt | 33
-rw-r--r--  doc/examples/cloud-config-yum-repo.txt | 20
-rw-r--r--  doc/examples/cloud-config.txt | 752
-rw-r--r--  doc/examples/include-once.txt | 7
-rw-r--r--  doc/examples/include.txt | 5
-rw-r--r--  doc/examples/kernel-cmdline.txt | 18
-rw-r--r--  doc/examples/part-handler-v2.txt | 38
-rw-r--r--  doc/examples/part-handler.txt | 23
-rw-r--r--  doc/examples/plain-ignored.txt | 2
-rw-r--r--  doc/examples/seed/README | 22
-rw-r--r--  doc/examples/seed/meta-data | 30
-rw-r--r--  doc/examples/seed/user-data | 3
-rw-r--r--  doc/examples/upstart-cloud-config.txt | 12
-rw-r--r--  doc/examples/upstart-rclocal.txt | 12
-rw-r--r--  doc/examples/user-script.txt | 8
-rw-r--r--  doc/merging.rst | 194
-rw-r--r--  doc/rtd/conf.py | 77
-rw-r--r--  doc/rtd/index.rst | 31
-rw-r--r--  doc/rtd/static/logo.png | bin 12751 -> 0 bytes
-rw-r--r--  doc/rtd/static/logo.svg | 89
-rw-r--r--  doc/rtd/topics/availability.rst | 20
-rw-r--r--  doc/rtd/topics/capabilities.rst | 24
-rw-r--r--  doc/rtd/topics/datasources.rst | 200
-rw-r--r--  doc/rtd/topics/dir_layout.rst | 81
-rw-r--r--  doc/rtd/topics/examples.rst | 133
-rw-r--r--  doc/rtd/topics/format.rst | 159
-rw-r--r--  doc/rtd/topics/hacking.rst | 1
-rw-r--r--  doc/rtd/topics/merging.rst | 5
-rw-r--r--  doc/rtd/topics/modules.rst | 342
-rw-r--r--  doc/rtd/topics/moreinfo.rst | 12
-rw-r--r--  doc/sources/altcloud/README.rst | 87
-rw-r--r--  doc/sources/azure/README.rst | 134
-rw-r--r--  doc/sources/cloudsigma/README.rst | 38
-rw-r--r--  doc/sources/cloudstack/README.rst | 29
-rw-r--r--  doc/sources/configdrive/README.rst | 123
-rw-r--r--  doc/sources/digitalocean/README.rst | 21
-rw-r--r--  doc/sources/kernel-cmdline.txt | 48
-rw-r--r--  doc/sources/nocloud/README.rst | 71
-rw-r--r--  doc/sources/opennebula/README.rst | 142
-rw-r--r--  doc/sources/openstack/README.rst | 24
-rw-r--r--  doc/sources/ovf/README | 83
-rw-r--r--  doc/sources/ovf/example/ovf-env.xml | 46
-rw-r--r--  doc/sources/ovf/example/ubuntu-server.ovf | 130
-rwxr-xr-x  doc/sources/ovf/make-iso | 156
-rw-r--r--  doc/sources/ovf/ovf-env.xml.tmpl | 28
-rw-r--r--  doc/sources/ovf/ovfdemo.pem | 27
-rw-r--r--  doc/sources/ovf/user-data | 7
-rw-r--r--  doc/sources/smartos/README.rst | 149
-rw-r--r--  doc/status.txt | 53
-rw-r--r--  doc/userdata.txt | 79
-rw-r--r--  doc/var-lib-cloud.txt | 63
-rw-r--r--  doc/vendordata.txt | 53
86 files changed, 0 insertions, 5379 deletions
diff --git a/doc/README b/doc/README
deleted file mode 100644
index 83559192..00000000
--- a/doc/README
+++ /dev/null
@@ -1,4 +0,0 @@
-This project is cloud-init; it is hosted on Launchpad at
-https://launchpad.net/cloud-init
-
-The package was previously named ec2-init.
diff --git a/doc/examples/cloud-config-add-apt-repos.txt b/doc/examples/cloud-config-add-apt-repos.txt
deleted file mode 100644
index be9d5472..00000000
--- a/doc/examples/cloud-config-add-apt-repos.txt
+++ /dev/null
@@ -1,34 +0,0 @@
-#cloud-config
-
-# Add apt repositories
-#
-# Default: auto select based on cloud metadata
-# in ec2, the default is <region>.archive.ubuntu.com
-# apt_mirror:
-# use the provided mirror
-# apt_mirror_search:
-# search the list for the first mirror.
-# this is currently very limited, only verifying that
-# the mirror is dns resolvable or an IP address
-#
-# if neither apt_mirror nor apt_mirror_search is set (the default)
-# then use the mirror provided by the DataSource found.
-# In EC2, that means using <region>.ec2.archive.ubuntu.com
-#
-# if no mirror is provided by the DataSource, and 'apt_mirror_search_dns' is
-# true, then search for dns names '<distro>-mirror' in each of
-# - fqdn of this host per cloud metadata
-# - localdomain
-# - no domain (which would search domains listed in /etc/resolv.conf)
-# If there is a dns entry for <distro>-mirror, then it is assumed that there
-# is a distro mirror at http://<distro>-mirror.<domain>/<distro>
-#
-# That gives the cloud provider the opportunity to set mirrors of a distro
-# up and expose them only by creating dns entries.
-#
-# if none of that is found, then the default distro mirror is used
-apt_mirror: http://us.archive.ubuntu.com/ubuntu/
-apt_mirror_search:
- - http://local-mirror.mydomain
- - http://archive.ubuntu.com
-apt_mirror_search_dns: False
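The apt_mirror_search behavior described above is deliberately limited: take the first candidate whose hostname resolves. A rough sketch of that check (hypothetical helper, not cloud-init's actual code) might look like:

```python
import socket

def search_mirror(candidates):
    """Return the first candidate mirror URL whose hostname resolves.

    Mirrors the limited apt_mirror_search check described above:
    DNS resolvability only, no HTTP reachability test.
    """
    for url in candidates:
        # crude hostname extraction: strip scheme, then path
        host = url.split("//", 1)[-1].split("/", 1)[0]
        try:
            socket.getaddrinfo(host, None)
            return url
        except socket.gaierror:
            continue
    return None
```

For example, `search_mirror(["http://local-mirror.mydomain", "http://archive.ubuntu.com"])` would fall through to the second entry when the local mirror's name does not resolve.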
diff --git a/doc/examples/cloud-config-archive-launch-index.txt b/doc/examples/cloud-config-archive-launch-index.txt
deleted file mode 100644
index e2ac2869..00000000
--- a/doc/examples/cloud-config-archive-launch-index.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-#cloud-config-archive
-
-# This is an example of a cloud archive
-# format which includes a set of launch indexes
-# that will be filtered on (thus only showing
-# up in instances with that launch index), this
-# is done by adding the 'launch-index' key which
-# maps to the integer 'launch-index' that the
-# corresponding content should be used with.
-#
-# It is possible to leave this value out which
-# will mean that the content will be applicable
-# for all instances
-
-- type: foo/wark
- filename: bar
- content: |
- This is my payload
- hello
- launch-index: 1 # I will only be used on launch-index 1
-- this is also payload
-- |
- multi line payload
- here
--
- type: text/upstart-job
- filename: my-upstart.conf
- content: |
- whats this, yo?
- launch-index: 0 # I will only be used on launch-index 0
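The filtering rule described in the comments above (keep a part if its launch-index matches, or if it has none at all) can be sketched as follows; `parts_for_launch_index` is a hypothetical helper, not cloud-init's actual code:

```python
def parts_for_launch_index(parts, launch_index):
    """Keep archive parts whose 'launch-index' matches the instance's,
    plus parts carrying no launch-index (applicable to all instances)."""
    kept = []
    for part in parts:
        if not isinstance(part, dict):
            # bare string payloads carry no launch-index; always applicable
            kept.append(part)
            continue
        idx = part.get("launch-index")
        if idx is None or idx == launch_index:
            kept.append(part)
    return kept
```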
diff --git a/doc/examples/cloud-config-archive.txt b/doc/examples/cloud-config-archive.txt
deleted file mode 100644
index 23b1024c..00000000
--- a/doc/examples/cloud-config-archive.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-#cloud-config-archive
-- type: foo/wark
- filename: bar
- content: |
- This is my payload
- hello
-- this is also payload
-- |
- multi line payload
- here
--
- type: text/upstart-job
- filename: my-upstart.conf
- content: |
- whats this, yo?
-
diff --git a/doc/examples/cloud-config-boot-cmds.txt b/doc/examples/cloud-config-boot-cmds.txt
deleted file mode 100644
index 3e59755d..00000000
--- a/doc/examples/cloud-config-boot-cmds.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-#cloud-config
-
-# boot commands
-# default: none
-# this is very similar to runcmd, but commands run very early
-# in the boot process, only slightly after a 'boothook' would run.
-# bootcmd should really only be used for things that could not be
-# done later in the boot process. bootcmd is very much like
-# boothook, but possibly more friendly.
-# - bootcmd will run on every boot
-# - the INSTANCE_ID variable will be set to the current instance id.
-# - you can use the 'cloud-init-per' command to help run commands only once
-bootcmd:
- - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
- - [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
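The once-only semantics of `cloud-init-per once` in the example above can be approximated with a marker file. This is a simplified sketch under the assumption of a writable state directory; the real command also supports other frequencies and lives in cloud-init itself:

```python
import os
import subprocess
import tempfile

def per_once(name, cmd, state_dir=None):
    """Run cmd only if the marker for `name` does not exist yet,
    approximating `cloud-init-per once <name> <cmd...>`."""
    state_dir = state_dir or tempfile.gettempdir()
    marker = os.path.join(state_dir, "per-once.%s" % name)
    if os.path.exists(marker):
        return False              # already ran once on this instance; skip
    subprocess.check_call(cmd)    # raise if the command fails
    open(marker, "w").close()     # record success so later boots skip it
    return True
```

So `per_once("mymkfs", ["mkfs", "/dev/vdb"])` would format the disk on the first boot and be a no-op on every boot after that.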
diff --git a/doc/examples/cloud-config-ca-certs.txt b/doc/examples/cloud-config-ca-certs.txt
deleted file mode 100644
index 5e9115a0..00000000
--- a/doc/examples/cloud-config-ca-certs.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-#cloud-config
-#
-# This is an example file to configure an instance's trusted CA certificates
-# system-wide for SSL/TLS trust establishment when the instance boots for the
-# first time.
-#
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-
-ca-certs:
- # If present and set to True, the 'remove-defaults' parameter will remove
- # all the default trusted CA certificates that are normally shipped with
- # Ubuntu.
- # This is mainly for paranoid admins - most users will not need this
- # functionality.
- remove-defaults: true
-
- # If present, the 'trusted' parameter should contain a certificate (or list
- # of certificates) to add to the system as trusted CA certificates.
- # Pay close attention to the YAML multiline list syntax. The example shown
- # here is for a list of multiline certificates.
- trusted:
- - |
- -----BEGIN CERTIFICATE-----
- YOUR-ORGS-TRUSTED-CA-CERT-HERE
- -----END CERTIFICATE-----
- - |
- -----BEGIN CERTIFICATE-----
- YOUR-ORGS-TRUSTED-CA-CERT-HERE
- -----END CERTIFICATE-----
-
diff --git a/doc/examples/cloud-config-chef-oneiric.txt b/doc/examples/cloud-config-chef-oneiric.txt
deleted file mode 100644
index 2e5f4b16..00000000
--- a/doc/examples/cloud-config-chef-oneiric.txt
+++ /dev/null
@@ -1,90 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically install chef-client and run a
-# list of recipes when the instance boots for the first time.
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-#
-# This example assumes the instance is 11.10 (oneiric)
-
-
-# The default is to install from packages.
-
-# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
-apt_sources:
- - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
- key: |
- -----BEGIN PGP PUBLIC KEY BLOCK-----
- Version: GnuPG v1.4.9 (GNU/Linux)
-
- mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
- twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
- dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
- JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
- ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
- XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
- DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
- sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
- Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
- YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
- CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
- +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
- lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
- DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
- wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
- EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
- w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
- AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
- QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
- Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
- 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
- Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
- zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
- DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
- 0GLl8EkfA8uhluM=
- =zKAm
- -----END PGP PUBLIC KEY BLOCK-----
-
-chef:
-
- # 11.10 will fail if install_type is "gems" (LP: #960576)
- install_type: "packages"
-
- # Chef settings
- server_url: "https://chef.yourorg.com:4000"
-
- # Node Name
- # Defaults to the instance-id if not present
- node_name: "your-node-name"
-
- # Environment
- # Defaults to '_default' if not present
- environment: "production"
-
- # Default validation name is chef-validator
- validation_name: "yourorg-validator"
-
- # value of validation_cert is not used if validation_key defined,
- # but variable needs to be defined (LP: #960547)
- validation_cert: "unused"
- validation_key: |
- -----BEGIN RSA PRIVATE KEY-----
- YOUR-ORGS-VALIDATION-KEY-HERE
- -----END RSA PRIVATE KEY-----
-
- # A run list for a first boot json
- run_list:
- - "recipe[apache2]"
- - "role[db]"
-
- # Specify a list of initial attributes used by the cookbooks
- initial_attributes:
- apache:
- prefork:
- maxclients: 100
- keepalive: "off"
-
-
-# Capture all subprocess output into a logfile
-# Useful for troubleshooting cloud-init issues
-output: {all: '| tee -a /var/log/cloud-init-output.log'}
diff --git a/doc/examples/cloud-config-chef.txt b/doc/examples/cloud-config-chef.txt
deleted file mode 100644
index b886cba2..00000000
--- a/doc/examples/cloud-config-chef.txt
+++ /dev/null
@@ -1,95 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically install chef-client and run a
-# list of recipes when the instance boots for the first time.
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-#
-# This example assumes the instance is 12.04 (precise)
-
-
-# The default is to install from packages.
-
-# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
-apt_sources:
- - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
- key: |
- -----BEGIN PGP PUBLIC KEY BLOCK-----
- Version: GnuPG v1.4.9 (GNU/Linux)
-
- mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
- twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
- dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
- JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
- ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
- XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
- DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
- sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
- Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
- YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
- CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
- +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
- lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
- DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
- wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
- EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
- w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
- AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
- QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
- Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
- 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
- Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
- zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
- DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
- 0GLl8EkfA8uhluM=
- =zKAm
- -----END PGP PUBLIC KEY BLOCK-----
-
-chef:
-
- # Valid values are 'gems', 'packages' and 'omnibus'
- install_type: "packages"
-
- # Boolean: run 'install_type' code even if chef-client
- # appears already installed.
- force_install: false
-
- # Chef settings
- server_url: "https://chef.yourorg.com:4000"
-
- # Node Name
- # Defaults to the instance-id if not present
- node_name: "your-node-name"
-
- # Environment
- # Defaults to '_default' if not present
- environment: "production"
-
- # Default validation name is chef-validator
- validation_name: "yourorg-validator"
- # if validation_cert's value is "system" then it is expected
- # that the file already exists on the system.
- validation_cert: |
- -----BEGIN RSA PRIVATE KEY-----
- YOUR-ORGS-VALIDATION-KEY-HERE
- -----END RSA PRIVATE KEY-----
-
- # A run list for a first boot json
- run_list:
- - "recipe[apache2]"
- - "role[db]"
-
- # Specify a list of initial attributes used by the cookbooks
- initial_attributes:
- apache:
- prefork:
- maxclients: 100
- keepalive: "off"
-
- # if install_type is 'omnibus', change the url to download
- omnibus_url: "https://www.opscode.com/chef/install.sh"
-
-
-# Capture all subprocess output into a logfile
-# Useful for troubleshooting cloud-init issues
-output: {all: '| tee -a /var/log/cloud-init-output.log'}
diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
deleted file mode 100644
index 2651c027..00000000
--- a/doc/examples/cloud-config-datasources.txt
+++ /dev/null
@@ -1,73 +0,0 @@
-# Documentation on data sources configuration options
-datasource:
- # Ec2
- Ec2:
- # timeout: the timeout value for a request at metadata service
- timeout : 50
- # The length in seconds to wait before giving up on the metadata
- # service. The actual total wait could be up to
- # len(resolvable_metadata_urls)*timeout
- max_wait : 120
-
- #metadata_urls: a list of URLs to check for metadata services
- metadata_urls:
- - http://169.254.169.254:80
- - http://instance-data:8773
-
- MAAS:
- timeout : 50
- max_wait : 120
-
- # there are no default values for metadata_url or oauth credentials
- # If no credentials are present, non-authed attempts will be made.
- metadata_url: http://maas-host.localdomain/source
- consumer_key: Xh234sdkljf
- token_key: kjfhgb3n
- token_secret: 24uysdfx1w4
-
- NoCloud:
- # default seedfrom is None
- # if found, then it should contain a url with:
- # <url>/user-data and <url>/meta-data
- # seedfrom: http://my.example.com/i-abcde
- seedfrom: None
-
- # fs_label: the label on filesystems to be searched for NoCloud source
- fs_label: cidata
-
- # these are optional, but allow you to basically provide a datasource
- # right here
- user-data: |
- # This is the user-data verbatim
- meta-data:
- instance-id: i-87018aed
- local-hostname: myhost.internal
-
- Azure:
- agent_command: [service, walinuxagent, start]
- set_hostname: True
- hostname_bounce:
- interface: eth0
- policy: on # [can be 'on', 'off' or 'force']
-
- SmartOS:
- # For KVM guests:
- # Smart OS datasource works over a serial console interacting with
- # a server on the other end. By default, the second serial console is the
- # device. SmartOS also uses a serial timeout of 60 seconds.
- serial_device: /dev/ttyS1
- serial_timeout: 60
-
- # For LX-Brand Zones guests:
- # Smart OS datasource works over a socket interacting with
- # the host on the other end. By default, the socket file is in
-# the native .zonecontrol directory.
- metadata_sockfile: /native/.zonecontrol/metadata.sock
-
- # a list of keys that will not be base64 decoded even if base64_all
- no_base64_decode: ['root_authorized_keys', 'motd_sys_info',
- 'iptables_disable']
- # a plaintext, comma delimited list of keys whose values are b64 encoded
- base64_keys: []
- # a boolean indicating that all keys not in 'no_base64_decode' are encoded
- base64_all: False
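The interplay of `timeout` and `max_wait` noted in the Ec2 section above (total wait can reach len(urls) * timeout per sweep, bounded overall by max_wait) can be sketched like this; `check` stands in for the real HTTP probe and the helper is hypothetical:

```python
import time

def wait_for_metadata(urls, check, timeout=50, max_wait=120):
    """Poll candidate metadata URLs until one answers or max_wait elapses.

    `check(url, timeout)` is a stand-in for the per-request probe and
    returns True on success. Each sweep tries every URL, so a single
    sweep may take up to len(urls) * timeout seconds; max_wait caps the
    total time spent before giving up.
    """
    start = time.time()
    while time.time() - start < max_wait:
        for url in urls:
            if check(url, timeout):
                return url
        time.sleep(0.1)  # brief pause between sweeps
    return None
```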
diff --git a/doc/examples/cloud-config-disk-setup.txt b/doc/examples/cloud-config-disk-setup.txt
deleted file mode 100644
index 3e46a22e..00000000
--- a/doc/examples/cloud-config-disk-setup.txt
+++ /dev/null
@@ -1,251 +0,0 @@
-# Cloud-init supports the creation of simple partition tables and file systems
-# on devices.
-
-# Default disk definitions for AWS
-# --------------------------------
-# (Not implemented yet, but provided for future documentation)
-
-disk_setup:
- ephemeral0:
- table_type: 'mbr'
- layout: True
- overwrite: False
-
-fs_setup:
- - label: None
- filesystem: ext3
- device: ephemeral0
- partition: auto
-
-# Default disk definitions for Windows Azure
-# ------------------------------------------
-
-device_aliases: {'ephemeral0': '/dev/sdb'}
-disk_setup:
- ephemeral0:
- table_type: mbr
- layout: True
- overwrite: False
-
-fs_setup:
- - label: ephemeral0
- filesystem: ext4
- device: ephemeral0.1
- replace_fs: ntfs
-
-
-# Default disk definitions for SmartOS
-# ------------------------------------
-
-device_aliases: {'ephemeral0': '/dev/sdb'}
-disk_setup:
- ephemeral0:
- table_type: mbr
- layout: False
- overwrite: False
-
-fs_setup:
- - label: ephemeral0
- filesystem: ext3
- device: ephemeral0.0
-
-# Caveat for SmartOS: if ephemeral disk is not defined, then the disk will
-# not be automatically added to the mounts.
-
-
-# The default definition is used to make sure that the ephemeral storage is
-# setup properly.
-
-# "disk_setup": disk partitioning
-# --------------------------------
-
-# The disk_setup directive instructs Cloud-init to partition a disk. The format is:
-
-disk_setup:
- ephemeral0:
- table_type: 'mbr'
- layout: 'auto'
- /dev/xvdh:
- table_type: 'mbr'
- layout:
- - 33
- - [33, 82]
- - 33
- overwrite: True
-
-# The format is a dict of dicts. Each key is the name of a
-# device, and its values define how to create and lay out the
-# partition.
-# The general format is:
-# disk_setup:
-# <DEVICE>:
-# table_type: 'mbr'
-# layout: <LAYOUT|BOOL>
-# overwrite: <BOOL>
-#
-# Where:
-# <DEVICE>: The name of the device. 'ephemeralX' and 'swap' are special
-# values which are specific to the cloud. For these devices
-# Cloud-init will look up what the real device is and then
-# use it.
-#
-# For other devices, the kernel device name is used. At this
-# time only simply kernel devices are supported, meaning
-# that device mapper and other targets may not work.
-#
-# Note: At this time, there is no handling or setup of
-# device mapper targets.
-#
-# table_type=<TYPE>: Currently the following are supported:
-# 'mbr': the default; sets up an MS-DOS partition table
-#
-# Note: At this time only 'mbr' partition tables are allowed.
-# It is anticipated that in the future we'll have GPT as an
-# option, or even "RAID" to create an mdadm RAID.
-#
-# layout={...}: The device layout. This is a list of values, with the
-# percentage of disk that partition will take.
-# Valid options are:
-# [<SIZE>, [<SIZE>, <PART_TYPE>]]
-#
-# Where <SIZE> is the _percentage_ of the disk to use, while
-# <PART_TYPE> is the numerical value of the partition type.
-#
-# The following sets up two partitions, with the first
-# partition having a swap label, taking 1/3 of the disk space
-# and the remainder being used as the second partition.
-# /dev/xvdh':
-# table_type: 'mbr'
-# layout:
-# - [33,82]
-# - 66
-# overwrite: True
-#
-# When layout is "true" it means single partition the entire
-# device.
-#
-# When layout is "false" it means don't partition or ignore
-# existing partitioning.
-#
-# If layout is set to "true" and overwrite is set to "false",
-# it will skip partitioning the device without a failure.
-#
-# overwrite=<BOOL>: This describes whether to ride with safeties on and
-# everything holstered.
-#
-# 'false' is the default, which means that:
-# 1. The device will be checked for a partition table
-# 2. The device will be checked for a file system
-# 3. If either a partition or file system is found, then
-# the operation will be _skipped_.
-#
-# 'true' is cowboy mode. There are no checks and things are
-# done blindly. USE with caution, you can do things you
-# really, really don't want to do.
-#
-#
-# fs_setup: Setup the file system
-# -------------------------------
-#
-# fs_setup describes how the file systems are supposed to look.
-
-fs_setup:
- - label: ephemeral0
- filesystem: 'ext3'
- device: 'ephemeral0'
- partition: 'auto'
- - label: mylabl2
- filesystem: 'ext4'
- device: '/dev/xvda1'
- - special:
- cmd: mkfs -t %(FILESYSTEM)s -L %(LABEL)s %(DEVICE)s
- filesystem: 'btrfs'
- device: '/dev/xvdh'
-
-# The general format is:
-# fs_setup:
-# - label: <LABEL>
-# filesystem: <FS_TYPE>
-# device: <DEVICE>
-# partition: <PART_VALUE>
-# overwrite: <OVERWRITE>
-# replace_fs: <FS_TYPE>
-#
-# Where:
-# <LABEL>: The file system label to be used. If set to None, no label is
-# used.
-#
-# <FS_TYPE>: The file system type. It is assumed that there
-# will be a "mkfs.<FS_TYPE>" that behaves likes "mkfs". On a standard
-# Ubuntu Cloud Image, this means that you have the option of ext{2,3,4},
-# and vfat by default.
-#
-# <DEVICE>: The device name. Special names of 'ephemeralX' or 'swap'
-# are allowed and the actual device is acquired from the cloud datasource.
-# When using 'ephemeralX' (i.e. ephemeral0), make sure to leave the
-# label as 'ephemeralX' otherwise there may be issues with the mounting
-# of the ephemeral storage layer.
-#
-# If you define the device as 'ephemeralX.Y' then Y will be interpreted
-# as a partition value. However, ephemeralX.0 is the _same_ as ephemeralX.
-#
-# <PART_VALUE>:
-# Partition definitions are overwritten if you use the '<DEVICE>.Y' notation.
-#
-# The valid options are:
-# "auto|any": tell cloud-init not to care whether there is a partition
-# or not. Auto will use the first partition that does not contain a
-# file system already. In the absence of a partition table, it will
-# put it directly on the disk.
-#
-# "auto": If a file system that matches the specification in terms of
-# label, type and device, then cloud-init will skip the creation of
-# the file system.
-#
-# "any": If a file system that matches the file system type and device,
-# then cloud-init will skip the creation of the file system.
-#
-# Devices are selected based on first-detected, starting with partitions
-# and then the raw disk. Consider the following:
-# NAME FSTYPE LABEL
-# xvdb
-# |-xvdb1 ext4
-# |-xvdb2
-# |-xvdb3 btrfs test
-# \-xvdb4 ext4 test
-#
-# If you ask for 'auto', label of 'test', and file system of 'ext4'
-# then cloud-init will select the 2nd partition, even though there
-# is a partition match at the 4th partition.
-#
-# If you ask for 'any' and a label of 'test', then cloud-init will
-# select the 1st partition.
-#
-# If you ask for 'auto' and don't define label, then cloud-init will
-# select the 1st partition.
-#
-# In general, if you have a specific partition configuration in mind,
-# you should define either the device or the partition number. 'auto'
-# and 'any' are specifically intended for formatting ephemeral storage or
-# for simple schemes.
-#
-# "none": Put the file system directly on the device.
-#
-# <NUM>: where NUM is the actual partition number.
-#
-# <OVERWRITE>: Defines whether or not to overwrite any existing
-# filesystem.
-#
-# "true": Indiscriminately destroy any pre-existing file system. Use at
-# your own peril.
-#
-# "false": If an existing file system exists, skip the creation.
-#
-# <REPLACE_FS>: This is a special directive, used for Windows Azure that
-# instructs cloud-init to replace a file system of <FS_TYPE>. NOTE:
-# unless you define a label, this requires the use of the 'any' partition
-# directive.
-#
-# Behavior Caveat: The default behavior is to _check_ if the file system exists.
-# If a file system matches the specification, then the operation is a no-op.
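One reading of the 'auto'/'any' device-selection rules above (partitions are considered first, then the raw disk) can be sketched as follows. This is a hypothetical, simplified helper: 'auto' takes the first candidate with no file system yet, 'any' takes the first whose existing file system type matches; the label handling in the real module is more involved:

```python
def select_device(candidates, mode, fs_type):
    """Pick a device per a simplified 'auto'/'any' rule.

    candidates: list of (name, existing_fstype) pairs in detection
    order, partitions first and the raw disk last. existing_fstype
    is None when the entry carries no file system.
    """
    for name, existing in candidates:
        if mode == "auto" and existing is None:
            return name              # first bare candidate wins
        if mode == "any" and existing == fs_type:
            return name              # first type match wins
    return None
```

Running it against the xvdb table in the comments above reproduces the documented outcomes: 'auto' with ext4 selects xvdb2 (the empty 2nd partition), while 'any' with ext4 selects xvdb1 (the 1st partition).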
diff --git a/doc/examples/cloud-config-final-message.txt b/doc/examples/cloud-config-final-message.txt
deleted file mode 100644
index 0ce31467..00000000
--- a/doc/examples/cloud-config-final-message.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-#cloud-config
-
-# final_message
-# default: cloud-init boot finished at $TIMESTAMP. Up $UPTIME seconds
-# this message is written by cloud-final when the system is finished
-# its first boot
-final_message: "The system is finally up, after $UPTIME seconds"
diff --git a/doc/examples/cloud-config-gluster.txt b/doc/examples/cloud-config-gluster.txt
deleted file mode 100644
index f8183e77..00000000
--- a/doc/examples/cloud-config-gluster.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-#cloud-config
-# vim: syntax=yaml
-# Mounts volfile exported by glusterfsd running on
-# "volfile-server-hostname" onto the local mount point '/mnt/data'
-#
-# In reality, replace 'volfile-server-hostname' with one of your nodes
-# running glusterfsd.
-#
-packages:
- - glusterfs-client
-
-mounts:
- - [ 'volfile-server-hostname:6996', /mnt/data, glusterfs, "defaults,nobootwait", "0", "2" ]
-
-runcmd:
- - [ modprobe, fuse ]
- - [ mkdir, '-p', /mnt/data ]
- - [ mount, '-a' ]
diff --git a/doc/examples/cloud-config-growpart.txt b/doc/examples/cloud-config-growpart.txt
deleted file mode 100644
index 393d5164..00000000
--- a/doc/examples/cloud-config-growpart.txt
+++ /dev/null
@@ -1,31 +0,0 @@
-#cloud-config
-#
-# growpart entry is a dict, if it is not present at all
-# in config, then the default is used ({'mode': 'auto', 'devices': ['/']})
-#
-# mode:
-# values:
-# * auto: use any option possible (any available)
-# if none are available, do not warn, but debug.
-# * growpart: use growpart to grow partitions
-# if growpart is not available, this is an error.
-# * off, false
-#
-# devices:
-# a list of things to resize.
-# items can be filesystem paths or devices (in /dev)
-# examples:
-# devices: [/, /dev/vdb1]
-#
-# ignore_growroot_disabled:
-# a boolean, default is false.
-# if the file /etc/growroot-disabled exists, then cloud-init will not grow
-# the root partition. This is to allow a single file to disable both
-# cloud-initramfs-growroot and cloud-init's growroot support.
-#
-# true indicates that /etc/growroot-disabled should be ignored
-#
-growpart:
- mode: auto
- devices: ['/']
- ignore_growroot_disabled: false
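The mode-resolution behavior described above ('auto' quietly takes whatever is available, 'growpart' hard-fails when the tool is missing) could be sketched like this; `pick_resizer` is a hypothetical helper, not cloud-init's actual code:

```python
def pick_resizer(available, mode="auto"):
    """Resolve a growpart mode to a concrete resizer name.

    available: list of resizer tools present on the system.
    'auto' takes the first available option and, when none exist,
    returns None without warning; 'growpart' requires the growpart
    tool and raises otherwise; 'off'/'false' disables resizing.
    """
    if mode in ("off", "false", False):
        return None
    if mode == "growpart":
        if "growpart" not in available:
            raise RuntimeError("growpart requested but not available")
        return "growpart"
    # auto: any available option; silence, not a warning, when none exist
    return available[0] if available else None
```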
diff --git a/doc/examples/cloud-config-install-packages.txt b/doc/examples/cloud-config-install-packages.txt
deleted file mode 100644
index 2edc63da..00000000
--- a/doc/examples/cloud-config-install-packages.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-#cloud-config
-
-# Install additional packages on first boot
-#
-# Default: none
-#
-# if packages are specified, then apt_update will be set to true
-#
-# packages may be supplied as a single package name or as a list
-# with the format [<package>, <version>] wherein the specific
-# package version will be installed.
-packages:
- - pwgen
- - pastebinit
- - [libpython2.7, 2.7.3-0ubuntu3.1]
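Normalizing the mixed list shown above (plain names plus `[<package>, <version>]` pairs) into apt's `name=version` argument form could be sketched as follows; `to_apt_args` is a hypothetical helper:

```python
def to_apt_args(packages):
    """Render a cloud-config `packages` list as apt install arguments:
    plain names pass through, [name, version] pairs become name=version
    so the specific version is installed."""
    args = []
    for pkg in packages:
        if isinstance(pkg, (list, tuple)):
            name, version = pkg
            args.append("%s=%s" % (name, version))
        else:
            args.append(pkg)
    return args
```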
diff --git a/doc/examples/cloud-config-landscape.txt b/doc/examples/cloud-config-landscape.txt
deleted file mode 100644
index d7ff8ef8..00000000
--- a/doc/examples/cloud-config-landscape.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-# Landscape-client configuration
-#
-# Anything under the top 'landscape: client' entry
-# will be basically rendered into a ConfigObj formatted file
-# under the '[client]' section of /etc/landscape/client.conf
-#
-# Note: 'tags' should be specified as a comma delimited string
-# rather than a list.
-#
-# You can get example key/values by running 'landscape-config',
-# answering the questions, then looking at /etc/landscape/client.conf
-landscape:
- client:
- url: "https://landscape.canonical.com/message-system"
- ping_url: "http://landscape.canonical.com/ping"
- data_path: "/var/lib/landscape/client"
- http_proxy: "http://my.proxy.com/foobar"
- tags: "server,cloud"
- computer_title: footitle
- https_proxy: fooproxy
- registration_key: fookey
- account_name: fooaccount
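The rendering described above (everything under `landscape: client:` lands in the `[client]` section of client.conf) can be approximated with the standard library's configparser as a stand-in for ConfigObj; this is a sketch, not cloud-init's actual implementation:

```python
import configparser
import io

def render_client_conf(client_settings):
    """Write the `landscape: client:` dict as an INI [client] section,
    approximating the ConfigObj output described above."""
    parser = configparser.ConfigParser()
    parser["client"] = {k: str(v) for k, v in client_settings.items()}
    buf = io.StringIO()
    parser.write(buf)
    return buf.getvalue()

out = render_client_conf({
    "url": "https://landscape.canonical.com/message-system",
    "tags": "server,cloud",   # a comma delimited string, not a list
})
```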
diff --git a/doc/examples/cloud-config-launch-index.txt b/doc/examples/cloud-config-launch-index.txt
deleted file mode 100644
index e7dfdc0c..00000000
--- a/doc/examples/cloud-config-launch-index.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-#cloud-config
-# vim: syntax=yaml
-
-#
-# This is the configuration syntax that can be provided to have
-# a given set of cloud config data show up on a certain launch
-# index (and not other launches) by providing a key here which
-# will act as a filter on the instance's userdata. When
-# this key is left out (or non-integer) then the content
-# of this file will always be used for all launch-indexes
-# (ie the previous behavior).
-launch-index: 5
-
-# Upgrade the instance on first boot
-# (ie run apt-get upgrade)
-#
-# Default: false
-#
-apt_upgrade: true
-
-# Other yaml keys below...
-# .......
-# .......
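The filtering rule described above can be sketched in a few lines; `applies_to_instance` is a hypothetical helper for illustration, not cloud-init's actual API:

```python
def applies_to_instance(config, instance_launch_index):
    """Return True when this cloud-config should apply to an instance
    with the given launch index."""
    idx = config.get("launch-index")
    if not isinstance(idx, int):
        # Key absent or non-integer: content applies to all launch indexes.
        return True
    return idx == instance_launch_index
```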
diff --git a/doc/examples/cloud-config-lxd.txt b/doc/examples/cloud-config-lxd.txt
deleted file mode 100644
index e96f314b..00000000
--- a/doc/examples/cloud-config-lxd.txt
+++ /dev/null
@@ -1,55 +0,0 @@
-#cloud-config
-
-# configure lxd
-# default: none
-# all options default to none if not specified
-# lxd: config sections for lxd
-# init: dict of options for lxd init, see 'man lxd'
-# network_address: address for lxd to listen on
-# network_port: port for lxd to listen on
-# storage_backend: either 'zfs' or 'dir'
-# storage_create_device: device based storage using specified device
-# storage_create_loop: set up loop based storage with size in GB
-# storage_pool: name of storage pool to use or create
-# trust_password: password required to add new clients
-# bridge: dict of options for the lxd bridge
-# mode: one of "new", "existing" or "none". Defaults to "new"
-# name: the name of the bridge. Defaults to "lxdbr0"
-# ipv4_address: an IPv4 address (e.g. 10.0.8.1)
-# ipv4_netmask: a CIDR mask value (e.g. 24)
-# ipv4_dhcp_first: the first IP of the DHCP range (e.g. 10.0.8.2)
-# ipv4_dhcp_last: the last IP of the DHCP range (e.g. 10.0.8.254)
-# ipv4_dhcp_leases: the size of the DHCP pool (e.g. 250)
-# ipv4_nat: either "true" or "false"
-# ipv6_address: an IPv6 address (e.g. fd98:9e0:3744::1)
-# ipv6_netmask: a CIDR mask value (e.g. 64)
-# ipv6_nat: either "true" or "false"
-# domain: domain name to use for the bridge
-
-
-lxd:
- init:
- network_address: 0.0.0.0
- network_port: 8443
- storage_backend: zfs
- storage_pool: datapool
- storage_create_loop: 10
- bridge:
- mode: new
- name: lxdbr0
- ipv4_address: 10.0.8.1
- ipv4_netmask: 24
- ipv4_dhcp_first: 10.0.8.2
- ipv4_dhcp_last: 10.0.8.3
- ipv4_dhcp_leases: 250
- ipv4_nat: true
- ipv6_address: fd98:9e0:3744::1
- ipv6_netmask: 64
- ipv6_nat: true
- domain: lxd
-
-
-# The simplest working configuration is
-# lxd:
-# init:
-# storage_backend: dir
diff --git a/doc/examples/cloud-config-mcollective.txt b/doc/examples/cloud-config-mcollective.txt
deleted file mode 100644
index 67735682..00000000
--- a/doc/examples/cloud-config-mcollective.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically setup and run mcollective
-# when the instance boots for the first time.
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-mcollective:
- # Every key present in the conf object will be added to server.cfg:
- # key: value
- #
- # For example the configuration below will have the following key
- # added to server.cfg:
- # plugin.stomp.host: dbhost
- conf:
- plugin.stomp.host: dbhost
- # This will add ssl certs to mcollective
- # WARNING WARNING WARNING
- # The ec2 metadata service is a network service, and thus is readable
- # by non-root users on the system (ie: 'ec2metadata --user-data')
- # If you want security for this, please use include-once + SSL urls
- public-cert: |
- -----BEGIN CERTIFICATE-----
- MIICCTCCAXKgAwIBAgIBATANBgkqhkiG9w0BAQUFADANMQswCQYDVQQDDAJjYTAe
- Fw0xMDAyMTUxNzI5MjFaFw0xNTAyMTQxNzI5MjFaMA0xCzAJBgNVBAMMAmNhMIGf
- MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCu7Q40sm47/E1Pf+r8AYb/V/FWGPgc
- b014OmNoX7dgCxTDvps/h8Vw555PdAFsW5+QhsGr31IJNI3kSYprFQcYf7A8tNWu
- 1MASW2CfaEiOEi9F1R3R4Qlz4ix+iNoHiUDTjazw/tZwEdxaQXQVLwgTGRwVa+aA
- qbutJKi93MILLwIDAQABo3kwdzA4BglghkgBhvhCAQ0EKxYpUHVwcGV0IFJ1Ynkv
- T3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwDwYDVR0TAQH/BAUwAwEB/zAd
- BgNVHQ4EFgQUu4+jHB+GYE5Vxo+ol1OAhevspjAwCwYDVR0PBAQDAgEGMA0GCSqG
- SIb3DQEBBQUAA4GBAH/rxlUIjwNb3n7TXJcDJ6MMHUlwjr03BDJXKb34Ulndkpaf
- +GAlzPXWa7bO908M9I8RnPfvtKnteLbvgTK+h+zX1XCty+S2EQWk29i2AdoqOTxb
- hppiGMp0tT5Havu4aceCXiy2crVcudj3NFciy8X66SoECemW9UYDCb9T5D0d
- -----END CERTIFICATE-----
- private-cert: |
- -----BEGIN CERTIFICATE-----
- MIICCTCCAXKgAwIBAgIBATANBgkqhkiG9w0BAQUFADANMQswCQYDVQQDDAJjYTAe
- Fw0xMDAyMTUxNzI5MjFaFw0xNTAyMTQxNzI5MjFaMA0xCzAJBgNVBAMMAmNhMIGf
- MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCu7Q40sm47/E1Pf+r8AYb/V/FWGPgc
- b014OmNoX7dgCxTDvps/h8Vw555PdAFsW5+QhsGr31IJNI3kSYprFQcYf7A8tNWu
- 1MASW2CfaEiOEi9F1R3R4Qlz4ix+iNoHiUDTjazw/tZwEdxaQXQVLwgTGRwVa+aA
- qbutJKi93MILLwIDAQABo3kwdzA4BglghkgBhvhCAQ0EKxYpUHVwcGV0IFJ1Ynkv
- T3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwDwYDVR0TAQH/BAUwAwEB/zAd
- BgNVHQ4EFgQUu4+jHB+GYE5Vxo+ol1OAhevspjAwCwYDVR0PBAQDAgEGMA0GCSqG
- SIb3DQEBBQUAA4GBAH/rxlUIjwNb3n7TXJcDJ6MMHUlwjr03BDJXKb34Ulndkpaf
- +GAlzPXWa7bO908M9I8RnPfvtKnteLbvgTK+h+zX1XCty+S2EQWk29i2AdoqOTxb
- hppiGMp0tT5Havu4aceCXiy2crVcudj3NFciy8X66SoECemW9UYDCb9T5D0d
- -----END CERTIFICATE-----
-
diff --git a/doc/examples/cloud-config-mount-points.txt b/doc/examples/cloud-config-mount-points.txt
deleted file mode 100644
index aa676c24..00000000
--- a/doc/examples/cloud-config-mount-points.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-#cloud-config
-
-# set up mount points
-# 'mounts' contains a list of lists
-# the inner list are entries for an /etc/fstab line
-# ie: [ fs_spec, fs_file, fs_vfstype, fs_mntops, fs_freq, fs_passno ]
-#
-# default:
-# mounts:
-# - [ ephemeral0, /mnt ]
-# - [ swap, none, swap, sw, 0, 0 ]
-#
-# in order to remove a previously listed mount (ie, one from defaults)
-# list only the fs_spec. For example, to override the default, of
-# mounting swap:
-# - [ swap ]
-# or
-# - [ swap, null ]
-#
-# - if a device does not exist at the time, an entry will still be
-# written to /etc/fstab.
-# - '/dev' can be omitted for device names that begin with: xvd, sd, hd, vd
-# - if an entry does not have all 6 fields, they will be filled in
-# with values from 'mount_default_fields' below.
-#
-# Note that you should set 'nobootwait' (see man fstab) for volumes that may
-# not be attached at instance boot (or reboot)
-#
-mounts:
- - [ ephemeral0, /mnt, auto, "defaults,noexec" ]
- - [ sdc, /opt/data ]
- - [ xvdh, /opt/data, "auto", "defaults,nobootwait", "0", "0" ]
- - [ dd, /dev/zero ]
-
-# mount_default_fields
-# These values are used to fill in any entries in 'mounts' that are not
-# complete. This must be an array, and must have 6 fields.
-mount_default_fields: [ None, None, "auto", "defaults,nobootwait", "0", "2" ]
-
-
-# swap can also be set up by the 'mounts' module
-# default is to not create any swap files, because 'size' is set to 0
-swap:
- filename: /swap.img
- size: "auto" # or size in bytes
- maxsize: size in bytes
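The padding of short 'mounts' entries from 'mount_default_fields' can be illustrated with a small sketch (a hypothetical helper, not cloud-init's code):

```python
def fill_mount_entry(entry, defaults):
    """Pad a short mounts entry out to a full 6-field fstab line,
    taking any missing trailing fields from the defaults."""
    return list(entry) + list(defaults[len(entry):])

# Defaults as in the example config above
defaults = [None, None, "auto", "defaults,nobootwait", "0", "2"]
fill_mount_entry(["sdc", "/opt/data"], defaults)
```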
diff --git a/doc/examples/cloud-config-phone-home.txt b/doc/examples/cloud-config-phone-home.txt
deleted file mode 100644
index 7f2b69f7..00000000
--- a/doc/examples/cloud-config-phone-home.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-#cloud-config
-
-# phone_home: if this dictionary is present, then the phone_home
-# cloud-config module will post specified data back to the given
-# url
-# default: none
-# phone_home:
-# url: http://my.foo.bar/$INSTANCE/
-# post: all
-# tries: 10
-#
-phone_home:
- url: http://my.example.com/$INSTANCE_ID/
- post: [ pub_key_dsa, pub_key_rsa, pub_key_ecdsa, instance_id ]
diff --git a/doc/examples/cloud-config-power-state.txt b/doc/examples/cloud-config-power-state.txt
deleted file mode 100644
index b470153d..00000000
--- a/doc/examples/cloud-config-power-state.txt
+++ /dev/null
@@ -1,40 +0,0 @@
-#cloud-config
-
-## poweroff or reboot system after finished
-# default: none
-#
-# power_state can be used to make the system shutdown, reboot or
-# halt after boot is finished. The same thing can be achieved by
-# user-data scripts or by a runcmd entry that invokes 'shutdown'.
-#
-# Doing it this way ensures that cloud-init is entirely finished with
-# modules that would be executed, and avoids any error/log messages
-# that may go to the console as a result of system services like
-# syslog being taken down while cloud-init is running.
-#
-# If you set a delay of '+5' (5 minutes) and have a timeout of
-# 120 (2 minutes), then the max time until shutdown will be 7 minutes.
-# cloud-init will invoke 'shutdown +5' after the process finishes, or
-# when 'timeout' seconds have elapsed.
-#
-# delay: form accepted by shutdown. default is 'now'. other format
-# accepted is +m (m in minutes)
-# mode: required. must be one of 'poweroff', 'halt', 'reboot'
-# message: provided as the message argument to 'shutdown'. default is none.
-# timeout: the amount of time to give the cloud-init process to finish
-# before executing shutdown.
-# condition: apply state change only if condition is met.
-# May be boolean True (always met), or False (never met),
-# or a command string or list to be executed.
-# command's exit code indicates:
-# 0: condition met
-# 1: condition not met
-# other exit codes will result in 'not met', but are reserved
-# for future use.
-#
-power_state:
- delay: "+30"
- mode: poweroff
- message: Bye Bye
- timeout: 30
- condition: True
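The 'condition' semantics above (booleans pass through; a command's exit code 0 means met, anything else means not met) can be sketched as follows; this is an illustrative stand-in, not cloud-init's implementation, and it assumes a POSIX 'sh' is available for string conditions:

```python
import subprocess

def condition_met(condition):
    """Evaluate a power_state 'condition': booleans pass through;
    otherwise run the command and treat exit code 0 as 'met' and
    any other exit code as 'not met'."""
    if isinstance(condition, bool):
        return condition
    # Lists run as argv vectors; strings run via 'sh -c'.
    cmd = condition if isinstance(condition, list) else ["sh", "-c", condition]
    return subprocess.run(cmd).returncode == 0
```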
diff --git a/doc/examples/cloud-config-puppet.txt b/doc/examples/cloud-config-puppet.txt
deleted file mode 100644
index cd3c2f8e..00000000
--- a/doc/examples/cloud-config-puppet.txt
+++ /dev/null
@@ -1,51 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically setup and run puppetd
-# when the instance boots for the first time.
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-puppet:
- # Every key present in the conf object will be added to puppet.conf:
- # [name]
- # subkey=value
- #
- # For example the configuration below will have the following section
- # added to puppet.conf:
- # [puppetd]
- # server=puppetmaster.example.org
- # certname=i-0123456.ip-X-Y-Z.cloud.internal
- #
-# The puppetmaster ca certificate will be available in
- # /var/lib/puppet/ssl/certs/ca.pem
- conf:
- agent:
- server: "puppetmaster.example.org"
- # certname supports substitutions at runtime:
- # %i: instanceid
- # Example: i-0123456
- # %f: fqdn of the machine
- # Example: ip-X-Y-Z.cloud.internal
- #
- # NB: the certname will automatically be lowercased as required by puppet
- certname: "%i.%f"
- # ca_cert is a special case. It won't be added to puppet.conf.
- # It holds the puppetmaster certificate in pem format.
- # It should be a multi-line string (using the | yaml notation for
- # multi-line strings).
- # The puppetmaster certificate is located in
- # /var/lib/puppet/ssl/ca/ca_crt.pem on the puppetmaster host.
- #
- ca_cert: |
- -----BEGIN CERTIFICATE-----
- MIICCTCCAXKgAwIBAgIBATANBgkqhkiG9w0BAQUFADANMQswCQYDVQQDDAJjYTAe
- Fw0xMDAyMTUxNzI5MjFaFw0xNTAyMTQxNzI5MjFaMA0xCzAJBgNVBAMMAmNhMIGf
- MA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCu7Q40sm47/E1Pf+r8AYb/V/FWGPgc
- b014OmNoX7dgCxTDvps/h8Vw555PdAFsW5+QhsGr31IJNI3kSYprFQcYf7A8tNWu
- 1MASW2CfaEiOEi9F1R3R4Qlz4ix+iNoHiUDTjazw/tZwEdxaQXQVLwgTGRwVa+aA
- qbutJKi93MILLwIDAQABo3kwdzA4BglghkgBhvhCAQ0EKxYpUHVwcGV0IFJ1Ynkv
- T3BlblNTTCBHZW5lcmF0ZWQgQ2VydGlmaWNhdGUwDwYDVR0TAQH/BAUwAwEB/zAd
- BgNVHQ4EFgQUu4+jHB+GYE5Vxo+ol1OAhevspjAwCwYDVR0PBAQDAgEGMA0GCSqG
- SIb3DQEBBQUAA4GBAH/rxlUIjwNb3n7TXJcDJ6MMHUlwjr03BDJXKb34Ulndkpaf
- +GAlzPXWa7bO908M9I8RnPfvtKnteLbvgTK+h+zX1XCty+S2EQWk29i2AdoqOTxb
- hppiGMp0tT5Havu4aceCXiy2crVcudj3NFciy8X66SoECemW9UYDCb9T5D0d
- -----END CERTIFICATE-----
diff --git a/doc/examples/cloud-config-reporting.txt b/doc/examples/cloud-config-reporting.txt
deleted file mode 100644
index ee00078f..00000000
--- a/doc/examples/cloud-config-reporting.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-#cloud-config
-##
-## The following sets up 2 reporting end points.
-## A 'webhook' and a 'log' type.
-## It also disables the built in default 'log'
-reporting:
- smtest:
- type: webhook
- endpoint: "http://myhost:8000/"
- consumer_key: "ckey_foo"
- consumer_secret: "csecret_foo"
- token_key: "tkey_foo"
- token_secret: "tkey_foo"
- smlogger:
- type: log
- level: WARN
- log: null
diff --git a/doc/examples/cloud-config-resolv-conf.txt b/doc/examples/cloud-config-resolv-conf.txt
deleted file mode 100644
index 37ffc91a..00000000
--- a/doc/examples/cloud-config-resolv-conf.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically configure resolv.conf when the
-# instance boots for the first time.
-#
-# Ensure that your yaml is valid and pass this as user-data when starting
-# the instance. Also be sure that your cloud.cfg file includes this
-# configuration module in the appropriate section.
-#
-manage-resolv-conf: true
-
-resolv_conf:
- nameservers: ['8.8.4.4', '8.8.8.8']
- searchdomains:
- - foo.example.com
- - bar.example.com
- domain: example.com
- options:
- rotate: true
- timeout: 1
diff --git a/doc/examples/cloud-config-rh_subscription.txt b/doc/examples/cloud-config-rh_subscription.txt
deleted file mode 100644
index be121338..00000000
--- a/doc/examples/cloud-config-rh_subscription.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-#cloud-config
-
-# register your Red Hat Enterprise Linux based operating system
-#
-# this cloud-init plugin is capable of registering by username
-# and password *or* by activation key and org. Following a successful
-# registration you can:
-# - auto-attach subscriptions
-# - set the service level
-# - add subscriptions by pool ID
-# - enable yum repositories by repo ID
-# - disable yum repositories by repo ID
-# - alter the rhsm_baseurl and server-hostname in the
-# /etc/rhsm/rhsm.conf file
-
-rh_subscription:
- username: joe@foo.bar
-
- ## Quote your password if it has symbols to be safe
- password: '1234abcd'
-
- ## If you prefer, you can use the activation key and
- ## org instead of username and password. Be sure to
- ## comment out username and password
-
- #activation-key: foobar
- #org: 12345
-
- ## Uncomment to auto-attach subscriptions to your system
- #auto-attach: True
-
- ## Uncomment to set the service level for your
- ## subscriptions
- #service-level: self-support
-
- ## Uncomment to add pools (needs to be a list of IDs)
- #add-pool: []
-
- ## Uncomment to add or remove yum repos
- ## (needs to be a list of repo IDs)
- #enable-repo: []
- #disable-repo: []
-
- ## Uncomment to alter the baseurl in /etc/rhsm/rhsm.conf
- #rhsm-baseurl: http://url
-
- ## Uncomment to alter the server hostname in
- ## /etc/rhsm/rhsm.conf
- #server-hostname: foo.bar.com
diff --git a/doc/examples/cloud-config-rsyslog.txt b/doc/examples/cloud-config-rsyslog.txt
deleted file mode 100644
index 28ea1f16..00000000
--- a/doc/examples/cloud-config-rsyslog.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-## the rsyslog module allows you to configure the system's syslog.
-## configuration of syslog is under the top level cloud-config
-## entry 'rsyslog'.
-##
-## Example:
-#cloud-config
-rsyslog:
- remotes:
- # udp to host 'maas.mydomain' port 514
- maashost: maas.mydomain
- # udp to ipv4 host on port 514
- maas: "@[10.5.1.56]:514"
- # tcp to ipv6 host on port 555
- maasipv6: "*.* @@[FE80::0202:B3FF:FE1E:8329]:555"
- configs:
- - "*.* @@192.158.1.1"
- - content: "*.* @@192.0.2.1:10514"
- filename: 01-example.conf
- - content: |
- *.* @@syslogd.example.com
- config_dir: /etc/rsyslog.d
- config_filename: 20-cloud-config.conf
- service_reload_command: [your, syslog, reload, command]
-
-## Additionally the following legacy format is supported
-## it is converted into the format above before use.
-## rsyslog_filename -> rsyslog/config_filename
-## rsyslog_dir -> rsyslog/config_dir
-## rsyslog -> rsyslog/configs
-# rsyslog:
-# - "*.* @@192.158.1.1"
-# - content: "*.* @@192.0.2.1:10514"
-# filename: 01-example.conf
-# - content: |
-# *.* @@syslogd.example.com
-# rsyslog_filename: 20-cloud-config.conf
-# rsyslog_dir: /etc/rsyslog.d
-
-## to configure rsyslog to accept remote logging on Ubuntu
-## write the following into /etc/rsyslog.d/20-remote-udp.conf
-## $ModLoad imudp
-## $UDPServerRun 514
-## $template LogRemote,"/var/log/maas/rsyslog/%HOSTNAME%/messages"
-## :fromhost-ip, !isequal, "127.0.0.1" ?LogRemote
-## then:
-## sudo service rsyslog restart
diff --git a/doc/examples/cloud-config-run-cmds.txt b/doc/examples/cloud-config-run-cmds.txt
deleted file mode 100644
index 3bb06864..00000000
--- a/doc/examples/cloud-config-run-cmds.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-#cloud-config
-
-# run commands
-# default: none
-# runcmd contains a list whose items are either lists or strings
-# each item will be executed in order, at a point in boot similar to
-# rc.local, with output to the console
-# - runcmd only runs during the first boot
-# - if the item is a list, the items will be properly executed as if
-# passed to execve(3) (with the first arg as the command).
-# - if the item is a string, it will be simply written to the file and
-# will be interpreted by 'sh'
-#
-# Note that the list has to be proper yaml, so you have to quote
-# any characters yaml would eat (':' can be problematic)
-runcmd:
- - [ ls, -l, / ]
- - [ sh, -xc, "echo $(date) ': hello world!'" ]
- - [ sh, -c, echo "=========hello world'=========" ]
- - ls -l /root
- - [ wget, "http://slashdot.org", -O, /tmp/index.html ]
-
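The two item forms above (argv-style lists versus plain strings handed to 'sh') could be rendered into a shell script along these lines; this is a sketch of the behavior, not cloud-init's actual code:

```python
import shlex

def render_runcmd(items):
    """Render runcmd items into lines of a shell script: lists are
    shell-quoted so they execute like an execve(3) argv vector,
    while strings are written through verbatim for 'sh'."""
    lines = []
    for item in items:
        if isinstance(item, list):
            lines.append(" ".join(shlex.quote(str(arg)) for arg in item))
        else:
            lines.append(item)
    return "\n".join(lines)
```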
diff --git a/doc/examples/cloud-config-salt-minion.txt b/doc/examples/cloud-config-salt-minion.txt
deleted file mode 100644
index 939fdc8b..00000000
--- a/doc/examples/cloud-config-salt-minion.txt
+++ /dev/null
@@ -1,53 +0,0 @@
-#cloud-config
-#
-# This is an example file to automatically setup and run a salt
-# minion when the instance boots for the first time.
-# Make sure that this file is valid yaml before starting instances.
-# It should be passed as user-data when starting the instance.
-
-salt_minion:
- # conf contains all the directives to be assigned in /etc/salt/minion.
-
- conf:
- # Set the location of the salt master server, if the master server cannot be
- # resolved, then the minion will fail to start.
-
- master: salt.example.com
-
- # Salt keys are manually generated by: salt-key --gen-keys=GEN_KEYS,
- # where GEN_KEYS is the name of the keypair, e.g. 'minion'. The keypair
- # will be copied to /etc/salt/pki on the minion instance.
-
- public_key: |
- -----BEGIN PUBLIC KEY-----
- MIIBIDANBgkqhkiG9w0BAQEFAAOCAQ0AMIIBCAKCAQEAwI4yqk1Y12zVmu9Ejlua
- h2FD6kjrt+N9XfGqZUUVNeRb7CA0Sj5Q6NtgoaiXuIrSea2sLda6ivqAGmtxMMrP
- zpf3FwsYWxBUNF7D4YeLmYjvcTbfr3bCOIRnPNXZ+4isuvvEiM02u2cO0okZSgeb
- dofNa1NbTLYAQr9jZZb7GPKrTO4CKy0xzBih/A+sl6dL9PNDmqXQEjyJS6PXG1Vj
- PvD5jpSrxuIl5Ms/+2Ro3ALgvC8dgoY/3m3csnd06afumGKv5YOGtf+bnWLhc0bf
- 6Sk8Q6i5t0Bl+HAULSPr+B9x/I0rN76ZnPvTj1+hJ0zTof4d0hOLx/K5OQyt7AKo
- 4wIBAQ==
- -----END PUBLIC KEY-----
-
- private_key: |
- -----BEGIN RSA PRIVATE KEY-----
- Proc-Type: 4,ENCRYPTED
- DEK-Info: AES-128-CBC,ECE30DBBA56E2DF06B7BC415F8870994
-
- YQOE5HIsghqjRsxPQqiWMH/VHmyFH6xIpBcmzxzispEHwBojlvLXviwvR66YhgNw
- 7smwE10Ik4/cwwiHTZqCk++jPATPygBiqQkUijCWzcT9kfaxmqdP4PL+hu9g7kGC
- KrD2Bm8/oO08s957aThuHC1sABRcJ1V3FRzJT6Za4fwweyvHVYRnmgaDA6zH0qV8
- NqBSB2hnNXKEdh6UFz9QGcrQxnRjfdIaW64zoEX7jT7gYYL7FkGXBa3XdMOA4fnl
- adRwLFMs0jfilisZv8oUbPdZ6J6x3o8p8LVecCF8tdZt1zkcLSIXKnoDFpHSISGs
- BD9aqD+E4ejynM/tPaVFq4IHzT8viN6h6WcH8fbpClFZ66Iyy9XL3/CjAY7Jzhh9
- fnbc4Iq28cdbmO/vkR7JyVOgEMWe1BcSqtro70XoUNRY8uDJUPqohrhm/9AigFRA
- Pwyf3LqojxRnwXjHsZtGltUtEAPZzgh3fKJnx9MyRR7DPXBRig7TAHU7n2BFRhHA
- TYThy29bK6NkIc/cKc2kEQVo98Cr04PO8jVxZM332FlhiVlP0kpAp+tFj7aMzPTG
- sJumb9kPbMsgpEuTCONm3yyoufGEBFMrIJ+Po48M2RlYOh50VkO09pI+Eu7FPtVB
- H4gKzoJIpZZ/7vYXQ3djM8s9hc5gD5CVExTZV4drbsXt6ITiwHuxZ6CNHRBPL5AY
- wmF8QZz4oivv1afdSe6E6OGC3uVmX3Psn5CVq2pE8VlRDKFy1WqfU2enRAijSS2B
- rtJs263fOJ8ZntDzMVMPgiAlzzfA285KUletpAeUmz+peR1gNzkE0eKSG6THOCi0
- rfmR8SeEzyNvin0wQ3qgYiiHjHbbFhJIMAQxoX+0hDSooM7Wo5wkLREULpGuesTg
- A6Fe3CiOivMDraNGA7H6Yg==
- -----END RSA PRIVATE KEY-----
-
diff --git a/doc/examples/cloud-config-seed-random.txt b/doc/examples/cloud-config-seed-random.txt
deleted file mode 100644
index 08f69a9f..00000000
--- a/doc/examples/cloud-config-seed-random.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-#cloud-config
-#
-# random_seed is a dictionary.
-#
-# The config module will write seed data from the datasource
-# to 'file' described below.
-#
-# Entries in this dictionary are:
-# file: the file to write random data to (default is /dev/urandom)
-# data: this data will be written to 'file' before data from
-# the datasource
-# encoding: this will be used to decode 'data' provided.
-# allowed values are 'raw', 'base64', 'b64',
-# 'gzip', or 'gz'. Default is 'raw'
-#
-# command: execute this command to seed random.
-# the command will have RANDOM_SEED_FILE in its environment
-# set to the value of 'file' above.
-# command_required: default False
-# if true, and 'command' is not available to be run
-# then an exception is raised and cloud-init will record failure.
-# Otherwise, only a debug message is logged.
-#
-# Note: command could be ['pollinate',
-# '--server=http://local.pollinate.server']
-# which would have pollinate populate /dev/urandom from provided server
-seed_random:
- file: '/dev/urandom'
- data: 'my random string'
- encoding: 'raw'
- command: ['sh', '-c', 'dd if=/dev/urandom of=$RANDOM_SEED_FILE']
- command_required: True
diff --git a/doc/examples/cloud-config-ssh-keys.txt b/doc/examples/cloud-config-ssh-keys.txt
deleted file mode 100644
index 235a114f..00000000
--- a/doc/examples/cloud-config-ssh-keys.txt
+++ /dev/null
@@ -1,46 +0,0 @@
-#cloud-config
-
-# add each entry to ~/.ssh/authorized_keys for the configured user or the
-# first user defined in the user definition directive.
-ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUUk8EEAnnkhXlukKoUPND/RRClWz2s5TCzIkd3Ou5+Cyz71X0XmazM3l5WgeErvtIwQMyT1KjNoMhoJMrJnWqQPOt5Q8zWd9qG7PBl9+eiH5qV7NZ mykey@host
- - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
-
-# Send pre-generated ssh private keys to the server
-# If these are present, they will be written to /etc/ssh and
-# new random keys will not be generated
-# in addition to 'rsa' and 'dsa' as shown below, 'ecdsa' is also supported
-ssh_keys:
- rsa_private: |
- -----BEGIN RSA PRIVATE KEY-----
- MIIBxwIBAAJhAKD0YSHy73nUgysO13XsJmd4fHiFyQ+00R7VVu2iV9Qcon2LZS/x
- 1cydPZ4pQpfjEha6WxZ6o8ci/Ea/w0n+0HGPwaxlEG2Z9inNtj3pgFrYcRztfECb
- 1j6HCibZbAzYtwIBIwJgO8h72WjcmvcpZ8OvHSvTwAguO2TkR6mPgHsgSaKy6GJo
- PUJnaZRWuba/HX0KGyhz19nPzLpzG5f0fYahlMJAyc13FV7K6kMBPXTRR6FxgHEg
- L0MPC7cdqAwOVNcPY6A7AjEA1bNaIjOzFN2sfZX0j7OMhQuc4zP7r80zaGc5oy6W
- p58hRAncFKEvnEq2CeL3vtuZAjEAwNBHpbNsBYTRPCHM7rZuG/iBtwp8Rxhc9I5w
- ixvzMgi+HpGLWzUIBS+P/XhekIjPAjA285rVmEP+DR255Ls65QbgYhJmTzIXQ2T9
- luLvcmFBC6l35Uc4gTgg4ALsmXLn71MCMGMpSWspEvuGInayTCL+vEjmNBT+FAdO
- W7D4zCpI43jRS9U06JVOeSc9CDk2lwiA3wIwCTB/6uc8Cq85D9YqpM10FuHjKpnP
- REPPOyrAspdeOAV+6VKRavstea7+2DZmSUgE
- -----END RSA PRIVATE KEY-----
-
- rsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEAoPRhIfLvedSDKw7XdewmZ3h8eIXJD7TRHtVW7aJX1ByifYtlL/HVzJ09nilCl+MSFrpbFnqjxyL8Rr/DSf7QcY/BrGUQbZn2Kc22PemAWthxHO18QJvWPocKJtlsDNi3 smoser@localhost
-
- dsa_private: |
- -----BEGIN DSA PRIVATE KEY-----
- MIIBuwIBAAKBgQDP2HLu7pTExL89USyM0264RCyWX/CMLmukxX0Jdbm29ax8FBJT
- pLrO8TIXVY5rPAJm1dTHnpuyJhOvU9G7M8tPUABtzSJh4GVSHlwaCfycwcpLv9TX
- DgWIpSj+6EiHCyaRlB1/CBp9RiaB+10QcFbm+lapuET+/Au6vSDp9IRtlQIVAIMR
- 8KucvUYbOEI+yv+5LW9u3z/BAoGBAI0q6JP+JvJmwZFaeCMMVxXUbqiSko/P1lsa
- LNNBHZ5/8MOUIm8rB2FC6ziidfueJpqTMqeQmSAlEBCwnwreUnGfRrKoJpyPNENY
- d15MG6N5J+z81sEcHFeprryZ+D3Ge9VjPq3Tf3NhKKwCDQ0240aPezbnjPeFm4mH
- bYxxcZ9GAoGAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI3
- 8UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC
- /QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQCFEIsKKWv
- 99iziAH0KBMVbxy03Trz
- -----END DSA PRIVATE KEY-----
-
- dsa_public: ssh-dss AAAAB3NzaC1kc3MAAACBAM/Ycu7ulMTEvz1RLIzTbrhELJZf8Iwua6TFfQl1ubb1rHwUElOkus7xMhdVjms8AmbV1Meem7ImE69T0bszy09QAG3NImHgZVIeXBoJ/JzByku/1NcOBYilKP7oSIcLJpGUHX8IGn1GJoH7XRBwVub6Vqm4RP78C7q9IOn0hG2VAAAAFQCDEfCrnL1GGzhCPsr/uS1vbt8/wQAAAIEAjSrok/4m8mbBkVp4IwxXFdRuqJKSj8/WWxos00Ednn/ww5QibysHYULrOKJ1+54mmpMyp5CZICUQELCfCt5ScZ9GsqgmnI80Q1h3Xkwbo3kn7PzWwRwcV6muvJn4PcZ71WM+rdN/c2EorAINDTbjRo97NueM94WbiYdtjHFxn0YAAACAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI38UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC/QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQ= smoser@localhost
-
-
diff --git a/doc/examples/cloud-config-update-apt.txt b/doc/examples/cloud-config-update-apt.txt
deleted file mode 100644
index a83ce3f7..00000000
--- a/doc/examples/cloud-config-update-apt.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-#cloud-config
-# Update apt database on first boot
-# (ie run apt-get update)
-#
-# Default: true
-# Aliases: apt_update
-package_update: false
diff --git a/doc/examples/cloud-config-update-packages.txt b/doc/examples/cloud-config-update-packages.txt
deleted file mode 100644
index 56b72c63..00000000
--- a/doc/examples/cloud-config-update-packages.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-#cloud-config
-
-# Upgrade the instance on first boot
-# (ie run apt-get upgrade)
-#
-# Default: false
-# Aliases: apt_upgrade
-package_upgrade: true
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
deleted file mode 100644
index 0e8ed243..00000000
--- a/doc/examples/cloud-config-user-groups.txt
+++ /dev/null
@@ -1,109 +0,0 @@
-# Add groups to the system
-# The following example adds the ubuntu group with members foo and bar and
-# the group cloud-users.
-groups:
- - ubuntu: [foo,bar]
- - cloud-users
-
-# Add users to the system. Users are added after groups are added.
-users:
- - default
- - name: foobar
- gecos: Foo B. Bar
- primary-group: foobar
- groups: users
- selinux-user: staff_u
- expiredate: 2012-09-01
- ssh-import-id: foobar
- lock_passwd: false
- passwd: $6$j212wezy$7H/1LT4f9/N3wpgNunhsIqtMj62OKiS3nyNwuizouQc3u7MbYCarYeAHWYPYb2FT.lbioDm2RrkJPb9BZMN1O/
- - name: barfoo
- gecos: Bar B. Foo
- sudo: ALL=(ALL) NOPASSWD:ALL
- groups: users, admin
- ssh-import-id: None
- lock_passwd: true
- ssh-authorized-keys:
- - <ssh pub key 1>
- - <ssh pub key 2>
- - name: cloudy
- gecos: Magic Cloud App Daemon User
- inactive: true
- system: true
-
-# Valid Values:
-# name: The user's login name
-# gecos: The user's real name, i.e. "Bob B. Smith"
-# homedir: Optional. Set to the local path you want to use. Defaults to
-# /home/<username>
-# primary-group: define the primary group. Defaults to a new group created
-# named after the user.
-# groups: Optional. Additional groups to add the user to. Defaults to none
-# selinux-user: Optional. The SELinux user for the user's login, such as
-# "staff_u". When this is omitted the system will select the default
-# SELinux user.
-# lock_passwd: Defaults to true. Lock the password to disable password login
-# inactive: Create the user as inactive
-# passwd: The hash -- not the password itself -- of the password you want
-# to use for this user. You can generate a safe hash via:
-# mkpasswd --method=SHA-512 --rounds=4096
-# (the above command would create from stdin an SHA-512 password hash
-# with 4096 salt rounds)
-#
-# Please note: while the use of a hashed password is better than
-# plain text, the use of this feature is not ideal. Also,
-# using a high number of salting rounds will help, but it should
-# not be relied upon.
-#
-# To highlight this risk, running John the Ripper against the
-# example hash above, with a readily available wordlist, revealed
-# the true password in 12 seconds on a i7-2620QM.
-#
-# In other words, this feature is a potential security risk and is
-# provided for your convenience only. If you do not fully trust the
-# medium over which your cloud-config will be transmitted, then you
-# should use SSH authentication only.
-#
-# You have thus been warned.
-# no-create-home: When set to true, do not create home directory.
-# no-user-group: When set to true, do not create a group named after the user.
-# no-log-init: When set to true, do not initialize lastlog and faillog database.
-# ssh-import-id: Optional. Import SSH ids
-# ssh-authorized-keys: Optional. [list] Add keys to user's authorized keys file
-# sudo: Defaults to none. Set to the sudo string you want to use, i.e.
-# ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following
-# format.
-# sudo:
-# - ALL=(ALL) NOPASSWD:/bin/mysql
-# - ALL=(ALL) ALL
-# Note: Please double check your syntax and make sure it is valid.
-# cloud-init does not parse/check the syntax of the sudo
-# directive.
-# system: Create the user as a system user. This means no home directory.
-#
-
-# Default user creation:
-#
-# Unless you define users, you will get a 'ubuntu' user on ubuntu systems with the
-# legacy permissions (passwordless sudo, locked password, etc). If, however, you want
-# to have the 'ubuntu' user in addition to other users, you need to instruct
-# cloud-init that you also want the default user. To do this use the following
-# syntax:
-# users:
-# - default
-# - bob
-# - ....
-# foobar: ...
-#
-# users[0] (the first user in users) overrides the user directive.
-#
-# The 'default' user above references the distro's config:
-# system_info:
-# default_user:
-# name: Ubuntu
-# plain_text_passwd: 'ubuntu'
-# home: /home/ubuntu
-# shell: /bin/bash
-# lock_passwd: True
-# gecos: Ubuntu
-# groups: [adm, audio, cdrom, dialout, floppy, video, plugdev, dip, netdev]
diff --git a/doc/examples/cloud-config-vendor-data.txt b/doc/examples/cloud-config-vendor-data.txt
deleted file mode 100644
index 7f90847b..00000000
--- a/doc/examples/cloud-config-vendor-data.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-#cloud-config
-#
-# This explains how to control vendordata via a cloud-config
-#
-# On select Datasources, vendors have a channel for the consumption
-# of all supported user-data types via a special channel called
-# vendordata. Users of the end system are given ultimate control.
-#
-vendor_data:
- enabled: True
- prefix: /usr/bin/ltrace
-
-# enabled: whether it is enabled or not
-# prefix: the command to run before any vendor scripts.
-# Note: this is a fairly weak method of containment. It should
-# be used to profile a script, not to prevent its run
diff --git a/doc/examples/cloud-config-write-files.txt b/doc/examples/cloud-config-write-files.txt
deleted file mode 100644
index ec98bc93..00000000
--- a/doc/examples/cloud-config-write-files.txt
+++ /dev/null
@@ -1,33 +0,0 @@
-#cloud-config
-# vim: syntax=yaml
-#
-# This is the configuration syntax that the write_files module
-# will know how to understand. encoding can be b64, gzip, or (gz+b64).
-# The content will be decoded accordingly and then written to the path that is
-# provided.
-#
-# Note: Content strings here are truncated for example purposes.
-write_files:
-- encoding: b64
- content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4...
- owner: root:root
- path: /etc/sysconfig/selinux
- permissions: '0644'
-- content: |
- # My new /etc/sysconfig/samba file
-
- SMBDOPTIONS="-D"
- path: /etc/sysconfig/samba
-- content: !!binary |
- f0VMRgIBAQAAAAAAAAAAAAIAPgABAAAAwARAAAAAAABAAAAAAAAAAJAVAAAAAAAAAAAAAEAAOAAI
- AEAAHgAdAAYAAAAFAAAAQAAAAAAAAABAAEAAAAAAAEAAQAAAAAAAwAEAAAAAAADAAQAAAAAAAAgA
- AAAAAAAAAwAAAAQAAAAAAgAAAAAAAAACQAAAAAAAAAJAAAAAAAAcAAAAAAAAABwAAAAAAAAAAQAA
- ....
- path: /bin/arch
- permissions: '0555'
-- encoding: gzip
- content: !!binary |
- H4sIAIDb/U8C/1NW1E/KzNMvzuBKTc7IV8hIzcnJVyjPL8pJ4QIA6N+MVxsAAAA=
- path: /usr/bin/hello
- permissions: '0755'
-
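The encodings accepted by write_files can be layered, with gz+b64 meaning base64-wrapped gzip data. A minimal sketch of the decoding order (an illustrative helper, not cloud-init's code):

```python
import base64
import gzip

def decode_content(content, encoding=None):
    """Decode write_files 'content': strip the base64 layer first
    (if any), then decompress the gzip layer (if any)."""
    data = content if isinstance(content, bytes) else content.encode()
    if encoding in ("b64", "base64", "gz+b64", "gzip+base64"):
        data = base64.b64decode(data)
    if encoding in ("gz", "gzip", "gz+b64", "gzip+base64"):
        data = gzip.decompress(data)
    return data
```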
diff --git a/doc/examples/cloud-config-yum-repo.txt b/doc/examples/cloud-config-yum-repo.txt
deleted file mode 100644
index ab2c031e..00000000
--- a/doc/examples/cloud-config-yum-repo.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-#cloud-config
-# vim: syntax=yaml
-#
-# Add yum repository configuration to the system
-#
-# The following example adds the file /etc/yum.repos.d/epel_testing.repo
-# which can subsequently be used by yum for later operations.
-yum_repos:
- # The name of the repository
- epel-testing:
- # Any repository configuration options
- # See: man yum.conf
- #
- # This one is required!
- baseurl: http://download.fedoraproject.org/pub/epel/testing/5/$basearch
- enabled: false
- failovermethod: priority
- gpgcheck: true
- gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
- name: Extra Packages for Enterprise Linux 5 - Testing
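The yum_repos mapping above translates directly into a .repo file body. A minimal sketch of that rendering (the function name is an assumption; real cloud-init normalizes option values more carefully):

```python
def render_repo(repo_id, options):
    # Hypothetical renderer: one [section] per repository id,
    # key=value lines for each option, booleans written as 1/0
    # as yum.conf expects.
    lines = ["[%s]" % repo_id]
    for key, value in sorted(options.items()):
        if isinstance(value, bool):
            value = str(int(value))
        lines.append("%s=%s" % (key, value))
    return "\n".join(lines) + "\n"
```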
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
deleted file mode 100644
index 3cc9c055..00000000
--- a/doc/examples/cloud-config.txt
+++ /dev/null
@@ -1,752 +0,0 @@
-#cloud-config
-# Update apt database on first boot
-# (ie run apt-get update)
-#
-# Default: true
-# Aliases: apt_update
-package_update: false
-
-# Upgrade the instance on first boot
-# (ie run apt-get upgrade)
-#
-# Default: false
-# Aliases: apt_upgrade
-package_upgrade: true
-
-# Reboot after package install/update if necessary
-# Default: false
-# Aliases: apt_reboot_if_required
-package_reboot_if_required: true
-
-# Add apt repositories
-#
-# Default: auto select based on cloud metadata
-# in ec2, the default is <region>.archive.ubuntu.com
-# apt_mirror:
-# use the provided mirror
-# apt_mirror_search:
-# search the list for the first mirror.
-# this is currently very limited, only verifying that
-# the mirror is dns resolvable or an IP address
-#
-# if neither apt_mirror nor apt_mirror_search is set (the default)
-# then use the mirror provided by the DataSource found.
-# In EC2, that means using <region>.ec2.archive.ubuntu.com
-#
-# if no mirror is provided by the DataSource, and 'apt_mirror_search_dns' is
-# true, then search for dns names '<distro>-mirror' in each of
-# - fqdn of this host per cloud metadata
-# - localdomain
-# - no domain (which would search domains listed in /etc/resolv.conf)
-# If there is a dns entry for <distro>-mirror, then it is assumed that there
-# is a distro mirror at http://<distro>-mirror.<domain>/<distro>
-#
-# That gives the cloud provider the opportunity to set mirrors of a distro
-# up and expose them only by creating dns entries.
-#
-# if none of that is found, then the default distro mirror is used
-apt_mirror: http://us.archive.ubuntu.com/ubuntu/
-apt_mirror_search:
- - http://local-mirror.mydomain
- - http://archive.ubuntu.com
-
-apt_mirror_search_dns: False
-
-# apt_proxy (configure Acquire::HTTP::Proxy)
-# 'apt_http_proxy' is an alias for 'apt_proxy'.
-# Also available are 'apt_ftp_proxy' and 'apt_https_proxy'.
-# These affect Acquire::FTP::Proxy and Acquire::HTTPS::Proxy respectively
-apt_proxy: http://my.apt.proxy:3128
-
-# apt_pipelining (configure Acquire::http::Pipeline-Depth)
-# Default: disables HTTP pipelining. Certain web servers, such
-# as S3, do not pipeline properly (LP: #948461).
-# Valid options:
-# False/default: Disables pipelining for APT
-# None/Unchanged: Use OS default
-# Number: Set pipelining to some number (not recommended)
-apt_pipelining: False
-
-# Preserve existing /etc/apt/sources.list
-# Default: overwrite sources_list with mirror. If this is true
-# then apt_mirror above will have no effect
-apt_preserve_sources_list: true
-
-# Provide a custom template for rendering sources.list
-# Default: a default template for Ubuntu/Debian will be used as packaged in
-# Ubuntu: /etc/cloud/templates/sources.list.ubuntu.tmpl
-# Debian: /etc/cloud/templates/sources.list.debian.tmpl
-# Others: n/a
-# This will follow the normal mirror/codename replacement rules before
-# being written to disk.
-apt_custom_sources_list: |
- ## template:jinja
- ## Note, this file is written by cloud-init on first boot of an instance
- ## modifications made here will not survive a re-bundle.
- ## if you wish to make changes you can:
- ## a.) add 'apt_preserve_sources_list: true' to /etc/cloud/cloud.cfg
- ## or do the same in user-data
- ## b.) add sources in /etc/apt/sources.list.d
- ## c.) make changes to template file /etc/cloud/templates/sources.list.tmpl
- deb {{mirror}} {{codename}} main restricted
- deb-src {{mirror}} {{codename}} main restricted
-
- # could drop some of the usually used entries
-
- # could refer to other mirrors
- deb http://ddebs.ubuntu.com {{codename}} main restricted universe multiverse
- deb http://ddebs.ubuntu.com {{codename}}-updates main restricted universe multiverse
- deb http://ddebs.ubuntu.com {{codename}}-proposed main restricted universe multiverse
-
- # or even more uncommon examples like local or NFS mounted repos,
- # eventually whatever is compatible with sources.list syntax
- deb file:/home/apt/debian unstable main contrib non-free
-
-# 'source' entries in apt-sources that match this Python regular
-# expression will be passed to add-apt-repository
-add_apt_repo_match: '^[\w-]+:\w'
-
-# 'apt_sources' is a dictionary
-# The key is the filename and will be prepended by /etc/apt/sources.list.d/ if
-# it doesn't start with a '/'.
-# In certain cases - where no content is written into a source.list file -
-# the filename will be ignored, yet it can still be used as an index for
-# merging.
-# The value it maps to is a dictionary with the following optional entries:
-# source: a sources.list entry (some variable replacements apply)
-# keyid: providing a key to import via shortid or fingerprint
-# key: providing a raw PGP key
-# keyserver: keyserver to fetch keys from, default is keyserver.ubuntu.com
-# filename: for compatibility with the older format (now the key to this
-# dictionary is the filename). If specified this overwrites the
-# filename given as key.
-
-# the new "filename: {specification-dictionary}, filename2: ..." format allows
-# better merging between multiple input files than a list like:
-# cloud-config1
-# sources:
-# s1: {'key': 'key1', 'source': 'source1'}
-# cloud-config2
-# sources:
-# s2: {'key': 'key2'}
-# s1: {filename: 'foo'}
-# this would be merged to
-#sources:
-# s1:
-# filename: foo
-# key: key1
-# source: source1
-# s2:
-# key: key2
-# Be aware that this style of merging is not the default (for backward
-# compatibility reasons). You should specify the following merge_how to get
-# this more complete and modern merging behaviour:
-# merge_how: "list()+dict()+str()"
-# This would then also be equivalent to the config merging used in curtin
-# (https://launchpad.net/curtin).
-
-# for more details see below in the various examples
-
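The dict-style merge described above can be sketched as follows. `merge_sources` is a hypothetical name and this is a simplification; cloud-init's actual merger classes are configurable via merge_how.

```python
def merge_sources(*configs):
    # Later configs update per-filename dictionaries instead of
    # appending to a list, matching the merged result shown above.
    merged = {}
    for config in configs:
        for name, spec in config.get("sources", {}).items():
            merged.setdefault(name, {}).update(spec)
    return merged
```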
-apt_sources:
- byobu-ppa.list:
- source: "deb http://ppa.launchpad.net/byobu/ppa/ubuntu karmic main"
- keyid: F430BBA5 # GPG key ID published on a key server
- # adding a source.list line, importing a gpg key for a given key id and
- # storing it in the file /etc/apt/sources.list.d/byobu-ppa.list
-
- # PPA shortcut:
- # * Setup correct apt sources.list line
- # * Import the signing key from LP
- #
- # See https://help.launchpad.net/Packaging/PPA for more information
- # this requires 'add-apt-repository'
-# because of that, the filename key is ignored in this case
- ignored1:
- source: "ppa:smoser/ppa" # Quote the string
-
- # Custom apt repository:
- # * all that is required is 'source'
- # * Creates a file in /etc/apt/sources.list.d/ for the sources list entry
- # * [optional] Import the apt signing key from the keyserver
- # * Defaults:
- # + keyserver: keyserver.ubuntu.com
- #
- # See sources.list man page for more information about the format
- my-repo.list:
- source: deb http://archive.ubuntu.com/ubuntu karmic-backports main universe multiverse restricted
-
- # sources can use $MIRROR and $RELEASE and they will be replaced
- # with the local mirror for this cloud, and the running release
- # the entry below would be possibly turned into:
- # source: deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu natty multiverse
- my-repo.list:
- source: deb $MIRROR $RELEASE multiverse
-
- # this would have the same end effect as 'ppa:byobu/ppa'
- my-repo.list:
- source: "deb http://ppa.launchpad.net/byobu/ppa/ubuntu karmic main"
- keyid: F430BBA5 # GPG key ID published on a key server
- filename: byobu-ppa.list
-
- # this would only import the key without adding a ppa or other source spec
- # since this doesn't generate a source.list file the filename key is ignored
- ignored2:
- keyid: F430BBA5 # GPG key ID published on a key server
-
- # In general keyid's can also be specified via their long fingerprints
- # since this doesn't generate a source.list file the filename key is ignored
- ignored3:
- keyid: B59D 5F15 97A5 04B7 E230 6DCA 0620 BBCF 0368 3F77
-
- # Custom apt repository:
- # * The apt signing key can also be specified
- # by providing a pgp public key block
- # * Providing the PGP key here is the most robust method for
- # specifying a key, as it removes dependency on a remote key server
- my-repo.list:
- source: deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main
- key: | # The value needs to start with -----BEGIN PGP PUBLIC KEY BLOCK-----
- -----BEGIN PGP PUBLIC KEY BLOCK-----
- Version: SKS 1.0.10
-
- mI0ESpA3UQEEALdZKVIMq0j6qWAXAyxSlF63SvPVIgxHPb9Nk0DZUixn+akqytxG4zKCONz6
- qLjoBBfHnynyVLfT4ihg9an1PqxRnTO+JKQxl8NgKGz6Pon569GtAOdWNKw15XKinJTDLjnj
- 9y96ljJqRcpV9t/WsIcdJPcKFR5voHTEoABE2aEXABEBAAG0GUxhdW5jaHBhZCBQUEEgZm9y
- IEFsZXN0aWOItgQTAQIAIAUCSpA3UQIbAwYLCQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEA7H
- 5Qi+CcVxWZ8D/1MyYvfj3FJPZUm2Yo1zZsQ657vHI9+pPouqflWOayRR9jbiyUFIn0VdQBrP
- t0FwvnOFArUovUWoKAEdqR8hPy3M3APUZjl5K4cMZR/xaMQeQRZ5CHpS4DBKURKAHC0ltS5o
- uBJKQOZm5iltJp15cgyIkBkGe8Mx18VFyVglAZey
- =Y2oI
- -----END PGP PUBLIC KEY BLOCK-----
-
- # Custom gpg key:
- # * As with keyid, a key may also be specified without a related source.
- # * all other facts mentioned above still apply
- # since this doesn't generate a source.list file the filename key is ignored
- ignored4:
- key: | # The value needs to start with -----BEGIN PGP PUBLIC KEY BLOCK-----
- -----BEGIN PGP PUBLIC KEY BLOCK-----
- Version: SKS 1.0.10
-
- mI0ESpA3UQEEALdZKVIMq0j6qWAXAyxSlF63SvPVIgxHPb9Nk0DZUixn+akqytxG4zKCONz6
- qLjoBBfHnynyVLfT4ihg9an1PqxRnTO+JKQxl8NgKGz6Pon569GtAOdWNKw15XKinJTDLjnj
- 9y96ljJqRcpV9t/WsIcdJPcKFR5voHTEoABE2aEXABEBAAG0GUxhdW5jaHBhZCBQUEEgZm9y
- IEFsZXN0aWOItgQTAQIAIAUCSpA3UQIbAwYLCQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEA7H
- 5Qi+CcVxWZ8D/1MyYvfj3FJPZUm2Yo1zZsQ657vHI9+pPouqflWOayRR9jbiyUFIn0VdQBrP
- t0FwvnOFArUovUWoKAEdqR8hPy3M3APUZjl5K4cMZR/xaMQeQRZ5CHpS4DBKURKAHC0ltS5o
- uBJKQOZm5iltJp15cgyIkBkGe8Mx18VFyVglAZey
- =Y2oI
- -----END PGP PUBLIC KEY BLOCK-----
-
-
-## apt config via system_info:
-# under the 'system_info', you can further customize cloud-init's interaction
-# with apt.
-# system_info:
-# apt_get_command: [command, argument, argument]
-# apt_get_upgrade_subcommand: dist-upgrade
-#
-# apt_get_command:
-# To specify a different 'apt-get' command, set 'apt_get_command'.
-# This must be a list, and the subcommand (update, upgrade) is appended to it.
-# default is:
-# ['apt-get', '--option=Dpkg::Options::=--force-confold',
-# '--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet']
-#
-# apt_get_upgrade_subcommand:
-# Specify a different subcommand for 'upgrade'. The default is 'dist-upgrade'.
-# This is the subcommand that is invoked if package_upgrade is set to true above.
-#
-# apt_get_wrapper:
-# command: eatmydata
-# enabled: [True, False, "auto"]
-#
-
-# Install additional packages on first boot
-#
-# Default: none
-#
-# if packages are specified, this apt_update will be set to true
-#
-packages:
- - pwgen
- - pastebinit
-
-# set up mount points
-# 'mounts' contains a list of lists
-# the inner list are entries for an /etc/fstab line
-# ie : [ fs_spec, fs_file, fs_vfstype, fs_mntops, fs-freq, fs_passno ]
-#
-# default:
-# mounts:
-# - [ ephemeral0, /mnt ]
-# - [ swap, none, swap, sw, 0, 0 ]
-#
-# in order to remove a previously listed mount (ie, one from defaults)
-# list only the fs_spec. For example, to override the default, of
-# mounting swap:
-# - [ swap ]
-# or
-# - [ swap, null ]
-#
-# - if a device does not exist at the time, an entry will still be
-# written to /etc/fstab.
-# - '/dev' can be omitted for device names that begin with: xvd, sd, hd, vd
-# - if an entry does not have all 6 fields, they will be filled in
-# with values from 'mount_default_fields' below.
-#
-# Note, that you should set 'nobootwait' (see man fstab) for volumes that may
-# not be attached at instance boot (or reboot)
-#
-mounts:
- - [ ephemeral0, /mnt, auto, "defaults,noexec" ]
- - [ sdc, /opt/data ]
- - [ xvdh, /opt/data, "auto", "defaults,nobootwait", "0", "0" ]
- - [ dd, /dev/zero ]
-
-# mount_default_fields
-# These values are used to fill in any entries in 'mounts' that are not
-# complete. This must be an array, and must have 6 fields.
-mount_default_fields: [ None, None, "auto", "defaults,nobootwait", "0", "2" ]
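Filling a short mounts entry from mount_default_fields amounts to padding the list. A sketch of that logic (illustrative only; real cloud-init also normalizes device names and handles the removal forms shown earlier):

```python
def fill_mount_entry(entry, defaults):
    # Pad a short fstab-style entry with trailing values from the
    # defaults list, leaving fields the user supplied untouched.
    entry = list(entry)
    return entry + list(defaults)[len(entry):]
```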
-
-# add each entry to ~/.ssh/authorized_keys for the configured user or the
-# first user defined in the user definition directive.
-ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUUk8EEAnnkhXlukKoUPND/RRClWz2s5TCzIkd3Ou5+Cyz71X0XmazM3l5WgeErvtIwQMyT1KjNoMhoJMrJnWqQPOt5Q8zWd9qG7PBl9+eiH5qV7NZ mykey@host
- - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
-
-# Send pre-generated ssh private keys to the server
-# If these are present, they will be written to /etc/ssh and
-# new random keys will not be generated
-# in addition to 'rsa' and 'dsa' as shown below, 'ecdsa' is also supported
-ssh_keys:
- rsa_private: |
- -----BEGIN RSA PRIVATE KEY-----
- MIIBxwIBAAJhAKD0YSHy73nUgysO13XsJmd4fHiFyQ+00R7VVu2iV9Qcon2LZS/x
- 1cydPZ4pQpfjEha6WxZ6o8ci/Ea/w0n+0HGPwaxlEG2Z9inNtj3pgFrYcRztfECb
- 1j6HCibZbAzYtwIBIwJgO8h72WjcmvcpZ8OvHSvTwAguO2TkR6mPgHsgSaKy6GJo
- PUJnaZRWuba/HX0KGyhz19nPzLpzG5f0fYahlMJAyc13FV7K6kMBPXTRR6FxgHEg
- L0MPC7cdqAwOVNcPY6A7AjEA1bNaIjOzFN2sfZX0j7OMhQuc4zP7r80zaGc5oy6W
- p58hRAncFKEvnEq2CeL3vtuZAjEAwNBHpbNsBYTRPCHM7rZuG/iBtwp8Rxhc9I5w
- ixvzMgi+HpGLWzUIBS+P/XhekIjPAjA285rVmEP+DR255Ls65QbgYhJmTzIXQ2T9
- luLvcmFBC6l35Uc4gTgg4ALsmXLn71MCMGMpSWspEvuGInayTCL+vEjmNBT+FAdO
- W7D4zCpI43jRS9U06JVOeSc9CDk2lwiA3wIwCTB/6uc8Cq85D9YqpM10FuHjKpnP
- REPPOyrAspdeOAV+6VKRavstea7+2DZmSUgE
- -----END RSA PRIVATE KEY-----
-
- rsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEAoPRhIfLvedSDKw7XdewmZ3h8eIXJD7TRHtVW7aJX1ByifYtlL/HVzJ09nilCl+MSFrpbFnqjxyL8Rr/DSf7QcY/BrGUQbZn2Kc22PemAWthxHO18QJvWPocKJtlsDNi3 smoser@localhost
-
- dsa_private: |
- -----BEGIN DSA PRIVATE KEY-----
- MIIBuwIBAAKBgQDP2HLu7pTExL89USyM0264RCyWX/CMLmukxX0Jdbm29ax8FBJT
- pLrO8TIXVY5rPAJm1dTHnpuyJhOvU9G7M8tPUABtzSJh4GVSHlwaCfycwcpLv9TX
- DgWIpSj+6EiHCyaRlB1/CBp9RiaB+10QcFbm+lapuET+/Au6vSDp9IRtlQIVAIMR
- 8KucvUYbOEI+yv+5LW9u3z/BAoGBAI0q6JP+JvJmwZFaeCMMVxXUbqiSko/P1lsa
- LNNBHZ5/8MOUIm8rB2FC6ziidfueJpqTMqeQmSAlEBCwnwreUnGfRrKoJpyPNENY
- d15MG6N5J+z81sEcHFeprryZ+D3Ge9VjPq3Tf3NhKKwCDQ0240aPezbnjPeFm4mH
- bYxxcZ9GAoGAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI3
- 8UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC
- /QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQCFEIsKKWv
- 99iziAH0KBMVbxy03Trz
- -----END DSA PRIVATE KEY-----
-
- dsa_public: ssh-dss AAAAB3NzaC1kc3MAAACBAM/Ycu7ulMTEvz1RLIzTbrhELJZf8Iwua6TFfQl1ubb1rHwUElOkus7xMhdVjms8AmbV1Meem7ImE69T0bszy09QAG3NImHgZVIeXBoJ/JzByku/1NcOBYilKP7oSIcLJpGUHX8IGn1GJoH7XRBwVub6Vqm4RP78C7q9IOn0hG2VAAAAFQCDEfCrnL1GGzhCPsr/uS1vbt8/wQAAAIEAjSrok/4m8mbBkVp4IwxXFdRuqJKSj8/WWxos00Ednn/ww5QibysHYULrOKJ1+54mmpMyp5CZICUQELCfCt5ScZ9GsqgmnI80Q1h3Xkwbo3kn7PzWwRwcV6muvJn4PcZ71WM+rdN/c2EorAINDTbjRo97NueM94WbiYdtjHFxn0YAAACAXmLIFSQgiAPu459rCKxT46tHJtM0QfnNiEnQLbFluefZ/yiI4DI38UzTCOXLhUA7ybmZha+D/csj15Y9/BNFuO7unzVhikCQV9DTeXX46pG4s1o23JKC/QaYWNMZ7kTRv+wWow9MhGiVdML4ZN4XnifuO5krqAybngIy66PMEoQ= smoser@localhost
-
-
-# remove access to the ec2 metadata service early in boot via null route
-# the null route can be removed (by root) with:
-# route del -host 169.254.169.254 reject
-# default: false (service available)
-disable_ec2_metadata: true
-
-# run commands
-# default: none
-# runcmd contains a list of either lists or a string
-# each item will be executed in order at rc.local like level with
-# output to the console
-# - if the item is a list, the items will be properly executed as if
-# passed to execve(3) (with the first arg as the command).
-# - if the item is a string, it will be simply written to the file and
-# will be interpreted by 'sh'
-#
-# Note, that the list has to be proper yaml, so you have to escape
-# any characters yaml would eat (':' can be problematic)
-runcmd:
- - [ ls, -l, / ]
- - [ sh, -xc, "echo $(date) ': hello world!'" ]
- - [ sh, -c, echo "=========hello world'=========" ]
- - ls -l /root
- - [ wget, "http://slashdot.org", -O, /tmp/index.html ]
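The list-vs-string semantics described above can be sketched with a hypothetical helper. In reality cloud-init serializes runcmd into a script that runs later in boot, so this is only an illustration of the execve-vs-sh distinction:

```python
import subprocess

def run_item(item):
    # Lists are executed directly (execve-style, first arg is the
    # command); strings are handed to 'sh -c' for interpretation.
    if isinstance(item, list):
        return subprocess.run([str(arg) for arg in item], check=True)
    return subprocess.run(["sh", "-c", item], check=True)
```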
-
-
-# boot commands
-# default: none
-# this is very similar to runcmd above, but commands run very early
-# in the boot process, only slightly after a 'boothook' would run.
-# bootcmd should really only be used for things that could not be
-# done later in the boot process. bootcmd is very much like
-# boothook, but possibly more user friendly.
-# * bootcmd will run on every boot
-# * the INSTANCE_ID variable will be set to the current instance id.
-# * you can use 'cloud-init-per' command to help only run once
-bootcmd:
- - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
- - [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
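The 'cloud-init-per once' behaviour referenced above amounts to a marker-file check before running the command. A simplified sketch, where the marker directory and function name are illustrative assumptions:

```python
import os
import subprocess

def run_once(name, command, sem_dir="/tmp/cloud-init-per"):
    # Run the command only if its named marker does not exist yet;
    # create the marker afterwards so reboots skip the command.
    os.makedirs(sem_dir, exist_ok=True)
    marker = os.path.join(sem_dir, name)
    if os.path.exists(marker):
        return False
    subprocess.run(command, check=True)
    open(marker, "w").close()
    return True
```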
-
-# cloud_config_modules:
-# default:
-# cloud_config_modules:
-# - mounts
-# - ssh
-# - apt-update-upgrade
-# - puppet
-# - updates-check
-# - disable-ec2-metadata
-# - runcmd
-#
-# This is an array of arrays or strings.
-# if item is a string, then it is read as a module name
-# if the item is an array it is of the form:
-# name, frequency, arguments
-# where 'frequency' is one of:
-# once-per-instance
-# always
-# a python file in the CloudConfig/ module directory named
-# cc_<name>.py
-# example:
-cloud_config_modules:
- - mounts
- - ssh-import-id
- - ssh
- - grub-dpkg
- - [ apt-update-upgrade, always ]
- - puppet
- - updates-check
- - disable-ec2-metadata
- - runcmd
- - byobu
-
-# unverified_modules: []
-# if a config module declares a set of distros as supported then it will be
-# skipped if running on a different distro. to override this sanity check,
-# provide a list of modules that should be run anyway in 'unverified_modules'.
-# The default is an empty list (ie, trust modules).
-#
-# Example:
-# unverified_modules: ['apt-update-upgrade']
-# default: []
-
-# ssh_import_id: [ user1, user2 ]
-# ssh_import_id will feed the list in that variable to
-# ssh-import-id, so that public keys stored in launchpad
-# can easily be imported into the configured user
-# This can be a single string ('smoser') or a list ([smoser, kirkland])
-ssh_import_id: [smoser]
-
-# Provide debconf answers / debian preseed values
-#
-# See debconf-set-selections man page.
-#
-# Default: none
-#
-debconf_selections: | # Need to preserve newlines
- # Force debconf priority to critical.
- debconf debconf/priority select critical
-
- # Override default frontend to readline, but allow user to select.
- debconf debconf/frontend select readline
- debconf debconf/frontend seen false
-
-# manage byobu defaults
-# byobu_by_default:
-# 'user' or 'enable-user': set byobu 'launch-by-default' for the default user
-# 'system' or 'enable-system' or 'enable':
-# enable 'launch-by-default' for all users, do not modify default user
-# 'disable': disable both default user and system
-# 'disable-system': disable system
-# 'disable-user': disable for default user
-# not-set: no changes made
-byobu_by_default: system
-
-# disable ssh access as root.
-# if you want to be able to ssh in to the system as the root user
-# rather than as the 'ubuntu' user, then you must set this to false
-# default: true
-disable_root: false
-
-# disable_root_opts: the value of this variable will prefix the
-# respective key in /root/.ssh/authorized_keys if disable_root is true
-# see 'man authorized_keys' for more information on what you can do here
-#
-# The string '$USER' will be replaced with the username of the default user
-#
-# disable_root_opts: no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"$USER\" rather than the user \"root\".';echo;sleep 10"
-
-
-# set the locale to a given locale
-# default: en_US.UTF-8
-locale: en_US.UTF-8
-# render template default-locale.tmpl to locale_configfile
-locale_configfile: /etc/default/locale
-
-# add entries to rsyslog configuration
-# The first occurrence of a given filename will truncate.
-# Subsequent entries will append.
-# if value is a scalar, its content is assumed to be 'content', and the
-# default filename is used.
-# if filename is not provided, it will default to 'rsyslog_filename'
-# if filename does not start with a '/', it will be put in 'rsyslog_dir'
-# rsyslog_dir default: /etc/rsyslog.d
-# rsyslog_filename default: 20-cloud-config.conf
-rsyslog:
- - ':syslogtag, isequal, "[CLOUDINIT]" /var/log/cloud-foo.log'
- - content: "*.* @@192.0.2.1:10514"
- - filename: 01-examplecom.conf
- content: "*.* @@syslogd.example.com"
-
-# resize_rootfs: should the / filesystem be resized on first boot
-# this allows you to launch an instance with a larger disk / partition
-# and have the instance automatically grow / to accommodate it
-# set to 'False' to disable
-# by default, the resizefs is done early in boot, and blocks
-# if resize_rootfs is set to 'noblock', then it will be run in parallel
-resize_rootfs: True
-
-## hostname and /etc/hosts management
-# cloud-init can handle updating some entries in /etc/hosts,
-# and can set your hostname for you.
-#
-# if you do nothing you'll end up with:
-# * /etc/hostname (and `hostname`) managed via: 'preserve_hostname: false'
-# if you do not change /etc/hostname, it will be updated with the cloud
-# provided hostname on each boot. If you make a change, then manual
-# maintenance takes over, and cloud-init will not modify it.
-#
-# * /etc/hosts managed via: 'manage_etc_hosts: false'
-# cloud-init will not manage /etc/hosts at all. It is in full manual
-# maintenance mode.
-#
-# You can change the above behavior with the following config variables:
-# Remember that these can be set in cloud-config via user-data,
-# /etc/cloud/cloud.cfg or any file in /etc/cloud/cloud.cfg.d/
-#
-# == Hostname management (via /etc/hostname) ==
-# * preserve_hostname:
-# default: False
-# If this option is set to True, then /etc/hostname will never be updated
-# The default behavior is to update it if it has not been modified by
-# the user.
-#
-# * hostname:
-# this option will be used wherever the 'hostname' is needed
-# simply substitute it in the description above.
-# ** If you wish to set your hostname, set it here **
-# default: 'hostname' as returned by the metadata service
-# on EC2, the hostname portion of 'local-hostname' is used
-# which is something like 'ip-10-244-170-199'
-#
-# * fqdn:
-# this option will be used wherever 'fqdn' is needed.
-# simply substitute it in the description above.
-# default: fqdn as returned by the metadata service. on EC2 'hostname'
-# is used, so this is like: ip-10-244-170-199.ec2.internal
-#
-# == /etc/hosts management ==
-#
-# The cloud-config variable that covers management of /etc/hosts is
-# 'manage_etc_hosts'
-#
-# By default, its value is 'false' (boolean False)
-#
-# * manage_etc_hosts:
-# default: false
-#
-# false:
-# cloud-init will not modify /etc/hosts at all.
-# * Whatever is present at instance boot time will be present after boot.
-# * User changes will not be overwritten
-#
-# true or 'template':
-# on every boot, /etc/hosts will be re-written from
-# /etc/cloud/templates/hosts.tmpl.
-# The strings '$hostname' and '$fqdn' are replaced in the template
-# with the appropriate values.
-# To make modifications persistent across a reboot, you must make
-# modifications to /etc/cloud/templates/hosts.tmpl
-#
-# localhost:
-# This option ensures that an entry is present for fqdn as described in
-# section 5.1.2 of the debian manual
-# http://www.debian.org/doc/manuals/debian-reference/ch05.en.html
-#
-# cloud-init will generally own the 127.0.1.1 entry, and will update
-# it to the hostname and fqdn on every boot. All other entries will
-# be left as is. 'ping `hostname`' will ping 127.0.1.1
-#
-# If you want a fqdn entry with aliases other than 'hostname' to resolve
-# to a localhost interface, you'll need to use something other than
-# 127.0.1.1. For example:
-# 127.0.1.2 myhost.fqdn.example.com myhost whatup.example.com
-
-# final_message
-# default: cloud-init boot finished at $TIMESTAMP. Up $UPTIME seconds
-# this message is written by cloud-final when the system is finished
-# its first boot.
-# This message is rendered as if it were a template. If you
-# want jinja, you have to start the line with '## template:jinja\n'
-final_message: "The system is finally up, after $UPTIME seconds"
-
-# configure where output will go
-# 'output' entry is a dict with 'init', 'config', 'final' or 'all'
-# entries. Each one defines where
-# cloud-init, cloud-config, cloud-config-final or all output will go
-# each entry in the dict can be a string, list or dict.
-# if it is a string, it refers to stdout and stderr
-# if it is a list, entry 0 is stdout, entry 1 is stderr
-# if it is a dict, it is expected to have 'output' and 'error' fields
-# default is to write to console only
-# the special entry "&1" for an error means "same location as stdout"
-# (Note, that '&1' has meaning in yaml, so it must be quoted)
-output:
- init: "> /var/log/my-cloud-init.log"
- config: [ ">> /tmp/foo.out", "> /tmp/foo.err" ]
- final:
- output: "| tee /tmp/final.stdout | tee /tmp/bar.stdout"
- error: "&1"
-
-
-# phone_home: if this dictionary is present, then the phone_home
-# cloud-config module will post specified data back to the given
-# url
-# default: none
-# phone_home:
-# url: http://my.foo.bar/$INSTANCE/
-# post: all
-# tries: 10
-#
-phone_home:
- url: http://my.example.com/$INSTANCE_ID/
- post: [ pub_key_dsa, pub_key_rsa, pub_key_ecdsa, instance_id ]
-
-# timezone: set the timezone for this instance
-# the value of 'timezone' must exist in /usr/share/zoneinfo
-timezone: US/Eastern
-
-# def_log_file and syslog_fix_perms work together
-# if
-# - logging is set to go to a log file 'L' both with and without syslog
-# - and 'L' does not exist
-# - and syslog is configured to write to 'L'
-# then 'L' will be initially created with root:root ownership (during
-# cloud-init), and then at cloud-config time (when syslog is available)
-# the syslog daemon will be unable to write to the file.
-#
-# to remedy this situation, 'def_log_file' can be set to a filename
-# and syslog_fix_perms to a string containing "<user>:<group>"
-# if syslog_fix_perms is a list, it will iterate through and use the
-# first pair that does not raise an error.
-#
-# the default values are '/var/log/cloud-init.log' and 'syslog:adm'
-# the value of 'def_log_file' should match what is configured in logging
-# if either is empty, then no change of ownership will be done
-def_log_file: /var/log/my-logging-file.log
-syslog_fix_perms: syslog:root
-
-# you can set passwords for a user or multiple users
-# this is off by default.
-# to set the default user's password, use the 'password' option.
-# if set, to 'R' or 'RANDOM', then a random password will be
-# generated and written to stdout (the console)
-# password: passw0rd
-#
-# also note, that this will expire the password, forcing a change
-# on first login. If you do not want to expire, see 'chpasswd' below.
-#
-# By default in the UEC images password authentication is disabled
-# Thus, simply setting 'password' as above will only allow you to login
-# via the console.
-#
-# in order to enable password login via ssh you must set
-# 'ssh_pwauth'.
-# If it is set, to 'True' or 'False', then sshd_config will be updated
-# to ensure the desired function. If not set, or set to '' or 'unchanged'
-# then sshd_config will not be updated.
-# ssh_pwauth: True
-#
-# there is also an option to set multiple users passwords, using 'chpasswd'
-# That looks like the following, with 'expire' set to 'True' by default.
-# to not expire users passwords, set 'expire' to 'False':
-# chpasswd:
-# list: |
-# user1:password1
-# user2:RANDOM
-# expire: True
-# ssh_pwauth: [ True, False, "" or "unchanged" ]
-#
-# So, a simple working example to allow login via ssh, and not expire
-# for the default user would look like:
-password: passw0rd
-chpasswd: { expire: False }
-ssh_pwauth: True
-
-# manual cache clean.
-# By default, the link from /var/lib/cloud/instance to
-# the specific instance in /var/lib/cloud/instances/ is removed on every
-# boot. The cloud-init code then searches for a DataSource on every boot.
-# If your DataSource will not be present on every boot, then you can set
-# this option to 'True', and maintain (remove) that link before the image
-# is booted as a new instance.
-# default is False
-manual_cache_clean: False
-
-# When cloud-init is finished running including having run
-# cloud_init_modules, then it will run this command. The default
-# is to emit an upstart signal as shown below. If the value is a
-# list, it will be passed to Popen. If it is a string, it will be
-# invoked through 'sh -c'.
-#
-# default value:
-# cc_ready_cmd: [ initctl, emit, cloud-config, CLOUD_CFG=/var/lib/instance//cloud-config.txt ]
-# example:
-# cc_ready_cmd: [ sh, -c, 'echo HI MOM > /tmp/file' ]
-
-## configure interaction with ssh server
-# ssh_svcname: ssh
-# set the name of the option to 'service restart'
-# in order to restart the ssh daemon. For fedora, use 'sshd'
-# default: ssh
-# ssh_deletekeys: True
-# boolean indicating if existing ssh keys should be deleted on a
-# per-instance basis. On a public image, this should absolutely be set
-# to 'True'
-# ssh_genkeytypes: ['rsa', 'dsa', 'ecdsa']
-# a list of the ssh key types that should be generated
-# These are passed to 'ssh-keygen -t'
-
-## configuration of ssh keys output to console
-# ssh_fp_console_blacklist: []
-# ssh_key_console_blacklist: [ssh-dss]
-# A list of key types (first token of a /etc/ssh/ssh_key_*.pub file)
-# that should be skipped when outputting key fingerprints and keys
-# to the console respectively.
-
-## poweroff or reboot system after finished
-# default: none
-#
-# power_state can be used to make the system shutdown, reboot or
-# halt after boot is finished. This same thing can be achieved by
-# user-data scripts or by runcmd by simply invoking 'shutdown'.
-#
-# Doing it this way ensures that cloud-init is entirely finished with
-# modules that would be executed, and avoids any error/log messages
-# that may go to the console as a result of system services like
-# syslog being taken down while cloud-init is running.
-#
-# delay: form accepted by shutdown. default is 'now'. other format
-# accepted is +m (m in minutes)
-# mode: required. must be one of 'poweroff', 'halt', 'reboot'
-# message: provided as the message argument to 'shutdown'. default is none.
-power_state:
- delay: 30
- mode: poweroff
- message: Bye Bye
diff --git a/doc/examples/include-once.txt b/doc/examples/include-once.txt
deleted file mode 100644
index 0cf74e5e..00000000
--- a/doc/examples/include-once.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-#include-once
-# entries are one url per line. comment lines beginning with '#' are allowed
-# urls are passed to urllib.urlopen, so the format must be supported there
-# These entries will be processed only ONE TIME by cloud-init; any further
-# iterations won't process this file
-http://www.ubuntu.com/robots.txt
-http://www.w3schools.com/html/lastpage.htm
diff --git a/doc/examples/include.txt b/doc/examples/include.txt
deleted file mode 100644
index 5bdc7991..00000000
--- a/doc/examples/include.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-#include
-# entries are one url per line. comment lines beginning with '#' are allowed
-# urls are passed to urllib.urlopen, so the format must be supported there
-http://www.ubuntu.com/robots.txt
-http://www.w3schools.com/html/lastpage.htm
diff --git a/doc/examples/kernel-cmdline.txt b/doc/examples/kernel-cmdline.txt
deleted file mode 100644
index f043baef..00000000
--- a/doc/examples/kernel-cmdline.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-Cloud-config can be provided via the kernel command line.
-Configuration that comes from the kernel command line has higher priority
-than configuration in /etc/cloud/cloud.cfg.
-
-The format is:
- cc: <yaml content here> [end_cc]
-
-cloud-config will consider any content after 'cc:' to be cloud-config
-data. If an 'end_cc' string is present, then it will stop reading there.
-Otherwise it considers everything after 'cc:' to be cloud-config content.
-
-In order to allow carriage returns, you must enter '\\n' literally
-on the command line: two backslashes followed by the letter 'n'.
-
-Here are some examples:
- root=/dev/sda1 cc: ssh_import_id: [smoser, kirkland]\\n
- root=LABEL=uec-rootfs cc: ssh_import_id: [smoser, bob]\\nruncmd: [ [ ls, -l ], echo hi ] end_cc
- cc:ssh_import_id: [smoser] end_cc cc:runcmd: [ [ ls, -l ] ] end_cc root=/dev/sda1
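The extraction rule described above can be sketched in a few lines of Python. This is an illustration of the 'cc:' / 'end_cc' delimiting only, not cloud-init's actual command-line parser; the function name is hypothetical.

```python
import re

def extract_cmdline_cc(cmdline):
    """Collect cloud-config fragments delimited by 'cc:' and an
    optional trailing 'end_cc' marker, joining them with newlines."""
    fragments = []
    # lazily capture everything after each 'cc:' up to 'end_cc'
    # (or end of line if no 'end_cc' is present)
    for body in re.findall(r'cc:(.*?)(?:end_cc|$)', cmdline):
        # the docs show '\\n' (two backslashes plus 'n') standing in
        # for a newline on the command line
        text = body.strip().replace('\\\\n', '\n').replace('\\n', '\n')
        fragments.append(text)
    return '\n'.join(fragments)
```

Running it against the last example above yields the two cloud-config fragments while ignoring the surrounding root= arguments.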
diff --git a/doc/examples/part-handler-v2.txt b/doc/examples/part-handler-v2.txt
deleted file mode 100644
index 554c34a5..00000000
--- a/doc/examples/part-handler-v2.txt
+++ /dev/null
@@ -1,38 +0,0 @@
-#part-handler
-# vi: syntax=python ts=4
-# this is an example of a version 2 part handler.
-# the differences between the initial part-handler version
-# and v2 are:
-# * handle_part receives a 5th argument, 'frequency'
-# frequency will be either 'always' or 'per-instance'
-# * handler_version must be set
-#
-# A handler declaring version 2 will be called on all instance boots, with a
-# different 'frequency' argument.
-
-handler_version = 2
-
-def list_types():
-    # return a list of mime-types that are handled by this module
-    return ["text/plain", "text/go-cubs-go"]
-
-def handle_part(data, ctype, filename, payload, frequency):
-    # data: the cloudinit object
-    # ctype: '__begin__', '__end__', or the specific mime-type of the part
-    # filename: the filename of the part, or a dynamically generated name
-    #           if no filename attribute is present
-    # payload: the content of the part (empty for begin or end)
-    # frequency: the frequency this cloud-init run is running for;
-    #            either 'per-instance' or 'always'. 'per-instance' is
-    #            passed on the first boot, 'always' on subsequent boots.
-    if ctype == "__begin__":
-        print("my handler is beginning, frequency=%s" % frequency)
-        return
-    if ctype == "__end__":
-        print("my handler is ending, frequency=%s" % frequency)
-        return
-
-    print("==== received ctype=%s filename=%s ====" % (ctype, filename))
-    print(payload)
-    print("==== end ctype=%s filename=%s ====" % (ctype, filename))
diff --git a/doc/examples/part-handler.txt b/doc/examples/part-handler.txt
deleted file mode 100644
index a6e66415..00000000
--- a/doc/examples/part-handler.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-#part-handler
-# vi: syntax=python ts=4
-
-def list_types():
-    # return a list of mime-types that are handled by this module
-    return ["text/plain", "text/go-cubs-go"]
-
-def handle_part(data, ctype, filename, payload):
-    # data: the cloudinit object
-    # ctype: '__begin__', '__end__', or the specific mime-type of the part
-    # filename: the filename of the part, or a dynamically generated name
-    #           if no filename attribute is present
-    # payload: the content of the part (empty for begin or end)
-    if ctype == "__begin__":
-        print("my handler is beginning")
-        return
-    if ctype == "__end__":
-        print("my handler is ending")
-        return
-
-    print("==== received ctype=%s filename=%s ====" % (ctype, filename))
-    print(payload)
-    print("==== end ctype=%s filename=%s ====" % (ctype, filename))
diff --git a/doc/examples/plain-ignored.txt b/doc/examples/plain-ignored.txt
deleted file mode 100644
index fb2b59dc..00000000
--- a/doc/examples/plain-ignored.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-#ignored
-Nothing will be done with this part by the UserDataHandler
diff --git a/doc/examples/seed/README b/doc/examples/seed/README
deleted file mode 100644
index cc15839e..00000000
--- a/doc/examples/seed/README
+++ /dev/null
@@ -1,22 +0,0 @@
-This directory is an example of a 'seed' directory.
-
-
-Copying these files into an instance's
- /var/lib/cloud/seed/nocloud
-or
- /var/lib/cloud/seed/nocloud-net
-
-will cause the 'DataSourceNoCloud' and 'DataSourceNoCloudNet' modules
-to enable and read the given data.
-
-The directory must have both files.
-
-- user-data:
-  This is the user data, as would be consumed from ec2's metadata service.
-  See examples in doc/examples.
-- meta-data:
-  This file is yaml formatted data similar to what is in the ec2 metadata
- service under meta-data/. See the example, or, on an ec2 instance,
- run:
- python -c 'import boto.utils, yaml; print(
- yaml.dump(boto.utils.get_instance_metadata()))'
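As a sketch, populating such a seed directory could look like the following. The function name is illustrative; the meta-data and user-data content are minimal examples in the format described above.

```python
import os

def write_nocloud_seed(seed_dir, meta_data, user_data):
    """Write the two files a NoCloud seed directory must contain."""
    os.makedirs(seed_dir, exist_ok=True)
    with open(os.path.join(seed_dir, 'meta-data'), 'w') as f:
        f.write(meta_data)
    with open(os.path.join(seed_dir, 'user-data'), 'w') as f:
        f.write(user_data)

# minimal example content: instance-id is required, the rest optional
meta = "instance-id: iid-local01\nlocal-hostname: myhost\n"
user = "#cloud-config\nruncmd:\n  - [ sh, -c, 'echo HI WORLD' ]\n"
```

Pointing `seed_dir` at /var/lib/cloud/seed/nocloud (or nocloud-net) inside the instance would then let the corresponding datasource pick the data up on boot.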
diff --git a/doc/examples/seed/meta-data b/doc/examples/seed/meta-data
deleted file mode 100644
index d0551448..00000000
--- a/doc/examples/seed/meta-data
+++ /dev/null
@@ -1,30 +0,0 @@
-# this is yaml formatted data
-# it is expected to be roughly what you would get from running the following
-# on an ec2 instance:
-# python -c 'import boto.utils, yaml; print(yaml.dump(boto.utils.get_instance_metadata()))'
-ami-id: ami-fd4aa494
-ami-launch-index: '0'
-ami-manifest-path: ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20100427.1.manifest.xml
-block-device-mapping: {ami: sda1, ephemeral0: sdb, ephemeral1: sdc, root: /dev/sda1}
-hostname: domU-12-31-38-07-19-44.compute-1.internal
-instance-action: none
-instance-id: i-87018aed
-instance-type: m1.large
-kernel-id: aki-c8b258a1
-local-hostname: domU-12-31-38-07-19-44.compute-1.internal
-local-ipv4: 10.223.26.178
-placement: {availability-zone: us-east-1d}
-public-hostname: ec2-184-72-174-120.compute-1.amazonaws.com
-public-ipv4: 184.72.174.120
-public-keys:
- ec2-keypair.us-east-1: [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCD9dlT00vOUC8Ttq6YH8RzUCVqPQl6HaSfWSTKYnZiVCpTBj1CaRZPLRLmkSB9Nziy4aRJa/LZMbBHXytQKnB1psvNknqC2UNlrXXMk+Vx5S4vg21MXYYimK4uZEY0Qz29QUiTyNsx18jpAaF4ocUpTpRhxPEBCcSCDmMbc27MU2XuTbasM2NjW/w0bBF3ZFhdH68dZICXdTxS2jUrtrCnc1D/QXVZ5kQO3jsmSyJg8E0nE+6Onpx2YRoVRSwjpGzVZ+BlXPnN5xBREBG8XxzhNFHJbek+RgK5TfL+k4yD4XhnVZuZu53cBAFhj+xPKhtisSd+YmaEq+Jt9uS0Ekd5
- ec2-keypair.us-east-1, '']
-reservation-id: r-e2225889
-security-groups: default
-
-# of the fields above:
-# required:
-# instance-id
-# suggested:
-# local-hostname
-# public-keys
diff --git a/doc/examples/seed/user-data b/doc/examples/seed/user-data
deleted file mode 100644
index 2bc87c0b..00000000
--- a/doc/examples/seed/user-data
+++ /dev/null
@@ -1,3 +0,0 @@
-#cloud-config
-runcmd:
- - [ sh, -c, 'echo ==== $(date) ====; echo HI WORLD; echo =======' ]
diff --git a/doc/examples/upstart-cloud-config.txt b/doc/examples/upstart-cloud-config.txt
deleted file mode 100644
index 1fcec34d..00000000
--- a/doc/examples/upstart-cloud-config.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-#upstart-job
-description "My test job"
-
-start on cloud-config
-console output
-task
-
-script
-echo "====BEGIN======="
-echo "HELLO WORLD: $UPSTART_JOB"
-echo "=====END========"
-end script
diff --git a/doc/examples/upstart-rclocal.txt b/doc/examples/upstart-rclocal.txt
deleted file mode 100644
index 5cd049a9..00000000
--- a/doc/examples/upstart-rclocal.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-#upstart-job
-description "a test upstart job"
-
-start on stopped rc RUNLEVEL=[2345]
-console output
-task
-
-script
-echo "====BEGIN======="
-echo "HELLO RC.LOCAL LIKE WORLD: $UPSTART_JOB"
-echo "=====END========"
-end script
diff --git a/doc/examples/user-script.txt b/doc/examples/user-script.txt
deleted file mode 100644
index 6a87cad5..00000000
--- a/doc/examples/user-script.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/sh
-
-cat <<EOF
-============================
-My name is ${0}
-I was input via user data
-============================
-EOF
diff --git a/doc/merging.rst b/doc/merging.rst
deleted file mode 100644
index afe1a6dd..00000000
--- a/doc/merging.rst
+++ /dev/null
@@ -1,194 +0,0 @@
-Overview
---------
-
-This was implemented because it has been a common feature request that there
-be a way to specify how cloud-config yaml "dictionaries" provided as user-data
-are merged together when there are multiple yaml documents to merge (say when
-performing an #include).
-
-Previously the merging algorithm was very simple: it would only overwrite,
-never append lists, strings, and so on. It was therefore decided to create a
-new and improved way to merge dictionaries (and their contained objects)
-together in a way that is customizable, thus allowing users who provide
-cloud-config user-data to determine exactly how their objects will be merged.
-
-For example:
-
-.. code-block:: yaml
-
- #cloud-config (1)
- run_cmd:
- - bash1
- - bash2
-
- #cloud-config (2)
- run_cmd:
- - bash3
- - bash4
-
-The previous way of merging the above 2 objects would result in a final
-cloud-config object that contains the following.
-
-.. code-block:: yaml
-
- #cloud-config (merged)
- run_cmd:
- - bash3
- - bash4
-
-Typically this is not what users want; instead they would likely prefer:
-
-.. code-block:: yaml
-
- #cloud-config (merged)
- run_cmd:
- - bash1
- - bash2
- - bash3
- - bash4
-
-Appending makes it easier to combine the various cloud-config objects you
-have into a more useful list, reducing the duplication the previous method
-required to accomplish the same result.
-
-Customizability
----------------
-
-Since the above merging algorithm may not always be the desired one (just as
-the previous merging algorithm was not always the preferred one), the concept
-of customizing how merging is done was introduced through 'merge classes'.
-
-A merge class is a class definition which provides functions that can be used
-to merge a given type with another given type.
-
-An example of one of these merging classes is the following:
-
-.. code-block:: python
-
- class Merger(object):
- def __init__(self, merger, opts):
- self._merger = merger
- self._overwrite = 'overwrite' in opts
-
- # This merging algorithm will attempt to merge with
- # another dictionary, on encountering any other type of object
- # it will not merge with said object, but will instead return
- # the original value
- #
- # On encountering a dictionary, it will create a new dictionary
- # composed of the original and the one to merge with, if 'overwrite'
- # is enabled then keys that exist in the original will be overwritten
- # by keys in the one to merge with (and associated values). Otherwise
- # if not in overwrite mode the 2 conflicting keys themselves will
- # be merged.
- def _on_dict(self, value, merge_with):
- if not isinstance(merge_with, (dict)):
- return value
- merged = dict(value)
- for (k, v) in merge_with.items():
- if k in merged:
- if not self._overwrite:
- merged[k] = self._merger.merge(merged[k], v)
- else:
- merged[k] = v
- else:
- merged[k] = v
- return merged
-
-As you can see there is a '_on_dict' method here that will be given a source
-value and a value to merge with. The result will be the merged object. This
-code itself is called by another merging class which 'directs' the merging by
-analyzing the types of the objects to merge and attempting to find a known
-merger for that type. I will avoid pasting that here, but it can be found
-in the `mergers/__init__.py` file (see `LookupMerger` and `UnknownMerger`).
-
-So, following the typical cloud-init way of allowing source code to be
-downloaded and used dynamically, it is possible for users to inject their own
-merging classes to handle specific types of merging as they choose (the basic
-ones included will handle lists, dicts, and strings). Note how each merger can
-have options associated with it which affect how the merging is performed; for
-example a dictionary merger can be told to overwrite instead of attempting to
-merge, or a string merger can be told to append strings instead of discarding
-other strings to merge with.
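To complement the dictionary merger shown above, a list merger in the same style might look like the following. This is a simplified sketch following the document's Merger class, not cloud-init's actual list merger; the class name and the behavior when 'extend' is absent are illustrative assumptions.

```python
class ListMerger(object):
    """A sketch of a list merger in the style of the Merger class
    above: with the 'extend' option the incoming list is appended,
    otherwise (a simplifying assumption here) the original value
    is returned unchanged."""

    def __init__(self, merger, opts):
        self._merger = merger
        self._extend = 'extend' in opts

    def _on_list(self, value, merge_with):
        # only merge with another list-like value; anything else
        # leaves the original untouched
        if not isinstance(merge_with, (list, tuple)):
            return value
        if not self._extend:
            return value
        return list(value) + list(merge_with)
```

With 'extend' enabled this reproduces the run_cmd example at the top of this document: the two lists are concatenated rather than the second overwriting the first.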
-
-How to activate
----------------
-
-There are a few ways to activate the merging algorithms, and to customize them
-for your own usage.
-
-1. The first way involves the usage of MIME messages in cloud-init to specify
-   multipart documents (this is one way in which multiple cloud-configs are
-   joined together into a single cloud-config). Two new headers are looked
-   for, both of which can define the way merging is done (the first header
-   found wins). These new headers (in lookup order) are 'Merge-Type' and
-   'X-Merge-Type'. The value should be a string which satisfies the new
-   merging format definition (see below for this format).
-2. The second way is specifying the merge-type in the body of the
-   cloud-config dictionary. There are 2 ways to specify this, either as a
-   string or as a dictionary (see format below). The keys that are looked up
-   for this definition are the following (in order): 'merge_how', 'merge_type'.
-
-String format
-*************
-
-The string format that is expected is the following.
-
-::
-
- classname1(option1,option2)+classname2(option3,option4)....
-
-Each class name is looked up against the class names used when finding the
-class that can perform the merge, and the options provided are given to that
-class on construction.
-
-For example, the default string that is used when none is provided is the following:
-
-::
-
- list()+dict()+str()
-
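The string format above can be parsed with a few lines of Python. This is a sketch of the idea only, not the parser cloud-init actually uses, and the function name is hypothetical.

```python
import re

def parse_merge_how(spec):
    """Split a 'name1(opt,opt)+name2()' merge spec into a list of
    (classname, [options]) tuples."""
    parsed = []
    for part in spec.split('+'):
        # each entry must look like classname(opt1,opt2,...)
        match = re.match(r'^(\w+)\(([^)]*)\)$', part.strip())
        if not match:
            raise ValueError("invalid merger entry: %r" % part)
        name, opts = match.groups()
        # drop empty strings left by an empty '()' option list
        parsed.append((name, [o for o in opts.split(',') if o]))
    return parsed
```

Applied to the default string, this yields the three mergers list, dict, and str, each with an empty option list.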
-Dictionary format
-*****************
-
-A dictionary can be used to specify the same information as the string
-format (i.e. option #2 above), for example.
-
-.. code-block:: python
-
- {'merge_how': [{'name': 'list', 'settings': ['extend']},
- {'name': 'dict', 'settings': []},
- {'name': 'str', 'settings': ['append']}]}
-
-This is the equivalent of the string format, expressed in dictionary
-form instead of string form.
-
-Specifying multiple types and its effect
-----------------------------------------
-
-Now you may be asking yourself, if I specify a merge-type header or dictionary
-for every cloud-config that I provide, what exactly happens?
-
-The answer is that when merging, a stack of 'merging classes' is kept. The
-first entry on that stack is the default set of merging classes; this set is
-used when the first cloud-config is merged with the initial empty cloud-config
-dictionary. If the cloud-config that was just merged provided a set of merging
-classes (via the above formats) then those merging classes are pushed onto the
-stack. If there is a second cloud-config to be merged, the merging classes
-from the preceding cloud-config will be used (not the default), and so on.
-This way a cloud-config can decide how it will merge with the cloud-config
-dictionary coming after it.
-
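The stacking behavior described above can be sketched as follows. This is a greatly simplified illustration of the idea; the function names and the `pick_mergers` hook are assumptions for the example, not cloud-init's API.

```python
def merge_all(configs, default_mergers, pick_mergers):
    """Sketch of the merger stack: each config is merged using the
    mergers chosen by the config that preceded it (or the default)."""
    stack = [default_mergers]
    merged = {}
    for conf in configs:
        # merge with the mergers currently on top of the stack
        merged = stack[-1](merged, conf)
        # if this config names its own mergers, push them for the
        # config that comes after it
        chosen = pick_mergers(conf)
        if chosen is not None:
            stack.append(chosen)
    return merged
```

A config that requests list-extending merging thereby controls how the *next* config is merged into it, while itself being merged under the previous rules.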
-Other uses
-----------
-
-In addition to being used for merging user-data sections, the default merging
-algorithm for merging 'conf.d' yaml files (which form an initial yaml config
-for cloud-init) was also changed to use this mechanism, so its full benefits
-(and customization) are available there too. Other places that used the
-previous merging are also, similarly, now extensible (metadata merging,
-for example).
-
-Note, however, that merge algorithms are not used *across* types of
-configuration. As was the case before merging was implemented,
-user-data will overwrite conf.d configuration without merging.
diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
deleted file mode 100644
index 8a391f21..00000000
--- a/doc/rtd/conf.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import os
-import sys
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('../../'))
-sys.path.insert(0, os.path.abspath('../'))
-sys.path.insert(0, os.path.abspath('./'))
-sys.path.insert(0, os.path.abspath('.'))
-
-from cloudinit import version
-
-# Suppress warnings for docs that aren't used yet
-# unused_docs = [
-# ]
-
-# General information about the project.
-project = 'Cloud-Init'
-
-# -- General configuration ----------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [
- 'sphinx.ext.intersphinx',
- 'sphinx.ext.autodoc',
- 'sphinx.ext.viewcode',
-]
-
-intersphinx_mapping = {
- 'sphinx': ('http://sphinx.pocoo.org', None)
-}
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The master toctree document.
-master_doc = 'index'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-version = version.version_string()
-release = version
-
-# Set the default Pygments syntax
-highlight_language = 'python'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = []
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-show_authors = False
-
-# -- Options for HTML output --------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-html_theme = 'default'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further. For a list of options available for each theme, see the
-# documentation.
-html_theme_options = {
- "bodyfont": "Ubuntu, Arial, sans-serif",
- "headfont": "Ubuntu, Arial, sans-serif"
-}
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-html_logo = 'static/logo.png'
diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst
deleted file mode 100644
index fe04b1a9..00000000
--- a/doc/rtd/index.rst
+++ /dev/null
@@ -1,31 +0,0 @@
-.. _index:
-
-=====================
-Documentation
-=====================
-
-.. rubric:: Everything about cloud-init, a set of **python** scripts and utilities to make your cloud images be all they can be!
-
-Summary
------------------
-
-`Cloud-init`_ is the *de facto* multi-distribution package that handles early initialization of a cloud instance.
-
-
-----
-
-.. toctree::
- :maxdepth: 2
-
- topics/capabilities
- topics/availability
- topics/format
- topics/dir_layout
- topics/examples
- topics/datasources
- topics/modules
- topics/merging
- topics/moreinfo
- topics/hacking
-
-.. _Cloud-init: https://launchpad.net/cloud-init
diff --git a/doc/rtd/static/logo.png b/doc/rtd/static/logo.png
deleted file mode 100644
index e980fdea..00000000
--- a/doc/rtd/static/logo.png
+++ /dev/null
Binary files differ
diff --git a/doc/rtd/static/logo.svg b/doc/rtd/static/logo.svg
deleted file mode 100644
index 7a2ae21b..00000000
--- a/doc/rtd/static/logo.svg
+++ /dev/null
@@ -1,89 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<!-- Generator: Adobe Illustrator 16.0.4, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg version="1.1" id="artwork" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
- width="1190.551px" height="841.89px" viewBox="0 0 1190.551 841.89" enable-background="new 0 0 1190.551 841.89"
- xml:space="preserve">
-<g>
- <g>
- <path fill="#000020" d="M219.086,722.686c-9.061,0-17.009-1.439-23.838-4.314c-6.833-2.875-12.586-6.902-17.258-12.082
- c-4.675-5.176-8.164-11.324-10.463-18.443c-2.302-7.119-3.452-14.992-3.452-23.623c0-8.629,1.257-16.537,3.775-23.73
- c2.515-7.189,6.074-13.408,10.679-18.658c4.601-5.25,10.247-9.348,16.935-12.299c6.688-2.945,14.13-4.422,22.328-4.422
- c5.033,0,10.065,0.432,15.101,1.295c5.033,0.863,9.85,2.232,14.454,4.1l-4.53,17.041c-3.021-1.436-6.509-2.588-10.463-3.451
- c-3.957-0.863-8.164-1.295-12.62-1.295c-11.218,0-19.813,3.527-25.779,10.572c-5.97,7.049-8.953,17.332-8.953,30.848
- c0,6.041,0.681,11.58,2.05,16.611c1.365,5.037,3.522,9.35,6.472,12.945c2.946,3.596,6.721,6.361,11.326,8.305
- c4.601,1.941,10.21,2.912,16.827,2.912c5.319,0,10.139-0.502,14.454-1.51c4.314-1.006,7.692-2.084,10.139-3.236l2.804,16.826
- c-1.153,0.723-2.804,1.402-4.962,2.051c-2.157,0.646-4.604,1.219-7.334,1.725c-2.734,0.502-5.646,0.934-8.737,1.295
- C224.944,722.504,221.961,722.686,219.086,722.686z"/>
- <path fill="#000020" d="M298.343,722.254c-12.371-0.289-21.141-2.945-26.319-7.982c-5.178-5.031-7.766-12.869-7.766-23.514
- V556.143l20.062-3.451v134.83c0,3.311,0.287,6.041,0.863,8.197c0.573,2.158,1.51,3.885,2.805,5.178
- c1.294,1.295,3.02,2.266,5.177,2.914c2.158,0.646,4.817,1.186,7.982,1.617L298.343,722.254z"/>
- <path fill="#000020" d="M415.288,664.008c0,8.92-1.294,16.971-3.883,24.162c-2.589,7.193-6.223,13.375-10.895,18.553
- c-4.675,5.176-10.247,9.168-16.72,11.971c-6.471,2.807-13.52,4.207-21.141,4.207c-7.624,0-14.669-1.4-21.141-4.207
- c-6.472-2.803-12.047-6.795-16.72-11.971c-4.675-5.178-8.305-11.359-10.894-18.553c-2.588-7.191-3.883-15.242-3.883-24.162
- c0-8.771,1.294-16.789,3.883-24.055c2.589-7.26,6.219-13.482,10.894-18.66c4.673-5.178,10.248-9.168,16.72-11.973
- c6.472-2.805,13.517-4.207,21.141-4.207c7.621,0,14.67,1.402,21.141,4.207c6.473,2.805,12.045,6.795,16.72,11.973
- c4.672,5.178,8.306,11.4,10.895,18.66C413.994,647.219,415.288,655.236,415.288,664.008z M394.362,664.008
- c0-12.654-2.841-22.686-8.521-30.094c-5.684-7.406-13.412-11.111-23.191-11.111c-9.781,0-17.512,3.705-23.19,11.111
- c-5.683,7.408-8.521,17.439-8.521,30.094c0,12.656,2.838,22.688,8.521,30.094c5.679,7.41,13.409,11.109,23.19,11.109
- c9.779,0,17.508-3.699,23.191-11.109C391.521,686.695,394.362,676.664,394.362,664.008z"/>
- <path fill="#000020" d="M527.186,716.861c-4.604,1.152-10.678,2.373-18.229,3.666c-7.551,1.295-16.288,1.943-26.211,1.943
- c-8.629,0-15.893-1.262-21.789-3.777c-5.898-2.514-10.645-6.072-14.238-10.678c-3.596-4.602-6.184-10.031-7.766-16.287
- c-1.584-6.256-2.373-13.193-2.373-20.818v-62.992h20.062v58.678c0,13.666,2.158,23.443,6.472,29.34
- c4.315,5.898,11.575,8.844,21.789,8.844c2.157,0,4.386-0.07,6.688-0.215c2.299-0.141,4.455-0.324,6.471-0.539
- c2.012-0.215,3.846-0.432,5.501-0.648c1.652-0.215,2.839-0.465,3.56-0.754v-94.705h20.062V716.861z"/>
- <path fill="#000020" d="M628.963,556.143l20.064-3.451v164.17c-4.605,1.293-10.502,2.588-17.691,3.883
- c-7.193,1.295-15.461,1.941-24.809,1.941c-8.629,0-16.396-1.369-23.298-4.098c-6.903-2.732-12.802-6.615-17.69-11.65
- c-4.891-5.033-8.666-11.182-11.326-18.445c-2.662-7.26-3.99-15.424-3.99-24.484c0-8.629,1.113-16.537,3.344-23.73
- c2.229-7.189,5.502-13.375,9.816-18.553c4.313-5.178,9.6-9.201,15.855-12.08c6.256-2.875,13.409-4.314,21.465-4.314
- c6.471,0,12.188,0.863,17.15,2.588c4.961,1.727,8.662,3.381,11.109,4.963V556.143z M628.963,631.648
- c-2.447-2.014-5.969-3.953-10.57-5.824c-4.604-1.867-9.637-2.805-15.102-2.805c-5.754,0-10.678,1.045-14.776,3.127
- c-4.1,2.088-7.443,4.963-10.033,8.631c-2.588,3.666-4.459,8.018-5.607,13.051c-1.153,5.035-1.727,10.43-1.727,16.18
- c0,13.088,3.236,23.189,9.708,30.311c6.472,7.119,15.101,10.678,25.888,10.678c5.463,0,10.031-0.25,13.697-0.756
- c3.668-0.502,6.506-1.041,8.521-1.617V631.648z"/>
- <path fill="#000020" d="M671.375,646.102h53.283v18.77h-53.283V646.102z"/>
- <path fill="#000020" d="M755.745,587.641c-3.598,0-6.654-1.188-9.168-3.561c-2.52-2.373-3.777-5.57-3.777-9.6
- c0-4.025,1.258-7.227,3.777-9.6c2.514-2.373,5.57-3.559,9.168-3.559c3.592,0,6.65,1.186,9.168,3.559
- c2.516,2.373,3.775,5.574,3.775,9.6c0,4.029-1.26,7.227-3.775,9.6C762.395,586.453,759.336,587.641,755.745,587.641z
- M765.883,720.098h-20.062v-112.18h20.062V720.098z"/>
- <path fill="#000020" d="M794.401,611.154c4.602-1.148,10.713-2.373,18.336-3.668c7.621-1.293,16.396-1.941,26.32-1.941
- c8.914,0,16.32,1.262,22.219,3.775c5.896,2.518,10.605,6.041,14.131,10.57c3.523,4.531,6.002,9.961,7.441,16.287
- c1.438,6.332,2.158,13.305,2.158,20.926v62.994h-20.062v-58.68c0-6.902-0.469-12.797-1.402-17.689
- c-0.938-4.887-2.48-8.844-4.639-11.863c-2.156-3.021-5.035-5.213-8.629-6.58c-3.596-1.365-8.055-2.051-13.375-2.051
- c-2.156,0-4.389,0.074-6.688,0.217c-2.303,0.145-4.496,0.322-6.58,0.539c-2.086,0.215-3.957,0.469-5.609,0.756
- c-1.654,0.289-2.84,0.504-3.559,0.646v94.705h-20.062V611.154z"/>
- <path fill="#000020" d="M922.088,587.641c-3.598,0-6.654-1.188-9.168-3.561c-2.52-2.373-3.777-5.57-3.777-9.6
- c0-4.025,1.258-7.227,3.777-9.6c2.514-2.373,5.57-3.559,9.168-3.559c3.592,0,6.65,1.186,9.168,3.559
- c2.516,2.373,3.775,5.574,3.775,9.6c0,4.029-1.26,7.227-3.775,9.6C928.739,586.453,925.68,587.641,922.088,587.641z
- M932.227,720.098h-20.062v-112.18h20.062V720.098z"/>
- <path fill="#000020" d="M979.663,607.918h42.5v16.828h-42.5v51.773c0,5.609,0.432,10.248,1.295,13.914
- c0.863,3.668,2.158,6.547,3.883,8.631c1.727,2.086,3.883,3.559,6.473,4.422c2.588,0.863,5.607,1.293,9.061,1.293
- c6.041,0,10.895-0.68,14.561-2.049c3.668-1.365,6.221-2.336,7.658-2.912l3.885,16.611c-2.016,1.008-5.539,2.264-10.57,3.775
- c-5.037,1.51-10.787,2.266-17.26,2.266c-7.625,0-13.914-0.971-18.877-2.914c-4.961-1.941-8.951-4.854-11.971-8.736
- c-3.021-3.883-5.145-8.662-6.365-14.346c-1.223-5.68-1.834-12.26-1.834-19.738V576.639l20.062-3.453V607.918z"/>
- </g>
- <g>
- <path fill="#E95420" d="M595.275,150.171c-93.932,0-170.078,76.146-170.078,170.079c0,53.984,25.157,102.088,64.381,133.245
- v-37.423c0-8.807,7.137-15.943,15.943-15.943s15.947,7.137,15.947,15.943v57.45c3.478,1.678,7.024,3.233,10.629,4.677v-62.127
- c0-8.807,7.139-15.943,15.943-15.943c8.807,0,15.945,7.137,15.945,15.943v71.374c3.508,0.652,7.053,1.198,10.633,1.631v-73.005
- c0-8.807,7.139-15.943,15.945-15.943c8.805,0,15.944,7.137,15.944,15.943v73.878c3.572-0.232,7.119-0.565,10.629-1.016V352.287
- c0-8.801,7.137-15.943,15.943-15.943s15.943,7.143,15.943,15.943v129.365c67.59-22.497,116.33-86.255,116.33-161.402
- C765.354,226.317,689.208,150.171,595.275,150.171z"/>
- <path fill="#FFFFFF" d="M696.856,320.25H569.303c-21.133,0-38.27-17.125-38.27-38.27c0-20.339,15.871-36.965,35.898-38.192
- c-8.953-17.953-2.489-40.012,15.128-50.188c17.611-10.165,39.949-4.739,51.019,11.997c11.076-16.736,33.416-22.162,51.025-11.994
- c17.621,10.173,24.08,32.226,15.119,50.185c20.037,1.228,35.906,17.854,35.906,38.192
- C735.129,303.125,717.993,320.25,696.856,320.25z"/>
- <g>
- <path fill="#E95420" d="M633.014,271.05c4.074,0,7.375-3.302,7.375-7.371v-34.547c0-4.07-3.301-7.37-7.375-7.37
- c-4.068,0-7.369,3.3-7.369,7.37v34.547C625.645,267.748,628.946,271.05,633.014,271.05z"/>
- <path fill="#E95420" d="M650.254,238.483c-3.043,2.7-3.316,7.36-0.615,10.405c7.746,8.728,7.348,22.036-0.916,30.302
- c-4.174,4.176-9.729,6.476-15.639,6.476c-5.9,0-11.457-2.303-15.637-6.479c-8.291-8.291-8.67-21.625-0.863-30.356
- c2.715-3.034,2.455-7.695-0.578-10.409c-3.035-2.712-7.699-2.453-10.41,0.582c-13.02,14.556-12.391,36.787,1.43,50.607
- c6.961,6.963,16.211,10.797,26.059,10.797c0,0,0,0,0.002,0c9.846,0,19.098-3.831,26.064-10.793
- c13.77-13.776,14.432-35.96,1.512-50.514C657.961,236.055,653.299,235.777,650.254,238.483z"/>
- <path fill="#E95420" d="M632.788,344.26c-4.4,0-7.969,3.568-7.969,7.969c0,4.406,3.568,7.975,7.969,7.975
- c4.406,0,7.975-3.568,7.975-7.975C640.762,347.828,637.194,344.26,632.788,344.26z"/>
- </g>
- </g>
-</g>
-</svg>
diff --git a/doc/rtd/topics/availability.rst b/doc/rtd/topics/availability.rst
deleted file mode 100644
index 2d58f808..00000000
--- a/doc/rtd/topics/availability.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-============
-Availability
-============
-
-Cloud-init is currently installed in the `Ubuntu Cloud Images`_ and also in the official `Ubuntu`_ images available on EC2.
-
-Versions for other systems can be (or have been) created for the following distributions:
-
-- Ubuntu
-- Fedora
-- Debian
-- RHEL
-- CentOS
-- *and more...*
-
-So ask your distribution provider where you can obtain an image with it built-in if one is not already available ☺
-
-
-.. _Ubuntu Cloud Images: http://cloud-images.ubuntu.com/
-.. _Ubuntu: http://www.ubuntu.com/
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
deleted file mode 100644
index 63b34270..00000000
--- a/doc/rtd/topics/capabilities.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-=====================
-Capabilities
-=====================
-
-- Setting a default locale
-- Setting an instance hostname
-- Generating instance ssh private keys
-- Adding ssh keys to a user's ``.ssh/authorized_keys`` so they can log in
-- Setting up ephemeral mount points
-
-User configurability
---------------------
-
-`Cloud-init`_ 's behavior can be configured via user-data.
-
- User-data can be given by the user at instance launch time.
-
-This is done via the ``--user-data`` or ``--user-data-file`` argument to ec2-run-instances for example.
-
-* Check your local client's documentation for how to provide a `user-data`
-  string or `user-data` file for usage by cloud-init on instance creation.
-
-
-.. _Cloud-init: https://launchpad.net/cloud-init
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
deleted file mode 100644
index 0d7d4aca..00000000
--- a/doc/rtd/topics/datasources.rst
+++ /dev/null
@@ -1,200 +0,0 @@
-.. _datasources:
-
-===========
-Datasources
-===========
----------------------
-What is a datasource?
----------------------
-
-Datasources are sources of configuration data for cloud-init that typically
-come from the user (aka userdata) or from the stack that created the
-configuration drive (aka metadata). Typical userdata would include files,
-yaml, and shell scripts while typical metadata would include server name,
-instance id, display name and other cloud specific details. Since there are
-multiple ways to provide this data (each cloud solution seems to prefer its
-own way), internally a datasource abstract class was created to provide a
-single way to access the different cloud systems' methods of providing this
-data, through the typical usage of subclasses.
-
-The current interface that a datasource object must provide is the following:
-
-.. sourcecode:: python
-
- # returns a mime multipart message that contains
- # all the various fully-expanded components that
- # were found from processing the raw userdata string
- # - when filtering only the mime messages targeting
- # this instance id will be returned (or messages with
- # no instance id)
- def get_userdata(self, apply_filter=False)
-
-    # returns the raw userdata string (or None)
-    def get_userdata_raw(self)
-
-    # returns an integer (or None) which can be used to identify
-    # this instance in a group of instances which are typically
-    # created from a single command, thus allowing programmatic
-    # filtering on this launch index (or other selective actions)
- @property
- def launch_index(self)
-
-    # the datasource's config_obj is a cloud-config formatted
- # object that came to it from ways other than cloud-config
- # because cloud-config content would be handled elsewhere
- def get_config_obj(self)
-
-    # returns a list of public ssh keys
- def get_public_ssh_keys(self)
-
- # translates a device 'short' name into the actual physical device
-    # fully qualified name (or None if said physical device is not attached
- # or does not exist)
- def device_name_to_device(self, name)
-
-    # gets the locale string this instance should be applying,
-    # which is typically used to adjust the instance's locale settings files
- def get_locale(self)
-
- @property
- def availability_zone(self)
-
- # gets the instance id that was assigned to this instance by the
- # cloud provider or when said instance id does not exist in the backing
- # metadata this will return 'iid-datasource'
- def get_instance_id(self)
-
- # gets the fully qualified domain name that this host should be using
- # when configuring network or hostname releated settings, typically
- # assigned either by the cloud provider or the user creating the vm
- def get_hostname(self, fqdn=False)
-
- def get_package_mirror_info(self)
-
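To make the interface above concrete, here is a minimal standalone sketch of a datasource-like class. It deliberately does not import cloud-init itself; the class name and the metadata keys are illustrative assumptions, not cloud-init internals:

```python
# A minimal, self-contained sketch mirroring the datasource interface above.
# It is NOT a real cloud-init subclass; names beyond the interface methods
# are illustrative.
class StaticDataSource:
    def __init__(self, metadata=None, userdata_raw=None):
        self.metadata = metadata or {}
        self.userdata_raw = userdata_raw

    def get_userdata_raw(self):
        return self.userdata_raw

    @property
    def launch_index(self):
        return self.metadata.get('launch-index')

    def get_public_ssh_keys(self):
        return self.metadata.get('public-keys', [])

    def get_instance_id(self):
        # Fall back to the documented default when no id is available.
        return self.metadata.get('instance-id', 'iid-datasource')

    def get_hostname(self, fqdn=False):
        key = 'fqdn' if fqdn else 'local-hostname'
        return self.metadata.get(key)

ds = StaticDataSource({'instance-id': 'i-1234', 'local-hostname': 'myhost'})
print(ds.get_instance_id())  # i-1234
print(StaticDataSource().get_instance_id())  # iid-datasource
```
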
----------------------------
-EC2
----------------------------
-
-The EC2 datasource is the oldest and most widely used datasource that cloud-init
-supports. This datasource interacts with a *magic* ip that is provided to the
-instance by the cloud provider. Typically this ip is ``169.254.169.254``; an http
-server at that ip allows the instance to make calls to fetch its instance
-userdata and instance metadata.
-
-Metadata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/meta-data/
- ami-id
- ami-launch-index
- ami-manifest-path
- block-device-mapping/
- hostname
- instance-id
- instance-type
- local-hostname
- local-ipv4
- placement/
- public-hostname
- public-ipv4
- public-keys/
- reservation-id
- security-groups
-
-Userdata is accessible via the following URL:
-
-::
-
- GET http://169.254.169.254/2009-04-04/user-data
- 1234,fred,reboot,true | 4512,jimbo, | 173,,,
-
-Note that there are multiple versions of this data provided; by default cloud-init
-uses **2009-04-04**, but newer versions can be supported with relative ease
-(newer versions expose more data while maintaining backward compatibility with
-the previous versions).
-
-To see which versions are supported by your cloud provider, use the following URL:
-
-::
-
- GET http://169.254.169.254/
- 1.0
- 2007-01-19
- 2007-03-01
- 2007-08-29
- 2007-10-10
- 2007-12-15
- 2008-02-01
- 2008-09-01
- 2009-04-04
- ...
- latest
-
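From inside an instance, these same endpoints can be queried with any http client; a small sketch follows (the helper names are illustrative, and the actual request only succeeds from within an instance, since the address is link-local):

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/2009-04-04"

def metadata_url(path):
    # Build the versioned metadata URL for a given key, e.g. 'instance-id'.
    return "%s/meta-data/%s" % (METADATA_BASE, path)

def fetch(url, timeout=5):
    # Only works from inside an instance; the link-local address is
    # unreachable anywhere else.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode()

print(metadata_url("instance-id"))
# http://169.254.169.254/2009-04-04/meta-data/instance-id
```
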
----------------------------
-Config Drive
----------------------------
-
-.. include:: ../../sources/configdrive/README.rst
-
----------------------------
-OpenNebula
----------------------------
-
-.. include:: ../../sources/opennebula/README.rst
-
----------------------------
-Alt cloud
----------------------------
-
-.. include:: ../../sources/altcloud/README.rst
-
----------------------------
-No cloud
----------------------------
-
-.. include:: ../../sources/nocloud/README.rst
-
----------------------------
-MAAS
----------------------------
-
-*TODO*
-
-For now see: http://maas.ubuntu.com/
-
----------------------------
-CloudStack
----------------------------
-
-.. include:: ../../sources/cloudstack/README.rst
-
----------------------------
-OVF
----------------------------
-
-*TODO*
-
-For now see: https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/files/head:/doc/sources/ovf/
-
----------------------------
-OpenStack
----------------------------
-
-.. include:: ../../sources/openstack/README.rst
-
----------------------------
-Fallback/None
----------------------------
-
-This is the fallback datasource when no other datasource can be selected. It is
-the equivalent of an *empty* datasource, in that it provides an empty string as
-userdata and an empty dictionary as metadata. It is useful for testing, as well as
-for when you do not need an actual datasource to meet your instance requirements
-(i.e. you just want to run modules that are not concerned with any external data).
-It is typically put at the end of the datasource search list so that, if no other
-datasource matches, this one will, and the user is not left with an inaccessible
-instance.
-
-**Note:** the instance id that this datasource provides is ``iid-datasource-none``.
-
-.. _boto: http://docs.pythonboto.org/en/latest/
diff --git a/doc/rtd/topics/dir_layout.rst b/doc/rtd/topics/dir_layout.rst
deleted file mode 100644
index 8815d33d..00000000
--- a/doc/rtd/topics/dir_layout.rst
+++ /dev/null
@@ -1,81 +0,0 @@
-================
-Directory layout
-================
-
-Cloud-init's directory structure is somewhat different from that of a regular application::
-
- /var/lib/cloud/
- - data/
- - instance-id
- - previous-instance-id
- - datasource
- - previous-datasource
- - previous-hostname
- - handlers/
- - instance
- - instances/
- i-00000XYZ/
- - boot-finished
- - cloud-config.txt
- - datasource
- - handlers/
- - obj.pkl
- - scripts/
- - sem/
- - user-data.txt
- - user-data.txt.i
- - scripts/
- - per-boot/
- - per-instance/
- - per-once/
- - seed/
- - sem/
-
-``/var/lib/cloud``
-
- The main directory containing the cloud-init specific subdirectories.
- It is typically located at ``/var/lib`` but there are certain configuration
- scenarios where this can be altered.
-
- TBD, describe this overriding more.
-
-``data/``
-
-  Contains information related to instance ids, datasources and hostnames of the
-  previous and current instance (if they are different). These can be examined as
-  needed to determine any information related to a previous boot (if applicable).
-
-``handlers/``
-
-  Custom ``part-handlers`` code is written out here. Files that end up here are
-  written out within the scheme of ``part-handler-XYZ``, where ``XYZ`` is the
-  handler number (the first handler found starts at 0).
-
-
-``instance``
-
-  A symlink to the ``instances/`` subdirectory that points to the currently
-  active instance (which one is active depends on the datasource loaded).
-
-``instances/``
-
-  All instances that were created using this image end up with instance identifier
-  subdirectories (and corresponding data for each instance). The currently active
-  instance will be symlinked to via the ``instance`` symlink file described previously.
-
-``scripts/``
-
- Scripts that are downloaded/created by the corresponding ``part-handler`` will end up
- in one of these subdirectories.
-
-``seed/``
-
- TBD
-
-``sem/``
-
-  Cloud-init has a concept of a module semaphore, which basically consists of the
-  module name and its frequency. These files are used to ensure a module is only
-  run ``per-once``, ``per-instance``, or ``per-always``. This folder contains
-  semaphore *files* for modules that are only supposed to run ``per-once``
-  (not tied to the instance id).
-
diff --git a/doc/rtd/topics/examples.rst b/doc/rtd/topics/examples.rst
deleted file mode 100644
index 36508bde..00000000
--- a/doc/rtd/topics/examples.rst
+++ /dev/null
@@ -1,133 +0,0 @@
-.. _yaml_examples:
-
-=====================
-Cloud config examples
-=====================
-
-Including users and groups
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-user-groups.txt
- :language: yaml
- :linenos:
-
-
-Writing out arbitrary files
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-write-files.txt
- :language: yaml
- :linenos:
-
-
-Adding a yum repository
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-yum-repo.txt
- :language: yaml
- :linenos:
-
-Configure an instance's trusted CA certificates
-------------------------------------------------------
-
-.. literalinclude:: ../../examples/cloud-config-ca-certs.txt
- :language: yaml
- :linenos:
-
-Configure an instance's resolv.conf
-------------------------------------------------------
-
-*Note:* when using a config drive on a RHEL-like system, resolv.conf will also be
-managed 'automatically' due to the available information provided for dns servers
-in the config drive network format. For those that wish to have different
-settings, use this module.
-
-.. literalinclude:: ../../examples/cloud-config-resolv-conf.txt
- :language: yaml
- :linenos:
-
-Install and run `chef`_ recipes
-------------------------------------------------------
-
-.. literalinclude:: ../../examples/cloud-config-chef.txt
- :language: yaml
- :linenos:
-
-Setup and run `puppet`_
-------------------------------------------------------
-
-.. literalinclude:: ../../examples/cloud-config-puppet.txt
- :language: yaml
- :linenos:
-
-Add apt repositories
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-add-apt-repos.txt
- :language: yaml
- :linenos:
-
-Run commands on first boot
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-boot-cmds.txt
- :language: yaml
- :linenos:
-
-.. literalinclude:: ../../examples/cloud-config-run-cmds.txt
- :language: yaml
- :linenos:
-
-
-Alter the completion message
-----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-final-message.txt
- :language: yaml
- :linenos:
-
-Install arbitrary packages
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-install-packages.txt
- :language: yaml
- :linenos:
-
-Run apt or yum upgrade
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-update-packages.txt
- :language: yaml
- :linenos:
-
-Adjust mount points mounted
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-mount-points.txt
- :language: yaml
- :linenos:
-
-Call a url when finished
----------------------------
-
-.. literalinclude:: ../../examples/cloud-config-phone-home.txt
- :language: yaml
- :linenos:
-
-Reboot/poweroff when finished
------------------------------
-
-.. literalinclude:: ../../examples/cloud-config-power-state.txt
- :language: yaml
- :linenos:
-
-Configure an instance's ssh keys
---------------------------------
-
-.. literalinclude:: ../../examples/cloud-config-ssh-keys.txt
- :language: yaml
- :linenos:
-
-
-.. _chef: http://www.opscode.com/chef/
-.. _puppet: http://puppetlabs.com/
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
deleted file mode 100644
index eba9533f..00000000
--- a/doc/rtd/topics/format.rst
+++ /dev/null
@@ -1,159 +0,0 @@
-=========
-Formats
-=========
-
-User data that will be acted upon by cloud-init must be in one of the following types.
-
-Gzip Compressed Content
-------------------------
-
-Content found to be gzip compressed will be uncompressed.
-The uncompressed data will then be used as if it were not compressed.
-This is typically useful because user-data is limited to ~16384 [#]_ bytes.
-
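For example, a cloud-config payload can be gzip-compressed before being handed over as user-data (a sketch; cloud-init performs the matching decompression itself, the payload shown is illustrative):

```python
import gzip

user_data = b"#cloud-config\npackages:\n - htop\n"

# Compress the payload; cloud-init detects the gzip magic bytes and
# transparently decompresses before applying the usual format rules.
compressed = gzip.compress(user_data)

assert gzip.decompress(compressed) == user_data
print(len(user_data), "->", len(compressed), "bytes")
```
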
-Mime Multi Part Archive
-------------------------
-
-Using a mime multi-part file, the user can specify more than one type of data;
-the list of rules above is applied to each part of the multi-part file.
-
-For example, both a user data script and a cloud-config type could be specified.
-
-Supported content-types:
-
-- text/x-include-once-url
-- text/x-include-url
-- text/cloud-config-archive
-- text/upstart-job
-- text/cloud-config
-- text/part-handler
-- text/x-shellscript
-- text/cloud-boothook
-
-Helper script to generate mime messages
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. code-block:: python
-
-    #!/usr/bin/python
-
-    import sys
-
-    from email.mime.multipart import MIMEMultipart
-    from email.mime.text import MIMEText
-
-    if len(sys.argv) == 1:
-        print("%s input-file:type ..." % (sys.argv[0]))
-        sys.exit(1)
-
-    combined_message = MIMEMultipart()
-    for i in sys.argv[1:]:
-        # Each argument is "filename:content-subtype", e.g.
-        # "config.yaml:cloud-config" or "setup.sh:x-shellscript".
-        (filename, format_type) = i.split(":", 1)
-        with open(filename) as fh:
-            contents = fh.read()
-        sub_message = MIMEText(contents, format_type, sys.getdefaultencoding())
-        sub_message.add_header('Content-Disposition',
-                               'attachment; filename="%s"' % (filename))
-        combined_message.attach(sub_message)
-
-    print(combined_message)
-
-
-User-Data Script
-------------------------
-
-Typically used by those who just want to execute a shell script.
-
-Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
-
-Example
-~~~~~~~
-
-::
-
- $ cat myscript.sh
-
- #!/bin/sh
- echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
-
- $ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9
-
-Include File
-------------
-
-This content is an ``include`` file.
-
-The file contains a list of urls, one per line. Each of the URLs will be read,
-and their content will be passed through this same set of rules, i.e. the content
-read from a URL can be gzipped, mime-multi-part, or plain text.
-
-Begins with: ``#include`` or ``Content-Type: text/x-include-url`` when using a MIME archive.
-
-Cloud Config Data
------------------
-
-Cloud-config is the simplest way to accomplish some things via user-data. Using
-cloud-config syntax, the user can specify certain things in a human-friendly format.
-
-These things include:
-
-- apt upgrade should be run on first boot
-- a different apt mirror should be used
-- additional apt sources should be added
-- certain ssh keys should be imported
-- *and many more...*
-
-**Note:** The file must be valid yaml syntax.
-
-See the :ref:`yaml_examples` section for a commented set of examples of supported cloud config formats.
-
-Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
-
-Upstart Job
------------
-
-Content is placed into a file in ``/etc/init``, and will be consumed by upstart as any other upstart job.
-
-Begins with: ``#upstart-job`` or ``Content-Type: text/upstart-job`` when using a MIME archive.
-
-Cloud Boothook
---------------
-
-This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately.
-This is the earliest ``hook`` available. Note that there is no mechanism provided for running it only once; the boothook must take care of this itself.
-It is provided with the instance id in the environment variable ``INSTANCE_ID``, which could be used to implement a 'once-per-instance' type of functionality.
-
-Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive.
-
-Part Handler
-------------
-
-This is a ``part-handler``. It will be written to a file in ``/var/lib/cloud/data`` based on its filename (which is generated).
-This must be python code that contains a ``list_types`` function and a ``handle_part`` function.
-Once the section is read, the ``list_types`` function will be called. It must return a list of mime-types that this part-handler handles.
-
-The ``handle_part`` function must be like:
-
-.. code-block:: python
-
-    def handle_part(data, ctype, filename, payload):
-        # data = the cloudinit object
-        # ctype = "__begin__", "__end__", or the mime-type of the part being handled
-        # filename = the filename of the part (or a generated filename if none is present in mime data)
-        # payload = the part's content
-
-Cloud-init will then call the ``handle_part`` function once at begin, once per part received, and once at end.
-The ``begin`` and ``end`` calls are to allow the part handler to do initialization or teardown.
-
-Begins with: ``#part-handler`` or ``Content-Type: text/part-handler`` when using a MIME archive.
-
-Example
-~~~~~~~
-
-.. literalinclude:: ../../examples/part-handler.txt
- :language: python
- :linenos:
-
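Since the full example above is pulled in from a separate file, the following is a minimal standalone sketch of a part-handler (the mime-type and file names are illustrative); the simulated calls at the bottom mirror the begin/part/end sequence described above:

```python
#part-handler
# Minimal part-handler sketch: it simply collects the parts it receives.

received = []

def list_types():
    # Return the mime-types this handler wants to receive.
    return ["text/plain"]

def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        received.clear()      # one-time setup
        return
    if ctype == "__end__":
        return                # teardown
    received.append((filename, payload))

# Simulate the calls cloud-init would make:
handle_part(None, "__begin__", None, None)
handle_part(None, "text/plain", "greeting.txt", "hello")
handle_part(None, "__end__", None, None)
print(received)
```
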
-Also this `blog`_ post offers another example for more advanced usage.
-
-.. [#] See your cloud provider for applicable user-data size limitations.
-.. _blog: http://foss-boss.blogspot.com/2011/01/advanced-cloud-init-custom-handlers.html
diff --git a/doc/rtd/topics/hacking.rst b/doc/rtd/topics/hacking.rst
deleted file mode 100644
index 96ab88ef..00000000
--- a/doc/rtd/topics/hacking.rst
+++ /dev/null
@@ -1 +0,0 @@
-.. include:: ../../../HACKING.rst
diff --git a/doc/rtd/topics/merging.rst b/doc/rtd/topics/merging.rst
deleted file mode 100644
index 2bd87b16..00000000
--- a/doc/rtd/topics/merging.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-==========================
-Merging User-Data Sections
-==========================
-
-.. include:: ../../merging.rst
diff --git a/doc/rtd/topics/modules.rst b/doc/rtd/topics/modules.rst
deleted file mode 100644
index 4202338b..00000000
--- a/doc/rtd/topics/modules.rst
+++ /dev/null
@@ -1,342 +0,0 @@
-=======
-Modules
-=======
-
-Apt Configure
--------------
-
-**Internal name:** ``cc_apt_configure``
-
-.. automodule:: cloudinit.config.cc_apt_configure
-
-Apt Pipelining
---------------
-
-**Internal name:** ``cc_apt_pipelining``
-
-.. automodule:: cloudinit.config.cc_apt_pipelining
-
-Bootcmd
--------
-
-**Internal name:** ``cc_bootcmd``
-
-.. automodule:: cloudinit.config.cc_bootcmd
-
-Byobu
------
-
-**Internal name:** ``cc_byobu``
-
-.. automodule:: cloudinit.config.cc_byobu
-
-Ca Certs
---------
-
-**Internal name:** ``cc_ca_certs``
-
-.. automodule:: cloudinit.config.cc_ca_certs
-
-Chef
-----
-
-**Internal name:** ``cc_chef``
-
-.. automodule:: cloudinit.config.cc_chef
- :members:
-
-Debug
------
-
-**Internal name:** ``cc_debug``
-
-.. automodule:: cloudinit.config.cc_debug
- :members:
-
-Disable Ec2 Metadata
---------------------
-
-**Internal name:** ``cc_disable_ec2_metadata``
-
-.. automodule:: cloudinit.config.cc_disable_ec2_metadata
-
-Disk Setup
-----------
-
-**Internal name:** ``cc_disk_setup``
-
-.. automodule:: cloudinit.config.cc_disk_setup
-
-Emit Upstart
-------------
-
-**Internal name:** ``cc_emit_upstart``
-
-.. automodule:: cloudinit.config.cc_emit_upstart
-
-Final Message
--------------
-
-**Internal name:** ``cc_final_message``
-
-.. automodule:: cloudinit.config.cc_final_message
-
-Foo
----
-
-**Internal name:** ``cc_foo``
-
-.. automodule:: cloudinit.config.cc_foo
-
-Growpart
---------
-
-**Internal name:** ``cc_growpart``
-
-.. automodule:: cloudinit.config.cc_growpart
-
-Grub Dpkg
----------
-
-**Internal name:** ``cc_grub_dpkg``
-
-.. automodule:: cloudinit.config.cc_grub_dpkg
-
-Keys To Console
----------------
-
-**Internal name:** ``cc_keys_to_console``
-
-.. automodule:: cloudinit.config.cc_keys_to_console
-
-Landscape
----------
-
-**Internal name:** ``cc_landscape``
-
-.. automodule:: cloudinit.config.cc_landscape
-
-Locale
-------
-
-**Internal name:** ``cc_locale``
-
-.. automodule:: cloudinit.config.cc_locale
-
-Mcollective
------------
-
-**Internal name:** ``cc_mcollective``
-
-.. automodule:: cloudinit.config.cc_mcollective
-
-Migrator
---------
-
-**Internal name:** ``cc_migrator``
-
-.. automodule:: cloudinit.config.cc_migrator
-
-Mounts
-------
-
-**Internal name:** ``cc_mounts``
-
-.. automodule:: cloudinit.config.cc_mounts
-
-Package Update Upgrade Install
-------------------------------
-
-**Internal name:** ``cc_package_update_upgrade_install``
-
-.. automodule:: cloudinit.config.cc_package_update_upgrade_install
-
-Phone Home
-----------
-
-**Internal name:** ``cc_phone_home``
-
-.. automodule:: cloudinit.config.cc_phone_home
-
-Power State Change
-------------------
-
-**Internal name:** ``cc_power_state_change``
-
-.. automodule:: cloudinit.config.cc_power_state_change
-
-Puppet
-------
-
-**Internal name:** ``cc_puppet``
-
-.. automodule:: cloudinit.config.cc_puppet
-
-Resizefs
---------
-
-**Internal name:** ``cc_resizefs``
-
-.. automodule:: cloudinit.config.cc_resizefs
-
-Resolv Conf
------------
-
-**Internal name:** ``cc_resolv_conf``
-
-.. automodule:: cloudinit.config.cc_resolv_conf
-
-Rightscale Userdata
--------------------
-
-**Internal name:** ``cc_rightscale_userdata``
-
-.. automodule:: cloudinit.config.cc_rightscale_userdata
-
-Rsyslog
--------
-
-**Internal name:** ``cc_rsyslog``
-
-.. automodule:: cloudinit.config.cc_rsyslog
-
-Runcmd
-------
-
-**Internal name:** ``cc_runcmd``
-
-.. automodule:: cloudinit.config.cc_runcmd
-
-Salt Minion
------------
-
-**Internal name:** ``cc_salt_minion``
-
-.. automodule:: cloudinit.config.cc_salt_minion
-
-Scripts Per Boot
-----------------
-
-**Internal name:** ``cc_scripts_per_boot``
-
-.. automodule:: cloudinit.config.cc_scripts_per_boot
-
-Scripts Per Instance
---------------------
-
-**Internal name:** ``cc_scripts_per_instance``
-
-.. automodule:: cloudinit.config.cc_scripts_per_instance
-
-Scripts Per Once
-----------------
-
-**Internal name:** ``cc_scripts_per_once``
-
-.. automodule:: cloudinit.config.cc_scripts_per_once
-
-Scripts User
-------------
-
-**Internal name:** ``cc_scripts_user``
-
-.. automodule:: cloudinit.config.cc_scripts_user
-
-Scripts Vendor
---------------
-
-**Internal name:** ``cc_scripts_vendor``
-
-.. automodule:: cloudinit.config.cc_scripts_vendor
-
-Seed Random
------------
-
-**Internal name:** ``cc_seed_random``
-
-.. automodule:: cloudinit.config.cc_seed_random
-
-Set Hostname
-------------
-
-**Internal name:** ``cc_set_hostname``
-
-.. automodule:: cloudinit.config.cc_set_hostname
-
-Set Passwords
--------------
-
-**Internal name:** ``cc_set_passwords``
-
-.. automodule:: cloudinit.config.cc_set_passwords
-
-Ssh
----
-
-**Internal name:** ``cc_ssh``
-
-.. automodule:: cloudinit.config.cc_ssh
-
-Ssh Authkey Fingerprints
-------------------------
-
-**Internal name:** ``cc_ssh_authkey_fingerprints``
-
-.. automodule:: cloudinit.config.cc_ssh_authkey_fingerprints
-
-Ssh Import Id
--------------
-
-**Internal name:** ``cc_ssh_import_id``
-
-.. automodule:: cloudinit.config.cc_ssh_import_id
-
-Timezone
---------
-
-**Internal name:** ``cc_timezone``
-
-.. automodule:: cloudinit.config.cc_timezone
-
-Ubuntu Init Switch
-------------------
-
-**Internal name:** ``cc_ubuntu_init_switch``
-
-.. automodule:: cloudinit.config.cc_ubuntu_init_switch
- :members:
-
-Update Etc Hosts
-----------------
-
-**Internal name:** ``cc_update_etc_hosts``
-
-.. automodule:: cloudinit.config.cc_update_etc_hosts
-
-Update Hostname
----------------
-
-**Internal name:** ``cc_update_hostname``
-
-.. automodule:: cloudinit.config.cc_update_hostname
-
-Users Groups
-------------
-
-**Internal name:** ``cc_users_groups``
-
-.. automodule:: cloudinit.config.cc_users_groups
-
-Write Files
------------
-
-**Internal name:** ``cc_write_files``
-
-.. automodule:: cloudinit.config.cc_write_files
-
-Yum Add Repo
-------------
-
-**Internal name:** ``cc_yum_add_repo``
-
-.. automodule:: cloudinit.config.cc_yum_add_repo
diff --git a/doc/rtd/topics/moreinfo.rst b/doc/rtd/topics/moreinfo.rst
deleted file mode 100644
index 19e96af0..00000000
--- a/doc/rtd/topics/moreinfo.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-================
-More information
-================
-
-Useful external references
---------------------------
-
-- `The beauty of cloudinit`_
-- `Introduction to cloud-init`_ (video)
-
-.. _Introduction to cloud-init: http://www.youtube.com/watch?v=-zL3BdbKyGY
-.. _The beauty of cloudinit: http://brandon.fuller.name/archives/2011/05/02/06.40.57/
diff --git a/doc/sources/altcloud/README.rst b/doc/sources/altcloud/README.rst
deleted file mode 100644
index b5d72ebb..00000000
--- a/doc/sources/altcloud/README.rst
+++ /dev/null
@@ -1,87 +0,0 @@
-The datasource altcloud will be used to pick up user data on `RHEVm`_ and `vSphere`_.
-
-RHEVm
-~~~~~~
-
-For `RHEVm`_ v3.0 the userdata is injected into the VM using floppy
-injection via the `RHEVm`_ dashboard "Custom Properties".
-
-The format of the Custom Properties entry must be:
-
-::
-
- floppyinject=user-data.txt:<base64 encoded data>
-
-For example, to pass a simple bash script:
-
-::
-
- % cat simple_script.bash
- #!/bin/bash
- echo "Hello Joe!" >> /tmp/JJV_Joe_out.txt
-
- % base64 < simple_script.bash
- IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
-
-To pass this example script to cloud-init running in a `RHEVm`_ v3.0 VM
-set the "Custom Properties" when creating the RHEVm v3.0 VM to:
-
-::
-
- floppyinject=user-data.txt:IyEvYmluL2Jhc2gKZWNobyAiSGVsbG8gSm9lISIgPj4gL3RtcC9KSlZfSm9lX291dC50eHQK
-
-**NOTE:** The prefix with file name must be: ``floppyinject=user-data.txt:``
-
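The Custom Properties value can also be generated programmatically; a sketch follows (the function name is an illustrative assumption, not part of cloud-init or RHEVm tooling):

```python
import base64

def floppyinject_property(script_text):
    # Build the RHEVm "Custom Properties" value; the prefix with the
    # file name must be exactly "floppyinject=user-data.txt:".
    encoded = base64.b64encode(script_text.encode()).decode()
    return "floppyinject=user-data.txt:" + encoded

script = '#!/bin/bash\necho "Hello Joe!" >> /tmp/JJV_Joe_out.txt\n'
print(floppyinject_property(script))
```
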
-It is also possible to launch a `RHEVm`_ v3.0 VM and pass optional user
-data to it using the Delta Cloud.
-
-For more information on Delta Cloud see: http://deltacloud.apache.org
-
-vSphere
-~~~~~~~~
-
-For VMWare's `vSphere`_ the userdata is injected into the VM as an ISO
-via the cdrom. This can be done using the `vSphere`_ dashboard
-by connecting an ISO image to the CD/DVD drive.
-
-To pass this example script to cloud-init running in a `vSphere`_ VM
-set the CD/DVD drive when creating the vSphere VM to point to an
-ISO on the data store.
-
-**Note:** The ISO must contain the user data.
-
-For example, to pass the same ``simple_script.bash`` to vSphere:
-
-Create the ISO
------------------
-
-::
-
- % mkdir my-iso
-
-NOTE: The file name on the ISO must be: ``user-data.txt``
-
-::
-
-  % cp simple_script.bash my-iso/user-data.txt
- % genisoimage -o user-data.iso -r my-iso
-
-Verify the ISO
------------------
-
-::
-
- % sudo mkdir /media/vsphere_iso
- % sudo mount -o loop JoeV_CI_02.iso /media/vsphere_iso
- % cat /media/vsphere_iso/user-data.txt
- % sudo umount /media/vsphere_iso
-
-Then, launch the `vSphere`_ VM with the ISO user-data.iso attached as a CDROM.
-
-It is also possible to launch a `vSphere`_ VM and pass optional user
-data to it using the Delta Cloud.
-
-For more information on Delta Cloud see: http://deltacloud.apache.org
-
-.. _RHEVm: https://www.redhat.com/virtualization/rhev/desktop/rhevm/
-.. _vSphere: https://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html
diff --git a/doc/sources/azure/README.rst b/doc/sources/azure/README.rst
deleted file mode 100644
index 8239d1fa..00000000
--- a/doc/sources/azure/README.rst
+++ /dev/null
@@ -1,134 +0,0 @@
-================
-Azure Datasource
-================
-
-This datasource finds metadata and user-data from the Azure cloud platform.
-
-Azure Platform
---------------
-The azure cloud-platform provides initial data to an instance via an attached
-CD formatted in UDF. That CD contains a 'ovf-env.xml' file that provides some
-information. Additional information is obtained via interaction with the
-"endpoint". The ip address of the endpoint is advertised to the instance
-inside of dhcp option 245. On ubuntu, that can be seen in
-/var/lib/dhcp/dhclient.eth0.leases as a colon-delimited hex value (example:
-``option unknown-245 64:41:60:82;`` is 100.65.96.130).
-
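The colon-delimited hex value decodes to a dotted-quad address; a small sketch (the function name is illustrative):

```python
def option245_to_ip(hex_value):
    # Each colon-separated field is one hex-encoded octet of the
    # endpoint's IPv4 address, e.g. "64:41:60:82" -> "100.65.96.130".
    return ".".join(str(int(octet, 16)) for octet in hex_value.split(":"))

print(option245_to_ip("64:41:60:82"))  # 100.65.96.130
```
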
-walinuxagent
-------------
-In order to operate correctly, cloud-init needs walinuxagent to provide much
-of the interaction with azure. In addition to the "provisioning" code, the
-walinuxagent is a long running daemon that handles the following things:
-
-- generate a x509 certificate and send that to the endpoint
-
-waagent.conf config
-~~~~~~~~~~~~~~~~~~~
-In order to use waagent.conf with cloud-init, the following settings are recommended. Other values can be changed or set to the defaults.
-
-::
-
-    # disabling provisioning turns off all 'Provisioning.*' functionality
-    Provisioning.Enabled=n
-    # this is currently not handled by cloud-init, so let walinuxagent do it
-    ResourceDisk.Format=y
-    ResourceDisk.MountPoint=/mnt
-
-
-Userdata
---------
-Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init
-expects that user-data will be provided as a base64 encoded value inside the
-text child of an element named ``UserData`` or ``CustomData``, which is a direct
-child of the ``LinuxProvisioningConfigurationSet`` (a sibling to ``UserName``).
-If both ``UserData`` and ``CustomData`` are provided, it is undefined which of
-the two will be selected.
-
-In the example below, user-data provided is 'this is my userdata', and the
-datasource config provided is ``{"agent_command": ["start", "walinuxagent"]}``.
-That agent command will take effect as if it were specified in system config.
-
-Example:
-
-.. code::
-
-    <wa:ProvisioningSection>
-      <wa:Version>1.0</wa:Version>
-      <LinuxProvisioningConfigurationSet
-          xmlns="http://schemas.microsoft.com/windowsazure"
-          xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
-        <ConfigurationSetType>LinuxProvisioningConfiguration</ConfigurationSetType>
-        <HostName>myHost</HostName>
-        <UserName>myuser</UserName>
-        <UserPassword/>
-        <CustomData>dGhpcyBpcyBteSB1c2VyZGF0YQ==</CustomData>
-        <dscfg>eyJhZ2VudF9jb21tYW5kIjogWyJzdGFydCIsICJ3YWxpbnV4YWdlbnQiXX0=</dscfg>
-        <DisableSshPasswordAuthentication>true</DisableSshPasswordAuthentication>
-        <SSH>
-          <PublicKeys>
-            <PublicKey>
-              <Fingerprint>6BE7A7C3C8A8F4B123CCA5D0C2F1BE4CA7B63ED7</Fingerprint>
-              <Path>this-value-unused</Path>
-            </PublicKey>
-          </PublicKeys>
-        </SSH>
-      </LinuxProvisioningConfigurationSet>
-    </wa:ProvisioningSection>
-
-Configuration
--------------
-Configuration for the datasource can be read from the system config or set
-via the ``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``. Content in
-the dscfg node is expected to be base64 encoded yaml content, which will be
-merged into the 'datasource: Azure' entry.
-
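For instance, the ``dscfg`` value used in the XML example above can be produced as follows (a sketch; the config is written as json, which is valid yaml, to avoid extra dependencies):

```python
import base64

# The datasource config to embed, as a yaml (here: json) string.
dscfg_yaml = '{"agent_command": ["start", "walinuxagent"]}'

# Base64-encode it for the <dscfg> element of ovf-env.xml.
dscfg_b64 = base64.b64encode(dscfg_yaml.encode()).decode()
print(dscfg_b64)
```
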
-The '``hostname_bounce: command``' entry can be either the literal string
-'builtin' or a command to execute. The command will be invoked after the
-hostname is set, and will have the 'interface' in its environment. If
-``set_hostname`` is not true, then ``hostname_bounce`` will be ignored.
-
-An example might be:
- command: ["sh", "-c", "killall dhclient; dhclient $interface"]
-
-.. code::
-
-    datasource:
-      Azure:
-        agent_command: [service, walinuxagent, start]
-        set_hostname: True
-        hostname_bounce:
-          # the name of the interface to bounce
-          interface: eth0
-          # policy can be 'on', 'off' or 'force'
-          policy: on
-          # the command used to bounce the interface
-          command: "builtin"
-          hostname_command: "hostname"
-
-hostname
---------
-When the user launches an instance, they provide a hostname for that instance.
-The hostname is provided to the instance in the ovf-env.xml file as
-``HostName``.
-
-Whatever value the instance provides in its dhcp request will resolve in the
-domain returned in the 'search' request.
-
-The interesting issue is that a generic image will already have a hostname
-configured. The ubuntu cloud images have 'ubuntu' as the hostname of the
-system, and the initial dhcp request on eth0 is not guaranteed to occur after
-the datasource code has been run. So, on first boot, that initial value will
-be sent in the dhcp request and *that* value will resolve.
-
-In order to make the ``HostName`` provided in the ovf-env.xml resolve, a
-dhcp request must be made with the new value. Walinuxagent (in its current
-version) handles this by polling the state of the hostname and bouncing
-('``ifdown eth0; ifup eth0``') the network interface if it sees that a change
-has been made.
-
-cloud-init handles this by setting the hostname in the DataSource's 'get_data'
-method via '``hostname $HostName``', and then bouncing the interface. This
-behavior can be configured or disabled in the datasource config. See
-'Configuration' above.
diff --git a/doc/sources/cloudsigma/README.rst b/doc/sources/cloudsigma/README.rst
deleted file mode 100644
index 6509b585..00000000
--- a/doc/sources/cloudsigma/README.rst
+++ /dev/null
@@ -1,38 +0,0 @@
-=====================
-CloudSigma Datasource
-=====================
-
-This datasource finds metadata and user-data from the `CloudSigma`_ cloud platform.
-Data transfer occurs through a virtual serial port of the `CloudSigma`_ VM, and
-the presence of a network adapter is **NOT** a requirement.
-
- See `server context`_ in the public documentation for more information.
-
-
-Setting a hostname
-~~~~~~~~~~~~~~~~~~
-
-By default the name of the server will be applied as a hostname on the first boot.
-
-
-Providing user-data
-~~~~~~~~~~~~~~~~~~~
-
-You can provide user-data to the VM using the dedicated `meta field`_ in the `server context`_
-``cloudinit-user-data``. By default *cloud-config* format is expected there and the ``#cloud-config``
-header could be omitted. However since this is a raw-text field you could provide any of the valid
-`config formats`_.
-
-You have the option to encode your user-data using Base64. In order to do that you have to add the
-``cloudinit-user-data`` field to the ``base64_fields``. The latter is a comma-separated field listing
-all the meta fields with base64 encoded values.
-
-If your user-data does not need an internet connection you can create a
-`meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value.
-If this field does not exist the default value is "net".
-
-
-.. _CloudSigma: http://cloudsigma.com/
-.. _server context: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html
-.. _meta field: http://cloudsigma-docs.readthedocs.org/en/latest/meta.html
-.. _config formats: http://cloudinit.readthedocs.org/en/latest/topics/format.html
diff --git a/doc/sources/cloudstack/README.rst b/doc/sources/cloudstack/README.rst
deleted file mode 100644
index eba1cd7e..00000000
--- a/doc/sources/cloudstack/README.rst
+++ /dev/null
@@ -1,29 +0,0 @@
-`Apache CloudStack`_ exposes user-data, meta-data, the user password and the
-account ssh key through the virtual router. For more details on meta-data and
-user-data, refer to the `CloudStack Administrator Guide`_.
-
-The following URLs provide access to user-data and meta-data from the virtual
-machine, where 10.1.1.1 is the virtual router IP:
-
-.. code:: bash
-
- http://10.1.1.1/latest/user-data
- http://10.1.1.1/latest/meta-data
- http://10.1.1.1/latest/meta-data/{metadata type}
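A sketch of fetching data from the virtual router, following the URLs above; ``VR_IP`` is a placeholder for your deployment's virtual router address:

```shell
# Build the endpoint URLs from the virtual router address.
VR_IP=10.1.1.1
userdata_url="http://$VR_IP/latest/user-data"
metadata_url="http://$VR_IP/latest/meta-data"
# On a CloudStack guest you would then run, e.g.:
#   curl -s "$userdata_url"
echo "$userdata_url"
echo "$metadata_url"
```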
-
-Configuration
-~~~~~~~~~~~~~
-
-The Apache CloudStack datasource can be configured as follows:
-
-.. code:: yaml
-
- datasource:
- CloudStack: {}
- None: {}
- datasource_list:
- - CloudStack
-
-
-.. _Apache CloudStack: http://cloudstack.apache.org/
-.. _CloudStack Administrator Guide: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data \ No newline at end of file
diff --git a/doc/sources/configdrive/README.rst b/doc/sources/configdrive/README.rst
deleted file mode 100644
index 48ff579d..00000000
--- a/doc/sources/configdrive/README.rst
+++ /dev/null
@@ -1,123 +0,0 @@
-The configuration drive datasource supports the `OpenStack`_ configuration drive disk.
-
- See `the config drive extension`_ and `introduction`_ in the public
- documentation for more information.
-
-By default, cloud-init does *not* always consider this source to be a full-fledged
-datasource. Instead, the typical behavior is to assume it is really only
-present to provide networking information. Cloud-init will copy the
-network information off the drive, apply it to the system, and then continue on.
-The "full" datasource could then be found in the EC2 metadata service. If this is
-not the case then the files contained on the located drive must provide equivalents
-to what the EC2 metadata service would provide (which is typical of the version
-2 support listed below).
-
-Version 1
-~~~~~~~~~
-
-The following criteria are required for a disk to be considered a config drive:
-
-1. Must be formatted with a `vfat`_ filesystem
-2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
-3. Must contain *one* of the following files
-
-::
-
- /etc/network/interfaces
- /root/.ssh/authorized_keys
- /meta.js
-
-``/etc/network/interfaces``
-
- This file is laid down by nova in order to pass static networking
- information to the guest. Cloud-init will copy it off of the config-drive
- and into /etc/network/interfaces (or convert it to RH format) as soon as it can,
- and then attempt to bring up all network interfaces.
-
-``/root/.ssh/authorized_keys``
-
-    This file is laid down by nova, and contains the ssh keys that were
-    provided to nova on instance creation (nova boot --key ....)
-
-``/meta.js``
-
- meta.js is populated on the config-drive in response to the user passing
- "meta flags" (nova boot --meta key=value ...). It is expected to be json
- formatted.
-
-Version 2
-~~~~~~~~~~~
-
-The following criteria are required for a disk to be considered a config drive:
-
-1. Must be formatted with a `vfat`_ or `iso9660`_ filesystem
-   or have a *filesystem* label of **config-2**
-2. Must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
-3. The files that will typically be present in the config drive are:
-
-::
-
- openstack/
- - 2012-08-10/ or latest/
- - meta_data.json
- - user_data (not mandatory)
- - content/
- - 0000 (referenced content files)
- - 0001
- - ....
- ec2
- - latest/
- - meta-data.json (not mandatory)
-
-Keys and values
-~~~~~~~~~~~~~~~
-
-Cloud-init's behavior can be modified by keys found in the meta.js file (version 1 only) in the following ways.
-
-::
-
- dsmode:
- values: local, net, pass
- default: pass
-
-
-This is what indicates if configdrive is a final data source or not.
-By default it is 'pass', meaning this datasource should not be read.
-Set it to 'local' or 'net' to stop cloud-init from continuing on to
-search for other data sources after network config.
-
-The difference between 'local' and 'net' is that local will not require
-networking to be up before user-data actions (or boothooks) are run.
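Since meta.js is expected to be JSON-formatted, a minimal file setting the dsmode key described above might look like this (written here via a shell heredoc; all values are illustrative):

```shell
# A hypothetical meta.js marking the config drive as a final 'local' datasource.
cat > meta.js <<'EOF'
{
  "dsmode": "local",
  "instance-id": "iid-configdrive-demo01"
}
EOF
cat meta.js
```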
-
-::
-
- instance-id:
- default: iid-dsconfigdrive
-
-This is utilized as the metadata's instance-id. It should generally
-be unique, as it is what is used to determine "is this a new instance".
-
-::
-
- public-keys:
- default: None
-
-If present, these keys will be used as the public keys for the
-instance. This value overrides the content in authorized_keys.
-
-Note: it is likely preferable to provide keys via user-data
-
-::
-
- user-data:
- default: None
-
-This provides cloud-init user-data. See :ref:`examples <yaml_examples>` for
-what can be present here.
-
-.. _OpenStack: http://www.openstack.org/
-.. _introduction: http://docs.openstack.org/trunk/openstack-compute/admin/content/config-drive.html
-.. _python-novaclient: https://github.com/openstack/python-novaclient
-.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
-.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
-.. _the config drive extension: http://docs.openstack.org/user-guide/content/config-drive.html
diff --git a/doc/sources/digitalocean/README.rst b/doc/sources/digitalocean/README.rst
deleted file mode 100644
index 1bb89fe1..00000000
--- a/doc/sources/digitalocean/README.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-The `DigitalOcean`_ datasource consumes the content served from DigitalOcean's
-`metadata service`_. This metadata service serves information about the running
-droplet via HTTP over the link-local address 169.254.169.254. The metadata API
-endpoints are fully described at
-`https://developers.digitalocean.com/metadata/ <https://developers.digitalocean.com/metadata/>`_.
-
-Configuration
-~~~~~~~~~~~~~
-
-DigitalOcean's datasource can be configured as follows:
-
-::
-
-  datasource:
-    DigitalOcean:
-      retries: 3
-      timeout: 2
-
-- *retries*: Determines the number of times to attempt to connect to the metadata service
-- *timeout*: Determines the timeout in seconds to wait for a response from the metadata service
-
-.. _DigitalOcean: http://digitalocean.com/
-.. _metadata service: https://developers.digitalocean.com/metadata/
-.. _Full documentation: https://developers.digitalocean.com/metadata/
diff --git a/doc/sources/kernel-cmdline.txt b/doc/sources/kernel-cmdline.txt
deleted file mode 100644
index 0b77a9af..00000000
--- a/doc/sources/kernel-cmdline.txt
+++ /dev/null
@@ -1,48 +0,0 @@
-In order to allow an ephemeral, or otherwise pristine image to
-receive some configuration, cloud-init will read a url directed by
-the kernel command line and proceed as if its data had previously existed.
-
-This allows for configuring a meta-data service, or some other data.
-
-Note that usage of the kernel command line is somewhat of a last resort,
-as it requires knowing in advance the correct command line or modifying
-the boot loader to append data.
-
-For example, when 'cloud-init start' runs, it will check to
-see if one of 'cloud-config-url' or 'url' appears in key/value fashion
-in the kernel command line as in:
-  root=/dev/sda ro url=http://foo.bar.zee/abcde
-
-Cloud-init will then read the contents of the given url.
-If the content starts with '#cloud-config', it will store
-that data to the local filesystem in a static filename
-'/etc/cloud/cloud.cfg.d/91_kernel_cmdline_url.cfg', and consider it as
-part of the config from that point forward.
-
-If that file exists already, it will not be overwritten, and the url parameters
-are completely ignored.
-
-Then, when the DataSource runs, it will find that config already available.
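A rough sketch of the key/value scan described above (the real logic lives inside cloud-init itself; here a string stands in for /proc/cmdline):

```shell
# Scan a kernel command line for 'cloud-config-url' or 'url' key/value pairs.
cmdline="root=/dev/sda ro url=http://foo.bar.zee/abcde"
url=""
for tok in $cmdline; do
    case "$tok" in
        cloud-config-url=*|url=*) url=${tok#*=} ;;
    esac
done
echo "found url: $url"
```

Cloud-init would then fetch this url and, if the content starts with '#cloud-config', store it as described.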
-
-So, in order to configure the MAAS DataSource by controlling the kernel
-command line from outside the image, you can append:
- url=http://your.url.here/abcdefg
-or
- cloud-config-url=http://your.url.here/abcdefg
-
-Then, have the following content at that url:
- #cloud-config
- datasource:
- MAAS:
- metadata_url: http://mass-host.localdomain/source
- consumer_key: Xh234sdkljf
- token_key: kjfhgb3n
- token_secret: 24uysdfx1w4
-
-Notes:
- * Because 'url=' is so very generic, in order to avoid false positives,
- cloud-init requires the content to start with '#cloud-config' in order
- for it to be considered.
- * The url= is an un-authenticated http GET, and its content contains
-   credentials. It could be set up to be randomly generated and also check
-   the source address in order to be more secure.
diff --git a/doc/sources/nocloud/README.rst b/doc/sources/nocloud/README.rst
deleted file mode 100644
index 08a39377..00000000
--- a/doc/sources/nocloud/README.rst
+++ /dev/null
@@ -1,71 +0,0 @@
-The data sources ``NoCloud`` and ``NoCloudNet`` allow the user to provide user-data
-and meta-data to the instance without running a network service (or even without
-having a network at all).
-
-You can provide meta-data and user-data to a local vm boot via files on a `vfat`_
-or `iso9660`_ filesystem. The filesystem volume label must be ``cidata``.
-
-These user-data and meta-data files are expected to be
-in the following format.
-
-::
-
- /user-data
- /meta-data
-
-Basically, the user-data file is passed through as-is, and meta-data is a
-yaml formatted file representing what you'd find in the EC2 metadata service.
-
-Given an Ubuntu 12.04 cloud disk image in 'disk.img', you can create a
-sufficient seed disk by following the example below.
-
-::
-
- ## create user-data and meta-data files that will be used
- ## to modify image on first boot
- $ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
-
- $ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
-
- ## create a disk to attach with some user-data and meta-data
- $ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
-
- ## alternatively, create a vfat filesystem with same files
- ## $ truncate --size 2M seed.img
- ## $ mkfs.vfat -n cidata seed.img
- ## $ mcopy -oi seed.img user-data meta-data ::
-
- ## create a new qcow image to boot, backed by your original image
- $ qemu-img create -f qcow2 -b disk.img boot-disk.img
-
- ## boot the image and login as 'ubuntu' with password 'passw0rd'
- ## note, passw0rd was set as password through the user-data above,
- ## there is no password set on these images.
- $ kvm -m 256 \
- -net nic -net user,hostfwd=tcp::2222-:22 \
- -drive file=boot-disk.img,if=virtio \
- -drive file=seed.iso,if=virtio
-
-**Note:** the instance-id provided (``iid-local01`` above) is what is used to
-determine if this is "first boot". So if you are making updates to user-data
-you will also have to change that, or start the disk fresh.
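For example, a sketch of refreshing the seed files with a new instance-id so first-boot actions run again (file names follow the example above; the genisoimage step is left commented since it may not be installed):

```shell
# Write fresh seed files with a bumped instance-id.
{ echo "instance-id: iid-local02"; echo "local-hostname: cloudimg"; } > meta-data
printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\n" > user-data
# Rebuild the seed as before:
#   genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```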
-
-Also, you can inject an ``/etc/network/interfaces`` file by providing the content
-for that file in the ``network-interfaces`` field of metadata.
-
-Example metadata:
-
-::
-
- instance-id: iid-abcdefg
- network-interfaces: |
- iface eth0 inet static
- address 192.168.1.10
- network 192.168.1.0
- netmask 255.255.255.0
- broadcast 192.168.1.255
- gateway 192.168.1.254
- hostname: myhost
-
-.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
-.. _vfat: https://en.wikipedia.org/wiki/File_Allocation_Table
diff --git a/doc/sources/opennebula/README.rst b/doc/sources/opennebula/README.rst
deleted file mode 100644
index 4d7de27a..00000000
--- a/doc/sources/opennebula/README.rst
+++ /dev/null
@@ -1,142 +0,0 @@
-The `OpenNebula`_ (ON) datasource supports the contextualization disk.
-
- See `contextualization overview`_, `contextualizing VMs`_ and
- `network configuration`_ in the public documentation for
- more information.
-
-OpenNebula's virtual machines are contextualized (parametrized) by a
-CD-ROM image, which contains a shell script *context.sh* with
-custom variables defined on virtual machine start. There are no
-fixed contextualization variables, but the datasource accepts many
-variables used and recommended across the documentation.
-
-Datasource configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The datasource accepts the following configuration options.
-
-::
-
- dsmode:
- values: local, net, disabled
- default: net
-
-Tells whether this datasource will be processed in the 'local' (pre-networking)
-or 'net' (post-networking) stage, or completely 'disabled'.
-
-::
-
- parseuser:
- default: nobody
-
-Unprivileged system user used for contextualization script
-processing.
-
-Contextualization disk
-~~~~~~~~~~~~~~~~~~~~~~
-
-The following criteria are required:
-
-1. Must be formatted with `iso9660`_ filesystem
- or have a *filesystem* label of **CONTEXT** or **CDROM**
-2. Must contain a file *context.sh* with contextualization variables.
-   The file is generated by OpenNebula; it has a KEY='VALUE' format and
-   can be easily read by bash
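Because context.sh uses KEY='VALUE' lines, a shell can source it directly; the file and values below are illustrative, not generated by OpenNebula:

```shell
# A made-up context.sh in the KEY='VALUE' format described above.
cat > context.sh <<'EOF'
HOSTNAME='one-vm'
SSH_PUBLIC_KEY='ssh-rsa AAAA... user@host'
EOF
# Sourcing it makes the contextualization variables available.
. ./context.sh
echo "$HOSTNAME"
```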
-
-Contextualization variables
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-There are no fixed contextualization variables in OpenNebula, and no standard.
-The following variables were found in various places and revisions of
-the OpenNebula documentation. Where multiple similar variables are
-specified, only the first one found is taken.
-
-::
-
- DSMODE
-
-Datasource mode configuration override. Values: local, net, disabled.
-
-::
-
- DNS
- ETH<x>_IP
- ETH<x>_NETWORK
- ETH<x>_MASK
- ETH<x>_GATEWAY
- ETH<x>_DOMAIN
- ETH<x>_DNS
-
-Static `network configuration`_.
-
-::
-
- HOSTNAME
-
-Instance hostname.
-
-::
-
- PUBLIC_IP
- IP_PUBLIC
- ETH0_IP
-
-If no hostname has been specified, cloud-init will try to create a hostname
-from the instance's IP address in 'local' dsmode. In 'net' dsmode, cloud-init
-tries to resolve one of its IP addresses to get the hostname.
-
-::
-
- SSH_KEY
- SSH_PUBLIC_KEY
-
-One or multiple SSH keys (separated by newlines) can be specified.
-
-::
-
- USER_DATA
- USERDATA
-
-cloud-init user data.
-
-Example configuration
-~~~~~~~~~~~~~~~~~~~~~
-
-This example cloud-init configuration (*cloud.cfg*) enables the
-OpenNebula datasource only in 'net' mode.
-
-::
-
- disable_ec2_metadata: True
- datasource_list: ['OpenNebula']
- datasource:
- OpenNebula:
- dsmode: net
- parseuser: nobody
-
-Example VM's context section
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- CONTEXT=[
- PUBLIC_IP="$NIC[IP]",
- SSH_KEY="$USER[SSH_KEY]
- $USER[SSH_KEY1]
- $USER[SSH_KEY2] ",
- USER_DATA="#cloud-config
- # see https://help.ubuntu.com/community/CloudInit
-
- packages: []
-
- mounts:
- - [vdc,none,swap,sw,0,0]
- runcmd:
- - echo 'Instance has been configured by cloud-init.' | wall
- " ]
-
-.. _OpenNebula: http://opennebula.org/
-.. _contextualization overview: http://opennebula.org/documentation:documentation:context_overview
-.. _contextualizing VMs: http://opennebula.org/documentation:documentation:cong
-.. _network configuration: http://opennebula.org/documentation:documentation:cong#network_configuration
-.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
diff --git a/doc/sources/openstack/README.rst b/doc/sources/openstack/README.rst
deleted file mode 100644
index 8102597e..00000000
--- a/doc/sources/openstack/README.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-*TODO*
-
-Vendor Data
-~~~~~~~~~~~
-
-The OpenStack metadata server can be configured to serve up vendor data
-which is available to all instances for consumption. OpenStack vendor
-data is, generally, a JSON object.
-
-cloud-init will look for configuration in the ``cloud-init`` attribute
-of the vendor data JSON object. cloud-init processes this configuration
-using the same handlers as user data, so any formats that work for user
-data should work for vendor data.
-
-For example, configuring the following as vendor data in OpenStack would
-upgrade packages and install ``htop`` on all instances:
-
-.. sourcecode:: json
-
- {"cloud-init": "#cloud-config\npackage_upgrade: True\npackages:\n - htop"}
-
-For more general information about how cloud-init handles vendor data,
-including how it can be disabled by users on instances, see
-https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/vendordata.txt
diff --git a/doc/sources/ovf/README b/doc/sources/ovf/README
deleted file mode 100644
index e3ef12e0..00000000
--- a/doc/sources/ovf/README
+++ /dev/null
@@ -1,83 +0,0 @@
-This directory contains documentation and a demo of the OVF
-functionality that is present in cloud-init.
-
-The example/ directory contains the following files:
-- example/ovf-env.xml
-  This is an example ovf environment file.
-  To make an iso that qualifies for the ISO transport, do:
-    mkdir my-iso
-    cp example/ovf-env.xml my-iso/ovf-env.xml
-    genisoimage -o transport.iso -r my-iso
-  Then, boot with that ISO attached as a CD-ROM
-- example/ubuntu-server.ovf
- Example generated by virtualbox "export" of a simple VM.
- It contains a functional ProductSection also. Given answers
- to each of the Properties there, a suitable OVF environment file
- (ovf-env.xml) could be created.
-
-== Demo ==
-In order to easily demonstrate this functionality, a simple demo is
-contained here. To boot a local virtual machine in either kvm or VirtualBox,
-follow the steps below.
-
-- download a suitable Ubuntu image
- Visit http://cloud-images.ubuntu.com/releases and download a disk image
- of Natty, Oneiric or a newer release.
-
- $ burl="http://cloud-images.ubuntu.com/releases/"
- $ disk="ubuntu-11.10-server-cloudimg-i386-disk1"
- $ wget "$burl/11.10/release/$disk.img" -O "$disk.img"
-
-- If you're going to use VirtualBox, you will need to convert the image
-  from qcow2 format into a VirtualBox-friendly VDI format.
-  $ qemu-img convert -O vdi "$disk.img" "ubuntu.vdi"
-
-- If you're using kvm, you should create a qcow delta image to store
- the changes so you keep the original pristine.
- $ qemu-img create -f qcow2 -b "$disk.img" "ubuntu.qcow2"
-
- Optionally, you could decompress the image, which will make it boot faster
- but will take up more local disk space.
- $ qemu-img convert -O qcow2 "$disk.img" "$disk.qcow2"
- $ qemu-img create -f qcow2 -b "$disk.qcow2" ubuntu.qcow2
-
-- Create an ISO file that will provide user-data to the image.
- This will put the contents of 'user-data' into an ovf-env.xml file
- and create an ISO file that can then be attached at boot to provide
- the user data to cloud-init.
-
- $ ./make-iso ovf-env.xml.tmpl user-data --output ovftransport.iso
-
-- Boot your virtual machine
- The cloud-images boot with kernel and boot progress to ttyS0.
- You can change that at the grub prompt if you'd like by editing the
- kernel entry. Otherwise, to see progress you'll need to switch
-  to the serial console. In kvm graphic mode, you do that by clicking
-  in the window and then pressing 'ctrl-alt-3'. For information
- on how to do that in virtualbox or kvm curses, see the relevant
- documentation.
-
- KVM:
- $ kvm -drive file=ubuntu.qcow2,if=virtio -cdrom ovftransport.iso \
- -m 256 -net nic -net user,hostfwd=tcp::2222-:22
-
- VirtualBox:
- - Launch the GUI and create a new vm with $disk.vdi and ovftransport.iso
- attached.
- - If you use 'NAT' networking, then forward a port (2222) to the
- guests' port 22 to be able to ssh.
-
- Upon successful boot you will be able to log in as the 'ubuntu' user
- with the password 'passw0rd' (which was set in the 'user-data' file).
-
- You will also be able to ssh to the instance with the provided:
- $ chmod 600 ovfdemo.pem
- $ ssh -i ovfdemo.pem -p 2222 ubuntu@localhost
-
-- Notes:
- * The 'instance-id' that is set in the ovf-env.xml image needs to
- be unique. If you want to run the first-boot code of cloud-init
- again you will either have to remove /var/lib/cloud ('rm -Rf' is fine)
-    or create a new cdrom with a different instance-id. To do the
-    latter, simply add the '--instance-id=' flag to the 'make-iso'
- command above and start your vm with the new ISO attached.
diff --git a/doc/sources/ovf/example/ovf-env.xml b/doc/sources/ovf/example/ovf-env.xml
deleted file mode 100644
index 13e8f104..00000000
--- a/doc/sources/ovf/example/ovf-env.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
- xsi:schemaLocation="http://schemas.dmtf.org/ovf/environment/1 ../dsp8027.xsd"
- oe:id="WebTier">
-
-  <!-- This example references a local schema file; to validate against the online schema use:
- xsi:schemaLocation="http://schemas.dmtf.org/ovf/envelope/1 http://schemas.dmtf.org/ovf/envelope/1/dsp8027_1.0.0.xsd"
- -->
-
- <!-- Information about hypervisor platform -->
- <oe:PlatformSection>
- <Kind>ESX Server</Kind>
- <Version>3.0.1</Version>
- <Vendor>VMware, Inc.</Vendor>
- <Locale>en_US</Locale>
- </oe:PlatformSection>
-
- <!--- Properties defined for this virtual machine -->
- <PropertySection>
- <!-- instance-id is required, a unique instance-id -->
- <Property oe:key="instance-id" oe:value="i-abcdefg"/>
- <!--
-        seedfrom is optional, but indicates to 'seed' user-data
-        and meta-data from the given url. In this example, pull
-        http://tinyurl.com/sm-meta-data and http://tinyurl.com/sm-user-data
- -->
- <Property oe:key="seedfrom" oe:value="http://tinyurl.com/sm-"/>
- <!--
- public-keys is a public key to add to users authorized keys
- -->
- <Property oe:key="public-keys" oe:value="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies"/>
- <!-- hostname: the hostname to set -->
- <Property oe:key="hostname" oe:value="ubuntuhost"/>
- <!--
- The value for user-data is to be base64 encoded.
-        It will be decoded, and then processed normally as user-data.
- The following represents '#!/bin/sh\necho "hi world"'
-
- -->
- <Property oe:key="user-data" oe:value="IyEvYmluL3NoCmVjaG8gImhpIHdvcmxkIgo="/>
- <Property oe:key="password" oe:value="passw0rd"/>
- </PropertySection>
-
-</Environment>
diff --git a/doc/sources/ovf/example/ubuntu-server.ovf b/doc/sources/ovf/example/ubuntu-server.ovf
deleted file mode 100644
index 846483a1..00000000
--- a/doc/sources/ovf/example/ubuntu-server.ovf
+++ /dev/null
@@ -1,130 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1" xmlns:cim="http://schemas.dmtf.org/wbem/wscim/1/common" xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
- <References>
- <File ovf:href="my.vmdk" ovf:id="file1" ovf:size="2031616"/>
- </References>
- <DiskSection>
- <Info>Virtual disk information</Info>
- <Disk ovf:capacity="52428800" ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#monolithicSparse"/>
- </DiskSection>
- <NetworkSection>
- <Info>The list of logical networks</Info>
- <Network ovf:name="bridged">
- <Description>The bridged network</Description>
- </Network>
- </NetworkSection>
- <VirtualSystem ovf:id="vm">
- <Info>A virtual machine</Info>
- <Name>Ubuntu</Name>
- <OperatingSystemSection ovf:id="93">
- <Info>11.04 (Natty Narwhal) Server</Info>
- </OperatingSystemSection>
- <ProductSection>
- <Info>Cloud-Init customization</Info>
- <Product>11.04 (Natty Narwhal) Server</Product>
- <Property ovf:key="instance-id" ovf:type="string" ovf:userConfigurable="true" ovf:value="id-ovf">
- <Label>A Unique Instance ID for this instance</Label>
- <Description>Specifies the instance id. This is required and used to determine if the machine should take "first boot" actions</Description>
- </Property>
- <Property ovf:key="hostname" ovf:type="string" ovf:userConfigurable="true" ovf:value="ubuntuguest">
- <Description>Specifies the hostname for the appliance</Description>
- </Property>
- <Property ovf:key="seedfrom" ovf:type="string" ovf:userConfigurable="true">
- <Label>Url to seed instance data from</Label>
-      <Description>This field is optional, but indicates that the instance should 'seed' user-data and meta-data from the given url. If 'http://tinyurl.com/sm-' is given, meta-data will be pulled from http://tinyurl.com/sm-meta-data and user-data from http://tinyurl.com/sm-user-data. Leave this empty if you do not want to seed from a url.</Description>
- </Property>
- <Property ovf:key="public-keys" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
- <Label>ssh public keys</Label>
- <Description>This field is optional, but indicates that the instance should populate the default user's 'authorized_keys' with this value</Description>
- </Property>
- <Property ovf:key="user-data" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
- <Label>Encoded user-data</Label>
-        <Description>In order to fit into an xml attribute, this value is base64 encoded. It will be decoded, and then processed normally as user-data.</Description>
- <!-- The following represents '#!/bin/sh\necho "hi world"'
- ovf:value="IyEvYmluL3NoCmVjaG8gImhpIHdvcmxkIgo="
- -->
- </Property>
- <Property ovf:key="password" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
- <Label>Default User's password</Label>
- <Description>If set, the default user's password will be set to this value to allow password based login. The password will be good for only a single login. If set to the string 'RANDOM' then a random password will be generated, and written to the console.</Description>
- </Property>
- </ProductSection>
- <VirtualHardwareSection>
- <Info>Virtual hardware requirements</Info>
- <System>
- <vssd:ElementName>Virtual Hardware Family</vssd:ElementName>
- <vssd:InstanceID>0</vssd:InstanceID>
- <vssd:VirtualSystemIdentifier>Ubuntu 11.04 (Natty Narwhal) Server</vssd:VirtualSystemIdentifier>
- <vssd:VirtualSystemType>vmx-07 qemu-pc qemu-pc-0.13 virtualbox-2.2</vssd:VirtualSystemType>
- </System>
- <Item>
- <rasd:AllocationUnits>hertz * 10^6</rasd:AllocationUnits>
- <rasd:Description>Number of Virtual CPUs</rasd:Description>
- <rasd:ElementName>1 virtual CPU(s)</rasd:ElementName>
- <rasd:InstanceID>1</rasd:InstanceID>
- <rasd:ResourceType>3</rasd:ResourceType>
- <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
- </Item>
- <Item>
- <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
- <rasd:Description>Memory Size</rasd:Description>
- <rasd:ElementName>256MB of memory</rasd:ElementName>
- <rasd:InstanceID>2</rasd:InstanceID>
- <rasd:ResourceType>4</rasd:ResourceType>
- <rasd:VirtualQuantity>256</rasd:VirtualQuantity>
- </Item>
- <Item ovf:required="false">
- <rasd:Address>0</rasd:Address>
- <rasd:Description>USB Controller</rasd:Description>
- <rasd:ElementName>usb</rasd:ElementName>
- <rasd:InstanceID>3</rasd:InstanceID>
- <rasd:ResourceType>23</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:Address>0</rasd:Address>
- <rasd:Description>SCSI Controller</rasd:Description>
- <rasd:ElementName>scsiController0</rasd:ElementName>
- <rasd:InstanceID>4</rasd:InstanceID>
- <rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>
- <rasd:ResourceType>6</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:Address>1</rasd:Address>
- <rasd:Description>IDE Controller</rasd:Description>
- <rasd:ElementName>ideController1</rasd:ElementName>
- <rasd:InstanceID>5</rasd:InstanceID>
- <rasd:ResourceType>5</rasd:ResourceType>
- </Item>
- <Item ovf:required="false">
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>
- <rasd:ElementName>cdrom1</rasd:ElementName>
- <rasd:InstanceID>6</rasd:InstanceID>
- <rasd:Parent>5</rasd:Parent>
- <rasd:ResourceType>15</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>0</rasd:AddressOnParent>
- <rasd:ElementName>disk1</rasd:ElementName>
- <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
- <rasd:InstanceID>7</rasd:InstanceID>
- <rasd:Parent>4</rasd:Parent>
- <rasd:ResourceType>17</rasd:ResourceType>
- </Item>
- <Item>
- <rasd:AddressOnParent>2</rasd:AddressOnParent>
- <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
- <rasd:Connection>bridged</rasd:Connection>
- <rasd:Description>ethernet adapter on &quot;bridged&quot;</rasd:Description>
- <rasd:ElementName>ethernet0</rasd:ElementName>
- <rasd:InstanceID>8</rasd:InstanceID>
- <rasd:ResourceSubType>E1000</rasd:ResourceSubType>
- <rasd:ResourceType>10</rasd:ResourceType>
- </Item>
- </VirtualHardwareSection>
- <AnnotationSection ovf:required="false">
- <Info>For more information, see http://ubuntu.com</Info>
- <Annotation>This is Ubuntu Server.</Annotation>
- </AnnotationSection>
- </VirtualSystem>
-</Envelope>
diff --git a/doc/sources/ovf/make-iso b/doc/sources/ovf/make-iso
deleted file mode 100755
index 91d0e2e5..00000000
--- a/doc/sources/ovf/make-iso
+++ /dev/null
@@ -1,156 +0,0 @@
-#!/bin/bash
-
-VERBOSITY=0
-PROPERTIES=( instance-id hostname user-data seedfrom )
-DEFAULTS=( "i-ovfdemo00" "ovfdemo.localdomain" "" "" )
-
-DEF_OUTPUT="ovftransport.iso"
-TEMP_D=""
-
-error() { echo "$@" 1>&2; }
-fail() { [ $# -eq 0 ] || error "$@"; exit 1; }
-
-# propvalue(name, value)
-propvalue() {
- local prop="" val="$2" i=0
- for prop in "${PROPERTIES[@]}"; do
- if [ "$prop" = "$1" ]; then
- [ $# -eq 1 ] || DEFAULTS[$i]="$2"
- _RET=${DEFAULTS[$i]}
- return
- fi
- i=$(($i+1))
- done
- return
-}
-
-Usage() {
- cat <<EOF
-Usage: ${0##*/} ovf-env.xml.tmpl [user-data-file]
-
-    create an ovf transport iso with ovf-env.xml.tmpl
- as ovf-env.xml on the iso.
-
- if user-data-file is given, the file's contents will be base64 encoded
- and stuffed inside ovf-env.xml. This will override the '--user-data'
- argument.
-
- options:
- -o | --output OUTPUT write output to OUTPUT [default: $DEF_OUTPUT]
- -v | --verbose increase verbosity
-
-EOF
- local i=""
- for i in "${PROPERTIES[@]}"; do
- propvalue "$i"
- printf "%10s--%-17s%s\n" "" "$i" "set $i. [default: '$_RET']"
- done
- cat <<EOF
-
- Example:
- $ ${0##*/} --hostname "foobar.mydomain" ovf-env.xml.tmpl user-data
-
-EOF
-}
-
-bad_Usage() { Usage 1>&2; [ $# -eq 0 ] || error "$@"; exit 1; }
-cleanup() {
- [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}"
-}
-
-debug() {
- local level=${1}; shift;
-    [ "${level}" -gt "${VERBOSITY}" ] && return
- error "${@}"
-}
-
-short_opts="ho:v"
-long_opts="help,output:,verbose"
-for i in "${PROPERTIES[@]}"; do
- long_opts="$long_opts,$i:"
-done
-getopt_out=$(getopt --name "${0##*/}" \
- --options "${short_opts}" --long "${long_opts}" -- "$@") &&
- eval set -- "${getopt_out}" ||
- bad_Usage
-
-## <<insert default variables here>>
-output="${DEF_OUTPUT}"
-user_data=""
-
-while [ $# -ne 0 ]; do
- cur=${1}; next=${2};
- case "$cur" in
- -h|--help) Usage ; exit 0;;
- -o|--output) output=${2}; shift;;
- -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));;
- --) shift; break;;
- --*)
- for i in "${PROPERTIES[@]}" _none_; do
- [ "${cur#--}" == "$i" ] || continue
- [ "$i" != "user-data" ] ||
- next=$(echo "$next" | base64 --wrap=0) ||
- fail "failed to base64 encode userdata"
- propvalue "$i" "$next"
- break
- done
- [ "$i" = "_none_" ] && bad_Usage "confused by $cur"
- ;;
- esac
- shift;
-done
-
-[ $# -eq 1 -o $# -eq 2 ] ||
- bad_Usage "wrong number of arguments"
-
-env_tmpl="$1"
-ud_file="$2"
-
-[ -f "$env_tmpl" ] || bad_Usage "$env_tmpl: not a file"
-[ -z "$ud_file" -o -f "$ud_file" ] ||
- bad_Usage "$ud_file: not a file"
-
-TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") ||
- fail "failed to make tempdir"
-trap cleanup EXIT
-
-mkdir "$TEMP_D/iso" && iso_d="$TEMP_D/iso" ||
- fail "failed to make a tempdir?"
-ovf_env="$TEMP_D/iso/ovf-env.xml"
-
-if [ -n "$ud_file" ]; then
- user_data=$(base64 --wrap=0 "$ud_file") ||
- fail "failed to base64 encode $ud_file. Do you have base64 installed?"
- propvalue user-data "$user_data"
-fi
-
-changes=( )
-for i in "${PROPERTIES[@]}"; do
- changes[${#changes[@]}]="-e"
- propvalue "$i"
- changes[${#changes[@]}]="s|@@$i@@|$_RET|g"
-done
-
-sed "${changes[@]}" "$env_tmpl" > "$ovf_env" ||
- fail "failed to replace string in $env_tmpl"
-
-if [ "${#changes[@]}" -ne 0 ]; then
- cmp "$ovf_env" "$env_tmpl" >/dev/null &&
- fail "nothing replaced in $ovf_env. template is identical to output"
-fi
-
-debug 1 "creating iso with: genisoimage -V OVF-TRANSPORT -o tmp.iso -r iso"
-( cd "$TEMP_D" &&
- genisoimage -V OVF-TRANSPORT -o tmp.iso -r iso 2>/dev/null ) ||
- fail "failed to create iso. do you have genisoimage?"
-
-if [ "$output" = "-" ]; then
- cat "$TEMP_D/tmp.iso"
-else
- cp "$TEMP_D/tmp.iso" "$output" ||
- fail "failed to write to $output"
-fi
-
-error "wrote iso to $output"
-exit 0
-# vi: ts=4 noexpandtab
diff --git a/doc/sources/ovf/ovf-env.xml.tmpl b/doc/sources/ovf/ovf-env.xml.tmpl
deleted file mode 100644
index 8e255d43..00000000
--- a/doc/sources/ovf/ovf-env.xml.tmpl
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
- xsi:schemaLocation="http://schemas.dmtf.org/ovf/environment/1 ../dsp8027.xsd"
- oe:id="WebTier">
-
- <!-- This example references a local schema file; to validate against the online schema use:
- xsi:schemaLocation="http://schemas.dmtf.org/ovf/envelope/1 http://schemas.dmtf.org/ovf/envelope/1/dsp8027_1.0.0.xsd"
- -->
-
- <!-- Information about hypervisor platform -->
- <oe:PlatformSection>
- <Kind>ESX Server</Kind>
- <Version>3.0.1</Version>
- <Vendor>VMware, Inc.</Vendor>
- <Locale>en_US</Locale>
- </oe:PlatformSection>
-
- <!-- Properties defined for this virtual machine -->
- <PropertySection>
- <Property oe:key="instance-id" oe:value="@@instance-id@@"/>
- <Property oe:key="hostname" oe:value="@@hostname@@"/>
- <Property oe:key="user-data" oe:value="@@user-data@@"/>
- <Property oe:key="seedfrom" oe:value="@@seedfrom@@"/>
- </PropertySection>
-
-</Environment>
diff --git a/doc/sources/ovf/ovfdemo.pem b/doc/sources/ovf/ovfdemo.pem
deleted file mode 100644
index 5bc629c8..00000000
--- a/doc/sources/ovf/ovfdemo.pem
+++ /dev/null
@@ -1,27 +0,0 @@
------BEGIN RSA PRIVATE KEY-----
-MIIEpAIBAAKCAQEA1Zq/11Rky/uHdbKJewmEtDABGoSjIFyjoY04T5dFYUNwi0B6
-Km7b85Ylqmi/1KmR4Zvi++dj10XnusoWr/Zruv85hHilMZ9GozL2RD6jU/CaI+rB
-QkKSaR/CdmEHBbRimq6T2E9chMhJY0jNzeexJSKVR3QeLdbRZ64H7QGTHp7Ulodu
-vS9VwAWcpYbGgcM541fboFAiJOLICM1UPH4x5WDkTq/6yeElSmeiE2lHtESHhyMJ
-OSDB3YZ5hw1+4bY3sR+0vZ3VQWzpn1Lwg1X3AZA8yf+ZsmMZHhTFeCglsd8jlLHk
-Wudh5mJBkCuwPvRQk1gE5gSnTGti0TUqLIrNRwIDAQABAoIBAGZMrdIXxgp3VWHF
-9tfpMBgH4Y9stJ98HpXxh2V+4ih53v2iDKAj5c1cPH/HmQ/lgktVmDjikct43El2
-HbV6RBATyd0q1prUWEUy1ATNJvW9hmTrOlFchrg4EK8XOwC9angAYig3oeyp65PU
-O1SAwTMyw+GruARmHHYWQA9/MJF5yexrjBw00w7hnCsqjezU5YIYsXwgcz0Zw+Ix
-fDJcZFXF9X3Al7H3ZILW3PpfhcVl7WzkL47TIX4oB/ab2kltaTE90SZMXKVcLvTI
-6To2xJAnMUyasRfcGmvE8m0SqWqp66POAUDF2I8qu78inKH2u0rNtLQjyx5btF5K
-A39bPnkCgYEA8Joba3QFrbd0zPTP/DawRtTXzdIQcNjj4XEefxBN3Cw7MlCsfgDc
-xiAR703zqQ/IDkF00XrU5w7rmDga3Pv66JRzFDwvRVtGb6QV+lg7Ypd/6NI1G5AS
-0Qzneer2JytEpHoTqGH/vWcXzJRH2BfaPK/vEF4qhAXBqouz2DXn3EUCgYEA40ZU
-eDc4MmHOSuqoggSEDJ5NITgPbdkwOta0BmnBZ36M5vgqN8EfAZISKocLNlERDrRG
-MpBlQCulq3rpU7WYkx8hGE21f1YBo+vKkffI56ptO2lAp5iLflkSOypdiVN6OELW
-5SzkViohDnxKc6eshVycnNoxh6MqE6ugWSd6ahsCgYEA6t0kQwIgwPDCfYfEt2kT
-LjF675lNHzs5R8pKgLKDrpcmufjySJXC7UxE9ZrcbX3QRcozpIEI7vwrko3B+1Gm
-Hf87TtdpNYTh/vznz1btsVI+NCFuYheDprm4A9UOsDGWchAQvF/dayAFpVhhwVmX
-WYJMFWg2jGWqJTb2Oep1CRkCgYEAqzdkk1wmPe5o1w+I+sokIM1xFcGB/iNMrkbp
-QJuTVECGLcpvI6mdjjVY8ijiTX0s+ILfD2CwpnM7T8A83w9DbjJZYFHKla9ZdQBB
-j024UK6Xs9ZLGvdUv06i6We1J6t3u8K+2c/EBRWf6aXBAPgkhCOM6K2H+sL1A/Sb
-zA5trlkCgYArqJCk999mXQuMjNv6UTwzB0iYDjAFNgJdFmPMXlogD51r0HlGeCgD
-OEyup4FdIvX1ZYOCkKyieSngmPmY/P4lZBgQbM23FMp+oUkA+FlVW+WNVoXagUrh
-abatKtbZ+WZHHmgSoC8sAo5KnxM9O0R6fWlpoIhJTVoihkZYdmnpMg==
------END RSA PRIVATE KEY-----
diff --git a/doc/sources/ovf/user-data b/doc/sources/ovf/user-data
deleted file mode 100644
index bfac51fd..00000000
--- a/doc/sources/ovf/user-data
+++ /dev/null
@@ -1,7 +0,0 @@
-#cloud-config
-password: passw0rd
-chpasswd: { expire: False }
-ssh_pwauth: True
-
-ssh_authorized_keys:
- - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVmr/XVGTL+4d1sol7CYS0MAEahKMgXKOhjThPl0VhQ3CLQHoqbtvzliWqaL/UqZHhm+L752PXRee6yhav9mu6/zmEeKUxn0ajMvZEPqNT8Joj6sFCQpJpH8J2YQcFtGKarpPYT1yEyEljSM3N57ElIpVHdB4t1tFnrgftAZMentSWh269L1XABZylhsaBwznjV9ugUCIk4sgIzVQ8fjHlYOROr/rJ4SVKZ6ITaUe0RIeHIwk5IMHdhnmHDX7htjexH7S9ndVBbOmfUvCDVfcBkDzJ/5myYxkeFMV4KCWx3yOUseRa52HmYkGQK7A+9FCTWATmBKdMa2LRNSosis1H ubuntu@ovfdemo
diff --git a/doc/sources/smartos/README.rst b/doc/sources/smartos/README.rst
deleted file mode 100644
index e63f311f..00000000
--- a/doc/sources/smartos/README.rst
+++ /dev/null
@@ -1,149 +0,0 @@
-==================
-SmartOS Datasource
-==================
-
-This datasource finds metadata and user-data from the SmartOS virtualization
-platform (i.e. Joyent).
-
-Please see http://smartos.org/ for information about SmartOS.
-
-SmartOS Platform
-----------------
-The SmartOS virtualization platform provides meta-data to the instance via the
-second serial console. On Linux, this is /dev/ttyS1. The data is provided via
-a simple protocol: something queries for the data, the console responds with
-a status and, if "SUCCESS", returns the data until a single ".\n".
-
-New versions of the SmartOS tooling will include support for base64 encoded data.
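The query/response flow described above can be sketched as a small parser. This is an illustrative sketch only, assuming a file-like text stream (such as the serial console opened in text mode); the exact framing varies by SmartOS tooling version:

```python
import io

def read_mdata_response(stream):
    """Parse a metadata response in the shape described above:
    a status line, then (on "SUCCESS") payload lines terminated
    by a line containing a single '.'.  Sketch only -- real
    SmartOS framing details vary by tooling version."""
    status = stream.readline().strip()
    if status != "SUCCESS":
        return None
    lines = []
    for line in stream:
        if line == ".\n":
            break
        lines.append(line)
    return "".join(lines)

# Example: a canned response, as it might be read from /dev/ttyS1
reply = io.StringIO("SUCCESS\nroot_authorized_keys ssh-rsa AAAA\n.\n")
```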
-
-Meta-data channels
-------------------
-
-Cloud-init supports three modes of delivering user/meta-data via the flexible
-channels of SmartOS.
-
-* user-data is written to /var/db/user-data
- - per the spec, user-data is for consumption by the end-user, not provisioning
- tools
- - cloud-init entirely ignores this channel other than writing it to disk
- - removal of the meta-data key means that /var/db/user-data gets removed
- - a backup of previous meta-data is maintained as /var/db/user-data.<timestamp>
- - <timestamp> is the epoch time when cloud-init ran
-
-* user-script is written to /var/lib/cloud/scripts/per-boot/99_user_data
- - this is executed each boot
- - a link is created to /var/db/user-script
- - previous versions of the user-script are written to
- /var/lib/cloud/scripts/per-boot.backup/99_user_script.<timestamp>.
- - <timestamp> is the epoch time when cloud-init ran.
- - when the 'user-script' meta-data key goes missing, the user-script is
- removed from the file system, although a backup is maintained.
- - if the script does not start with a shebang line (i.e. #!<executable>),
- cloud-init will add a shebang of "#!/bin/bash"
-
-* cloud-init:user-data is treated as it is on other clouds.
- - this channel is used for delivering _all_ cloud-init instructions
- - scripts delivered over this channel must be well formed (i.e. must have
- a shebang)
-
-Cloud-init supports reading the traditional meta-data fields supported by the
-SmartOS tools. These are:
- * root_authorized_keys
- * hostname
- * enable_motd_sys_info
- * iptables_disable
-
-Note: At this time iptables_disable and enable_motd_sys_info are read but
- are not acted upon.
-
-disabling user-script
----------------------
-
-Cloud-init uses the per-boot script functionality to handle the execution
-of the user-script. If you want to prevent this use a cloud-config of:
-
-#cloud-config
-cloud_final_modules:
- - scripts-per-once
- - scripts-per-instance
- - scripts-user
- - ssh-authkey-fingerprints
- - keys-to-console
- - phone-home
- - final-message
- - power-state-change
-
-Alternatively, you can use the JSON patch method:
-#cloud-config-jsonp
-[
- { "op": "replace",
- "path": "/cloud_final_modules",
- "value": ["scripts-per-once",
- "scripts-per-instance",
- "scripts-user",
- "ssh-authkey-fingerprints",
- "keys-to-console",
- "phone-home",
- "final-message",
- "power-state-change"]
- }
-]
-
-The default cloud-config includes "scripts-per-boot". When you disable the
-per-boot script handling, cloud-init will still ingest and write the
-user-script, but will not execute it.
-
-Note: Unless you have an explicit use-case, it is recommended that you not
- disable the per-boot script execution, especially if you are using
- any of the life-cycle management features of SmartOS.
-
-The cloud-config needs to be delivered over the cloud-init:user-data channel
-in order for cloud-init to ingest it.
-
-base64
-------
-
-The following are exempt from base64 encoding, because they are
-provided directly by SmartOS:
- * root_authorized_keys
- * enable_motd_sys_info
- * iptables_disable
- * user-data
- * user-script
-
-This list can be changed through the system config variable 'no_base64_decode'.
-
-This means that user-script and user-data, as well as other values, can be
-base64 encoded. Since cloud-init can only guess whether or not something
-is truly base64 encoded, the following meta-data keys are hints as to whether
-or not to base64 decode something:
- * base64_all: Except for excluded keys, attempt to base64 decode
- the values. If a value fails to decode properly, it will be
- returned as plain text.
- * base64_keys: A comma-delimited list of which keys are base64 encoded.
- * b64-<key>:
- for any key, if there exists an entry in the metadata for 'b64-<key>',
- then 'b64-<key>' is expected to be a plaintext boolean indicating whether
- or not its value is encoded.
- * no_base64_decode: This is a configuration setting
- (i.e. /etc/cloud/cloud.cfg.d) that sets which values should not be
- base64 decoded.
-
-disk_aliases and ephemeral disk
--------------------------------
-By default, SmartOS only supports a single ephemeral disk. That disk is
-completely empty (un-partitioned with no filesystem).
-
-The SmartOS datasource has built-in cloud-config which instructs the
-'disk_setup' module to partition and format the ephemeral disk.
-
-You can then control disk_setup in two ways:
- 1. through the datasource config, you can change the 'alias' of
- 'ephemeral0' to reference another device. The default is:
- 'disk_aliases': {'ephemeral0': '/dev/vdb'},
- which means that anywhere disk_setup sees a device named 'ephemeral0',
- /dev/vdb will be substituted.
- 2. you can provide disk_setup or fs_setup data in user-data to overwrite
- the datasource's built-in values.
-
-See doc/examples/cloud-config-disk-setup.txt for information on disk_setup.
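As a sketch of option 2, user-data like the following would overwrite the datasource's built-in values (keys as documented for the disk_setup and fs_setup modules; the specific table type and filesystem here are just examples):

```yaml
#cloud-config
# 'ephemeral0' is resolved through disk_aliases (/dev/vdb by default).
disk_setup:
  ephemeral0:
    table_type: mbr
    layout: true
    overwrite: false

fs_setup:
  - label: ephemeral0
    filesystem: ext4
    device: ephemeral0.0
```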
diff --git a/doc/status.txt b/doc/status.txt
deleted file mode 100644
index 60993216..00000000
--- a/doc/status.txt
+++ /dev/null
@@ -1,53 +0,0 @@
-cloud-init will keep a 'status' file up to date for other applications
-wishing to use it to determine cloud-init status.
-
-It will manage 2 files:
- status.json
- result.json
-
-The files will be written to /var/lib/cloud/data/ .
-A symlink will be created in /run/cloud-init. The link from /run is to ensure
-that if the file exists, it is not stale for this boot.
-
-status.json's format is:
- {
- 'v1': {
- 'init': {
- 'errors': [], # list of strings for each error that occurred
- 'start': <float>, # time.time() that this stage started or None
- 'end': <float> # time.time() that this stage finished or None
- },
- 'init-local': {
- 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
- },
- 'modules-config': {
- 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
- },
- 'modules-final': {
- 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above)
- },
- 'datasource': string describing datasource found or None
- 'stage': string representing stage that is currently running
- ('init', 'init-local', 'modules-final', 'modules-config', None)
- if None, then no stage is running. Reader must read the start/end
- of each of the above stages to determine the state.
- }
-
-result.json's format is:
- {
- 'v1': {
- 'datasource': string describing the datasource found
- 'errors': [] # list of errors reported
- }
- }
-
-Thus, to determine if cloud-init is finished:
- fin = "/run/cloud-init/result.json"
- if os.path.exists(fin):
- ret = json.load(open(fin, "r"))
- if len(ret['v1']['errors']):
- print("Finished with errors:\n" + "\n".join(ret['v1']['errors']))
- else:
- print("Finished, no errors")
- else:
- print("Not Finished")
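The check above can be wrapped in a small helper (Python 3; the path and file format are as described in this document, though the default path is an assumption about the running system):

```python
import json
import os

def cloud_init_result(path="/run/cloud-init/result.json"):
    """Return (finished, errors) based on result.json.

    finished is False until cloud-init has written the file;
    errors is the list from ['v1']['errors'] (empty on success).
    """
    if not os.path.exists(path):
        return False, []
    with open(path) as fp:
        ret = json.load(fp)
    return True, ret["v1"]["errors"]
```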
diff --git a/doc/userdata.txt b/doc/userdata.txt
deleted file mode 100644
index cc691ae6..00000000
--- a/doc/userdata.txt
+++ /dev/null
@@ -1,79 +0,0 @@
-=== Overview ===
-Userdata is data provided by the entity that launches an instance.
-The cloud provider makes this data available to the instance in one
-way or another.
-
-In EC2, the data is provided by the user via the '--user-data' or
-'--user-data-file' argument to ec2-run-instances. The EC2 cloud makes the
-data available to the instance via its meta-data service at
-http://169.254.169.254/latest/user-data
-
-cloud-init can read this input and act on it in different ways.
-
-=== Input Formats ===
-cloud-init will download any user-data that it finds and cache it to the
-filesystem. However, certain types of user-data are handled specially.
-
- * Gzip Compressed Content
- content found to be gzip compressed will be uncompressed, and
- these rules applied to the uncompressed data
-
- * Mime Multi Part archive
- This list of rules is applied to each part of this multi-part file.
- Using a mime-multi-part file, the user can specify more than one
- type of data. For example, both a user-data script and a
- cloud-config type could be specified.
-
- * User-Data Script
- begins with: #! or Content-Type: text/x-shellscript
- script will be executed at "rc.local-like" level during first boot.
- rc.local-like means "very late in the boot sequence"
-
- * Include File
- begins with #include or Content-Type: text/x-include-url
- This content is an "include" file. The file contains a list of
- URLs, one per line. Each of the URLs will be read, and their content
- will be passed through this same set of rules. I.e., the content
- read from a URL can be gzipped, mime-multi-part, or plain text.
-
- * Include File Once
- begins with #include-once or Content-Type: text/x-include-once-url
- This content is an "include" file. The file contains a list of
- URLs, one per line. Each of the URLs will be read, and their content
- will be passed through this same set of rules. I.e., the content
- read from a URL can be gzipped, mime-multi-part, or plain text.
- Each URL will be downloaded only once per instance, and its
- contents cached for subsequent boots. This allows you to pass in
- one-time-use or expiring URLs.
-
- * Cloud Config Data
- begins with #cloud-config or Content-Type: text/cloud-config
-
- This content is "cloud-config" data. See the examples for a
- commented example of supported config formats.
-
- * Upstart Job
- begins with #upstart-job or Content-Type: text/upstart-job
-
- Content is placed into a file in /etc/init, and will be consumed
- by upstart as any other upstart job.
-
- * Cloud Boothook
- begins with #cloud-boothook or Content-Type: text/cloud-boothook
-
- This content is "boothook" data. It is stored in a file under
- /var/lib/cloud and then executed immediately.
-
- This is the earliest "hook" available. Note that there is no
- mechanism provided for running it only once. The boothook must take
- care of this itself. It is provided with the instance id in the
- environment variable "INSTANCE_ID". This could be used to
- provide 'once-per-instance' behavior.
-
-=== Examples ===
-There are examples in the examples subdirectory.
-Additionally, the 'tools' directory contains 'write-mime-multipart',
-which can be used to easily generate mime-multi-part files from a list
-of input files. That data can then be given to an instance.
-
-See 'write-mime-multipart --help' for usage.
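A mime-multi-part user-data file of the kind described above can also be built directly with Python's standard email package. This is a simplified stand-in for write-mime-multipart, not the tool itself, and the part contents are just examples:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_user_data(parts):
    """Combine (content, mime_subtype) pairs into a single
    multipart user-data payload.  Subtypes correspond to the
    input formats above, e.g. 'cloud-config', 'x-shellscript'."""
    combined = MIMEMultipart()
    for content, subtype in parts:
        combined.attach(MIMEText(content, subtype))
    return combined.as_string()

user_data = build_user_data([
    ("#cloud-config\npackages: [git]\n", "cloud-config"),
    ("#!/bin/sh\necho hello\n", "x-shellscript"),
])
```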
diff --git a/doc/var-lib-cloud.txt b/doc/var-lib-cloud.txt
deleted file mode 100644
index 7776d772..00000000
--- a/doc/var-lib-cloud.txt
+++ /dev/null
@@ -1,63 +0,0 @@
-/var/lib/cloud has the following structure:
- - scripts/
- per-instance/
- per-boot/
- per-once/
-
- files in these directories will be run by 'run-parts' once per
- instance, once per boot, and once per *ever*.
-
- - seed/
- <datasource>/
- sys-user-data
- user-data
- meta-data
-
- The 'seed/' directory allows you to seed a specific datasource
- For example, to seed the 'nocloud' datasource you would need to
- populate
- seed/nocloud/user-data
- seed/nocloud/meta-data
-
- - instance -> instances/i-abcde
- This is a symlink to the current instances/<instance-id> directory,
- created/updated on boot
- - instances/
- i-abcdefgh/
- scripts/ # all scripts in scripts are per-instance
- sem/
- config-puppet
- config-ssh
- set-hostname
- cloud-config.txt
- user-data.txt
- user-data.txt.i
- obj.pkl
- handlers/
- data/ # just a per-instance data location to be used
- boot-finished
- # this file indicates when "boot" is finished
- # it is created by the 'final_message' cloud-config
- datasource # a file containing the class and string of datasource
-
- - sem/
- scripts.once
- These are the cloud-specific semaphores. The only thing that
- would go here are files to mark that a "per-once" script
- has run.
-
- - handlers/
- "persistent" handlers (not per-instance). Same as handlers
- from user-data, just will be cross-instance id
-
- - data/
- this is a persistent data location. cloud-init won't really
- use it, but something else (a handler or script) could.
- previous-datasource
- previous-instance-id
- previous-hostname
-
-to clear out the current instance's data as if to force a "new run" on reboot
-do:
- ( cd /var/lib/cloud/instance && sudo rm -Rf * )
-
diff --git a/doc/vendordata.txt b/doc/vendordata.txt
deleted file mode 100644
index 9acbe41c..00000000
--- a/doc/vendordata.txt
+++ /dev/null
@@ -1,53 +0,0 @@
-=== Overview ===
-Vendordata is data provided by the entity that launches an instance
-(for example, the cloud provider). This data can be used to
-customize the image to fit into the particular environment it is
-being run in.
-
-Vendordata follows the same rules as user-data, with the following
-caveats:
- 1. Users have ultimate control over vendordata. They can disable its
- execution or disable handling of specific parts of multipart input.
- 2. By default it only runs on first boot
- 3. Vendordata can be disabled by the user. If the use of vendordata is
- required for the instance to run, then vendordata should not be
- used.
- 4. user supplied cloud-config is merged over cloud-config from
- vendordata.
-
-Users providing cloud-config data can use the '#cloud-config-jsonp' method
-to more finely control their modifications to the vendor supplied
-cloud-config. For example, if both vendor and user have provided
-'runcmd' then the default merge handler will cause the user's runcmd to
-override the one provided by the vendor. To append to 'runcmd', the user
-could better provide multipart input with a cloud-config-jsonp part like:
- #cloud-config-jsonp
- [{ "op": "add", "path": "/runcmd", "value": ["my", "command", "here"]}]
-
-Further, we strongly advise vendors to not 'be evil'. By evil, we
-mean any action that could compromise a system. Since users trust
-you, please take care to make sure that any vendordata is safe,
-atomic, idempotent and does not put your users at risk.
-
-=== Input Formats ===
-cloud-init will download and cache to filesystem any vendor-data that it
-finds. Vendordata is handled exactly like user-data. That means that
-the vendor can supply multipart input and have those parts acted on
-in the same way as user-data.
-
-The only differences are:
- * vendor-data scripts are stored in a different location than user-data
- scripts (to avoid namespace collision)
- * user can disable part handlers by cloud-config settings.
- For example, to disable handling of 'part-handlers' in vendor-data,
- the user could provide user-data like this:
- #cloud-config
- vendordata: {excluded: 'text/part-handler'}
-
-=== Examples ===
-There are examples in the examples subdirectory.
-Additionally, the 'tools' directory contains 'write-mime-multipart',
-which can be used to easily generate mime-multi-part files from a list
-of input files. That data can then be given to an instance.
-
-See 'write-mime-multipart --help' for usage.