author     John Wilkins <john.wilkins@dreamhost.com>    2012-05-18 13:54:51 -0700
committer  Sage Weil <sage@inktank.com>                 2012-05-21 16:39:15 -0700
commit     812989bf35d18416b494c06943ecc74a1bddcc27 (patch)
tree       0603fcb30ffef254964626100b382cd916d585b6 /doc/config-cluster
parent     3a2dc969fffe6ee6f3a51583444998aba6449b7a (diff)
doc: misc updates
doc/architecture.rst - removed broken reference.
doc/config-cluster - cleanup and added chef
doc/install - Made generic to add Chef, OpenStack and libvirt installs
doc/init - Created light start | stop and health section
doc/source - Removed $ from code examples. Trimmed paras to 80 char
doc/images - Added preliminary diagram for Chef.
doc/rec - Added reference to hardware. Added filesystem info.

Signed-off-by: John Wilkins <john.wilkins@dreamhost.com>
Diffstat (limited to 'doc/config-cluster')
-rw-r--r--  doc/config-cluster/ceph-conf.rst                     61
-rw-r--r--  doc/config-cluster/chef.rst                          89
-rw-r--r--  doc/config-cluster/demo-ceph.conf                     3
-rw-r--r--  doc/config-cluster/deploying-ceph-conf.rst           20
-rw-r--r--  doc/config-cluster/deploying-ceph-with-mkcephfs.rst  34
-rw-r--r--  doc/config-cluster/file-system-recommendations.rst   11
-rw-r--r--  doc/config-cluster/index.rst                          1

7 files changed, 146 insertions(+), 73 deletions(-)
diff --git a/doc/config-cluster/ceph-conf.rst b/doc/config-cluster/ceph-conf.rst
index f88c5fdda46..e1237a66a7b 100644
--- a/doc/config-cluster/ceph-conf.rst
+++ b/doc/config-cluster/ceph-conf.rst
@@ -13,12 +13,12 @@ Each process or daemon looks for a ``ceph.conf`` file that provides their
configuration settings. The default ``ceph.conf`` locations in sequential
order include:
- 1. ``$CEPH_CONF`` (*i.e.,* the path following
- the ``$CEPH_CONF`` environment variable)
- 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
- 3. ``/etc/ceph/ceph.conf``
- 4. ``~/.ceph/config``
- 5. ``./ceph.conf`` (*i.e.,* in the current working directory)
+#. ``$CEPH_CONF`` (*i.e.,* the path following
+   the ``$CEPH_CONF`` environment variable)
+#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
+#. ``/etc/ceph/ceph.conf``
+#. ``~/.ceph/config``
+#. ``./ceph.conf`` (*i.e.,* in the current working directory)
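+
+For example, to point the Ceph tools at a configuration file in a non-default
+location (the path below is hypothetical), either export the variable or pass
+the ``-c`` argument::
+
+ export CEPH_CONF=/opt/ceph/ceph.conf
+ ceph -c /opt/ceph/ceph.conf health
+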
The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
have installed the Ceph packages on the OSD Cluster hosts, you need to create
@@ -124,26 +124,24 @@ alphanumeric for monitors and metadata servers. ::
``host`` and ``addr`` Settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The :doc:`/install/hardware-recommendations` section provides some hardware guidelines for
-configuring the cluster. It is possible for a single host to run
-multiple daemons. For example, a single host with multiple disks or
-RAIDs may run one ``ceph-osd`` for each disk or RAID. Additionally, a
-host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon on the
-same host. Ideally, you will have a host for a particular type of
-process. For example, one host may run ``ceph-osd`` daemons, another
-host may run a ``ceph-mds`` daemon, and other hosts may run
-``ceph-mon`` daemons.
-
-Each host has a name identified by the ``host`` setting, and a network
-location (i.e., domain name or IP address) identified by the ``addr``
-setting. For example::
+The `Hardware Recommendations <../hardware-recommendations>`_ section
+provides some hardware guidelines for configuring the cluster. It is possible
+for a single host to run multiple daemons. For example, a single host with
+multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
+Additionally, a single host may run both a ``ceph-mon`` and a ``ceph-osd``
+daemon. Ideally, you will dedicate a host to a particular type of process. For
+example, one host may run ``ceph-osd`` daemons, another host
+may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
+
+Each host has a name identified by the ``host`` setting, and a network location
+(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
[osd.1]
host = hostNumber1
- addr = 150.140.130.120:1100
+ addr = 150.140.130.120
[osd.2]
host = hostNumber1
- addr = 150.140.130.120:1102
+ addr = 150.140.130.120
Monitor Configuration
@@ -155,7 +153,12 @@ algorithm can determine which version of the cluster map is the most accurate.
.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
the lack of a monitor may interrupt data service availability.
-Ceph monitors typically listen on port ``6789``.
+Ceph monitors typically listen on port ``6789``. For example::
+
+ [mon.a]
+ host = hostNumber1
+ addr = 150.140.130.120:6789
+
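+To avoid a single point of failure, run an odd number of monitors (*e.g.,*
+three) so that a majority can agree on the current cluster map. For example,
+extending the example above with hypothetical hosts and addresses::
+
+ [mon.b]
+ host = hostNumber2
+ addr = 150.140.130.121:6789
+
+ [mon.c]
+ host = hostNumber3
+ addr = 150.140.130.122:6789
+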
Example Configuration File
~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -168,13 +171,11 @@ Configuration File Deployment Options
The most common way to deploy the ``ceph.conf`` file in a cluster is to have
all hosts share the same configuration file.
-You may create a ``ceph.conf`` file for each host if you wish, or
-specify a particular ``ceph.conf`` file for a subset of hosts within
-the cluster. However, using per-host ``ceph.conf`` configuration files
-imposes a maintenance burden as the cluster grows. In a typical
-deployment, an administrator creates a ``ceph.conf`` file on the
-Administration host and then copies that file to each OSD Cluster
-host.
+You may create a ``ceph.conf`` file for each host if you wish, or specify a
+particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
+using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
+cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
+on the Administration host and then copies that file to each OSD Cluster host.
The current cluster deployment script, ``mkcephfs``, does not make copies of the
-``ceph.conf``. You must copy the file manually.
+``ceph.conf``. You must copy the file manually. \ No newline at end of file
diff --git a/doc/config-cluster/chef.rst b/doc/config-cluster/chef.rst
new file mode 100644
index 00000000000..cd78e15314d
--- /dev/null
+++ b/doc/config-cluster/chef.rst
@@ -0,0 +1,89 @@
+=====================
+ Deploying with Chef
+=====================
+
+We use Chef cookbooks to deploy Ceph. See `Managing Cookbooks with Knife`_ for details
+on using ``knife``.
+
+Add a Cookbook Path
+-------------------
+Add the ``cookbook_path`` to your ``~/.chef/knife.rb`` configuration file. For example::
+
+ cookbook_path '/home/userId/.chef/ceph-cookbooks'
+
+Install Ceph Cookbooks
+----------------------
+To get the cookbooks for Ceph, clone them from git. ::
+
+ cd ~/.chef
+ git clone https://github.com/ceph/ceph-cookbooks.git
+ knife cookbook upload parted btrfs
+
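+To verify that the cookbooks were uploaded to the Chef server, you may list
+them::
+
+ knife cookbook list
+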
+Install Apache Cookbooks
+------------------------
+RADOS Gateway uses Apache 2. So you must install the Apache 2 cookbooks.
+To retrieve the Apache 2 cookbooks, execute the following::
+
+ cd ~/.chef/ceph-cookbooks
+ knife cookbook site download apache2
+
+The ``apache2-{version}.tar.gz`` archive will appear in your working directory
+(*i.e.,* ``~/.chef/ceph-cookbooks``). In the following example, replace
+``{version}`` with the version of the Apache 2 cookbook archive ``knife``
+retrieved. Then, expand the archive and upload it to the Chef server. ::
+
+ tar xvf apache2-{version}.tar.gz
+ knife cookbook upload apache2
+
+Configure Chef
+--------------
+To configure Chef, you must specify an environment and a series of roles. You
+may use the Web UI or ``knife`` to perform these tasks.
+
+The following instructions demonstrate how to perform these tasks with ``knife``.
+
+
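+For example, you might create an environment with ``knife``, where
+``{envname}`` is a placeholder for your environment name::
+
+ knife environment create {envname}
+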
+Create a role file for the Ceph monitor. ::
+
+ cat >ceph-mon.rb <<EOF
+ name "ceph-mon"
+ description "Ceph monitor server"
+ run_list(
+ 'recipe[ceph::single_mon]'
+ )
+ EOF
+
+Create a role file for the OSDs. ::
+
+ cat >ceph-osd.rb <<EOF
+ name "ceph-osd"
+ description "Ceph object store"
+ run_list(
+ 'recipe[ceph::bootstrap_osd]'
+ )
+ EOF
+
+Add the roles to Chef using ``knife``. ::
+
+ knife role from file ceph-mon.rb ceph-osd.rb
+
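+To confirm that the roles are on the Chef server, you may list and inspect
+them::
+
+ knife role list
+ knife role show ceph-mon
+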
+You may also perform the same tasks with the command line and a ``vim`` editor.
+Set an ``EDITOR`` environment variable. ::
+
+ export EDITOR=vi
+
+Then execute::
+
+ knife role create {rolename}
+
+The ``vim`` editor opens with a JSON object, and you may edit the settings and
+save the JSON file.
+
+Finally, configure the nodes. ::
+
+ knife node edit {nodename}
+
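+For example, a monitor node's ``run_list`` might include the role created
+earlier (a sketch; the surrounding node JSON is abbreviated)::
+
+ {
+   "run_list": [
+     "role[ceph-mon]"
+   ]
+ }
+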
+
+
+
+.. _Managing Cookbooks with Knife: http://wiki.opscode.com/display/chef/Managing+Cookbooks+With+Knife
diff --git a/doc/config-cluster/demo-ceph.conf b/doc/config-cluster/demo-ceph.conf
index 06821fc3d4d..65a7ea5a124 100644
--- a/doc/config-cluster/demo-ceph.conf
+++ b/doc/config-cluster/demo-ceph.conf
@@ -1,4 +1,5 @@
[global]
+ ; use cephx or none
auth supported = cephx
keyring = /etc/ceph/$name.keyring
@@ -11,6 +12,8 @@
osd data = /srv/osd.$id
osd journal = /srv/osd.$id.journal
osd journal size = 1000
+ ; uncomment the following line if you are mounting with ext4
+ ; filestore xattr use omap = true
[mon.a]
host = myserver01
diff --git a/doc/config-cluster/deploying-ceph-conf.rst b/doc/config-cluster/deploying-ceph-conf.rst
index 183967ded5f..30259b11633 100644
--- a/doc/config-cluster/deploying-ceph-conf.rst
+++ b/doc/config-cluster/deploying-ceph-conf.rst
@@ -1,10 +1,11 @@
==============================
Deploying Ceph Configuration
==============================
-Ceph's current deployment script does not copy the configuration file you
+Ceph's ``mkcephfs`` deployment script does not copy the configuration file you
created from the Administration host to the OSD Cluster hosts. Copy the
configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
-from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host.
+from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host
+if you are using ``mkcephfs`` to deploy Ceph.
::
@@ -12,18 +13,9 @@ from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host.
ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
-
-The current deployment script doesn't copy the start services. Copy the ``start``
-services from the Administration host to each OSD Cluster host. ::
-
- ssh myserver01 sudo /etc/init.d/ceph start
- ssh myserver02 sudo /etc/init.d/ceph start
- ssh myserver03 sudo /etc/init.d/ceph start
-
-The current deployment script may not create the default server directories. Create
-server directories for each instance of a Ceph daemon.
-
-Using the exemplary ``ceph.conf`` file, you would perform the following:
+The ``mkcephfs`` deployment script does not create the default server
+directories. Create server directories for each instance of a Ceph daemon.
+Using the example ``ceph.conf`` file, you would perform the following:
On ``myserver01``::
diff --git a/doc/config-cluster/deploying-ceph-with-mkcephfs.rst b/doc/config-cluster/deploying-ceph-with-mkcephfs.rst
index 35ccbbdbeda..d428f4369b4 100644
--- a/doc/config-cluster/deploying-ceph-with-mkcephfs.rst
+++ b/doc/config-cluster/deploying-ceph-with-mkcephfs.rst
@@ -1,31 +1,17 @@
-================================
-Deploying Ceph with ``mkcephfs``
-================================
+==================================
+ Deploying Ceph with ``mkcephfs``
+==================================
Once you have copied your Ceph Configuration to the OSD Cluster hosts,
you may deploy Ceph with the ``mkcephfs`` script.
-.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
+.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more
+ complex operations, such as upgrades.
-For production environments, you will also be able to deploy Ceph using Chef cookbooks (coming soon!).
-
-To run ``mkcephfs``, execute the following::
+For production environments, you may deploy Ceph using Chef cookbooks. To run
+``mkcephfs``, execute the following::
- $ mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
+ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
-The script adds an admin key to the ``ceph.keyring``, which is analogous to a root password.
-
-The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password.
-
-To start the cluster, execute the following::
-
- /etc/init.d/ceph -a start
-
-Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
-
- ceph health
-
-If you specified non-default locations for your configuration or keyring::
-
- ceph -c /path/to/conf -k /path/to/keyring health
-
+The script adds an admin key to the ``ceph.keyring``, which is analogous to a
+root password.
diff --git a/doc/config-cluster/file-system-recommendations.rst b/doc/config-cluster/file-system-recommendations.rst
index 6ea4950286e..5049f389250 100644
--- a/doc/config-cluster/file-system-recommendations.rst
+++ b/doc/config-cluster/file-system-recommendations.rst
@@ -1,6 +1,6 @@
-=========================================
-Hard Disk and File System Recommendations
-=========================================
+===========================================
+ Hard Disk and File System Recommendations
+===========================================
Ceph aims for data safety, which means that when the application receives notice
that data was written to the disk, that data was actually written to the disk.
@@ -9,7 +9,7 @@ disk. Newer kernels should work fine.
Use ``hdparm`` to disable write caching on the hard disk::
- $ hdparm -W 0 /dev/hda 0
+ hdparm -W 0 /dev/hda 0
Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
@@ -26,7 +26,8 @@ File system candidates for Ceph include B tree and B+ tree file systems such as:
- ``btrfs``
- ``XFS``
-If you are using ``ext4``, enable XATTRs. ::
+If you are using ``ext4``, mount your file system with the ``user_xattr``
+option to enable XATTRs. You must also add the following line to the
+``[osd]`` section of your ``ceph.conf`` file. ::
filestore xattr use omap = true
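+
+For example, a hypothetical mount command that enables XATTRs on an ``ext4``
+file system (the device and mount point below are placeholders)::
+
+ sudo mount -o user_xattr /dev/hda1 /srv/osd.1
+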
diff --git a/doc/config-cluster/index.rst b/doc/config-cluster/index.rst
index a36122e98c4..271d89ba48a 100644
--- a/doc/config-cluster/index.rst
+++ b/doc/config-cluster/index.rst
@@ -27,3 +27,4 @@ instance (a single context).
Configuration <ceph-conf>
Deploy Config <deploying-ceph-conf>
deploying-ceph-with-mkcephfs
+ Deploy with Chef <chef>