From 812989bf35d18416b494c06943ecc74a1bddcc27 Mon Sep 17 00:00:00 2001 From: John Wilkins Date: Fri, 18 May 2012 13:54:51 -0700 Subject: doc: misc updates doc/architecture.rst - removed broken reference. doc/config-cluster - cleanup and added chef doc/install - Made generic to add Chef, OpenStack and libvert installs doc/init - Created light start | stop and health section doc/source - Removed $ from code examples. Trimmed paras to 80 char doc/images - Added preliminary diagram for Chef. doc/rec - Added reference to hardware. Added filesystem info. Signed-off-by: John Wilkins --- doc/architecture.rst | 2 +- doc/config-cluster/ceph-conf.rst | 61 +- doc/config-cluster/chef.rst | 89 + doc/config-cluster/demo-ceph.conf | 3 + doc/config-cluster/deploying-ceph-conf.rst | 20 +- .../deploying-ceph-with-mkcephfs.rst | 34 +- doc/config-cluster/file-system-recommendations.rst | 11 +- doc/config-cluster/index.rst | 1 + doc/images/chef.png | Bin 0 -> 44230 bytes doc/images/chef.svg | 17074 +++++++++++++++++++ doc/index.rst | 1 + doc/init/check-cluster-health.rst | 16 + doc/init/index.rst | 77 + doc/init/start-cluster.rst | 23 + doc/init/stop-cluster.rst | 9 + doc/install/chef.rst | 201 + doc/install/index.rst | 18 +- doc/install/openstack.rst | 3 + doc/rec/filesystem.rst | 69 +- doc/rec/hardware.rst | 12 +- doc/source/build-packages.rst | 44 +- doc/source/build-prerequisites.rst | 56 +- doc/source/building-ceph.rst | 50 +- doc/source/contributing.rst | 4 +- doc/source/downloading-a-ceph-release.rst | 5 +- 25 files changed, 17704 insertions(+), 179 deletions(-) create mode 100644 doc/config-cluster/chef.rst create mode 100644 doc/images/chef.png create mode 100644 doc/images/chef.svg create mode 100644 doc/init/check-cluster-health.rst create mode 100644 doc/init/index.rst create mode 100644 doc/init/start-cluster.rst create mode 100644 doc/init/stop-cluster.rst create mode 100644 doc/install/chef.rst create mode 100644 doc/install/openstack.rst (limited to 'doc') diff --git a/doc/architecture.rst b/doc/architecture.rst index ed8a5fe12f3..59c02ebe8ee 100644 --- a/doc/architecture.rst +++ b/doc/architecture.rst @@ -80,7 +80,7 @@ metadata to store file owner etc. Underneath, ``ceph-osd`` stores the data on a local filesystem. We recommend using Btrfs_, but any POSIX filesystem that has extended -attributes should work (see :ref:`xattr`). +attributes should work. .. _Btrfs: http://en.wikipedia.org/wiki/Btrfs diff --git a/doc/config-cluster/ceph-conf.rst b/doc/config-cluster/ceph-conf.rst index f88c5fdda46..e1237a66a7b 100644 --- a/doc/config-cluster/ceph-conf.rst +++ b/doc/config-cluster/ceph-conf.rst @@ -13,12 +13,12 @@ Each process or daemon looks for a ``ceph.conf`` file that provides their configuration settings. The default ``ceph.conf`` locations in sequential order include: - 1. ``$CEPH_CONF`` (*i.e.,* the path following - the ``$CEPH_CONF`` environment variable) - 2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument) - 3. ``/etc/ceph/ceph.conf`` - 4. ``~/.ceph/config`` - 5. ``./ceph.conf`` (*i.e.,* in the current working directory) +#. ``$CEPH_CONF`` (*i.e.,* the path following +the ``$CEPH_CONF`` environment variable) +#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument) +#. ``/etc/ceph/ceph.conf`` +#. ``~/.ceph/config`` +#. ``./ceph.conf`` (*i.e.,* in the current working directory) The ``ceph.conf`` file provides the settings for each Ceph daemon. 
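As a quick sketch of the lookup order above, either of the following points a
Ceph command at a configuration file in a non-default location; the
``/srv/ceph/ceph.conf`` path is hypothetical::

    # Checked first: the CEPH_CONF environment variable
    CEPH_CONF=/srv/ceph/ceph.conf ceph health

    # Checked next: the -c command line argument
    ceph -c /srv/ceph/ceph.conf health

If neither is given, the usual fallbacks apply: ``/etc/ceph/ceph.conf``,
``~/.ceph/config``, and finally ``./ceph.conf``.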
Once you have installed the Ceph packages on the OSD Cluster hosts, you need to create @@ -124,26 +124,24 @@ alphanumeric for monitors and metadata servers. :: ``host`` and ``addr`` Settings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The :doc:`/install/hardware-recommendations` section provides some hardware guidelines for -configuring the cluster. It is possible for a single host to run -multiple daemons. For example, a single host with multiple disks or -RAIDs may run one ``ceph-osd`` for each disk or RAID. Additionally, a -host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon on the -same host. Ideally, you will have a host for a particular type of -process. For example, one host may run ``ceph-osd`` daemons, another -host may run a ``ceph-mds`` daemon, and other hosts may run -``ceph-mon`` daemons. - -Each host has a name identified by the ``host`` setting, and a network -location (i.e., domain name or IP address) identified by the ``addr`` -setting. For example:: +The `Hardware Recommendations <../hardware-recommendations>`_ section +provides some hardware guidelines for configuring the cluster. It is possible +for a single host to run multiple daemons. For example, a single host with +multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID. +Additionally, a host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon +on the same host. Ideally, you will have a host for a particular type of +process. For example, one host may run ``ceph-osd`` daemons, another host +may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons. + +Each host has a name identified by the ``host`` setting, and a network location +(i.e., domain name or IP address) identified by the ``addr`` setting. For example:: [osd.1] host = hostNumber1 - addr = 150.140.130.120:1100 + addr = 150.140.130.120 [osd.2] host = hostNumber1 - addr = 150.140.130.120:1102 + addr = 150.140.130.120 Monitor Configuration @@ -155,7 +153,12 @@ algorithm can determine which version of the cluster map is the most accurate. .. note:: You may deploy Ceph with a single monitor, but if the instance fails, the lack of a monitor may interrupt data service availability. -Ceph monitors typically listen on port ``6789``. +Ceph monitors typically listen on port ``6789``. For example: + + [mon.a] + host = hostNumber1 + addr = 150.140.130.120:6789 + Example Configuration File ~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -168,13 +171,11 @@ Configuration File Deployment Options The most common way to deploy the ``ceph.conf`` file in a cluster is to have all hosts share the same configuration file. -You may create a ``ceph.conf`` file for each host if you wish, or -specify a particular ``ceph.conf`` file for a subset of hosts within -the cluster. However, using per-host ``ceph.conf`` configuration files -imposes a maintenance burden as the cluster grows. In a typical -deployment, an administrator creates a ``ceph.conf`` file on the -Administration host and then copies that file to each OSD Cluster -host. +You may create a ``ceph.conf`` file for each host if you wish, or specify a +particular ``ceph.conf`` file for a subset of hosts within the cluster. However, +using per-host ``ceph.conf``configuration files imposes a maintenance burden as the +cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file +on the Administration host and then copies that file to each OSD Cluster host. The current cluster deployment script, ``mkcephfs``, does not make copies of the -``ceph.conf``. You must copy the file manually. 
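Since ``mkcephfs`` does not copy ``ceph.conf`` for you, here is a minimal
sketch of pushing the file from the Administration host by hand; the hostnames
``osd-host-1`` and ``osd-host-2`` are hypothetical stand-ins for your OSD
Cluster hosts::

    for host in osd-host-1 osd-host-2; do
        scp /etc/ceph/ceph.conf $host:/etc/ceph/ceph.conf
    done

This assumes the common case where every host shares the same ``ceph.conf``.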
+``ceph.conf``. You must copy the file manually.
\ No newline at end of file
diff --git a/doc/config-cluster/chef.rst b/doc/config-cluster/chef.rst
new file mode 100644
index 00000000000..cd78e15314d
--- /dev/null
+++ b/doc/config-cluster/chef.rst
@@ -0,0 +1,89 @@
+=====================
+ Deploying with Chef
+=====================
+
+We use Chef cookbooks to deploy Ceph. See `Managing Cookbooks with Knife`_ for
+details on using ``knife``.
+
+Add a Cookbook Path
+-------------------
+Add the ``cookbook_path`` to your ``~/.chef/knife.rb`` configuration file. For example::
+
+    cookbook_path '/home/userId/.chef/ceph-cookbooks'
+
+Install Ceph Cookbooks
+----------------------
+To get the cookbooks for Ceph, clone them from git. ::
+
+    cd ~/.chef
+    git clone https://github.com/ceph/ceph-cookbooks.git
+    knife cookbook site upload parted btrfs parted
+
+Install Apache Cookbooks
+------------------------
+RADOS Gateway uses Apache 2, so you must install the Apache 2 cookbooks.
+To retrieve the Apache 2 cookbooks, execute the following::
+
+    cd ~/.chef/ceph-cookbooks
+    knife cookbook site download apache2
+
+The ``apache2-{version}.tar.gz`` archive will appear in your ``~/.chef`` directory.
+In the following example, replace ``{version}`` with the version of the Apache 2
+cookbook archive knife retrieved. Then, expand the archive and upload it to the
+Chef server. ::
+
+    tar xvf apache2-{version}.tar.gz
+    knife cookbook upload apache2
+
+Configure Chef
+--------------
+To configure Chef, you must specify an environment and a series of roles. You
+may use the Web UI or ``knife`` to perform these tasks.
+
+The following instructions demonstrate how to perform these tasks with ``knife``.
+
+Create a role file for the Ceph monitor. ::
+
+    cat >ceph-mon.rb <ceph-osd.rb < Deploy Config deploying-ceph-with-mkcephfs + Deploy with Chef
diff --git a/doc/images/chef.png b/doc/images/chef.png
new file mode 100644
index 00000000000..ceef50f836f
Binary files /dev/null and b/doc/images/chef.png differ
diff --git a/doc/images/chef.svg b/doc/images/chef.svg
new file mode 100644
index 00000000000..e7648be2393
--- /dev/null
+++ b/doc/images/chef.svg
@@ -0,0 +1,17074 @@
+ [17,074 lines of SVG markup omitted: the diagram shows three labeled boxes, "Chef Workstation", "Chef Server", and "Chef Nodes".]
diff --git a/doc/index.rst b/doc/index.rst
index 07726b4eb37..e1a588af72f 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -20,6 +20,7 @@ cluster to ensure that the storage hosts are running smoothly.
    start/index
    install/index
    config-cluster/index
+   init/index
    ops/index
    rec/index
    config
diff --git a/doc/init/check-cluster-health.rst b/doc/init/check-cluster-health.rst
new file mode 100644
index 00000000000..bdc72e37b46
--- /dev/null
+++ b/doc/init/check-cluster-health.rst
@@ -0,0 +1,16 @@
+=========================
+ Checking Cluster Health
+=========================
+When you start the Ceph cluster, it may take some time to reach a healthy
+state. You can check on the health of your Ceph cluster with the following::
+
+    ceph health
+
+If you specified non-default locations for your configuration or keyring::
+
+    ceph -c /path/to/conf -k /path/to/keyring health
+
+Upon starting the Ceph cluster, you will likely encounter a health
+warning such as ``HEALTH_WARN XXX num pgs stale``. Wait a few moments and check
+it again.
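Rather than re-running the command by hand, a small sketch of polling until the
warning clears (this assumes default configuration and keyring locations)::

    until ceph health | grep -q HEALTH_OK; do
        sleep 5
    done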
When your cluster is ready, ``ceph health`` should return a message +such as ``HEALTH_OK``. At that point, it is okay to begin using the cluster. \ No newline at end of file diff --git a/doc/init/index.rst b/doc/init/index.rst new file mode 100644 index 00000000000..f7a4f4be6b7 --- /dev/null +++ b/doc/init/index.rst @@ -0,0 +1,77 @@ +========================== + Start | Stop the Cluster +========================== +The ``ceph`` process provides functionality to **start**, **restart**, and +**stop** your Ceph cluster. Each time you execute ``ceph``, you must specify at +least one option and one command. You may also specify a daemon type or a daemon +instance. For most newer Debian/Ubuntu distributions, you may use the following +syntax:: + + sudo service ceph [options] [commands] [daemons] + +For older distributions, you may wish to use the ``/etc/init.d/ceph`` path:: + + sudo /etc/init.d/ceph [options] [commands] [daemons] + +The ``ceph`` options include: + ++-----------------+----------+-------------------------------------------------+ +| Option | Shortcut | Description | ++=================+==========+=================================================+ +| ``--verbose`` | ``-v`` | Use verbose logging. | ++-----------------+----------+-------------------------------------------------+ +| ``--valgrind`` | ``N/A`` | (Developers only) Use `Valgrind`_ debugging. | ++-----------------+----------+-------------------------------------------------+ +| ``--allhosts`` | ``-a`` | Execute on all hosts in ``ceph.conf.`` | +| | | Otherwise, it only executes on ``localhost``. | ++-----------------+----------+-------------------------------------------------+ +| ``--restart`` | ``N/A`` | Automatically restart daemon if it core dumps. | ++-----------------+----------+-------------------------------------------------+ +| ``--norestart`` | ``N/A`` | Don't restart a daemon if it core dumps. | ++-----------------+----------+-------------------------------------------------+ +| ``--conf`` | ``-c`` | Use an alternate configuration file. | ++-----------------+----------+-------------------------------------------------+ + +The ``ceph`` commands include: + ++------------------+------------------------------------------------------------+ +| Command | Description | ++==================+============================================================+ +| ``start`` | Start the daemon(s). | ++------------------+------------------------------------------------------------+ +| ``stop`` | Stop the daemon(s). | ++------------------+------------------------------------------------------------+ +| ``forcestop`` | Force the daemon(s) to stop. Same as ``kill -9`` | ++------------------+------------------------------------------------------------+ +| ``killall`` | Kill all daemons of a particular type. | ++------------------+------------------------------------------------------------+ +| ``cleanlogs`` | Cleans out the log directory. | ++------------------+------------------------------------------------------------+ +| ``cleanalllogs`` | Cleans out **everything** in the log directory. | ++------------------+------------------------------------------------------------+ + +The ``ceph`` daemons include the daemon types: + +- ``mon`` +- ``osd`` +- ``mds`` + +The ``ceph`` daemons may also specify a specific instance:: + + sudo /etc/init.d/ceph -a start osd.0 + +Where ``osd.0`` is the first OSD in the cluster. + +.. _Valgrind: http://www.valgrind.org/ + + +.. 
toctree:: + :hidden: + + start-cluster + Check Cluster Health + stop-cluster + +See `Operations`_ for more detailed information. + +.. _Operations: ../ops/index.html diff --git a/doc/init/start-cluster.rst b/doc/init/start-cluster.rst new file mode 100644 index 00000000000..39d83b3547a --- /dev/null +++ b/doc/init/start-cluster.rst @@ -0,0 +1,23 @@ +==================== + Starting a Cluster +==================== +To start your Ceph cluster, execute the ``ceph`` with the ``start`` command. +The usage may differ based upon your Linux distribution. For example, for most +newer Debian/Ubuntu distributions, you may use the following syntax:: + + sudo service ceph start [options] [start|restart] [daemonType|daemonID] + +For older distributions, you may wish to use the ``/etc/init.d/ceph`` path:: + + sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID] + +The following examples illustrates a typical use case:: + + sudo service ceph -a start + sudo /etc/init.d/ceph -a start + +Once you execute with ``-a``, Ceph should begin operating. You may also specify +a particular daemon instance to constrain the command to a single instance. For +example:: + + sudo /etc/init.d/ceph start osd.0 \ No newline at end of file diff --git a/doc/init/stop-cluster.rst b/doc/init/stop-cluster.rst new file mode 100644 index 00000000000..245e6db22c9 --- /dev/null +++ b/doc/init/stop-cluster.rst @@ -0,0 +1,9 @@ +==================== + Stopping a Cluster +==================== +To stop a cluster, execute one of the following:: + + sudo service ceph stop + sudo /etc/init.d/ceph -a stop + +Ceph should shut down the operating processes. \ No newline at end of file diff --git a/doc/install/chef.rst b/doc/install/chef.rst new file mode 100644 index 00000000000..583128eb851 --- /dev/null +++ b/doc/install/chef.rst @@ -0,0 +1,201 @@ +================= + Installing Chef +================= +Chef defines three types of entities: + +#. **Chef Server:** Manages Chef 'nodes." +#. **Chef Nodes:** Managed by the Chef Server. +#. **Chef Workstation:** Manages Chef. + +.. image:: ../images/chef.png + +See `Chef Architecture Introduction`_ for details. + +Identify a host(s) for your Chef server and Chef workstation. You may +install them on the same host. To configure Chef, do the following on +the host designated to operate as the Chef server: + +#. Install Ruby +#. Install Chef +#. Install the Chef Server +#. Install Knife +#. Install the Chef Client + +Once you have completed the foregoing steps, you may bootstrap the +Chef nodes with ``knife.`` + +Installing Ruby +--------------- +Chef requires you to install Ruby. Use the version applicable to your current +Linux distribution. :: + + sudo apt-get update + sudo apt-get install ruby + +Installing Chef +--------------- +.. important:: Before you install Chef, identify the host for your Chef + server, and its fully qualified URI. + +First, add Opscode packages to your APT configuration. +Replace ``{dist.name}`` with the name of your Linux distribution. +For example:: + + sudo tee /etc/apt/sources.list.d/chef.list << EOF + deb http://apt.opscode.com/ `lsb_release -cs`{dist.name}-0.10 main + deb-src http://apt.opscode.com/ `lsb_release -cs`{dist.name}-0.10 main + EOF + +Next, you must request keys so that APT can verify the packages. :: + + gpg --keyserver keys.gnupg.net --recv-keys 83EF826A + gpg --export packages@opscode.com | sudo apt-key add - + +To install Chef, execute ``update`` and ``install``. 
For example:: + + sudo apt-get update + sudo apt-get install chef + +Enter the fully qualified URI for your Chef server. For example:: + + http://127.0.0.1:4000 + +Installing Chef Server +---------------------- +Once you have installed Chef, you must install the Chef server. +See `Installing Chef Server on Debian or Ubuntu using Packages`_ for details. +For example:: + + sudo apt-get install chef-server + +The Chef server installer will prompt you to enter a temporary password. Enter +a temporary password (e.g., ``foo``) and proceed with the installation. + +.. tip:: As of this writing, we found a bug in the Chef installer. + When you press **Enter** to get to the password entry field, nothing happens. + We were able to get to the password entry field by pressing **ESC**. + +Once the installer finishes and activates the Chef server, you may enter the fully +qualified URI in a browser to launch the Chef web UI. For example:: + + http://127.0.0.1:4000 + +The Chef web UI will prompt you to enter the username and password. + +- **login:** ``admin`` +- **password:** ``foo`` + +Once you have entered the temporary password, the Chef web UI will prompt you +to enter a new password. + +Configuring Knife +----------------- +Once you complete the Chef server installation, install ``knife`` on the the +Chef server. If the Chef server is a remote host, use ``ssh`` to connect. :: + + ssh username@my-chef-server + +In the ``/home/username`` directory, create a hidden Chef directory. :: + + mkdir -p ~/.chef + +The server generates validation and web UI certificates with read/write +permissions for the user that installed the Chef server. Copy them from the +``/etc/chef`` directory to the ``~/.chef`` directory. Then, change their +ownership to the current user. :: + + sudo cp /etc/chef/validation.pem /etc/chef/webui.pem ~/.chef + sudo chown -R $USER ~/.chef + +From the current user's home directory, configure ``knife`` with an initial +API client. :: + + knife configure -i + +The configuration will prompt you for inputs. Answer accordingly: + +*Where should I put the config file? [~/.chef/knife.rb]* Press **Enter** +to accept the default value. + +*Please enter the chef server URL:* If you are installing the +client on the same host as the server, enter ``http://localhost:4000``. +Otherwise, enter an appropriate URL for the server. + +*Please enter a clientname for the new client:* Press **Enter** +to accept the default value. + +*Please enter the existing admin clientname:* Press **Enter** +to accept the default value. + +*Please enter the location of the existing admin client's private key:* +Override the default value so that it points to the ``.chef`` directory. +(*e.g.,* ``.chef/webui.pem``) + +*Please enter the validation clientname:* Press **Enter** to accept +the default value. + +*Please enter the location of the validation key:* Override the +default value so that it points to the ``.chef`` directory. +(*e.g.,* ``.chef/validation.pem``) + +*Please enter the path to a chef repository (or leave blank):* +Leave the entry field blank and press **Enter**. + + +Installing Chef Client +---------------------- +Install the Chef client on the Chef Workstation. If you use the same host for +the workstation and server, you may have performed a number of these steps. +See `Installing Chef Client on Ubuntu or Debian`_ + +Create a directory for the GPG key. :: + + sudo mkdir -p /etc/apt/trusted.gpg.d + +Add the GPG keys and update the index. 
:: + + gpg --keyserver keys.gnupg.net --recv-keys 83EF826A + gpg --export packages@opscode.com | sudo tee /etc/apt/trusted.gpg.d/opscode-keyring.gpg > /dev/null + +Update APT. :: + + sudo apt-get update + +Install the Opscode keyring to ensure the keyring stays up to date. :: + + sudo apt-get install opscode-keyring + +The ``chef-client`` requires a ``client.rb`` and a copy of the +``validation.pem`` file. Create a directory for them. :: + + sudo mkdir -p /etc/chef + +Create the ``client.rb`` and ``validation.pem`` for ``chef-client``. :: + + sudo knife configure client /etc/chef + +Bootstrapping Nodes +------------------- +The fastest way to deploy Chef on nodes is to use ``knife`` +to boostrap each node. Chef must have network access to each host +you intend to configure as a node (e.g., ``NAT``, ``ssh``). Replace +the ``{dist.vernum}`` with your distribution and version number. +For example:: + + knife bootstrap IP_ADDR -d {dist.vernum}-apt --sudo + +See `Knife Bootstrap`_ for details. + +Verify Nodes +------------ +Verify that you have setup all the hosts you want to use as +Chef nodes. :: + + knife node list + +A list of the nodes you've boostrapped should appear. + +.. _Chef Architecture Introduction: http://wiki.opscode.com/display/chef/Architecture+Introduction +.. _Installing Chef Client on Ubuntu or Debian: http://wiki.opscode.com/display/chef/Installing+Chef+Client+on+Ubuntu+or+Debian +.. _Installing Chef Server on Debian or Ubuntu using Packages: http://wiki.opscode.com/display/chef/Installing+Chef+Server+on+Debian+or+Ubuntu+using+Packages +.. _Knife Bootstrap: http://wiki.opscode.com/display/chef/Knife+Bootstrap diff --git a/doc/install/index.rst b/doc/install/index.rst index 61665826271..228c1c8d483 100644 --- a/doc/install/index.rst +++ b/doc/install/index.rst @@ -1,17 +1,25 @@ -================= - Installing Ceph -================= +============== + Installation +============== Storage clusters are the foundation of the Ceph system. Ceph storage hosts provide object storage. Clients access the Ceph storage cluster directly from an application (using ``librados``), over an object storage protocol such as Amazon S3 or OpenStack Swift (using ``radosgw``), or with a block device (using ``rbd``). To begin using Ceph, you must first set up a storage cluster. -The following sections provide guidance for configuring a storage cluster and -installing Ceph components: +You may deploy Ceph with our ``mkcephfs`` bootstrap utility for development +and test environments. For production environments, we recommend deploying +Ceph with the Chef cloud management tool. + +If your deployment uses OpenStack, you will also need to install OpenStack. + +The following sections provide guidance for installing components used with +Ceph: .. toctree:: Hardware Recommendations Installing Debian/Ubuntu Packages Installing RPM Packages + Installing Chef + Installing OpenStack diff --git a/doc/install/openstack.rst b/doc/install/openstack.rst new file mode 100644 index 00000000000..1e4640ca90f --- /dev/null +++ b/doc/install/openstack.rst @@ -0,0 +1,3 @@ +====================== + Installing OpenStack +====================== diff --git a/doc/rec/filesystem.rst b/doc/rec/filesystem.rst index ad302dff69c..efe2038720d 100644 --- a/doc/rec/filesystem.rst +++ b/doc/rec/filesystem.rst @@ -1,24 +1,45 @@ -======================================= - Underlying filesystem recommendations -======================================= - -.. 
todo:: Benefits of each, limits on non-btrfs ones, performance data when we have them, etc - - -.. _btrfs: - -Btrfs ------ - -.. todo:: what does btrfs give you (the journaling thing) - - -ext4/ext3 ---------- - -.. _xattr: - -Enabling extended attributes -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. todo:: how to enable xattr on ext4/3 +============ + Filesystem +============ +For details on file systems when configuring a cluster See +`Hard Disk and File System Recommendations`_ . + +.. tip:: We recommend configuring Ceph to use the ``XFS`` file system in + the near term, and ``btrfs`` in the long term once it is stable + enough for production. + +Before ``ext3``, ``ReiserFS`` was the only journaling file system available for +Linux. However, ``ext3`` doesn't provide Extended Attribute (XATTR) support. +While ``ext4`` provides XATTR support, it only allows XATTRs up to 4kb. The +4kb limit is not enough for RADOS GW ACLs, snapshots, and other features. As of +version 0.45, Ceph provides a ``leveldb`` feature for ``ext4`` file systems +that stores XATTRs in excess of 4kb in a ``leveldb`` database. + +The ``XFS`` and ``btrfs`` file systems provide numerous advantages in highly +scaled data storage environments when `compared`_ to ``ext3`` and ``ext4``. +Both ``XFS`` and ``btrfs`` are `journaling file systems`_, which means that +they are more robust when recovering from crashes, power outages, etc. These +filesystems journal all of the changes they will make before performing writes. + +``XFS`` was developed for Silicon Graphics, and is a mature and stable +filesystem. By contrast, ``btrfs`` is a relatively new file system that aims +to address the long-standing wishes of system administrators working with +large scale data storage environments. ``btrfs`` has some unique features +and advantages compared to other Linux filesystems. + +``btrfs`` is a `copy-on-write`_ filesystem. It supports file creation +timestamps and checksums that verify metadata integrity, so it can detect +bad copies of data and fix them with the good copies. The copy-on-write +capability means that ``btrfs`` can support snapshots that are writable. +``btrfs`` supports transparent compression and other features. + +``btrfs`` also incorporates multi-device management into the file system, +which enables you to support heterogeneous disk storage infrastructure, +data allocation policies. The community also aims to provide ``fsck``, +deduplication, and data encryption support in the future. This compelling +list of features makes ``btrfs`` the ideal choice for Ceph clusters. + +.. _copy-on-write: http://en.wikipedia.org/wiki/Copy-on-write +.. _Hard Disk and File System Recommendations: ../../config-cluster/file-system-recommendations +.. _compared: http://en.wikipedia.org/wiki/Comparison_of_file_systems +.. _journaling file systems: http://en.wikipedia.org/wiki/Journaling_file_system diff --git a/doc/rec/hardware.rst b/doc/rec/hardware.rst index bdde86f758b..4fe241357ed 100644 --- a/doc/rec/hardware.rst +++ b/doc/rec/hardware.rst @@ -1,7 +1,7 @@ -========================== - Hardware recommendations -========================== +========== + Hardware +========== -Discussing the hardware requirements for each daemon, the tradeoffs of -doing one ceph-osd per machine versus one per disk, and hardware-related -configuration options like journaling locations. +See `Hardware Recommendations`_ for details. + +.. 
_Hardware Recommendations: ../../install/hardware-recommendations diff --git a/doc/source/build-packages.rst b/doc/source/build-packages.rst index efda8ea40d8..7304f619f7e 100644 --- a/doc/source/build-packages.rst +++ b/doc/source/build-packages.rst @@ -1,51 +1,49 @@ -=================== -Build Ceph Packages -=================== - -To build packages, you must clone the `Ceph`_ repository. -You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu -or ``rpmbuild`` for the RPM Package Manager. - -.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of cores * 2. - For example, use ``-j4`` for a dual-core processor to accelerate the build. +===================== + Build Ceph Packages +===================== +To build packages, you must clone the `Ceph`_ repository. You can create +installation packages from the latest code using ``dpkg-buildpackage`` for +Debian/Ubuntu or ``rpmbuild`` for the RPM Package Manager. +.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of + cores * 2. For example, use ``-j4`` for a dual-core processor to accelerate + the build. Advanced Package Tool (APT) --------------------------- +To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the +`Ceph`_ repository, installed the `build prerequisites`_ and installed +``debhelper``:: -To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository, -installed the `build prerequisites`_ and installed ``debhelper``:: - - $ sudo apt-get install debhelper + sudo apt-get install debhelper Once you have installed debhelper, you can build the packages: - $ sudo dpkg-buildpackage + sudo dpkg-buildpackage For multi-processor CPUs use the ``-j`` option to accelerate the build. RPM Package Manager ------------------- +To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository, +installed the `build prerequisites`_ and installed ``rpm-build`` and +``rpmdevtools``:: -To create ``.prm`` packages, ensure that you have cloned the `Ceph`_ repository, -installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``:: - - $ yum install rpm-build rpmdevtools + yum install rpm-build rpmdevtools Once you have installed the tools, setup an RPM compilation environment:: - $ rpmdev-setuptree + rpmdev-setuptree Fetch the source tarball for the RPM compilation environment:: - $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-.tar.gz + wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-.tar.gz Build the RPM packages:: - $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-.tar.gz + rpmbuild -tb ~/rpmbuild/SOURCES/ceph-.tar.gz For multi-processor CPUs use the ``-j`` option to accelerate the build. - .. _build prerequisites: ../build-prerequisites .. _Ceph: ../cloning-the-ceph-source-code-repository diff --git a/doc/source/build-prerequisites.rst b/doc/source/build-prerequisites.rst index 5d854836ebf..7c62ea86282 100644 --- a/doc/source/build-prerequisites.rst +++ b/doc/source/build-prerequisites.rst @@ -1,15 +1,16 @@ -=================== -Build Prerequisites -=================== +===================== + Build Prerequisites +===================== +Before you can build Ceph source code or Ceph documentation, you need to install +several libraries and tools. -Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools. - -.. 
tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution. +.. tip:: Check this section to see if there are specific prerequisites for your + Linux/Unix distribution. Prerequisites for Building Ceph Source Code =========================================== -Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts -depend on the following: +Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. +Ceph build scripts depend on the following: - ``autotools-dev`` - ``autoconf`` @@ -32,13 +33,15 @@ depend on the following: - ``pkg-config`` - ``libcurl4-gnutls-dev`` -On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: +On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't +installed on your host. :: - $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev + sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev -On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. :: +On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't +installed on your host. :: - $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev + aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev Ubuntu Requirements @@ -52,16 +55,17 @@ Ubuntu Requirements - ``libgdata-common`` - ``libgdata13`` -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: +Execute ``sudo apt-get install`` for each dependency that isn't installed on +your host. :: - $ sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13 + sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13 Debian ------ Alternatively, you may also install:: - $ aptitude install fakeroot dpkg-dev - $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev + aptitude install fakeroot dpkg-dev + aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev openSUSE 11.2 (and later) ------------------------- @@ -72,16 +76,18 @@ openSUSE 11.2 (and later) - ``libopenssl-devel`` - ``fuse-devel`` (optional) -Execute ``zypper install`` for each dependency that isn't installed on your host. :: +Execute ``zypper install`` for each dependency that isn't installed on your +host. :: - $zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel + zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel Prerequisites for Building Ceph Documentation ============================================= Ceph utilizes Python's Sphinx documentation tool. For details on -the Sphinx documentation tool, refer to: `Sphinx `_ -Follow the directions at `Sphinx 1.1.3 `_ -to install Sphinx. 
To run Sphinx, with ``admin/build-doc``, at least the following are required: +the Sphinx documentation tool, refer to: `Sphinx`_ +Follow the directions at `Sphinx 1.1.3`_ +to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the +following are required: - ``python-dev`` - ``python-pip`` @@ -92,6 +98,10 @@ to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the followi - ``ditaa`` - ``graphviz`` -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: +Execute ``sudo apt-get install`` for each dependency that isn't installed on +your host. :: + + sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz - $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz +.. _Sphinx: http://sphinx.pocoo.org +.. _Sphinx 1.1.3: http://pypi.python.org/pypi/Sphinx diff --git a/doc/source/building-ceph.rst b/doc/source/building-ceph.rst index e2d0af07690..e6337f18175 100644 --- a/doc/source/building-ceph.rst +++ b/doc/source/building-ceph.rst @@ -1,38 +1,46 @@ -============= -Building Ceph -============= - +=============== + Building Ceph +=============== Ceph provides build scripts for source code and for documentation. Building Ceph -============= -Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following:: +------------- +Ceph provides ``automake`` and ``configure`` scripts to streamline the build +process. To build Ceph, navigate to your cloned Ceph repository and execute the +following:: - $ cd ceph - $ ./autogen.sh - $ ./configure - $ make + cd ceph + ./autogen.sh + ./configure + make -You can use ``make -j`` to execute multiple jobs depending upon your system. For example:: +You can use ``make -j`` to execute multiple jobs depending upon your system. For +example:: - $ make -j4 + make -j4 To install Ceph locally, you may also use:: - $ make install + sudo make install -If you install Ceph locally, ``make`` will place the executables in ``usr/local/bin``. -You may add the ``ceph.conf`` file to the ``usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory. +If you install Ceph locally, ``make`` will place the executables in +``usr/local/bin``. You may add the ``ceph.conf`` file to the ``usr/local/bin`` +directory to run an evaluation environment of Ceph from a single directory. Building Ceph Documentation -=========================== -Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx `_. To build the Ceph documentaiton, navigate to the Ceph repository and execute the build script:: +--------------------------- +Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx +documentation tool, refer to: `Sphinx`_. To build the Ceph documentaiton, +navigate to the Ceph repository and execute the build script:: - $ cd ceph - $ admin/build-doc + cd ceph + admin/build-doc Once you build the documentation set, you may navigate to the source directory to view it:: - $ cd build-doc/output + cd build-doc/output + +There should be an ``/html`` directory and a ``/man`` directory containing +documentation in HTML and manpage formats respectively. -There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively. +.. 
_Sphinx: http://sphinx.pocoo.org \ No newline at end of file diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst index ec6a92dc5dd..ef63f26d9cf 100644 --- a/doc/source/contributing.rst +++ b/doc/source/contributing.rst @@ -10,12 +10,12 @@ Generate SSH Keys You must generate SSH keys for github to clone the Ceph repository. If you do not have SSH keys for ``github``, execute:: - $ ssh-keygen -d + ssh-keygen -d Get the key to add to your ``github`` account (the following example assumes you used the default file path):: - $ cat .ssh/id_dsa.pub + cat .ssh/id_dsa.pub Copy the public key. diff --git a/doc/source/downloading-a-ceph-release.rst b/doc/source/downloading-a-ceph-release.rst index 2193aa69814..816795ab814 100644 --- a/doc/source/downloading-a-ceph-release.rst +++ b/doc/source/downloading-a-ceph-release.rst @@ -5,4 +5,7 @@ As Ceph development progresses, the Ceph team releases new versions of the source code. You may download source code tarballs for Ceph releases here: -`Ceph Release Tarballs `_ +`Ceph Release Tarballs`_ + + +.. _Ceph Release Tarballs: http://ceph.com/download/ \ No newline at end of file -- cgit v1.2.1