-rw-r--r--  doc/index.rst                                |   1
-rw-r--r--  doc/install/index.rst                        |  23
-rw-r--r--  doc/rados/operations/operating.rst           | 238
-rw-r--r--  doc/start/hardware-recommendations.rst (renamed from doc/install/hardware-recommendations.rst) | 0
-rw-r--r--  doc/start/index.rst                          |  39
-rw-r--r--  doc/start/intro.rst                          |  70
-rw-r--r--  doc/start/os-recommendations.rst (renamed from doc/install/os-recommendations.rst) | 33
-rw-r--r--  doc/start/quick-ceph-deploy.rst              | 390
-rw-r--r--  doc/start/quick-cephfs.rst                   |   4
-rw-r--r--  doc/start/quick-rbd.rst                      |  56
-rw-r--r--  doc/start/quick-rgw.rst                      |   4
-rw-r--r--  doc/start/quick-start-preflight.rst          | 195
12 files changed, 661 insertions, 392 deletions
diff --git a/doc/index.rst b/doc/index.rst
index 8bf5340b2f6..4068be599e5 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -90,6 +90,7 @@ about Ceph, see our `Architecture`_ section.
:maxdepth: 1
:hidden:
+ start/intro
start/index
install/index
rados/index
diff --git a/doc/install/index.rst b/doc/install/index.rst
index 347b6ae9ac2..013bd9cb546 100644
--- a/doc/install/index.rst
+++ b/doc/install/index.rst
@@ -1,6 +1,6 @@
-==============
- Installation
-==============
+=======================
+ Installation (Manual)
+=======================
The Ceph Object Store is the foundation of all Ceph clusters, and it consists
primarily of two types of daemons: Object Storage Daemons (OSDs) and monitors.
@@ -13,22 +13,7 @@ by focusing first on the object storage cluster.
.. raw:: html
- <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
-
-To begin using Ceph in production, you should review our hardware
-recommendations and operating system recommendations. Many of the
-frequently-asked questions in our mailing list involve hardware-related
-questions and how to install Ceph on various distributions.
-
-.. toctree::
- :maxdepth: 2
-
- Hardware Recommendations <hardware-recommendations>
- OS Recommendations <os-recommendations>
-
-.. raw:: html
-
- </td><td><h3>Installation</h3>
+ <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Installation</h3>
If you are deploying a Ceph cluster (that is, not developing Ceph),
install Ceph using our stable release packages. For testing, you
diff --git a/doc/rados/operations/operating.rst b/doc/rados/operations/operating.rst
index 9942ea3cabf..0e4be40ce4a 100644
--- a/doc/rados/operations/operating.rst
+++ b/doc/rados/operations/operating.rst
@@ -7,11 +7,11 @@
Running Ceph with Upstart
=========================
-When deploying Ceph Cuttlefish and beyond with ``ceph-deploy``, you may start
-and stop Ceph daemons on a :term:`Ceph Node` using the event-based `Upstart`_.
-Upstart does not require you to define daemon instances in the Ceph configuration
-file (although, they are still required for ``sysvinit`` should you choose to
-use it).
+When deploying Ceph Cuttlefish and beyond with ``ceph-deploy`` on Debian/Ubuntu
+distributions, you may start and stop Ceph daemons on a :term:`Ceph Node` using
+the event-based `Upstart`_. Upstart does not require you to define daemon
+instances in the Ceph configuration file (although, they are still required for
+``sysvinit`` should you choose to use it).
To list the Ceph Upstart jobs and instances on a node, execute::
@@ -19,6 +19,7 @@ To list the Ceph Upstart jobs and instances on a node, execute::
See `initctl`_ for additional details.
+
Starting all Daemons
--------------------
@@ -93,29 +94,20 @@ For example::
sudo start ceph-mds id=ceph-server
-
.. index:: Ceph service; sysvinit; operating a cluster
-Running Ceph as a Service
-=========================
+Running Ceph
+============
-When you deploy Ceph Argonaut or Bobtail with ``mkcephfs``, use the
-service or traditional sysvinit.
+Each time you **start**, **restart**, or **stop** Ceph daemons (or your
+entire cluster), you must specify at least one option and one command. You may
+also specify a daemon type or a daemon instance. ::
-The ``ceph`` service provides functionality to **start**, **restart**, and
-**stop** your Ceph cluster. Each time you execute ``ceph`` processes, you
-must specify at least one option and one command. You may also specify a daemon
-type or a daemon instance. For most newer Debian/Ubuntu distributions, you may
-use the following syntax::
+ {commandline} [options] [commands] [daemons]
- sudo service ceph [options] [commands] [daemons]
-For older distributions, you may wish to use the ``/etc/init.d/ceph`` path::
-
- sudo /etc/init.d/ceph [options] [commands] [daemons]
-
-The ``ceph`` service options include:
+The ``ceph`` options include:
+-----------------+----------+-------------------------------------------------+
| Option | Shortcut | Description |
@@ -134,7 +126,7 @@ The ``ceph`` service options include:
| ``--conf`` | ``-c`` | Use an alternate configuration file. |
+-----------------+----------+-------------------------------------------------+
-The ``ceph`` service commands include:
+The ``ceph`` commands include:
+------------------+------------------------------------------------------------+
| Command | Description |
@@ -152,83 +144,213 @@ The ``ceph`` service commands include:
| ``cleanalllogs`` | Cleans out **everything** in the log directory. |
+------------------+------------------------------------------------------------+
-For subsystem operations, the ``ceph`` service can target specific daemon types by
-adding a particular daemon type for the ``[daemons]`` option. Daemon types include:
+For subsystem operations, the ``ceph`` service can target specific daemon types
+by adding a particular daemon type for the ``[daemons]`` option. Daemon types
+include:
- ``mon``
- ``osd``
- ``mds``
-The ``ceph`` service's ``[daemons]`` setting may also target a specific instance.
-To start a Ceph daemon on the local :term:`Ceph Node`, use the following syntax::
- sudo /etc/init.d/ceph start osd.0
+Running Ceph with sysvinit
+--------------------------
-To start a Ceph daemon on another node, use the following syntax::
-
- sudo /etc/init.d/ceph -a start osd.0
+Using traditional ``sysvinit`` is the recommended way to run Ceph with CentOS,
+Red Hat, Fedora, and SLES distributions. You may also use it for older
+distributions of Debian/Ubuntu.
-Where ``osd.0`` is the first OSD in the cluster.
-
-Starting a Cluster
-------------------
+Starting all Daemons
+~~~~~~~~~~~~~~~~~~~~
To start your Ceph cluster, execute ``ceph`` with the ``start`` command.
-The usage may differ based upon your Linux distribution. For example, for most
-newer Debian/Ubuntu distributions, you may use the following syntax::
-
- sudo service ceph [options] [start|restart] [daemonType|daemonID]
-
-For older distributions, you may wish to use the ``/etc/init.d/ceph`` path::
+Use the following syntax::
sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID]
The following example illustrates a typical use case::
- sudo service ceph -a start
sudo /etc/init.d/ceph -a start
Once you execute with ``-a`` (i.e., execute on all nodes), Ceph should begin
-operating. You may also specify a particular daemon instance to constrain the
-command to a single instance. To start a Ceph daemon on the local Ceph Node,
-use the following syntax::
+operating.
+
+
+Stopping all Daemons
+~~~~~~~~~~~~~~~~~~~~
+
+To stop your Ceph cluster, execute ``ceph`` with the ``stop`` command.
+Use the following syntax::
+
+ sudo /etc/init.d/ceph [options] stop [daemonType|daemonID]
+
+The following example illustrates a typical use case::
+
+ sudo /etc/init.d/ceph -a stop
+
+Once you execute with ``-a`` (i.e., execute on all nodes), Ceph should stop
+operating.
+
+
+Starting all Daemons by Type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To start all Ceph daemons of a particular type on the local Ceph Node, use the
+following syntax::
+
+ sudo /etc/init.d/ceph start {daemon-type}
+ sudo /etc/init.d/ceph start osd
+
+To start all Ceph daemons of a particular type on another node, use the
+following syntax::
+
+ sudo /etc/init.d/ceph -a start {daemon-type}
+ sudo /etc/init.d/ceph -a start osd
+
+
+Stopping all Daemons by Type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To stop all Ceph daemons of a particular type on the local Ceph Node, use the
+following syntax::
+
+ sudo /etc/init.d/ceph stop {daemon-type}
+ sudo /etc/init.d/ceph stop osd
+
+To stop all Ceph daemons of a particular type on another node, use the
+following syntax::
+
+ sudo /etc/init.d/ceph -a stop {daemon-type}
+ sudo /etc/init.d/ceph -a stop osd
+
+
+Starting a Daemon
+~~~~~~~~~~~~~~~~~
+
+To start a Ceph daemon on the local Ceph Node, use the following syntax::
+
+ sudo /etc/init.d/ceph start {daemon-type}.{instance}
sudo /etc/init.d/ceph start osd.0
To start a Ceph daemon on another node, use the following syntax::
+ sudo /etc/init.d/ceph -a start {daemon-type}.{instance}
sudo /etc/init.d/ceph -a start osd.0
-Stopping a Cluster
-------------------
+Stopping a Daemon
+~~~~~~~~~~~~~~~~~
+
+To stop a Ceph daemon on the local Ceph Node, use the following syntax::
+
+ sudo /etc/init.d/ceph stop {daemon-type}.{instance}
+ sudo /etc/init.d/ceph stop osd.0
+
+To stop a Ceph daemon on another node, use the following syntax::
+
+ sudo /etc/init.d/ceph -a stop {daemon-type}.{instance}
+ sudo /etc/init.d/ceph -a stop osd.0
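+
+Options, commands, and daemon instances combine freely. As a minimal sketch
+(assuming an alternate configuration file at ``/etc/ceph/other.conf``), the
+following would restart ``osd.0`` on all nodes using that file::
+
+ sudo /etc/init.d/ceph -c /etc/ceph/other.conf -a restart osd.0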
+
+
+Running Ceph as a Service
+-------------------------
+
+When you deploy Ceph Argonaut or Bobtail with ``mkcephfs``, you operate
+Ceph as a service (you may also use sysvinit).
+
+
+Starting all Daemons
+~~~~~~~~~~~~~~~~~~~~
+
+To start your Ceph cluster, execute ``ceph`` with the ``start`` command.
+Use the following syntax::
+
+ sudo service ceph [options] [start|restart] [daemonType|daemonID]
+
+The following example illustrates a typical use case::
+
+ sudo service ceph -a start
+
+Once you execute with ``-a`` (i.e., execute on all nodes), Ceph should begin
+operating.
+
+
+Stopping all Daemons
+~~~~~~~~~~~~~~~~~~~~
To stop your Ceph cluster, execute ``ceph`` with the ``stop`` command.
-The usage may differ based upon your Linux distribution. For example, for most
-newer Debian/Ubuntu distributions, you may use the following syntax::
+Use the following syntax::
sudo service ceph [options] stop [daemonType|daemonID]
For example::
- sudo service ceph -a stop
-
-For older distributions, you may wish to use the ``/etc/init.d/ceph`` path::
-
- sudo /etc/init.d/ceph -a stop
+ sudo service ceph -a stop
Once you execute with ``-a`` (i.e., execute on all nodes), Ceph should shut
-down. You may also specify a particular daemon instance to constrain the
-command to a single instance. To stop a Ceph daemon on the local Ceph Node,
-use the following syntax::
+down.
+
+
+Starting all Daemons by Type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To start all Ceph daemons of a particular type on the local Ceph Node, use the
+following syntax::
+
+ sudo service ceph start {daemon-type}
+ sudo service ceph start osd
+
+To start all Ceph daemons of a particular type on all nodes, use the following
+syntax::
+
+ sudo service ceph -a start {daemon-type}
+ sudo service ceph -a start osd
+
+
+Stopping all Daemons by Type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To stop all Ceph daemons of a particular type on the local Ceph Node, use the
+following syntax::
+
+ sudo service ceph stop {daemon-type}
+ sudo service ceph stop osd
+
+To stop all Ceph daemons of a particular type on all nodes, use the following
+syntax::
+
+ sudo service ceph -a stop {daemon-type}
+ sudo service ceph -a stop osd
- sudo /etc/init.d/ceph stop osd.0
+
+Starting a Daemon
+~~~~~~~~~~~~~~~~~
+
+To start a Ceph daemon on the local Ceph Node, use the following syntax::
+
+ sudo service ceph start {daemon-type}.{instance}
+ sudo service ceph start osd.0
+
+To start a Ceph daemon on another node, use the following syntax::
+
+ sudo service ceph -a start {daemon-type}.{instance}
+ sudo service ceph -a start osd.0
+
+
+Stopping a Daemon
+~~~~~~~~~~~~~~~~~
+
+To stop a Ceph daemon on the local Ceph Node, use the following syntax::
+
+ sudo service ceph stop {daemon-type}.{instance}
+ sudo service ceph stop osd.0
To stop a Ceph daemon on another node, use the following syntax::
- sudo /etc/init.d/ceph -a stop osd.0
+ sudo service ceph -a stop {daemon-type}.{instance}
+ sudo service ceph -a stop osd.0
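+
+As with ``sysvinit``, you can combine options, commands, and daemon types. For
+example, a sketch of restarting only the monitor daemons across all nodes
+(assuming every node is reachable from this host) would be::
+
+ sudo service ceph -a restart mon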
diff --git a/doc/install/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 90d29e5e7e2..90d29e5e7e2 100644
--- a/doc/install/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 2fc03c0a284..6e9277746d9 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -1,34 +1,6 @@
-=================
- Getting Started
-=================
-
-Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
-Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
-use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
-with setting up each :term:`Ceph Node`, your network and the Ceph Storage
-Cluster. A Ceph Storage Cluster has three essential daemons:
-
-.. ditaa:: +---------------+ +---------------+ +---------------+
- | OSDs | | Monitor | | MDS |
- +---------------+ +---------------+ +---------------+
-
-- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
- replication, recovery, backfilling, rebalancing, and provides some monitoring
- information to Ceph Monitors by checking other Ceph OSD Daemons for a
- heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
- achieve an ``active + clean`` state.
-
-- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
- including the monitor map, the OSD map, the Placement Group (PG) map, and the
- CRUSH map. Ceph maintains a history (called an "epoch") of each state change
- in the Ceph Monitors, Ceph OSD Daemons, and PGs.
-
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
- the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
- do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
- users to execute basic commands like ``ls``, ``find``, etc. without placing
- an enormous burden on the Ceph Storage Cluster.
-
+======================
+ Installation (Quick)
+======================
.. raw:: html
@@ -37,18 +9,17 @@ Cluster. A Ceph Storage Cluster has three essential daemons:
A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
configuration work prior to deploying a Ceph Storage Cluster. You can also
-avail yourself of help from the Ceph community by getting involved.
+avail yourself of help by getting involved in the Ceph community.
.. toctree::
- Get Involved <get-involved>
Preflight <quick-start-preflight>
.. raw:: html
</td><td><h3>Step 2: Storage Cluster</h3>
-Once you've completed your preflight checklist, you should be able to begin
+Once you've completed your preflight checklist, you should be able to begin
deploying a Ceph Storage Cluster.
.. toctree::
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 00000000000..704ff1e8cd5
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,70 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
+Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
+use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
+with setting up each :term:`Ceph Node`, your network and the Ceph Storage
+Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor and at least
+two Ceph OSD Daemons. The Ceph Metadata Server is essential when running Ceph
+Filesystem clients.
+
+.. ditaa:: +---------------+ +---------------+ +---------------+
+ | OSDs | | Monitor | | MDS |
+ +---------------+ +---------------+ +---------------+
+
+- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
+ replication, recovery, backfilling, rebalancing, and provides some monitoring
+ information to Ceph Monitors by checking other Ceph OSD Daemons for a
+ heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
+ achieve an ``active + clean`` state when the cluster makes two copies of your
+ data (Ceph makes 2 copies by default, but you can adjust it).
+
+- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
+ including the monitor map, the OSD map, the Placement Group (PG) map, and the
+ CRUSH map. Ceph maintains a history (called an "epoch") of each state change
+ in the Ceph Monitors, Ceph OSD Daemons, and PGs.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
+ the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
+ do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
+ users to execute basic commands like ``ls``, ``find``, etc. without placing
+ an enormous burden on the Ceph Storage Cluster.
+
+Ceph stores a client's data as objects within storage pools. Using the CRUSH
+algorithm, Ceph calculates which placement group should contain the object,
+and further calculates which Ceph OSD Daemon should store the placement group.
+The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
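+
+For example, once a cluster is running you can ask Ceph where CRUSH maps a
+given object (the pool and object names below are only illustrative)::
+
+ ceph osd map mypool myobject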
+
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
+
+To begin using Ceph in production, you should review our hardware
+recommendations and operating system recommendations.
+
+.. toctree::
+ :maxdepth: 2
+
+ Hardware Recommendations <hardware-recommendations>
+ OS Recommendations <os-recommendations>
+
+
+.. raw:: html
+
+ </td><td><h3>Get Involved</h3>
+
+ You can avail yourself of help, or contribute documentation, source
+ code, or bug reports, by getting involved in the Ceph community.
+
+.. toctree::
+
+ get-involved
+
+.. raw:: html
+
+ </td></tr></tbody></table>
diff --git a/doc/install/os-recommendations.rst b/doc/start/os-recommendations.rst
index 71a4d3a278b..d8b418fe1b0 100644
--- a/doc/install/os-recommendations.rst
+++ b/doc/start/os-recommendations.rst
@@ -36,6 +36,36 @@ platforms. Generally speaking, there is very little dependence on
specific distributions aside from the kernel and system initialization
package (i.e., sysvinit, upstart, systemd).
+
+Dumpling (0.67)
+---------------
+
++----------+----------+--------------------+--------------+---------+------------+
+| Distro | Release | Code Name | Kernel | Notes | Testing |
++==========+==========+====================+==============+=========+============+
+| Ubuntu | 12.04 | Precise Pangolin | linux-3.2.0 | 1, 2 | B, I, C |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 12.10 | Quantal Quetzal | linux-3.5.4 | 2 | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Ubuntu | 13.04 | Raring Ringtail | linux-3.8.5 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 6.0 | Squeeze | linux-2.6.32 | 1, 2, 3 | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Debian | 7.0 | Wheezy | linux-3.2.0 | 1, 2 | B |
++----------+----------+--------------------+--------------+---------+------------+
+| CentOS | 6.3 | N/A | linux-2.6.32 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| RHEL | 6.3 | | linux-2.6.32 | 1, 2 | B, I |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 18.0 | Spherical Cow | linux-3.6.0 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| Fedora | 19.0 | Schrödinger's Cat | linux-3.10.0 | | B |
++----------+----------+--------------------+--------------+---------+------------+
+| OpenSuse | 12.2 | N/A | linux-3.4.0 | 2 | B |
++----------+----------+--------------------+--------------+---------+------------+
+
+
+
Cuttlefish (0.61)
-----------------
@@ -63,6 +93,7 @@ Cuttlefish (0.61)
| OpenSuse | 12.2 | N/A | linux-3.4.0 | 2 | B |
+----------+----------+--------------------+--------------+---------+------------+
+
Bobtail (0.56)
--------------
@@ -90,6 +121,7 @@ Bobtail (0.56)
| OpenSuse | 12.2 | N/A | linux-3.4.0 | 2 | B |
+----------+----------+--------------------+--------------+---------+------------+
+
Argonaut (0.48)
---------------
@@ -126,6 +158,7 @@ Notes
``ceph-osd`` daemons using ``XFS`` or ``ext4`` on the same host will
not perform as well as they could.
+
Testing
-------
diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
index 3c0ca1b0653..1fabd1b182f 100644
--- a/doc/start/quick-ceph-deploy.rst
+++ b/doc/start/quick-ceph-deploy.rst
@@ -3,26 +3,31 @@
=============================
If you haven't completed your `Preflight Checklist`_, do that first. This
-**Quick Start** sets up a two-node demo cluster so you can explore some of the
-:term:`Ceph Storage Cluster` functionality. This **Quick Start** will help you
-install a minimal Ceph Storage Cluster on a server node from your admin node
-using ``ceph-deploy``.
+**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
+on your admin node. Create a cluster of three Ceph Nodes so you can
+explore Ceph functionality.
.. ditaa::
- /----------------\ /----------------\
- | Admin Node |<------->| Server Node |
- | cCCC | | cCCC |
- +----------------+ +----------------+
- | Ceph Commands | | ceph - mon |
- \----------------/ +----------------+
- | ceph - osd |
- +----------------+
- | ceph - mds |
- \----------------/
-
-
-For best results, create a directory on your admin node for maintaining the
-configuration of your cluster. ::
+ /------------------\ /----------------\
+ | Admin Node | | ceph–node1 |
+ | +-------->+ cCCC |
+ | ceph–deploy | | mon.ceph–node1 |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | ceph–node2 |
+ +----------------->+ cCCC |
+ | | osd.0 |
+ | \----------------/
+ |
+ | /----------------\
+ | | ceph–node3 |
+ +----------------->| cCCC |
+ | osd.1 |
+ \----------------/
+
+For best results, create a directory on your admin node for maintaining the
+configuration that ``ceph-deploy`` generates for your cluster. ::
mkdir my-cluster
cd my-cluster
@@ -31,228 +36,283 @@ configuration of your cluster. ::
current directory. Ensure you are in this directory when executing
``ceph-deploy``.
+As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
+Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
+by adding a third Ceph OSD Daemon, a Metadata Server, and two more Ceph Monitors.
+
+.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
+ if you are logged in as a different user, because it will not issue ``sudo``
+ commands needed on the remote host.
Create a Cluster
================
-To create your Ceph Storage Cluster, declare its initial monitors, generate a
-filesystem ID (``fsid``) and generate monitor keys by entering the following
-command on a commandline prompt::
+If at any point you run into trouble and you want to start over, execute
+the following::
- ceph-deploy new {mon-server-name}
- ceph-deploy new mon-ceph-node
+ ceph-deploy purgedata {ceph-node} [{ceph-node}]
+ ceph-deploy forgetkeys
-Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
-directory. You should see a Ceph configuration file, a keyring, and a log file
-for the new cluster. See `ceph-deploy new -h`_ for additional details.
-.. topic:: Single Node Quick Start
+On your admin node, perform the following steps using ``ceph-deploy``.
- Assuming only one node for your Ceph Storage Cluster, you will need to
- modify the default ``osd crush chooseleaf type`` setting (it defaults to
- ``1`` for ``node``) to ``0`` for ``device`` so that it will peer with OSDs
- on the local node. Add the following line to your Ceph configuration file::
-
- osd crush chooseleaf type = 0
+#. Create the cluster. ::
-.. tip:: If you deploy without executing foregoing step on a single node
- cluster, your Ceph Storage Cluster will not achieve an ``active + clean``
- state. To remedy this situation, you must modify your `CRUSH Map`_.
+ ceph-deploy new {ceph-node}
+ ceph-deploy new ceph-node1
-Install Ceph
-============
+ Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
+ directory. You should see a Ceph configuration file, a keyring, and a log
+ file for the new cluster. See `ceph-deploy new -h`_ for additional details.
-To install Ceph on your server node, open a command line on your admin
-node and type the following::
+#. Install Ceph. ::
- ceph-deploy install {server-node-name}[,{server-node-name}]
- ceph-deploy install mon-ceph-node
+ ceph-deploy install {ceph-node}[{ceph-node} ...]
+ ceph-deploy install ceph-node1 ceph-node2 ceph-node3
-Without additional arguments, ``ceph-deploy`` will install the most recent
-stable Ceph package to the server node. See `ceph-deploy install -h`_ for
-additional details.
-.. tip:: When ``ceph-deploy`` completes installation successfully,
- it should echo ``OK``.
+#. Add a Ceph Monitor. ::
+ ceph-deploy mon create {ceph-node}
+ ceph-deploy mon create ceph-node1
+
+#. Gather keys. ::
-Add a Monitor
-=============
+ ceph-deploy gatherkeys {ceph-node}
+ ceph-deploy gatherkeys ceph-node1
-To run a Ceph cluster, you need at least one Ceph Monitor. When using
-``ceph-deploy``, the tool enforces a single Ceph Monitor per node. Execute the
-following to create a Ceph Monitor::
+ Once you have gathered keys, your local directory should have the following
+ keyrings:
- ceph-deploy mon create {mon-server-name}
- ceph-deploy mon create mon-ceph-node
+ - ``{cluster-name}.client.admin.keyring``
+ - ``{cluster-name}.bootstrap-osd.keyring``
+ - ``{cluster-name}.bootstrap-mds.keyring``
+
-.. tip:: In production environments, we recommend running Ceph Monitors on
- nodes that do not run OSDs.
+#. Add two OSDs. For fast setup, this quick start uses a directory rather
+ than an entire disk per Ceph OSD Daemon. See `ceph-deploy osd`_ for
+ details on using separate disks/partitions for OSDs and journals.
+ Log in to the Ceph Nodes and create a directory for
+ the Ceph OSD Daemon. ::
+
+ ssh ceph-node2
+ sudo mkdir /tmp/osd0
+ exit
+
+ ssh ceph-node3
+ sudo mkdir /tmp/osd1
+ exit
-When you have added a monitor successfully, directories under ``/var/lib/ceph``
-on your server node should have subdirectories ``bootstrap-mds`` and
-``bootstrap-osd`` that contain keyrings. If these directories do not contain
-keyrings, execute ``ceph-deploy mon create`` again on the admin node.
+ Then, from your admin node, use ``ceph-deploy`` to prepare the OSDs. ::
+ ceph-deploy osd prepare {ceph-node}:/path/to/directory
+ ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
-Gather Keys
-===========
+ Finally, activate the OSDs. ::
-To deploy additional daemons and provision them with monitor authentication keys
-from your admin node, you must first gather keys from a monitor node. Execute
-the following to gather keys::
+ ceph-deploy osd activate {ceph-node}:/path/to/directory
+ ceph-deploy osd activate ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
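+
+ As a sketch, ``ceph-deploy osd create`` may be used to combine the
+ ``prepare`` and ``activate`` steps in a single command (assuming the same
+ directory-backed OSDs as above)::
+
+ ceph-deploy osd create ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1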
- ceph-deploy gatherkeys {mon-server-name}
- ceph-deploy gatherkeys mon-ceph-node
+#. Use ``ceph-deploy`` to copy the configuration file and admin key to
+ your admin node and your Ceph Nodes so that you can use the ``ceph``
+ CLI without having to specify the monitor address and
+ ``ceph.client.admin.keyring`` each time you execute a command. ::
+
+ ceph-deploy admin {ceph-node}
+ ceph-deploy admin admin-node ceph-node1 ceph-node2 ceph-node3
-Once you have gathered keys, your local directory should have the following keyrings:
+ **Note:** Since you are using ``ceph-deploy`` to talk to the
+ local host, your host must be reachable by its hostname
+ (e.g., you can modify ``/etc/hosts`` if necessary). Ensure that
+ you have the correct permissions for the ``ceph.client.admin.keyring``.
-- ``{cluster-name}.client.admin.keyring``
-- ``{cluster-name}.bootstrap-osd.keyring``
-- ``{cluster-name}.bootstrap-mds.keyring``
+#. Check your cluster's health. ::
-If you don't have these keyrings, you may not have created a monitor successfully,
-or you may have a problem with your network connection. Ensure that you complete
-this step such that you have the foregoing keyrings before proceeding further.
+ ceph health
-.. tip:: You may repeat this procedure. If it fails, check to see if the
- ``/var/lib/ceph/boostrap-{osd}|{mds}`` directories on the server node
- have keyrings. If they do not have keyrings, try adding the monitor again;
- then, return to this step.
+ Your cluster should return an ``active + clean`` state when it
+ has finished peering.
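+
+ For a fuller summary of monitor, OSD, and placement group status, you can
+ also run::
+
+ ceph status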
-Add Ceph OSD Daemons
-====================
+Operating Your Cluster
+======================
-For a cluster's object placement groups to reach an ``active + clean`` state,
-you must have at least two instances of a :term:`Ceph OSD Daemon` running and
-at least two copies of an object (``osd pool default size`` is ``2``
-by default).
+Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
+To operate the cluster daemons with Debian/Ubuntu distributions, see
+`Running Ceph with Upstart`_. To operate the cluster daemons with CentOS,
+Red Hat, Fedora, and SLES distributions, see `Running Ceph with sysvinit`_.
-Adding Ceph OSD Daemons is slightly more involved than other ``ceph-deploy``
-commands, because a Ceph OSD Daemon involves both a data store and a journal.
-The ``ceph-deploy`` tool has the ability to invoke ``ceph-disk-prepare`` to
-prepare the disk and activate the Ceph OSD Daemon for you.
+To learn more about peering and cluster health, see `Monitoring a Cluster`_.
+To learn more about Ceph OSD Daemon and placement group health, see
+`Monitoring OSDs and PGs`_.
+
+Once you deploy a Ceph cluster, you can try out some of the administration
+functionality, the ``rados`` object store command line, and then proceed to
+Quick Start guides for Ceph Block Device, Ceph Filesystem, and the Ceph Object
+Gateway.
-Multiple OSDs on the OS Disk (Demo Only)
-----------------------------------------
-For demonstration purposes, you may wish to add multiple OSDs to the OS disk
-(not recommended for production systems). To use Ceph OSDs daemons on the OS
-disk, you must use ``prepare`` and ``activate`` as separate steps. First,
-define a directory for the Ceph OSD daemon(s). ::
-
- mkdir /tmp/osd0
- mkdir /tmp/osd1
-
-Then, use ``prepare`` to prepare the directory(ies) for use with a
-Ceph OSD Daemon. ::
-
- ceph-deploy osd prepare {osd-node-name}:/tmp/osd0
- ceph-deploy osd prepare {osd-node-name}:/tmp/osd1
+Expanding Your Cluster
+======================
-Finally, use ``activate`` to activate the Ceph OSD Daemons. ::
+Once you have a basic cluster up and running, the next step is to expand your
+cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to ``ceph-node1``.
+Then add a Ceph Monitor to ``ceph-node2`` and ``ceph-node3`` to establish a
+quorum of Ceph Monitors.
- ceph-deploy osd activate {osd-node-name}:/tmp/osd0
- ceph-deploy osd activate {osd-node-name}:/tmp/osd1
+.. ditaa::
+ /------------------\ /----------------\
+ | ceph–deploy | | ceph–node1 |
+ | Admin Node | | cCCC |
+ | +-------->+ mon.ceph–node1 |
+ | | | osd.2 |
+ | | | mds.ceph–node1 |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | ceph–node2 |
+ | | cCCC |
+ +----------------->+ |
+ | | osd.0 |
+ | | mon.ceph–node2 |
+ | \----------------/
+ |
+ | /----------------\
+ | | ceph–node3 |
+ | | cCCC |
+ +----------------->+ |
+ | osd.1 |
+ | mon.ceph–node3 |
+ \----------------/
-.. tip:: You need two OSDs to reach an ``active + clean`` state. You can
- add one OSD at a time, but OSDs need to communicate with each other
- for Ceph to run properly. Always use more than one OSD per cluster.
+Adding an OSD
+-------------
+Since you are running a 3-node cluster for demonstration purposes, add the OSD
+to the monitor node. ::
-List Disks
-----------
+ ssh ceph-node1
+ sudo mkdir /tmp/osd2
+ exit
-To list the available disk drives on a prospective :term:`Ceph Node`, execute
-the following::
+Then, from your ``ceph-deploy`` node, prepare the OSD. ::
- ceph-deploy disk list {osd-node-name}
- ceph-deploy disk list ceph-node
+ ceph-deploy osd prepare {ceph-node}:/path/to/directory
+ ceph-deploy osd prepare ceph-node1:/tmp/osd2
+Finally, activate the OSD. ::
-Zap a Disk
-----------
+ ceph-deploy osd activate {ceph-node}:/path/to/directory
+ ceph-deploy osd activate ceph-node1:/tmp/osd2
-To zap a disk (delete its partition table) in preparation for use with Ceph,
-execute the following::
- ceph-deploy disk zap {osd-node-name}:{disk}
- ceph-deploy disk zap ceph-node:sdb ceph-node:sdb2
+Once you have added your new OSD, Ceph will begin rebalancing the cluster by
+migrating placement groups to your new OSD. You can observe this process with
+the ``ceph`` CLI. ::
-.. important:: This will delete all data on the disk.
+ ceph -w
+You should see the placement group states change from ``active+clean`` to
+``active`` with some degraded objects, and finally back to ``active+clean``
+when the migration completes. (Press Control-C to exit.)
-Add OSDs on Standalone Disks
-----------------------------
-You can add OSDs using ``prepare`` and ``activate`` in two discrete
-steps. To prepare a disk for use with a Ceph OSD Daemon, execute the
-following::
+Add a Metadata Server
+---------------------
- ceph-deploy osd prepare {osd-node-name}:{osd-disk-name}[:/path/to/journal]
- ceph-deploy osd prepare ceph-node:sdb
+To use CephFS, you need at least one metadata server. Execute the following to
+create a metadata server::
-To activate the Ceph OSD Daemon, execute the following::
+ ceph-deploy mds create {ceph-node}
+ ceph-deploy mds create ceph-node1
- ceph-deploy osd activate {osd-node-name}:{osd-partition-name}
- ceph-deploy osd activate ceph-node:sdb1
-To prepare an OSD disk and activate it in one step, execute the following::
+.. note:: Currently Ceph runs in production with one metadata server only. You
+ may use more, but there is currently no commercial support for a cluster
+ with multiple metadata servers.
- ceph-deploy osd create {osd-node-name}:{osd-disk-name}[:/path/to/journal] [{osd-node-name}:{osd-disk-name}[:/path/to/journal]]
- ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2
+Adding Monitors
+---------------
-.. note:: The journal example assumes you will use a partition on a separate
- solid state drive (SSD). If you omit a journal drive or partition,
- ``ceph-deploy`` will use create a separate partition for the journal
- on the same drive. If you have already formatted your disks and created
- partitions, you may also use partition syntax for your OSD disk.
+A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high
+availability, Ceph Storage Clusters typically run multiple Ceph
+Monitors so that the failure of a single Ceph Monitor will not bring down the
+Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority
+of monitors (i.e., 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, etc.) to form a quorum.
-You must add a minimum of two Ceph OSD Daemons for the placement groups in
-a cluster to achieve an ``active + clean`` state.
+Add two Ceph Monitors to your cluster. ::
+ ceph-deploy mon create {ceph-node}
+ ceph-deploy mon create ceph-node2 ceph-node3
-Add a MDS
-=========
+Once you have added your new Ceph Monitors, Ceph will begin synchronizing
+the monitors and form a quorum. You can check the quorum status by executing
+the following::
-To use CephFS, you need at least one metadata node. Execute the following to
-create a metadata node::
+ ceph quorum_status
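+
+For a shorter summary of the monitor map and the monitors currently in the
+quorum, you may also run::
+
+ ceph mon stat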
- ceph-deploy mds create {node-name}
- ceph-deploy mds create ceph-node
-.. note:: Currently Ceph runs in production with one metadata node only. You
- may use more, but there is currently no commercial support for a cluster
- with multiple metadata nodes.
+Storing/Retrieving Object Data
+==============================
+To store object data in the Ceph Storage Cluster, a Ceph client must:
-Summary
-=======
+#. Set an object name
+#. Specify a `pool`_
-Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
-To operate the cluster daemons, see `Running Ceph with Upstart`_.
+The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
+calculates how to map the object to a `placement group`_, and then calculates
+how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
+object location, all you need is the object name and the pool name. For
+example::
-Once you deploy a Ceph cluster, you can try out some of the administration
-functionality, the object store command line, and then proceed to Quick Start
-guides for RBD, CephFS, and the Ceph Gateway.
+ ceph osd map {poolname} {object-name}
-.. topic:: Other ceph-deploy Commands
+.. topic:: Exercise: Locate an Object
- To view other ``ceph-deploy`` commands, execute:
-
- ``ceph-deploy -h``
-
+ As an exercise, let's create an object. Specify an object name, a path to
+ a test file containing some object data, and a pool name using the
+ ``rados put`` command on the command line. For example::
+
+ rados put {object-name} {file-path} --pool=data
+ rados put test-object-1 testfile.txt --pool=data
+
+ To verify that the Ceph Storage Cluster stored the object, execute
+ the following::
+
+ rados -p data ls
+
+ Now, identify the object location::
-See `Ceph Deploy`_ for additional details.
+ ceph osd map {pool-name} {object-name}
+ ceph osd map data test-object-1
+
+ Ceph should output the object's location. For example::
+
+ osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
+
+ To remove the test object, simply delete it using the ``rados rm``
+ command. For example::
+
+ rados rm test-object-1 --pool=data
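+
+ To confirm the removal, list the pool contents again; assuming
+ ``test-object-1`` was the only object you stored, the listing should now
+ be empty::
+
+ rados -p data ls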
+
+As the cluster evolves, the object location may change dynamically. One benefit
+of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
+the migration manually.
.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
+.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
-.. _CRUSH Map: ../../rados/operations/crush-map \ No newline at end of file
+.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
+.. _CRUSH Map: ../../rados/operations/crush-map
+.. _pool: ../../rados/operations/pools
+.. _placement group: ../../rados/operations/placement-groups
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg \ No newline at end of file
diff --git a/doc/start/quick-cephfs.rst b/doc/start/quick-cephfs.rst
index 18dadb005ec..5449e5a6fe3 100644
--- a/doc/start/quick-cephfs.rst
+++ b/doc/start/quick-cephfs.rst
@@ -3,7 +3,7 @@
=====================
To use the :term:`Ceph FS` Quick Start guide, you must have executed the
-procedures in the `Ceph Deploy Quick Start`_ guide first. Execute this quick
+procedures in the `Storage Cluster Quick Start`_ guide first. Execute this quick
start on the Admin Host.
Prerequisites
@@ -91,7 +91,7 @@ See `Ceph FS`_ for additional information. Ceph FS is not quite as stable
as the Ceph Block Device and Ceph Object Storage. See `Troubleshooting`_
if you encounter trouble.
-.. _Ceph Deploy Quick Start: ../quick-ceph-deploy
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _Ceph FS: ../../cephfs/
.. _FAQ: http://wiki.ceph.com/03FAQs/01General_FAQ#How_Can_I_Give_Ceph_a_Try.3F
.. _Troubleshooting: ../../cephfs/troubleshooting \ No newline at end of file
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
index a466771502d..9424457f8c2 100644
--- a/doc/start/quick-rbd.rst
+++ b/doc/start/quick-rbd.rst
@@ -2,47 +2,73 @@
Block Device Quick Start
==========================
-To use this guide, you must have executed the procedures in the `Object Store
-Quick Start`_ guide first. Ensure your :term:`Ceph Storage Cluster` is in an
-``active + clean`` state before working with the :term:`Ceph Block Device`.
-Execute this quick start on the admin node.
+To use this guide, you must have executed the procedures in the `Storage
+Cluster Quick Start`_ guide first. Ensure your :term:`Ceph Storage Cluster` is
+in an ``active + clean`` state before working with the :term:`Ceph Block
+Device`.
.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
Block Device.
-#. Install ``ceph-common``. ::
- sudo apt-get install ceph-common
+.. ditaa::
+ /------------------\ /----------------\
+ | Admin Node | | ceph–client |
+ | +-------->+ cCCC |
+ | ceph–deploy | | ceph |
+ \------------------/ \----------------/
-#. Create a block device image. ::
- rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
+You may use a virtual machine for your ``ceph-client`` node, but do not
+execute the following procedures on the same physical node as your Ceph
+Storage Cluster nodes (unless you use a VM). See `FAQ`_ for details.
-#. Load the ``rbd`` client module. ::
+
+Install Ceph
+============
+
+#. On the admin node, use ``ceph-deploy`` to install Ceph on your
+ ``ceph-client`` node. ::
+
+ ceph-deploy install ceph-client
+
+#. On the admin node, use ``ceph-deploy`` to copy the Ceph configuration file
+ and the ``ceph.client.admin.keyring`` to the ``ceph-client``. ::
+
+ ceph-deploy admin ceph-client
+
+
+Configure a Block Device
+========================
+
+#. On the ``ceph-client`` node, create a block device image. ::
+
+ rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
+
+#. On the ``ceph-client`` node, load the ``rbd`` client module. ::
sudo modprobe rbd
-#. Map the image to a block device. ::
+#. On the ``ceph-client`` node, map the image to a block device. ::
sudo rbd map foo --pool rbd --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
-#. Use the block device. In the following example, create a file system. ::
+#. Use the block device by creating a file system on the ``ceph-client``
+ node. ::
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
This may take a few moments.
-#. Mount the file system. ::
+#. Mount the file system on the ``ceph-client`` node. ::
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
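+
+ As a quick sanity check (the file name below is only illustrative), you can
+ write a small file to the mounted device and confirm the mount::
+
+ echo "hello ceph" | sudo tee /mnt/ceph-block-device/hello.txt
+ df -h /mnt/ceph-block-device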
-.. note:: Mount the block device on the client machine,
- not the server machine. See `FAQ`_ for details.
See `block devices`_ for additional details.
-.. _Object Store Quick Start: ../quick-ceph-deploy
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _block devices: ../../rbd/rbd
.. _FAQ: http://wiki.ceph.com/03FAQs/01General_FAQ#How_Can_I_Give_Ceph_a_Try.3F
diff --git a/doc/start/quick-rgw.rst b/doc/start/quick-rgw.rst
index af48a3154c1..40cf7d4f4dc 100644
--- a/doc/start/quick-rgw.rst
+++ b/doc/start/quick-rgw.rst
@@ -2,7 +2,7 @@
Object Storage Quick Start
============================
-To use this guide, you must have executed the procedures in the `Ceph Deploy
+To use this guide, you must have executed the procedures in the `Storage Cluster
Quick Start`_ guide first. Ensure your :term:`Ceph Storage Cluster` is in an
``active + clean`` state before working with the :term:`Ceph Object Storage`.
@@ -344,7 +344,7 @@ tutorials. See the `S3-compatible`_ and `Swift-compatible`_ APIs for details.
.. _Create rgw.conf: ../../radosgw/config/index.html#create-rgw-conf
-.. _Ceph Deploy Quick Start: ../quick-ceph-deploy
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _Ceph Object Storage Manual Install: ../../radosgw/manual-install
.. _RGW Configuration: ../../radosgw/config
.. _S3-compatible: ../../radosgw/s3
diff --git a/doc/start/quick-start-preflight.rst b/doc/start/quick-start-preflight.rst
index 74dc403c211..77a54795f19 100644
--- a/doc/start/quick-start-preflight.rst
+++ b/doc/start/quick-start-preflight.rst
@@ -4,74 +4,57 @@
.. versionadded:: 0.60
-Thank you for trying Ceph! Petabyte-scale data clusters are quite an
-undertaking. Before delving deeper into Ceph, we recommend setting up a two-node
-demo cluster to explore some of the functionality. This **Preflight Checklist**
-will help you prepare an admin node and a server node for use with
-``ceph-deploy``.
-
-.. ditaa::
- /----------------\ /----------------\
- | Admin Node |<------->| Server Node |
- | cCCC | | cCCC |
- \----------------/ \----------------/
-
-
-Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
-have a few things set up first on your admin node and on nodes running Ceph
-daemons.
-
-
-Install an Operating System
-===========================
-
-Install a recent release of Debian or Ubuntu (e.g., 12.04, 12.10, 13.04) on your
-nodes. For additional details on operating systems or to use other operating
-systems other than Debian or Ubuntu, see `OS Recommendations`_.
-
-
-Install an SSH Server
-=====================
-
-The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
-SSH server. ::
-
- sudo apt-get install openssh-server
-
-
-Create a User
-=============
-
-Create a user on nodes running Ceph daemons.
-
-.. tip:: We recommend a username that brute force attackers won't
- guess easily (e.g., something other than ``root``, ``ceph``, etc).
-
-::
+Thank you for trying Ceph! We recommend setting up a ``ceph-deploy`` admin node
+and a 3-node :term:`Ceph Storage Cluster` to explore the basics of Ceph. This
+**Preflight Checklist** will help you prepare a ``ceph-deploy`` admin node and
+three Ceph Nodes (or virtual machines) that will host your Ceph Storage Cluster.
+
+
+.. ditaa::
+ /------------------\ /----------------\
+ | Admin Node | | ceph–node1 |
+ | +-------->+ |
+ | ceph–deploy | | cCCC |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | ceph–node2 |
+ +----------------->+ |
+ | | cCCC |
+ | \----------------/
+ |
+ | /----------------\
+ | | ceph–node3 |
+ +----------------->| |
+ | cCCC |
+ \----------------/
+
+
+Ceph Node Setup
+===============
+
+Perform the following steps:
+
+#. Create a user on each Ceph Node. ::
ssh user@ceph-server
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
-
-``ceph-deploy`` installs packages onto your nodes. This means that
-the user you create requires passwordless ``sudo`` privileges.
-
-.. note:: We **DO NOT** recommend enabling the ``root`` password
- for security reasons.
-
-To provide full privileges to the user, add the following to
-``/etc/sudoers.d/ceph``. ::
+#. Add ``root`` privileges for the user on each Ceph Node. ::
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
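+
+ To confirm that passwordless ``sudo`` works for the new user, log in as that
+ user (e.g., ``ceph``) and run a harmless command; it should not prompt for a
+ password::
+
+ sudo whoami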
-Configure SSH
-=============
+#. Install an SSH server (if necessary)::
-Configure your admin machine with password-less SSH access to each node
-running Ceph daemons (leave the passphrase empty). ::
+ sudo apt-get install openssh-server
+ sudo yum install openssh-server
+
+
+#. Configure your ``ceph-deploy`` admin node with password-less SSH access to
+ each Ceph Node. Leave the passphrase empty::
ssh-keygen
Generating public/private key pair.
@@ -81,77 +64,95 @@ running Ceph daemons (leave the passphrase empty). ::
Your identification has been saved in /ceph-client/.ssh/id_rsa.
Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
-Copy the key to each node running Ceph daemons::
+#. Copy the key to each Ceph Node. ::
ssh-copy-id ceph@ceph-server
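+
+ If you are using the three-node layout shown above, repeat this for each
+ Ceph Node (the hostnames are illustrative)::
+
+ ssh-copy-id ceph@ceph-node1
+ ssh-copy-id ceph@ceph-node2
+ ssh-copy-id ceph@ceph-node3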
-Modify your ~/.ssh/config file of your admin node so that it defaults
-to logging in as the user you created when no username is specified. ::
+
+#. Modify the ``~/.ssh/config`` file of your ``ceph-deploy`` admin node so that
+ it logs in to Ceph Nodes as the user you created (e.g., ``ceph``). ::
Host ceph-server
- Hostname ceph-server.fqdn-or-ip-address.com
- User ceph
+ Hostname ceph-server.fqdn-or-ip-address.com
+ User ceph
+
+
+#. Ensure connectivity using ``ping`` with hostnames (i.e., not IP addresses).
+ Address hostname resolution issues and firewall issues as necessary.
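+
+ For example, from the admin node (the hostname is illustrative)::
+
+ ping -c 3 ceph-node1
+ ssh ceph-node1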
-.. note:: Do not call ceph-deploy with ``sudo`` or run as ``root`` if you are
- login in as a different user (as in the ssh config above) because it
- will not issue ``sudo`` commands needed on the remote host.
-Install ceph-deploy
-===================
+Ceph Deploy Setup
+=================
-To install ``ceph-deploy``, execute the following::
+Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
+``ceph-deploy``.
+
+.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
+ if you are logged in as a different user, because it will not issue ``sudo``
+ commands needed on the remote host.
+
+
+Advanced Package Tool (APT)
+---------------------------
+
+For Debian and Ubuntu distributions, perform the following steps:
+
+#. Add the release key::
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
sudo apt-get install ceph-deploy
+#. Add the Ceph packages to your repository. Replace ``{ceph-stable-release}``
+ with a stable Ceph release (e.g., ``cuttlefish``, ``dumpling``, etc.).
+ For example::
+
+ echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
-Ensure Connectivity
-===================
+#. Update your repository and install ``ceph-deploy``::
-Ensure that your admin node has connectivity to the network and to your Server
-node (e.g., ensure ``iptables``, ``ufw`` or other tools that may prevent
-connections, traffic forwarding, etc. to allow what you need).
+ sudo apt-get update && sudo apt-get install ceph-deploy
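+
+ You can confirm the installation by listing the ``ceph-deploy``
+ subcommands::
+
+ ceph-deploy -h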
-.. tip:: The ``ceph-deploy`` tool is new and you may encounter some issues
- without effective error messages.
-Once you have completed this pre-flight checklist, you are ready to begin using
-``ceph-deploy``.
+Red Hat Package Manager (RPM)
+-----------------------------
+For Red Hat (rhel6), CentOS (el6), Fedora 17-19 (f17-f19), OpenSUSE 12
+(opensuse12), and SLES (sles11), perform the following steps:
-Hostname Resolution
-===================
+#. Add the package to your repository. Open a text editor and create a
+ Yellowdog Updater, Modified (YUM) entry. Use the file path
+ ``/etc/yum.repos.d/ceph.repo``. For example::
-Ensure that your admin node can resolve the server node's hostname. ::
+ sudo vim /etc/yum.repos.d/ceph.repo
- ping {server-node}
+ Paste the following example code. Replace ``{ceph-stable-release}`` with
+ the recent stable release of Ceph (e.g., ``dumpling``). Replace ``{distro}``
+ with your Linux distribution (e.g., ``el6`` for CentOS 6, ``rhel6`` for
+ Red Hat 6, ``fc18`` or ``fc19`` for Fedora 18 or Fedora 19, and ``sles11``
+ for SLES 11). Finally, save the contents to the
+ ``/etc/yum.repos.d/ceph.repo`` file. ::
-If you execute ``ceph-deploy`` against the localhost, ``ceph-deploy``
-must be able to resolve its IP address. Consider adding the IP address
-to your ``/etc/hosts`` file such that it resolves to the hostname. ::
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=http://ceph.com/rpm-{ceph-stable-release}/{distro}/noarch
+ enabled=1
+ gpgcheck=1
+ type=rpm-md
+ gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
- hostname
- host -4 {hostname}
- sudo vim /etc/hosts
- {ip-address} {hostname}
+#. Update your repository and install ``ceph-deploy``::
- ceph-deploy {command} {hostname}
+ sudo yum update && sudo yum install ceph-deploy
-.. tip:: The ``ceph-deploy`` tool will not resolve to ``localhost``. Use
- the hostname.
Summary
=======
-Once you have passwordless ``ssh`` connectivity, passwordless ``sudo``,
-installed ``ceph-deploy``, and you have ensured appropriate connectivity,
-proceed to the `Storage Cluster Quick Start`_.
-
-.. tip:: The ``ceph-deploy`` utility can install Ceph packages on remote
- machines from the admin node!
+This completes the Quick Start Preflight. Proceed to the `Storage Cluster
+Quick Start`_.
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../../install/os-recommendations