author     John Wilkins <john.wilkins@dreamhost.com>        2012-03-14 11:58:27 -0700
committer  Tommi Virtanen <tommi.virtanen@dreamhost.com>    2012-05-02 12:09:54 -0700
commit     a1b31ddfda6ada09d6aa17d1b92a7cce25c87f74 (patch)
tree       fd19d01c4a113c93638f70affe781d020208400f /doc/start
parent     d3a2c565661790787ec2b6372b7868f399396504 (diff)
download   ceph-a1b31ddfda6ada09d6aa17d1b92a7cce25c87f74.tar.gz
Initial cut of introduction, getting started, and installing. More to do on installation. RADOS gateway to follow.
Signed-off-by: John Wilkins <john.wilkins@dreamhost.com>
Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
Diffstat (limited to 'doc/start')
-rw-r--r--  doc/start/block.rst                                     89
-rw-r--r--  doc/start/build_prerequisites.rst                      105
-rw-r--r--  doc/start/building_ceph.rst                             31
-rw-r--r--  doc/start/cloning_the_ceph_source_code_repository.rst   54
-rw-r--r--  doc/start/download_packages.rst                         41
-rw-r--r--  doc/start/downloading_a_ceph_release.rst                 6
-rw-r--r--  doc/start/filesystem.rst                                73
-rw-r--r--  doc/start/get_involved_in_the_ceph_community.rst        21
-rw-r--r--  doc/start/index.rst                                     63
-rw-r--r--  doc/start/object.rst                                   108
-rw-r--r--  doc/start/summary.rst                                   38
-rw-r--r--  doc/start/why_use_ceph.rst                              39
12 files changed, 359 insertions(+), 309 deletions(-)
diff --git a/doc/start/block.rst b/doc/start/block.rst
deleted file mode 100644
index 4ed09be150a..00000000000
--- a/doc/start/block.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. index:: RBD
-
-=====================
- Starting to use RBD
-=====================
-
-Introduction
-============
-
-`RBD` is the block device component of Ceph. It provides a block
-device interface to a Linux machine, while striping the data across
-multiple `RADOS` objects for improved performance. For more
-information, see :ref:`rbd`.
-
-
-Installation
-============
-
-To use `RBD`, you need to install a Ceph cluster. Follow the
-instructions in :doc:`/ops/install/index`. Continue with these
-instructions once you have a healthy cluster running.
-
-
-Setup
-=====
-
-The default `pool` used by `RBD` is called ``rbd``. It is created for
-you as part of the installation. If you wish to use multiple pools,
-for example for access control, see :ref:`create-new-pool`.
-
-First, we need a ``client`` key that is authorized to access the right
-pool. Follow the instructions in :ref:`add-new-key`. Let's set the
-``id`` of the key to be ``bar``. You could set up one key per machine
-using `RBD`, or let them share a single key; your call. Make sure the
-keyring containing the new key is available on the machine.
-
-Then, authorize the key to access the new pool. Follow the
-instructions in :ref:`auth-pool`.
-
-
-Usage
-=====
-
-`RBD` can be accessed in two ways:
-
-- as a block device on a Linux machine
-- via the ``rbd`` network storage driver in Qemu/KVM
-
-
-.. rubric:: Example: As a block device
-
-Using the ``client.bar`` key you set up earlier, we can create an RBD
-image called ``tengigs``::
-
- rbd --name=client.bar create --size=10240 tengigs
-
-And then make that visible as a block device::
-
- touch secretfile
- chmod go= secretfile
- ceph-authtool --name=bar --print-key /etc/ceph/client.bar.keyring >secretfile
- rbd map tengigs --user bar --secret secretfile
-
-.. todo:: the secretfile part is really clumsy
-
-For more information, see :doc:`rbd </man/8/rbd>`\(8).
-
-
-.. rubric:: Example: As a Qemu/KVM storage driver via Libvirt
-
-You'll need ``kvm`` v0.15, and ``libvirt`` v0.8.7 or newer.
-
-Create the RBD image as above, and then refer to it in the ``libvirt``
-virtual machine configuration::
-
- <disk type='network' device='disk'>
- <source protocol='rbd' name='rbd/tengigs'>
- <host name='10.0.0.101' port='6789'/>
- <host name='10.0.0.102' port='6789'/>
- <host name='10.0.0.103' port='6789'/>
- </source>
- <target dev='vda' bus='virtio'/>
- </disk
-
-.. todo:: use secret keys
-
-.. todo:: ceph.conf usage for mon addresses
-
-.. todo:: pending libvirt xml schema changes
diff --git a/doc/start/build_prerequisites.rst b/doc/start/build_prerequisites.rst
new file mode 100644
index 00000000000..481cd3ef2a9
--- /dev/null
+++ b/doc/start/build_prerequisites.rst
@@ -0,0 +1,105 @@
+===================
+Build Prerequisites
+===================
+
+Before you can build Ceph documentation or Ceph source code, you need to install several libraries and tools.
+
+.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
+
+
+Prerequisites for Building Ceph Documentation
+=============================================
+Ceph utilizes Python's Sphinx documentation tool. For details on the
+Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_.
+Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
+to install Sphinx. To run Sphinx via ``admin/build-doc``, at least the
+following packages are required:
+
+- ``python-dev``
+- ``python-pip``
+- ``python-virtualenv``
+- ``libxml2-dev``
+- ``libxslt-dev``
+- ``doxygen``
+- ``ditaa``
+- ``graphviz``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
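+
+If you prefer, you can also install Sphinx itself with ``pip`` once ``python-pip``
+is available (shown only as an example; ``admin/build-doc`` may manage its own
+Sphinx environment for you)::
+
+    $ sudo pip install Sphinx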
+
+Prerequisites for Building Ceph Source Code
+===========================================
+Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
+depend on the following:
+
+- ``autotools-dev``
+- ``autoconf``
+- ``automake``
+- ``cdbs``
+- ``gcc``
+- ``g++``
+- ``git``
+- ``libboost-dev``
+- ``libedit-dev``
+- ``libssl-dev``
+- ``libtool``
+- ``libfcgi``
+- ``libfcgi-dev``
+- ``libfuse-dev``
+- ``linux-kernel-headers``
+- ``libcrypto++-dev``
+- ``libcrypto++``
+- ``libexpat1-dev``
+- ``libgtkmm-2.4-dev``
+- ``pkg-config``
+
+On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+    $ sudo apt-get install autotools-dev autoconf automake cdbs \
+      gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
+      libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
+      libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config
+
+On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
+
+    $ aptitude install autotools-dev autoconf automake cdbs \
+      gcc g++ git libboost-dev libedit-dev libssl-dev libtool \
+      libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \
+      libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev pkg-config
+
+
+Ubuntu Requirements
+-------------------
+
+- ``uuid-dev``
+- ``libkeyutils-dev``
+- ``libgoogle-perftools-dev``
+- ``libatomic-ops-dev``
+- ``libaio-dev``
+- ``libgdata-common``
+- ``libgdata13``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+    $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev \
+      libatomic-ops-dev libaio-dev libgdata-common libgdata13
+
+Debian
+------
+Alternatively, you may install the Debian packaging tools and remaining build dependencies::
+
+ $ aptitude install fakeroot dpkg-dev
+ $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
+
+openSUSE 11.2 (and later)
+-------------------------
+
+- ``boost-devel``
+- ``gcc-c++``
+- ``libedit-devel``
+- ``libopenssl-devel``
+- ``fuse-devel`` (optional)
+
+Execute ``zypper install`` for each dependency that isn't installed on your host. ::
+
+    $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
\ No newline at end of file
diff --git a/doc/start/building_ceph.rst b/doc/start/building_ceph.rst
new file mode 100644
index 00000000000..81a2039901d
--- /dev/null
+++ b/doc/start/building_ceph.rst
@@ -0,0 +1,31 @@
+=============
+Building Ceph
+=============
+
+Ceph provides build scripts for source code and for documentation.
+
+Building Ceph Source Code
+=========================
+Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
+
+ $ cd ceph
+ $ ./autogen.sh
+ $ ./configure
+ $ make
+
+You can use ``make -j`` to run multiple build jobs in parallel, depending upon your system. For example, to run four jobs::
+
+ $ make -j4
+
+Building Ceph Documentation
+===========================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
+
+ $ cd ceph
+ $ admin/build-doc
+
+Once the build completes, you may navigate to the output directory to view the documentation::
+
+    $ cd build-doc/output
+
+There should be an ``html`` directory and a ``man`` directory containing the documentation in HTML and manpage formats, respectively.
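+
+For example, you can list the generated output and open the HTML documentation in a
+browser (assuming a graphical browser such as Firefox is installed)::
+
+    $ ls build-doc/output
+    $ firefox build-doc/output/html/index.html &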
diff --git a/doc/start/cloning_the_ceph_source_code_repository.rst b/doc/start/cloning_the_ceph_source_code_repository.rst
new file mode 100644
index 00000000000..8486e2df298
--- /dev/null
+++ b/doc/start/cloning_the_ceph_source_code_repository.rst
@@ -0,0 +1,54 @@
+=======================================
+Cloning the Ceph Source Code Repository
+=======================================
+To check out the Ceph source code, you must have ``git`` installed
+on your local host. To install ``git``, execute::
+
+ $ sudo apt-get install git
+
+You must also have a ``github`` account. If you do not have one, go to
+`github.com <http://github.com>`_ and register. Follow the directions for
+setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
+
+Generate SSH Keys
+-----------------
+You must generate SSH keys for github to clone the Ceph
+repository. If you do not have SSH keys for ``github``, execute::
+
+ $ ssh-keygen -d
+
+Get the key to add to your ``github`` account::
+
+ $ cat .ssh/id_dsa.pub
+
+Copy the public key.
+
+Add the Key
+-----------
+Go to your ``github`` account,
+click on "Account Settings" (i.e., the 'tools' icon); then,
+click "SSH Keys" on the left side navbar.
+
+Click "Add SSH key" in the "SSH Keys" list, enter a name for
+the key, paste the key you generated, and press the "Add key"
+button.
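+
+To verify that the key works, you may test your SSH connection to ``github``
+(an optional check)::
+
+    $ ssh -T git@github.com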
+
+Clone the Source
+----------------
+To clone the Ceph source code repository, execute::
+
+ $ git clone git@github.com:ceph/ceph.git
+
+Once ``git clone`` executes, you should have a full copy of the Ceph repository.
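+
+As a quick, optional check, you can change into the new directory and view the
+most recent commit::
+
+    $ cd ceph
+    $ git log -1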
+
+Clone the Submodules
+--------------------
+Before you can build Ceph, you must initialize and update its submodules::
+
+ $ git submodule init
+ $ git submodule update
+
+.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
+
+ $ git status
+
diff --git a/doc/start/download_packages.rst b/doc/start/download_packages.rst
new file mode 100644
index 00000000000..9bf6d091311
--- /dev/null
+++ b/doc/start/download_packages.rst
@@ -0,0 +1,41 @@
+====================
+Downloading Packages
+====================
+
+We automatically build Debian and Ubuntu packages for any branches or tags that appear in
+the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. We build packages for the following
+architectures:
+
+- ``amd64``
+- ``i386``
+
+For each architecture, we build packages for the following distributions:
+
+- Debian 7.0 (``wheezy``)
+- Debian 6.0 (``squeeze``)
+- Debian unstable (``sid``)
+- Ubuntu 12.04 (``precise``)
+- Ubuntu 11.10 (``oneiric``)
+- Ubuntu 11.04 (``natty``)
+- Ubuntu 10.10 (``maverick``)
+
+When you execute the following commands to install the Ceph packages, replace ``{ARCH}`` with the architecture of your CPU,
+``{DISTRO}`` with the code name of your operating system (e.g., ``wheezy``, rather than the version number) and
+``{BRANCH}`` with the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``, ``v0.44``, etc.). ::
+
+ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc \
+ | sudo apt-key add -
+
+ sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
+ deb http://ceph.newdream.net/debian-snapshot-{ARCH}/{BRANCH}/ {DISTRO} main
+ deb-src http://ceph.newdream.net/debian-snapshot-{ARCH}/{BRANCH}/ {DISTRO} main
+ EOF
+
+ sudo apt-get update
+ sudo apt-get install ceph
+
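+For example, on a 64-bit Ubuntu 12.04 (``precise``) host tracking the ``master``
+branch, the repository entries would look like this (an illustrative substitution;
+adjust the values for your own platform)::
+
+    deb http://ceph.newdream.net/debian-snapshot-amd64/master/ precise main
+    deb-src http://ceph.newdream.net/debian-snapshot-amd64/master/ precise main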
+
+When you download packages, you will receive the latest package build, which may be several weeks behind the current release
+or the most recent code. Packages may contain bugs that have already been fixed in the most recent versions of the code. Until packages
+contain only stable code, carefully weigh the tradeoffs between installing from a package and retrieving the latest release
+or the most current source code and building Ceph yourself.
\ No newline at end of file
diff --git a/doc/start/downloading_a_ceph_release.rst b/doc/start/downloading_a_ceph_release.rst
new file mode 100644
index 00000000000..5a3ce1a4890
--- /dev/null
+++ b/doc/start/downloading_a_ceph_release.rst
@@ -0,0 +1,6 @@
+==========================
+Downloading a Ceph Release
+==========================
+As Ceph development progresses, the Ceph team releases new versions. You may download Ceph releases here:
+
+`Ceph Releases <http://ceph.newdream.net/download/>`_
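+
+For example, to download and unpack a release tarball (the exact file name is
+illustrative; choose a current release from the page above)::
+
+    $ wget http://ceph.newdream.net/download/ceph-0.44.tar.gz
+    $ tar xzf ceph-0.44.tar.gz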
diff --git a/doc/start/filesystem.rst b/doc/start/filesystem.rst
deleted file mode 100644
index 5a10f79ec82..00000000000
--- a/doc/start/filesystem.rst
+++ /dev/null
@@ -1,73 +0,0 @@
-========================
- Starting to use CephFS
-========================
-
-Introduction
-============
-
-The Ceph Distributed File System is a scalable network file system
-aiming for high performance, large data storage, and POSIX
-compliance. For more information, see :ref:`cephfs`.
-
-
-Installation
-============
-
-To use `Ceph DFS`, you need to install a Ceph cluster. Follow the
-instructions in :doc:`/ops/install/index`. Continue with these
-instructions once you have a healthy cluster running.
-
-
-Setup
-=====
-
-First, we need a ``client`` key that is authorized to access the
-filesystem. Follow the instructions in :ref:`add-new-key`. Let's set
-the ``id`` of the key to be ``foo``. You could set up one key per
-machine mounting the filesystem, or let them share a single key; your
-call. Make sure the keyring containing the new key is available on the
-machine doing the mounting.
-
-
-Usage
-=====
-
-There are two main ways of using the filesystem. You can use the Ceph
-client implementation that is included in the Linux kernel, or you can
-use the FUSE userspace filesystem. For an explanation of the
-tradeoffs, see :ref:`Status <cfuse-kernel-tradeoff>`. Follow the
-instructions in :ref:`mounting`.
-
-Once you have the filesystem mounted, you can use it like any other
-filesystem. The changes you make on one client will be visible to
-other clients that have mounted the same filesystem.
-
-You can now use snapshots, automatic disk usage tracking, and all
-other features `Ceph DFS` has. All read and write operations will be
-automatically distributed across your whole storage cluster, giving
-you the best performance available.
-
-.. todo:: links for snapshots, disk usage
-
-You can use :doc:`cephfs </man/8/cephfs>`\(8) to interact with
-``cephfs`` internals.
-
-
-.. rubric:: Example: Home directories
-
-If you locate UNIX user account home directories under a Ceph
-filesystem mountpoint, the same files will be available from all
-machines set up this way.
-
-Users can move between hosts, or even use them simultaneously, and
-always access the same files.
-
-
-.. rubric:: Example: HPC
-
-In a HPC (High Performance Computing) scenario, hundreds or thousands
-of machines could all mount the Ceph filesystem, and worker processes
-on all of the machines could then access the same files for
-input/output.
-
-.. todo:: point to the lazy io optimization
diff --git a/doc/start/get_involved_in_the_ceph_community.rst b/doc/start/get_involved_in_the_ceph_community.rst
new file mode 100644
index 00000000000..241be479443
--- /dev/null
+++ b/doc/start/get_involved_in_the_ceph_community.rst
@@ -0,0 +1,21 @@
+===================================
+Get Involved in the Ceph Community!
+===================================
+These are exciting times in the Ceph community!
+Follow the `Ceph Blog <http://ceph.newdream.net/news/>`__ to keep track of Ceph progress.
+
+As you delve into Ceph, you may have questions or feedback for the Ceph development team.
+Ceph developers are often available on the ``#ceph`` IRC channel at ``irc.oftc.net``,
+particularly during daytime hours in the US Pacific Standard Time zone.
+Keep in touch with developer activity by subscribing_ to the email list at ceph-devel@vger.kernel.org.
+You can opt out of the email list at any time by unsubscribing_. A simple email is
+all it takes! If you would like to view the archives, go to Gmane_.
+You can help prepare Ceph for production by filing
+and tracking bugs, and providing feature requests using
+the `bug/feature tracker <http://tracker.newdream.net/projects/ceph>`__.
+
+.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
+
+If you need hands-on help, `commercial support <http://ceph.newdream.net/support/>`__ is available too!
\ No newline at end of file
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 9f305c5a910..2922d62552e 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -1,43 +1,28 @@
-=================
- Getting Started
-=================
-
-.. todo:: write about vstart, somewhere
-
-The Ceph Storage System consists of multiple components, and can be
-used in multiple ways. To guide you through it, please pick an aspect
-of Ceph that is most interesting to you:
-
-- :doc:`Object storage <object>`: read and write objects,
- flexible-sized data containers that have both data (a sequence of
- bytes) and metadata (mapping from keys to values), either via a
- :doc:`library interface </api/index>`, or via an :ref:`HTTP API
- <radosgw>`.
-
- *Example*: an asset management system
-
-- :doc:`Block storage <block>`: use a remote data store as if it was a
- local hard disk, for example for virtual machine disk storage, for
- high-availability, etc.
-
- *Example*: virtual machine system with live migration
-
-- :doc:`Distributed filesystem <filesystem>`: access the data store as
- a network filesystem, with strict POSIX semantics such as locking.
-
- *Example*: organization with hundreds of Linux servers with active
- users accessing them remotely
-
-.. todo:: should the api mention above link to librados or libradospp
- directly? which one?
-
-.. todo:: fs example could be thin clients, HPC, typical university
- setting, what to pick?
-
+===============
+Getting Started
+===============
+Welcome to Ceph! The following sections provide information
+that will help you get started before you install Ceph:
+
+- :doc:`Why use Ceph? <why_use_ceph>`
+- :doc:`Get Involved in the Ceph Community! <get_involved_in_the_ceph_community>`
+- :doc:`Build Prerequisites <build_prerequisites>`
+- :doc:`Download Packages <download_packages>`
+- :doc:`Downloading a Ceph Release <downloading_a_ceph_release>`
+- :doc:`Cloning the Ceph Source Code Repository <cloning_the_ceph_source_code_repository>`
+- :doc:`Building Ceph <building_ceph>`
+- :doc:`Summary <summary>`
+
+Once you successfully build the Ceph code, you may proceed to `RADOS OSD Provisioning <../install/RADOS_OSD_Provisioning>`_.
+
.. toctree::
:hidden:
- object
- block
- filesystem
+ why_use_ceph
+ Get Involved <get_involved_in_the_ceph_community>
+ build_prerequisites
+ Download Packages <download_packages>
+ Download a Release <downloading_a_ceph_release>
+ Clone the Source Code <cloning_the_ceph_source_code_repository>
+ building_ceph
+ summary
diff --git a/doc/start/object.rst b/doc/start/object.rst
deleted file mode 100644
index 7b8a9b95f1d..00000000000
--- a/doc/start/object.rst
+++ /dev/null
@@ -1,108 +0,0 @@
-=======================
- Starting to use RADOS
-=======================
-
-.. highlight:: python
-
-.. index:: RADOS, object
-
-Introduction
-============
-
-`RADOS` is the object storage component of Ceph.
-
-An object, in this context, means a named entity that has
-
-- a `name`: a sequence of bytes, unique within its container, that is
- used to locate and access the object
-- `content`: sequence of bytes
-- `metadata`: a mapping from keys to values, for example ``color:
- blue, importance: low``
-
-None of these have any prescribed meaning to Ceph, and can be freely
-chosen by the user.
-
-`RADOS` takes care of distributing the objects across the whole
-storage cluster and replicating them for fault tolerance.
-
-
-Installation
-============
-
-To use `RADOS`, you need to install a Ceph cluster. Follow the
-instructions in :doc:`/ops/install/index`. Continue with these
-instructions once you have a healthy cluster running.
-
-
-Setup
-=====
-
-First, we need to create a `pool` that will hold our assets. Follow
-the instructions in :ref:`create-new-pool`. Let's name the pool
-``assets``.
-
-Then, we need a ``client`` key that is authorized to access that
-pool. Follow the instructions in :ref:`add-new-key`. Let's set the
-``id`` of the key to be ``webapp``. You could set up one key per
-machine running the web service, or let them share a single key; your
-call. Make sure the keyring containing the new key is available on the
-machine running the asset management system.
-
-Then, authorize the key to access the new pool. Follow the
-instructions in :ref:`auth-pool`.
-
-
-Usage
-=====
-
-`RADOS` is accessed via a network protocol, implemented in the
-:doc:`/api/librados` and :doc:`/api/libradospp` libraries. There are
-also wrappers for other languages.
-
-.. todo:: link to python, phprados here
-
-Instead of a low-level programming library, you can also use a
-higher-level service, with user accounts, access control and such
-features, via the :ref:`radosgw` HTTP service. See :doc:`/ops/radosgw`
-for more.
-
-
-.. rubric:: Example: Asset management
-
-Let's say we write our asset management system in Python. We'll use
-the ``rados`` Python module for accessing `RADOS`.
-
-.. todo:: link to rados.py, where ever it'll be documented
-
-With the key we created in Setup_, we'll be able to open a RADOS
-connection::
-
- import rados
-
- r=rados.Rados('webapp')
- r.conf_read_file()
- r.connect()
-
- ioctx = r.open_ioctx('assets')
-
-and then write an object::
-
- # holding content fully in memory to make the example simpler;
- # see API docs for how to do this better
- ioctx.write_full('1.jpg', 'jpeg-content-goes-here')
-
-and read it back::
-
- # holding content fully in memory to make the example simpler;
- # see API docs for how to do this better
- content = ioctx.write_full('1.jpg')
-
-
-We can also manipulate the metadata related to the object::
-
- ioctx.set_xattr('1.jpg', 'content-type', 'image/jpeg')
-
-
-Now you can use these as fits the web server framework of your choice,
-passing the ``ioctx`` variable from initialization to the request
-serving function.
diff --git a/doc/start/summary.rst b/doc/start/summary.rst
new file mode 100644
index 00000000000..7a551d2ea48
--- /dev/null
+++ b/doc/start/summary.rst
@@ -0,0 +1,38 @@
+=======
+Summary
+=======
+
+Once you complete the build, you should find the following binaries and utilities under the ``ceph/src`` directory:
+
+
+==============  ===================================================================================
+Binary          Description
+==============  ===================================================================================
+ceph-dencoder   A utility to encode, decode, and dump Ceph data structures (debugger).
+cephfs          A Ceph file system client utility.
+ceph-fuse       A FUSE-based client for the Ceph file system.
+ceph-mds        The Ceph file system metadata service daemon.
+ceph-mon        The Ceph monitor daemon.
+ceph-osd        The RADOS OSD storage daemon.
+ceph-syn        A simple synthetic workload generator.
+crushtool       A utility that lets you create, compile, and decompile CRUSH map files.
+monmaptool      A utility to create, view, and modify a monitor cluster map.
+mount.ceph      A simple helper for mounting the Ceph file system on a Linux host.
+osdmaptool      A utility that lets you create, view, and manipulate OSD cluster maps for the Ceph distributed file system.
+ceph-conf       A utility for getting information about a Ceph configuration file.
+radosgw         A FastCGI service that provides a RESTful HTTP API to store objects and metadata.
+==============  ===================================================================================
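+
+As an optional sanity check (assuming the build completed without errors), you can
+list a few of the freshly built binaries from the top of the cloned repository::
+
+    $ ls src/ceph-* src/crushtool src/monmaptool src/osdmaptool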
+
+Once you successfully build the Ceph code, you may proceed to Installing Ceph.
diff --git a/doc/start/why_use_ceph.rst b/doc/start/why_use_ceph.rst
new file mode 100644
index 00000000000..c606a0c6a5f
--- /dev/null
+++ b/doc/start/why_use_ceph.rst
@@ -0,0 +1,39 @@
+=============
+Why use Ceph?
+=============
+Ceph provides an economic and technical foundation for massive scalability.
+
+Financial constraints limit scalability. Ceph is free and open source, which means it does not require expensive
+license fees or costly upgrades. Ceph can run on economical commodity hardware, which reduces one economic barrier to scalability. Ceph is easy to install and administer, which reduces administrative expenses. Ceph also supports popular and widely accepted interfaces (e.g., POSIX, Swift, Amazon S3, FUSE, etc.). As a result, Ceph provides a compelling solution for building petabyte-to-exabyte scale storage systems.
+
+Technical and personnel constraints also limit scalability. The performance profile of highly scaled systems
+can vary substantially. With intelligent load balancing and adaptive metadata servers that re-balance the file system dynamically, Ceph alleviates the administrative burden of optimizing performance. Additionally, because Ceph provides data replication, Ceph is fault tolerant. Ceph administrators can simply replace a failed host with new hardware without having to rely on complex fail-over scenarios. With POSIX semantics for Unix/Linux-based operating systems, popular interfaces like Swift or Amazon S3, and advanced features like directory-level snapshots, system administrators can deploy enterprise applications on Ceph and provide those applications with a long-term, economical solution for scalable persistence.
+
+Reasons to use Ceph include:
+
+- Extraordinary scalability
+
+ - Terabytes to exabytes
+ - Tens of thousands of client nodes
+
+- Standards compliant
+
+ - Virtual file system (vfs)
+ - Shell (bash)
+ - FUSE
+ - Swift-compliant interface
+ - Amazon S3-compliant interface
+
+- Reliable and fault-tolerant
+
+ - Strong data consistency and safety semantics
+ - Intelligent load balancing and dynamic re-balancing
+ - Semi-autonomous data replication
+ - Node monitoring and failure detection
+ - Hot swappable hardware
+
+- Economical (Ceph is free!)
+
+ - Open source and free
+ - Uses heterogeneous commodity hardware
+ - Easy to set up and maintain