author    John Wilkins <john.wilkins@inktank.com>  2013-09-17 13:22:09 -0700
committer John Wilkins <john.wilkins@inktank.com>  2013-09-17 13:22:09 -0700
commit    fcd749ffe60251ce392ae9c1eea5cb70e2e5d89e (patch)
tree      a529fe8a588f9f5ec8af4452e78d7cad5788dfe3
parent    e55d59f8269a806eb5033e82f7f704e672d33046 (diff)
download  ceph-fcd749ffe60251ce392ae9c1eea5cb70e2e5d89e.tar.gz
doc: Excised content from "Getting Started" and created Intro to Ceph.
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
-rw-r--r--  doc/start/intro.rst  |  39
1 file changed, 39 insertions(+), 0 deletions(-)
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 00000000000..b04363d9f52
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,39 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
+Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem`,
+or use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments
+begin with setting up each :term:`Ceph Node`, your network, and the Ceph
+Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor and
+at least two Ceph OSD Daemons. The Ceph Metadata Server is essential when
+running Ceph Filesystem clients.
+
+.. ditaa:: +---------------+ +---------------+ +---------------+
+           |      OSDs     | |    Monitor    | |      MDS      |
+           +---------------+ +---------------+ +---------------+
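+
+As a rough sketch only (the host names, addresses, and daemon IDs below are
+placeholders, not required values), a minimal ``ceph.conf`` describing the
+layout above might contain:
+
+.. code-block:: ini
+
+   [global]
+   # Placeholder monitor name and address; substitute your own.
+   mon initial members = node1
+   mon host = 192.168.0.1
+   # Ceph makes 2 copies of each object by default; you can adjust this.
+   osd pool default size = 2
+
+   # One Ceph Monitor and two Ceph OSD Daemons, each on a placeholder host.
+   [mon.node1]
+   host = node1
+
+   [osd.0]
+   host = node2
+
+   [osd.1]
+   host = node3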
+
+- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
+ replication, recovery, backfilling, rebalancing, and provides some monitoring
+ information to Ceph Monitors by checking other Ceph OSD Daemons for a
+ heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
+ achieve an ``active + clean`` state when the cluster makes two copies of your
+ data (Ceph makes 2 copies by default, but you can adjust it).
+
+- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
+  including the monitor map, the OSD map, the Placement Group (PG) map, and the
+  CRUSH map. Ceph maintains a history (called an "epoch") of each state change
+  in the Ceph Monitors, Ceph OSD Daemons, and PGs. A sketch of commands for
+  inspecting these maps and daemons follows this list.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
+ the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
+ do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
+ users to execute basic commands like ``ls``, ``find``, etc. without placing
+ an enormous burden on the Ceph Storage Cluster.
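+
+On a running cluster, you can inspect the daemons and maps described above. The
+commands below are only a sketch (run them from a host that has a valid
+``ceph.conf`` and an admin keyring); exact output varies by release:
+
+.. code-block:: console
+
+   $ ceph status          # overall health and a summary of each map
+   $ ceph mon dump        # the monitor map
+   $ ceph osd dump        # the OSD map
+   $ ceph osd tree        # OSDs arranged in the CRUSH hierarchy
+   $ ceph pg dump         # the placement group map
+   $ ceph mds stat        # metadata server status (Ceph Filesystem only)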
+
+Ceph stores a client's data as objects within storage pools. Using the CRUSH
+algorithm, Ceph calculates which placement group should contain the object,
+and further calculates which Ceph OSD Daemon should store the placement group.
+The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
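+
+As an illustration of that calculation (the pool and object names here are
+hypothetical), you can ask a running cluster where CRUSH will place a given
+object:
+
+.. code-block:: console
+
+   $ ceph osd pool create mypool 128           # a pool with 128 placement groups
+   $ rados -p mypool put my-object ./file.txt  # write an object into the pool
+   $ ceph osd map mypool my-object             # show the PG and OSDs serving it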