author    John Wilkins <john.wilkins@inktank.com>    2013-10-22 18:12:46 -0700
committer John Wilkins <john.wilkins@inktank.com>    2013-10-22 18:12:46 -0700
commit    b8d54cdf23554e0d705dab81e449104a78a49f34 (patch)
tree      5ea3eb6b6b35784d85a36eaa19ec1e9ee9b2e94b
parent    53486afa018a859d1c04c1aeaf7a2a697a9332da (diff)
download  ceph-wip-doc-install.tar.gz
doc: Fixed typo, clarified example. (wip-doc-install)
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
-rw-r--r-- doc/start/hardware-recommendations.rst | 21
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 4af68ba8072..c589301a435 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -339,8 +339,8 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
Calxeda Example
---------------
-A recent (2013) Ceph cluster project is using ARM hardware with low
-power consumption and high storage density for for Ceph OSDs.
+A recent (2013) Ceph cluster project uses ARM hardware to obtain low
+power consumption and high storage density.
+----------------+----------------+----------------------------------------+
| Configuration | Criteria | Minimum Recommended |
@@ -360,12 +360,17 @@ power consumption and high storage density for for Ceph OSDs.
| | Mgmt. Network | 1x 1GB Ethernet NICs |
+----------------+----------------+----------------------------------------+
-The project enables the deployment of 36 Ceph OSD Daemons, one for each
-3TB drive. Each processor runs 3 Ceph OSD Daemons. Four processors per
-card allows the 12 processors in with just four cards. This configuration
-provides 108TB of storage (slightly less after full ratio settings) per
-4U chassis.
-
+The chassis configuration enables the deployment of 36 Ceph OSD Daemons per
+chassis, one for each 3TB drive. Each System-on-a-chip (SoC) processor runs 3
+Ceph OSD Daemons. Four SoC processors per card allow the 12 processors to run
+36 Ceph OSD Daemons with capacity remaining for rebalancing, backfilling and
+recovery. This configuration provides 108TB of storage (slightly less after full
+ratio settings) per 4U chassis. Using a chassis exclusively for Ceph OSD
+Daemons makes it easy to expand the cluster's storage capacity
+significantly.
+
+**Note:** The project uses Ceph for cold storage, so there are no SSDs
+for journals.
.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
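
For reference, the capacity figures in the amended paragraph work out as
follows. This is a minimal back-of-the-envelope sketch, not part of the
commit: the card count (3) is derived from the stated 12 processors at four
per card, and the 0.95 full ratio is an assumed Ceph default
(``mon osd full ratio``), used only to illustrate why usable capacity is
"slightly less" than raw::

    DRIVE_TB = 3                  # one 3 TB drive per Ceph OSD Daemon
    OSDS_PER_SOC = 3              # each SoC processor runs 3 OSD daemons
    SOCS_PER_CARD = 4
    CARDS_PER_CHASSIS = 3         # derived: 12 processors / 4 per card

    processors = SOCS_PER_CARD * CARDS_PER_CHASSIS   # 12 SoC processors
    osds = processors * OSDS_PER_SOC                 # 36 OSDs per 4U chassis
    raw_tb = osds * DRIVE_TB                         # 108 TB raw

    FULL_RATIO = 0.95   # assumed default full ratio; Ceph stops accepting
                        # writes as OSDs approach this fill level
    usable_tb = raw_tb * FULL_RATIO

    print(f"{osds} OSDs, {raw_tb} TB raw, "
          f"~{usable_tb:.0f} TB before the full ratio cutoff")
    # -> 36 OSDs, 108 TB raw, ~103 TB before the full ratio cutoff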