author      John Wilkins <john.wilkins@dreamhost.com>      2012-05-01 19:20:34 -0700
committer   Tommi Virtanen <tommi.virtanen@dreamhost.com>  2012-05-02 12:09:56 -0700
commit      0fb0ef9ee346484ae43c726ea485dfd81fe3f6b0 (patch)
tree        50bff6fa0ecf9ed79e61c0cbc10732950a5d2a0a /doc
parent      ee44db4a5119c75f81c1cae9bf091d507289fa43 (diff)
download    ceph-0fb0ef9ee346484ae43c726ea485dfd81fe3f6b0.tar.gz
Corrections.
Signed-off-by: John Wilkins <john.wilkins@dreamhost.com>
Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/index.rst                                     4
-rw-r--r--  doc/start/introduction_to_clustered_storage.rst   6
-rw-r--r--  doc/start/why_use_ceph.rst                        4
3 files changed, 7 insertions, 7 deletions
diff --git a/doc/index.rst b/doc/index.rst
index 2e2e9c19cd7..fcbc455df97 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,13 +1,13 @@
===============
Welcome to Ceph
===============
-Ceph uniquely delivers **object, block, and file storage in one unified system**. Ceph is highly reliable, easy to manage, and free. The power of Ceph can transform your company’s IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability--thousands of clients accessing petabytes to exabytes of data. Ceph leverages commodity hardware and intelligent daemons to accommodate large numbers of storage hosts, which communicate with each other to replicate data, and redistribute data dynamically. Ceph's cluster of monitors oversee the hosts in the Ceph storage cluster to ensure that the storage hosts are running smoothly.
+Ceph uniquely delivers **object, block, and file storage in one unified system**. Ceph is highly reliable, easy to manage, and free. The power of Ceph can transform your company’s IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability--thousands of clients accessing petabytes to exabytes of data. Ceph leverages commodity hardware and intelligent daemons to accommodate large numbers of storage hosts, which communicate with each other to replicate data, and redistribute data dynamically. Ceph's cluster of monitors oversees the hosts in the Ceph storage cluster to ensure that the storage hosts are running smoothly.
.. image:: images/stack.png
Ceph Development Status
=======================
-Ceph has been under development as an open source project for since 2004, and its current focus
+Ceph has been under development as an open source project since 2004, and its current focus
is on stability. The Ceph file system is functionally complete, but has not been tested well enough at scale
and under load to recommend it for a production environment yet. We recommend deploying Ceph for testing
and evaluation. We do not recommend deploying Ceph into a production environment or storing valuable data
diff --git a/doc/start/introduction_to_clustered_storage.rst b/doc/start/introduction_to_clustered_storage.rst
index 7da0b33a94f..eeaa930fce6 100644
--- a/doc/start/introduction_to_clustered_storage.rst
+++ b/doc/start/introduction_to_clustered_storage.rst
@@ -23,7 +23,7 @@ Ceph takes advantage of these resources to create a unified storage system with
At the core of Ceph storage is a service entitled the Reliable Autonomic Distributed Object Store (RADOS).
RADOS revolutionizes Object Storage Devices (OSD)s by utilizing the CPU, memory and network interface of
the storage hosts to communicate with each other, replicate data, and redistribute data dynamically. RADOS
-implements an algorithm that performs Controlled Replication Under Scalable Hashing, which we refer we refer to as CRUSH.
+implements an algorithm that performs Controlled Replication Under Scalable Hashing, which we refer to as CRUSH.
CRUSH enables RADOS to plan and distribute the data automatically so that system administrators do not have to
do it manually. By utilizing each host's computing resources, RADOS increases scalability while simultaneously
eliminating both a performance bottleneck and a single point of failure common to systems that manage clusters centrally.
@@ -34,7 +34,7 @@ clusters by maintaining a master copy of the cluster map. For example, storage h
the cluster for the purposes of providing data storage services; not connected via a network; powered off; or, suffering from
a malfunction.
-Ceph provides a light-weight monitor process to address faults in the OSD clusters as they arise. Like OSDs, monitors
+Ceph provides a lightweight monitor process to address faults in the OSD clusters as they arise. Like OSDs, monitors
should be replicated in large-scale systems so that if one monitor crashes, another monitor can serve in its place.
When the Ceph storage cluster employs multiple monitors, the monitors may get out of sync and have different versions
of the cluster map. Ceph utilizes an algorithm to resolve disparities among versions of the cluster map.
@@ -43,4 +43,4 @@ Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS bl
RADOS Gateway without MDSs. The MDSs dynamically adapt their behavior to the current workload.
As the size and popularity of parts of the file system hierarchy change over time, the MDSs
dynamically redistribute the file system hierarchy among the available
-MDSs to balance the load to use server resources effectively.
\ No newline at end of file
+MDSs to balance the load to use server resources effectively.
diff --git a/doc/start/why_use_ceph.rst b/doc/start/why_use_ceph.rst
index 9ca8abe35cf..81253c61448 100644
--- a/doc/start/why_use_ceph.rst
+++ b/doc/start/why_use_ceph.rst
@@ -9,10 +9,10 @@ unified storage system (e.g., Amazon S3, Swift, FUSE, block devices, POSIX-compl
need to build out a different storage system for each storage interface you support.
Technical and personnel constraints also limit scalability. The performance profile of highly scaled systems
-can very substantially. Ceph relieves system administrators of the complex burden of manual performance optimization
+can vary substantially. Ceph relieves system administrators of the complex burden of manual performance optimization
by utilizing the storage system's computing resources to balance loads intelligently and rebalance the file system dynamically.
Ceph replicates data automatically so that hardware failures do not result in data loss or cascading load spikes.
-Ceph fault tolerant, so complex fail-over scenarios are unnecessary. Ceph administrators can simply replace a failed host
+Ceph is fault tolerant, so complex fail-over scenarios are unnecessary. Ceph administrators can simply replace a failed host
with new hardware.
With POSIX semantics for Unix/Linux-based operating systems, popular interfaces like Amazon S3 or Swift, block devices