path: root/doc/start
author     John Wilkins <john.wilkins@dreamhost.com>  2012-04-11 11:21:43 -0700
committer  Tommi Virtanen <tommi.virtanen@dreamhost.com>  2012-05-02 12:09:55 -0700
commit     859da18e5e7b9756bd0cb65698c349e2be2eb1b9 (patch)
tree       0fd1cfba80380b78ef3c6ab9c0ac5145c41d335b /doc/start
parent     bc857d8696376a82385ce1eefd9834cb2cf7a233 (diff)
download   ceph-859da18e5e7b9756bd0cb65698c349e2be2eb1b9.tar.gz
Building out information architecture. Modified getting involved, why use ceph, etc.
Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
Diffstat (limited to 'doc/start')
-rw-r--r--  doc/start/get_involved_in_the_ceph_community.rst  53
-rw-r--r--  doc/start/introduction_to_clustered_storage.rst   15
-rw-r--r--  doc/start/why_use_ceph.rst                        21
3 files changed, 60 insertions(+), 29 deletions(-)
diff --git a/doc/start/get_involved_in_the_ceph_community.rst b/doc/start/get_involved_in_the_ceph_community.rst
index 241be479443..cce71ac2219 100644
--- a/doc/start/get_involved_in_the_ceph_community.rst
+++ b/doc/start/get_involved_in_the_ceph_community.rst
@@ -1,21 +1,48 @@
===================================
Get Involved in the Ceph Community!
===================================
-These are exciting times in the Ceph community!
-Follow the `Ceph Blog <http://ceph.newdream.net/news/>`__ to keep track of Ceph progress.
+These are exciting times in the Ceph community! Get involved!
-As you delve into Ceph, you may have questions or feedback for the Ceph development team.
-Ceph developers are often available on the ``#ceph`` IRC channel at ``irc.oftc.net``,
-particularly during daytime hours in the US Pacific Standard Time zone.
-Keep in touch with developer activity by subscribing_ to the email list at ceph-devel@vger.kernel.org.
-You can opt out of the email list at any time by unsubscribing_. A simple email is
-all it takes! If you would like to view the archives, go to Gmane_.
-You can help prepare Ceph for production by filing
-and tracking bugs, and providing feature requests using
-the `bug/feature tracker <http://tracker.newdream.net/projects/ceph>`__.
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| Channel         | Description                                     | Contact Info                                    |
+=================+=================================================+=================================================+
+| **Blog**        | Check the Ceph Blog_ periodically to keep track | http://ceph.newdream.net/news                   |
+|                 | of Ceph progress and important announcements.   |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| **IRC**         | As you delve into Ceph, you may have questions  |                                                 |
+|                 | or feedback for the Ceph development team. Ceph | - **Domain:** ``irc.oftc.net``                  |
+|                 | developers are often available on the ``#ceph`` | - **Channel:** ``#ceph``                        |
+|                 | IRC channel, particularly during daytime hours  |                                                 |
+|                 | in the US Pacific Standard Time zone.           |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| **Email List**  | Keep in touch with developer activity by        |                                                 |
+|                 | subscribing_ to the email list at               | - Subscribe_                                    |
+|                 | ceph-devel@vger.kernel.org. You can opt out of  | - Unsubscribe_                                  |
+|                 | the email list at any time by unsubscribing_.   | - Gmane_                                        |
+|                 | A simple email is all it takes! If you would    |                                                 |
+|                 | like to view the archives, go to Gmane_.        |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| **Bug Tracker** | You can help keep Ceph production-worthy by     | http://tracker.newdream.net/projects/ceph       |
+|                 | filing and tracking bugs, and providing feature |                                                 |
+|                 | requests using the Bug Tracker_.                |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| **Source Code** | If you would like to participate in             |                                                 |
+|                 | development, bug fixing, or if you just want    | - http://github.com/ceph/ceph                   |
+|                 | the very latest code for Ceph, you can get it   | - ``$ git clone git@github.com:ceph/ceph.git``  |
+|                 | at http://github.com.                           |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+| **Support**     | If you have a very specific problem, an         | http://ceph.newdream.net/support                |
+|                 | immediate need, or if your deployment requires  |                                                 |
+|                 | significant help, consider commercial support_. |                                                 |
+-----------------+-------------------------------------------------+-------------------------------------------------+
+
+
+.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _Unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
-
-If you need hands-on help, `commercial support <http://ceph.newdream.net/support/>`__ is available too!
\ No newline at end of file
+.. _Tracker: http://tracker.newdream.net/projects/ceph
+.. _Blog: http://ceph.newdream.net/news
+.. _support: http://ceph.newdream.net/support
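
As a companion to the table above, here is a minimal shell session covering the email list and source code channels. The majordomo commands come straight from the link targets above; the ``mail`` invocation is a sketch that assumes a working local mail transport::

    # Subscribe to (or leave) the ceph-devel list; majordomo reads the
    # command from the message body, not the subject.
    $ echo "subscribe ceph-devel" | mail majordomo@vger.kernel.org
    $ echo "unsubscribe ceph-devel" | mail majordomo@vger.kernel.org

    # Clone the latest source over SSH, as listed in the Source Code row.
    $ git clone git@github.com:ceph/ceph.git
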
diff --git a/doc/start/introduction_to_clustered_storage.rst b/doc/start/introduction_to_clustered_storage.rst
index 627898453fd..7da0b33a94f 100644
--- a/doc/start/introduction_to_clustered_storage.rst
+++ b/doc/start/introduction_to_clustered_storage.rst
@@ -7,16 +7,11 @@ to support storing many petabytes of data with the ability to store exabytes of
A number of factors make it challenging to build large storage systems. Three of them include:
-- **Capital Expenditure**: Proprietary systems are expensive. So building scalable systems requires
-using less expensive commodity hardware and a "scale out" approach to reduce build-out expenses.
+- **Capital Expenditure**: Proprietary systems are expensive. So building scalable systems requires using less expensive commodity hardware and a "scale out" approach to reduce build-out expenses.
-- **Ongoing Operating Expenses**: Supporting thousands of storage hosts can impose significant personnel
-expenses, particularly as hardware and networking infrastructure must be installed, maintained and replaced
-ongoingly.
+- **Ongoing Operating Expenses**: Supporting thousands of storage hosts can impose significant personnel expenses, particularly as hardware and networking infrastructure must be installed, maintained and replaced on an ongoing basis.
-- **Loss of Data or Access to Data**: Mission-critical enterprise applications cannot suffer significant
-amounts of downtime, including loss of data *or access to data*. Yet, in systems with thousands of storage hosts,
-hardware failure is an expectation, not an exception.
+- **Loss of Data or Access to Data**: Mission-critical enterprise applications cannot suffer significant amounts of downtime, including loss of data *or access to data*. Yet, in systems with thousands of storage hosts, hardware failure is an expectation, not an exception.
Because of the foregoing factors and other factors, building massive storage systems requires new thinking.
@@ -48,6 +43,4 @@ Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS bl
RADOS Gateway without MDSs. The MDSs dynamically adapt their behavior to the current workload.
As the size and popularity of parts of the file system hierarchy change over time, the MDSs
dynamically redistribute the file system hierarchy among the available
-MDSs to balance the load to use server resources effectively.
-
-<image>
+MDSs to balance the load to use server resources effectively.
\ No newline at end of file
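
To see the metadata server behavior described above in practice, a couple of illustrative commands (assuming a running cluster and the ``ceph`` admin tool; output formats vary by version)::

    # One-line summary of the MDS cluster: which daemons are active or standby.
    $ ceph mds stat

    # Overall cluster status, including monitor, OSD, and MDS state.
    $ ceph -s
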
diff --git a/doc/start/why_use_ceph.rst b/doc/start/why_use_ceph.rst
index c606a0c6a5f..c3071b70969 100644
--- a/doc/start/why_use_ceph.rst
+++ b/doc/start/why_use_ceph.rst
@@ -1,13 +1,24 @@
=============
Why use Ceph?
=============
-Ceph provides an economic and technical foundation for massive scalability.
-
-Financial constraints limit scalability. Ceph is free and open source, which means it does not require expensive
-license fees or expensive updates. Ceph can run on economical commodity hardware, which reduces one economic barrier to scalability. Ceph is easy to install and administer, so it reduces expenses related to administration. Ceph supports popular and widely accepted interfaces (e.g., POSIX-compliance, Swift, Amazon S3, FUSE, etc.). So Ceph provides a compelling solution for building petabyte-to-exabyte scale storage systems.
+Ceph provides an economic and technical foundation for massive scalability. Ceph is free and open source,
+which means it does not require expensive license fees or expensive updates. Ceph can run on economical
+commodity hardware, which reduces another economic barrier to scalability. Ceph is easy to install and administer,
+so it reduces expenses related to administration. Ceph supports popular and widely accepted interfaces in a
+unified storage system (e.g., Amazon S3, Swift, FUSE, block devices, POSIX-compliant file system access, etc.), so you don't
+need to build out a different storage system for each storage interface you support.
Technical and personnel constraints also limit scalability. The performance profile of highly scaled systems
-can very substantially. With intelligent load balancing and adaptive metadata servers that re-balance the file system dynamically, Ceph alleviates the administrative burden of optimizing performance. Additionally, because Ceph provides for data replication, Ceph is fault tolerant. Ceph administrators can simply replace a failed host by subtituting new hardware without having to rely on complex fail-over scenarios. With POSIX semantics for Unix/Linux-based operating systems, popular interfaces like Swift or Amazon S3, and advanced features like directory-level snapshots, system administrators can deploy enterprise applications on Ceph, and provide those applications with a long-term economical solution for scalable persistence.
+can vary substantially. Ceph relieves system administrators of the complex burden of manual performance optimization
+by utilizing the storage system's computing resources to balance loads intelligently and rebalance the file system dynamically.
+Ceph replicates data automatically so that hardware failures do not result in data loss or cascading load spikes.
+Ceph is fault tolerant, so complex fail-over scenarios are unnecessary. Ceph administrators can simply replace a failed host
+with new hardware.
+
+With POSIX semantics for Unix/Linux-based operating systems, popular interfaces like Amazon S3 or Swift, block devices,
+and advanced features like directory-level snapshots, you can deploy enterprise applications on Ceph while
+providing them with a long-term economical solution for scalable storage. While Ceph is open source, commercial
+support is available too! So Ceph provides a compelling solution for building petabyte-to-exabyte scale storage systems.
Reasons to use Ceph include:
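
To make the unified-interface point concrete, here is a hedged sketch of the access methods mentioned above. The monitor address, pool, image name, and mount points are illustrative placeholders, and cephx authentication options are omitted for brevity::

    # POSIX file system access via the kernel client or FUSE.
    $ mount -t ceph 192.168.0.1:6789:/ /mnt/ceph
    $ ceph-fuse -m 192.168.0.1:6789 /mnt/ceph

    # Create a thin-provisioned block device image with RBD.
    $ rbd create myimage --size 1024

    # Store an object directly in RADOS.
    $ rados put myobject myfile.txt --pool=data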