Diffstat (limited to 'doc')
-rw-r--r--  doc/api/s3/authentication.rst                        |  60
-rw-r--r--  doc/config-cluster/authentication.rst                | 122
-rw-r--r--  doc/config-cluster/ceph-conf.rst                     |  48
-rw-r--r--  doc/config-cluster/demo-ceph.conf                    |   7
-rw-r--r--  doc/config-cluster/deploying-ceph-conf.rst           |  39
-rw-r--r--  doc/config-cluster/deploying-ceph-with-mkcephfs.rst  |  18
-rw-r--r--  doc/config-cluster/index.rst                         |   5
-rw-r--r--  doc/config-cluster/mkcephfs.rst                      |  63
-rw-r--r--  doc/config-cluster/pools.rst                         |  42
-rw-r--r--  doc/index.rst                                        |   1
-rw-r--r--  doc/init/stop-cluster.rst                            |   2
-rw-r--r--  doc/rbd/rados-rbd-cmds.rst                           | 113
-rw-r--r--  doc/rbd/rbd-ko.rst                                   |  73
-rw-r--r--  doc/rbd/rbd.rst                                      |  31
-rw-r--r--  doc/start/quick-start.rst                            |   9
15 files changed, 514 insertions, 119 deletions
diff --git a/doc/api/s3/authentication.rst b/doc/api/s3/authentication.rst
index d443c0ac0eb..b1875385bf9 100644
--- a/doc/api/s3/authentication.rst
+++ b/doc/api/s3/authentication.rst
@@ -1,15 +1,16 @@
-Authentication and ACLs
-=======================
-Requests to the RADOS Gateway (RGW) can be either authenticated or unauthenticated.
-RGW assumes unauthenticated requests are sent by an anonymous user. RGW supports
-canned ACLs.
+=========================
+ Authentication and ACLs
+=========================
+
+Requests to the RADOS Gateway (RGW) can be either authenticated or
+unauthenticated. RGW assumes unauthenticated requests are sent by an anonymous
+user. RGW supports canned ACLs.
Authentication
--------------
-
-Authenticating a request requires including an access key and a Hash-based Message Authentication Code (HMAC)
-in the request before it is sent to the RGW server. RGW uses an S3-compatible authentication
-approach. The HTTP header signing is similar to OAuth 1.0, but avoids the complexity associated with the 3-legged OAuth 1.0 method.
+Authenticating a request requires including an access key and a Hash-based
+Message Authentication Code (HMAC) in the request before it is sent to the
+RGW server. RGW uses an S3-compatible authentication approach.
::
@@ -17,31 +18,34 @@ approach. The HTTP header signing is similar to OAuth 1.0, but avoids the comple
PUT /buckets/bucket/object.mpeg
Host: cname.domain.com
Date: Mon, 2 Jan 2012 00:01:01 +0000
+ Content-Encoding: mpeg
Content-Length: 9999999
- Content-Encoding: mpeg
Authorization: AWS {access-key}:{hash-of-header-and-secret}
-In the foregoing example, replace ``{access-key}`` with the value for your access key ID followed by
-a colon (``:``). Replace ``{hash-of-header-and-secret}`` with a hash of the header string and the secret
-corresponding to the access key ID.
+In the foregoing example, replace ``{access-key}`` with the value for your access
+key ID followed by a colon (``:``). Replace ``{hash-of-header-and-secret}`` with
+a hash of the header string and the secret corresponding to the access key ID.
To generate the hash of the header string and secret, you must:
-1. Get the value of the header string and the secret::
-
- str = "HTTP/1.1\nPUT /buckets/bucket/object.mpeg\nHost: cname.domain.com\n
- Date: Mon, 2 Jan 2012 00:01:01 +0000\nContent-Length: 9999999\nContent-Encoding: mpeg";
-
- secret = "valueOfSecret";
+#. Get the value of the header string.
+#. Normalize the request header string into canonical form.
+#. Generate an HMAC using a SHA-1 hashing algorithm.
+ See `RFC 2104`_ and `HMAC`_ for details.
+#. Encode the ``hmac`` result as base-64.
-2. Generate an HMAC using a SHA-1 hashing algorithm. ::
+To normalize the header into canonical form:
- hmac = object.hmac-sha1(str, secret);
-
-3. Encode the ``hmac`` result using base-64. ::
-
- encodedHmac = someBase64Encoder.encode(hmac);
+#. Get all fields beginning with ``x-amz-``.
+#. Ensure that the fields are all lowercase.
+#. Sort the fields lexicographically.
+#. Combine multiple instances of the same field name into a
+ single field and separate the field values with a comma.
+#. Replace white space and line breaks in field values with a single space.
+#. Remove white space before and after colons.
+#. Append a new line after each field.
+#. Merge the fields back into the header.
Replace the ``{hash-of-header-and-secret}`` with the base-64 encoded HMAC string.
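+
+For illustration, a minimal sketch of the HMAC and base-64 steps using
+``openssl`` and ``base64`` (the ``{header-string}`` and ``{secret}``
+placeholders below are hypothetical stand-ins for your canonicalized header
+string and secret key)::
+
+ echo -n "{header-string}" | openssl dgst -sha1 -hmac "{secret}" -binary | base64
+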
@@ -50,7 +54,8 @@ Access Control Lists (ACLs)
RGW supports S3-compatible ACL functionality. An ACL is a list of access grants
that specify which operations a user can perform on a bucket or on an object.
-Each grant has a different meaning when applied to a bucket versus applied to an object:
+Each grant has a different meaning when applied to a bucket versus applied to
+an object:
+------------------+--------------------------------------------------------+----------------------------------------------+
| Permission | Bucket | Object |
@@ -65,3 +70,6 @@ Each grant has a different meaning when applied to a bucket versus applied to an
+------------------+--------------------------------------------------------+----------------------------------------------+
| ``FULL_CONTROL`` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
+------------------+--------------------------------------------------------+----------------------------------------------+
+
+.. _RFC 2104: http://www.ietf.org/rfc/rfc2104.txt
+.. _HMAC: http://en.wikipedia.org/wiki/HMAC
diff --git a/doc/config-cluster/authentication.rst b/doc/config-cluster/authentication.rst
new file mode 100644
index 00000000000..98811024f37
--- /dev/null
+++ b/doc/config-cluster/authentication.rst
@@ -0,0 +1,122 @@
+================
+ Authentication
+================
+
+Default users and pools are suitable for initial testing purposes. For test bed
+and production environments, you should create users and assign pool access to
+the users. For user management, see the `ceph-authtool`_ command for details.
+
+Enabling Authentication
+-----------------------
+In the ``[global]`` settings of your ``ceph.conf`` file, you must enable
+authentication for your cluster. ::
+
+ [global]
+ auth supported = cephx
+
+The valid values are ``cephx`` or ``none``. If you specify ``cephx``, you should
+also specify the keyring's path. We recommend using the ``/etc/ceph`` directory.
+Provide a ``keyring`` setting in ``ceph.conf`` like this::
+
+ [global]
+ auth supported = cephx
+ keyring = /etc/ceph/keyring.bin
+
+If there is no keyring in the path, generate one.
+
+Generating a Keyring
+--------------------
+To generate a keyring in the default location, use the ``ceph-authtool``
+command and specify the same path you specified in the ``[global]`` section
+of your ``ceph.conf`` file. For example::
+
+ sudo ceph-authtool --create-keyring /etc/ceph/keyring.bin
+
+Specify Keyrings for each Daemon
+--------------------------------
+In your ``ceph.conf`` file under the daemon settings, you must also specify the
+keyring directory and keyring name. The metavariable ``$name`` resolves
+automatically. ::
+
+ [mon]
+ keyring = /etc/ceph/keyring.$name
+
+ [osd]
+ keyring = /etc/ceph/keyring.$name
+
+ [mds]
+ keyring = /etc/ceph/keyring.$name
+
+Generate a Key
+--------------
+Keys enable a specific user to access the monitor, metadata server and cluster
+according to capabilities assigned to the key. To generate a key for a user,
+you must specify a path to the keyring and a username. Replace
+the ``{keyring/path}`` and ``{username}`` below. ::
+
+ sudo ceph-authtool {keyring/path} -n client.{username} --gen-key
+
+For example::
+
+ sudo ceph-authtool /etc/ceph/keyring.bin -n client.whirlpool --gen-key
+
+.. note:: User names are associated with user types, which include ``client``,
+   ``admin``, ``osd``, ``mon``, and ``mds``. In most cases, you will be
+   creating keys for ``client`` users.
+
+List Keys
+---------
+To see a list of keys in a keyring, execute the following::
+
+ sudo ceph-authtool /etc/ceph/keyring.bin --list
+
+A keyring will display the user, the user's key, and the capabilities
+associated with the user's key.
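+
+For illustration, a listing might look something like the following (the user
+name, key, and capabilities shown here are hypothetical)::
+
+ [client.whirlpool]
+     key = AQBkjIpPkEoRChAAxxxxxxxxxxxxxxxxxxxxxx==
+     caps mon = "allow r"
+     caps osd = "allow rw pool=swimmingpool"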
+
+Add Capabilities to a Key
+-------------------------
+To add capabilities to a key, you must specify the username and a capability
+for at least one of the monitor, metadata server and OSD. You may add more than
+one capability when executing the ``ceph-authtool`` command. Replace the
+``{usertype.username}``, ``{daemontype}``, ``{capability}`` and
+``{keyring/path}`` below::
+
+ sudo ceph-authtool -n {usertype.username} --cap {daemontype} {capability} {keyring/path}
+
+For example::
+
+ sudo ceph-authtool -n client.whirlpool --cap mds 'allow' --cap osd 'allow rw pool=swimmingpool' --cap mon 'allow r' /etc/ceph/keyring.bin
+
+Add the Keys to your Cluster
+----------------------------
+Once you have generated keys and added capabilities to the keys, add each of the
+keys to your cluster. Replace the ``{usertype.username}`` below. ::
+
+ sudo ceph auth add {usertype.username} -i /etc/ceph/keyring.bin
+
+For example::
+
+ sudo ceph auth add client.whirlpool -i /etc/ceph/keyring.bin
+
+To list the keys in your cluster, execute the following::
+
+ sudo ceph auth list
+
+The ``client.admin`` Key
+------------------------
+Each Ceph command you execute on the command line assumes that you are
+the ``client.admin`` default user. When running Ceph with ``cephx`` enabled,
+you need to have a ``client.admin`` key to run ``ceph`` commands.
+
+.. important:: To continue to run Ceph commands on the command line with
+   ``cephx`` enabled, you need to create a key for the ``client.admin``
+   user, and create a secret file under ``/etc/ceph``.
+
+::
+
+ sudo ceph-authtool /etc/ceph/keyring.bin -n client.admin --gen-key
+ sudo ceph-authtool -n client.admin --cap mds 'allow' --cap osd 'allow *' --cap mon 'allow *' /etc/ceph/keyring.bin
+ sudo ceph auth add client.admin -i /etc/ceph/keyring.bin
+
+
+.. _ceph-authtool: http://ceph.com/docs/master/man/8/ceph-authtool/
+ \ No newline at end of file
diff --git a/doc/config-cluster/ceph-conf.rst b/doc/config-cluster/ceph-conf.rst
index 83dd6082bb5..c0c551c1c37 100644
--- a/doc/config-cluster/ceph-conf.rst
+++ b/doc/config-cluster/ceph-conf.rst
@@ -9,7 +9,7 @@ at least one of three processes or daemons:
- Monitor (``ceph-mon``)
- Metadata Server (``ceph-mds``)
-Each process or daemon looks for a ``ceph.conf`` file that provides their
+Each process or daemon looks for a ``ceph.conf`` file that provides its
configuration settings. The default ``ceph.conf`` locations in sequential
order include:
@@ -90,9 +90,8 @@ instances of all processes in the cluster. Use the ``[global]`` setting for
values that are common for all hosts in the cluster. You can override each
``[global]`` setting by:
-1. Changing the setting in a particular ``[group]``.
-2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
-3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` )
+#. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]``).
+#. Changing the setting in a particular process (*e.g.,* ``[osd.1]``).
Overriding a global setting affects all child processes, except those that
you specifically override. For example::
@@ -108,6 +107,11 @@ specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
daemons respectively.
+For details on settings for each type of daemon,
+see `Configuration Reference`_.
+
+.. _Configuration Reference: ../../config
+
Instance Settings
~~~~~~~~~~~~~~~~~
You may specify settings for particular instances of a daemon. You may specify
@@ -121,6 +125,7 @@ alphanumeric for monitors and metadata servers. ::
; settings affect mon.a1 only.
[mds.b2]
; settings affect mds.b2 only.
+
``host`` and ``addr`` Settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -136,13 +141,11 @@ may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
Each host has a name identified by the ``host`` setting, and a network location
(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
- [osd.1]
- host = hostNumber1
- addr = 150.140.130.120
- [osd.2]
- host = hostNumber1
- addr = 150.140.130.120
-
+ [mon.a]
+ host = hostName
+ mon addr = 150.140.130.120:6789
+ [osd.0]
+ host = hostName
Monitor Configuration
~~~~~~~~~~~~~~~~~~~~~
@@ -156,9 +159,8 @@ algorithm can determine which version of the cluster map is the most accurate.
Ceph monitors typically listen on port ``6789``. For example::
[mon.a]
- host = hostNumber1
- addr = 150.140.130.120:6789
-
+ host = hostName
+ mon addr = 150.140.130.120:6789
Example Configuration File
~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -166,16 +168,12 @@ Example Configuration File
.. literalinclude:: demo-ceph.conf
:language: ini
-Configuration File Deployment Options
--------------------------------------
-The most common way to deploy the ``ceph.conf`` file in a cluster is to have
-all hosts share the same configuration file.
-You may create a ``ceph.conf`` file for each host if you wish, or specify a
-particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
-using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
-cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
-on the Administration host and then copies that file to each OSD Cluster host.
+``iptables`` Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Monitors listen on port 6789, while metadata servers and OSDs listen on the first
+available port beginning at 6800. Ensure that you open port 6789 on hosts that run
+a monitor daemon, and open one port beginning at port 6800 for each OSD or metadata
+server that runs on the host. For example::
-The current cluster deployment script, ``mkcephfs``, does not make copies of the
-``ceph.conf``. You must copy the file manually.
+ iptables -A INPUT -m multiport -p tcp -s 192.168.1.0/24 --dports 6789,6800:6803 -j ACCEPT \ No newline at end of file
diff --git a/doc/config-cluster/demo-ceph.conf b/doc/config-cluster/demo-ceph.conf
index 65a7ea5a124..6f7048cd5d9 100644
--- a/doc/config-cluster/demo-ceph.conf
+++ b/doc/config-cluster/demo-ceph.conf
@@ -1,12 +1,14 @@
[global]
; use cephx or none
auth supported = cephx
- keyring = /etc/ceph/$name.keyring
+ keyring = /etc/ceph/keyring.bin
[mon]
mon data = /srv/mon.$id
+ keyring = /etc/ceph/keyring.$name
[mds]
+ keyring = /etc/ceph/keyring.$name
[osd]
osd data = /srv/osd.$id
@@ -14,6 +16,7 @@
osd journal size = 1000
; uncomment the following line if you are mounting with ext4
; filestore xattr use omap = true
+ keyring = /etc/ceph/keyring.$name
[mon.a]
host = myserver01
@@ -37,4 +40,4 @@
host = myserver03
[mds.a]
- host = myserver01 \ No newline at end of file
+ host = myserver01
diff --git a/doc/config-cluster/deploying-ceph-conf.rst b/doc/config-cluster/deploying-ceph-conf.rst
deleted file mode 100644
index c5bc485e1f1..00000000000
--- a/doc/config-cluster/deploying-ceph-conf.rst
+++ /dev/null
@@ -1,39 +0,0 @@
-==================================
- Deploying the Ceph Configuration
-==================================
-Ceph's ``mkcephfs`` deployment script does not copy the configuration file you
-created from the Administration host to the OSD Cluster hosts. Copy the
-configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
-from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host
-if you are using ``mkcephfs`` to deploy Ceph.
-
-::
-
- ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
- ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
-
-The current deployment script does not create the default server directories. Create
-server directories for each instance of a Ceph daemon. Using the exemplary
-``ceph.conf`` file, you would perform the following:
-
-On ``myserver01``::
-
- sudo mkdir srv/osd.0
- sudo mkdir srv/mon.a
-
-On ``myserver02``::
-
- sudo mkdir srv/osd.1
- sudo mkdir srv/mon.b
-
-On ``myserver03``::
-
- sudo mkdir srv/osd.2
- sudo mkdir srv/mon.c
-
-On ``myserver04``::
-
- sudo mkdir srv/osd.3
-
-.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.
diff --git a/doc/config-cluster/deploying-ceph-with-mkcephfs.rst b/doc/config-cluster/deploying-ceph-with-mkcephfs.rst
deleted file mode 100644
index 3a14fabeacd..00000000000
--- a/doc/config-cluster/deploying-ceph-with-mkcephfs.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-==================================
- Deploying Ceph with ``mkcephfs``
-==================================
-
-Once you have copied your Ceph Configuration to the OSD Cluster hosts,
-you may deploy Ceph with the ``mkcephfs`` script.
-
-.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more
- complex operations, such as upgrades.
-
-For production environments, you deploy Ceph using Chef cookbooks. To run
-``mkcephfs``, execute the following::
-
- cd /etc/ceph
- sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
-
-The script adds an admin key to the ``ceph.keyring``, which is analogous to a
-root password.
diff --git a/doc/config-cluster/index.rst b/doc/config-cluster/index.rst
index 271d89ba48a..831ea337458 100644
--- a/doc/config-cluster/index.rst
+++ b/doc/config-cluster/index.rst
@@ -25,6 +25,7 @@ instance (a single context).
file-system-recommendations
Configuration <ceph-conf>
- Deploy Config <deploying-ceph-conf>
- deploying-ceph-with-mkcephfs
+ Deploy with mkcephfs <mkcephfs>
Deploy with Chef <chef>
+ Storage Pools <pools>
+ Authentication <authentication>
diff --git a/doc/config-cluster/mkcephfs.rst b/doc/config-cluster/mkcephfs.rst
new file mode 100644
index 00000000000..789002ce51e
--- /dev/null
+++ b/doc/config-cluster/mkcephfs.rst
@@ -0,0 +1,63 @@
+=============================
+ Deploying with ``mkcephfs``
+=============================
+
+Copy Configuration File to All Hosts
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Ceph's ``mkcephfs`` deployment script does not copy the configuration file you
+created from the Administration host to the OSD Cluster hosts. Copy the
+configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
+from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host
+if you are using ``mkcephfs`` to deploy Ceph.
+
+::
+
+ ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+
+
+Create the Default Directories
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``mkcephfs`` deployment script does not create the default server directories.
+Create server directories for each instance of a Ceph daemon. The ``host``
+variables in the ``ceph.conf`` file determine which host runs each instance of
+a Ceph daemon. Using the exemplary ``ceph.conf`` file, you would perform
+the following:
+
+On ``myserver01``::
+
+ sudo mkdir /srv/osd.0
+ sudo mkdir /srv/mon.a
+
+On ``myserver02``::
+
+ sudo mkdir /srv/osd.1
+ sudo mkdir /srv/mon.b
+
+On ``myserver03``::
+
+ sudo mkdir /srv/osd.2
+ sudo mkdir /srv/mon.c
+ sudo mkdir /srv/mds.a
+
+Run ``mkcephfs``
+~~~~~~~~~~~~~~~~
+Once you have copied your Ceph Configuration to the OSD Cluster hosts
+and created the default directories, you may deploy Ceph with the
+``mkcephfs`` script.
+
+.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more
+ complex operations, such as upgrades.
+
+For production environments, deploy Ceph using Chef cookbooks. To run
+``mkcephfs``, execute the following::
+
+ cd /etc/ceph
+ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
+
+The script adds an admin key to the ``ceph.keyring``, which is analogous to a
+root password. See `Authentication`_ when running with ``cephx`` enabled.
+
+
+.. _Authentication: ../authentication \ No newline at end of file
diff --git a/doc/config-cluster/pools.rst b/doc/config-cluster/pools.rst
new file mode 100644
index 00000000000..156b22de5b4
--- /dev/null
+++ b/doc/config-cluster/pools.rst
@@ -0,0 +1,42 @@
+===============
+ Storage Pools
+===============
+
+Ceph stores data in 'pools' within the OSDs. When you first deploy a cluster
+without specifying pools, Ceph uses the default pools for storing data.
+To organize data into pools, see the `rados`_ command for details.
+
+You can list, create, and remove pools. You can also view the pool utilization
+statistics.
+
+List Pools
+----------
+To list your cluster's pools, execute::
+
+ rados lspools
+
+The default pools include:
+
+- ``data``
+- ``metadata``
+- ``rbd``
+
+Create a Pool
+-------------
+To create a pool, execute::
+
+ rados mkpool {pool_name}
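+
+For example, to create a hypothetical pool named ``swimmingpool``::
+
+ rados mkpool swimmingpool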
+
+Remove a Pool
+-------------
+To remove a pool, execute::
+
+ rados rmpool {pool_name}
+
+Show Pool Stats
+---------------
+To show a pool's utilization statistics, execute::
+
+ rados df
+
+.. _rados: http://ceph.com/docs/master/man/8/rados/ \ No newline at end of file
diff --git a/doc/index.rst b/doc/index.rst
index 77b6bf85ae6..01c70ff2c69 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -21,6 +21,7 @@ cluster to ensure that the storage hosts are running smoothly.
install/index
config-cluster/index
init/index
+ rbd/rbd
ops/index
rec/index
config
diff --git a/doc/init/stop-cluster.rst b/doc/init/stop-cluster.rst
index 245e6db22c9..57c1ab1e680 100644
--- a/doc/init/stop-cluster.rst
+++ b/doc/init/stop-cluster.rst
@@ -3,7 +3,7 @@
====================
To stop a cluster, execute one of the following::
- sudo service ceph stop
+ sudo service ceph -a stop
sudo /etc/init.d/ceph -a stop
Ceph should shut down the operating processes. \ No newline at end of file
diff --git a/doc/rbd/rados-rbd-cmds.rst b/doc/rbd/rados-rbd-cmds.rst
new file mode 100644
index 00000000000..0f9542a2891
--- /dev/null
+++ b/doc/rbd/rados-rbd-cmds.rst
@@ -0,0 +1,113 @@
+====================
+ RADOS RBD Commands
+====================
+The ``rbd`` command enables you to create, list, introspect and remove block
+device images. You can also use it to clone images, create snapshots,
+roll back an image to a snapshot, view a snapshot, etc. For details on using
+the ``rbd`` command, see `RBD – Manage RADOS Block Device (RBD) Images`_.
+
+
+Creating a Block Device Image
+-----------------------------
+Before you can add a block device to a Ceph client, you must first create an
+image for it in the OSD cluster. To create a block device image, execute the
+following::
+
+ rbd create {image-name} --size {megabytes} --pool {pool-name}
+
+For example, to create a 1GB image named ``foo`` in the default ``rbd`` pool,
+and a 1GB image named ``bar`` in a pool named ``swimmingpool``, execute the
+following::
+
+ rbd create foo --size 1024
+ rbd create bar --size 1024 --pool swimmingpool
+
+.. note:: You must create a pool before you can specify it when creating an
+   image. See `Storage Pools`_ for details.
+
+Listing Block Device Images
+---------------------------
+To list block devices in the ``rbd`` pool, execute the following::
+
+ rbd ls
+
+To list block devices in a particular pool, execute the following,
+but replace ``{poolname}`` with the name of the pool::
+
+ rbd ls {poolname}
+
+For example::
+
+ rbd ls swimmingpool
+
+Retrieving Image Information
+----------------------------
+To retrieve information from a particular image, execute the following,
+but replace ``{image-name}`` with the name for the image::
+
+ rbd --image {image-name} info
+
+For example::
+
+ rbd --image foo info
+
+To retrieve information from an image within a pool, execute the following,
+but replace ``{image-name}`` with the name of the image and replace ``{pool-name}``
+with the name of the pool::
+
+ rbd --image {image-name} -p {pool-name} info
+
+For example::
+
+ rbd --image bar -p swimmingpool info
+
+Resizing a Block Device Image
+-----------------------------
+RBD images are thin provisioned. They don't actually use any physical storage
+until you begin saving data to them. However, they do have a maximum capacity
+that you set with the ``--size`` option. If you want to increase (or decrease)
+the maximum size of a RADOS block device image, execute the following::
+
+ rbd resize --image foo --size 2048
+
+
+Removing a Block Device Image
+-----------------------------
+To remove a block device, execute the following, but replace ``{image-name}``
+with the name of the image you want to remove::
+
+ rbd rm {image-name}
+
+For example::
+
+ rbd rm foo
+
+To remove a block device from a pool, execute the following, but replace
+``{image-name}`` with the name of the image to remove and replace
+``{pool-name}`` with the name of the pool::
+
+ rbd rm {image-name} -p {pool-name}
+
+For example::
+
+ rbd rm bar -p swimmingpool
+
+
+Snapshotting Block Device Images
+--------------------------------
+One of the advanced features of RADOS block devices is that you can create
+snapshots of the images to retain a history of an image's state. Ceph supports
+RBD snapshots from the ``rbd`` command, from a kernel object, from a
+KVM, and from cloud solutions. Once you create snapshots of an image, you
+can roll back to a snapshot, list snapshots, remove snapshots, and purge
+the snapshots.
+
+.. important:: Generally, you should stop I/O before snapshotting an image.
+ If the image contains a filesystem, the filesystem should be in a
+ consistent state before snapshotting too.
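+
+For illustration, a minimal sketch of the ``rbd snap`` subcommands, assuming a
+hypothetical image named ``foo`` and a snapshot named ``snapname``::
+
+ rbd snap create --snap snapname foo
+ rbd snap ls foo
+ rbd snap rollback --snap snapname foo
+ rbd snap rm --snap snapname foo
+ rbd snap purge foo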
+
+
+
+
+.. _Storage Pools: ../../config-cluster/pools
+.. _RBD – Manage RADOS Block Device (RBD) Images: ../../man/8/rbd/ \ No newline at end of file
diff --git a/doc/rbd/rbd-ko.rst b/doc/rbd/rbd-ko.rst
new file mode 100644
index 00000000000..1e01f0bec86
--- /dev/null
+++ b/doc/rbd/rbd-ko.rst
@@ -0,0 +1,73 @@
+==============================
+ RBD Kernel Object Operations
+==============================
+
+Add a Block Device
+------------------
+To add an RBD image as a kernel object, first load the Ceph RBD module::
+
+ modprobe rbd
+
+Map the RBD image to the kernel object with ``add``, specifying the IP address
+of the monitor, the user name, the pool name, and the RBD image name as follows::
+
+ echo "{mon-ip-address} name={user-name} {pool-name} {image-name}" | sudo tee /sys/bus/rbd/add
+
+For example::
+
+ echo "10.20.30.40 name=admin rbd foo" | sudo tee /sys/bus/rbd/add
+
+If you use ``cephx`` authentication, you must also specify a secret. ::
+
+ echo "10.20.30.40 name=admin,secret=/path/to/secret rbd foo" | sudo tee /sys/bus/rbd/add
+
+
+A kernel block device resides under the ``/sys/bus/rbd/devices`` directory and
+provides the following functions:
+
++------------------+------------------------------------------------------------+
+| Function | Description |
++==================+============================================================+
+| ``client_id`` | Returns the client ID of the given device ID. |
++------------------+------------------------------------------------------------+
+| ``create_snap`` | Creates a snap from a snap name and a device ID. |
++------------------+------------------------------------------------------------+
+| ``current_snap`` | Returns the most recent snap for the given device ID. |
++------------------+------------------------------------------------------------+
+| ``major`` | |
++------------------+------------------------------------------------------------+
+| ``name`` | Returns the RBD image name of the device ID. |
++------------------+------------------------------------------------------------+
+| ``pool`` | Returns the pool source of the device ID. |
++------------------+------------------------------------------------------------+
+| ``refresh`` | Refreshes the given device with the OSDs. |
++------------------+------------------------------------------------------------+
+| ``size`` | Returns the size of the device. |
++------------------+------------------------------------------------------------+
+| ``uevent`` | |
++------------------+------------------------------------------------------------+
+
+
+List Block Devices
+------------------
+Images are mapped as devices numbered sequentially, starting from ``0``. To
+list the devices that are mapped, execute the following::
+
+ ls /sys/bus/rbd/devices
+
+
+Removing a Block Device
+-----------------------
+To remove an RBD image, specify its index and use ``tee`` to call ``remove`` as
+follows, but replace ``{device-number}`` with the number of the device you want
+to remove::
+
+ echo {device-number} | sudo tee /sys/bus/rbd/remove
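+
+For example, to remove the device mapped at index ``0``::
+
+ echo 0 | sudo tee /sys/bus/rbd/remove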
+
+
+Creating a Snapshot
+-------------------
+To create a snapshot of a device, specify the snapshot name and the device
+number. Replace ``{snap-name}`` and ``{device-number}`` below::
+
+ echo {snap-name} | sudo tee /sys/bus/rbd/devices/{device-number}/create_snap
+
diff --git a/doc/rbd/rbd.rst b/doc/rbd/rbd.rst
new file mode 100644
index 00000000000..b1ad272a46e
--- /dev/null
+++ b/doc/rbd/rbd.rst
@@ -0,0 +1,31 @@
+===============
+ Block Devices
+===============
+
+A block is a sequence of bytes (for example, a 512-byte block of data).
+Block-based storage interfaces are the most common way to store data with
+rotating media such as hard disks, CDs, floppy disks, and even traditional
+9-track tape. The ubiquity of block device interfaces makes a virtual block
+device an ideal candidate to interact with a mass data storage system like Ceph.
+
+Ceph's RADOS Block Devices (RBD) interact with RADOS OSDs using the
+``librados`` and ``librbd`` libraries. RBDs are thin-provisioned, resizable
+and store data striped over multiple OSDs in a Ceph cluster. RBDs inherit
+``librados`` capabilities such as snapshotting and cloning. Ceph's RBDs deliver
+high performance with infinite scalability to kernel objects, kernel virtual
+machines and cloud-based computing systems like OpenStack and CloudStack.
+
+The ``librbd`` library converts data blocks into objects for storage in
+RADOS OSD clusters--the same storage system for ``librados`` object stores and
+the Ceph FS filesystem. You can use the same cluster to operate object stores,
+the Ceph FS filesystem, and RADOS block devices simultaneously.
+
+.. toctree::
+ :maxdepth: 1
+
+ RADOS Commands <rados-rbd-cmds>
+ Kernel Objects <rbd-ko>
+
+
+
+ \ No newline at end of file
diff --git a/doc/start/quick-start.rst b/doc/start/quick-start.rst
index 8ce6a7a2082..88c90988c27 100644
--- a/doc/start/quick-start.rst
+++ b/doc/start/quick-start.rst
@@ -7,16 +7,13 @@ single host. Quick start is intended for Debian/Ubuntu Linux distributions.
#. `Install Ceph packages`_
#. Create a ``ceph.conf`` file.
See `Ceph Configuration Files`_ for details.
-#. Deploy the Ceph configuration.
- See `Deploying the Ceph Configuration`_ for details.
-#. Configure a Ceph cluster
- See `Deploying Ceph with mkcephfs`_ for details.
+#. Deploy the Ceph configuration.
+ See `Deploy with mkcephfs`_ for details.
#. Start a Ceph cluster.
See `Starting a Cluster`_ for details.
.. _Install Ceph packages: ../../install/debian
.. _Ceph Configuration Files: ../../config-cluster/ceph-conf
-.. _Deploying the Ceph Configuration: ../../config-cluster/deploying-ceph-conf
-.. _Deploying Ceph with mkcephfs: ../../config-cluster/deploying-ceph-with-mkcephfs
+.. _Deploy with mkcephfs: ../../config-cluster/mkcephfs
.. _Starting a Cluster: ../../init/start-cluster/