===========================
 Installing a Ceph cluster
===========================

For development and very early-stage testing, see :doc:`/dev/index`.

For installing the latest development builds, see
:doc:`/ops/autobuilt`.

Installing any complex distributed software can be a lot of work. We
support two automated ways of installing Ceph: using Chef_, or with
the ``mkcephfs`` shell script.

.. _Chef: http://wiki.opscode.com/display/chef

.. topic:: Status as of 2011-09

  This section hides a lot of the tedious underlying details. If you
  need or wish to roll your own deployment automation, or are
  installing manually, you will have to dig into many more intricate
  details. We are working on simplifying the installation, as that
  also simplifies our Chef cookbooks.


.. _install-chef:

Installing Ceph using Chef
==========================

(Try saying that fast 10 times.)

.. topic:: Status as of 2011-09

  While we have Chef cookbooks in use internally, they are not yet
  ready to handle unsupervised installation of a full cluster. Stay
  tuned for updates.

.. todo:: write me


Installing Ceph using ``mkcephfs``
==================================

.. note:: ``mkcephfs`` is meant as a quick bootstrapping tool. It does
   not handle more complex operations, such as upgrades. For
   production clusters, you will want to use the :ref:`Chef cookbooks
   <install-chef>`.

Pick a host that has the Ceph software installed -- it does not have
to be a part of your cluster, but it does need to have *matching
versions* of the ``mkcephfs`` command and other Ceph tools
installed. This will be your `admin host`.


Installing the packages
-----------------------


.. _install-debs:

Debian/Ubuntu
~~~~~~~~~~~~~

We regularly build Debian and Ubuntu packages for the `amd64` and
`i386` architectures, for the following distributions:

- ``sid`` (Debian unstable)
- ``squeeze`` (Debian 6.0)
- ``lenny`` (Debian 5.0)
- ``oneiric`` (Ubuntu 11.10)
- ``natty`` (Ubuntu 11.04)
- ``maverick`` (Ubuntu 10.10)

.. todo:: http://ceph.newdream.net/debian/dists/ also has ``lucid``
   (Ubuntu 10.04), should that be removed?

Whenever we say *DISTRO* below, replace that with the codename of your
operating system.

Run these commands on all nodes::

	wget -q -O- https://raw.github.com/NewDreamNetwork/ceph/master/keys/release.asc \
	| sudo apt-key add -

	sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
	deb http://ceph.newdream.net/debian/ DISTRO main
	deb-src http://ceph.newdream.net/debian/ DISTRO main
	EOF

	sudo apt-get update
	sudo apt-get install ceph
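
Once the packages are installed, a quick sanity check (optional, but
cheap) is to confirm that each node reports a Ceph version::

	ceph -v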


.. todo:: For older distributions, you may need to make sure apt-get can read .bz2-compressed files. This works for Debian Lenny 5.0.3: ``apt-get install bzip2``

.. todo:: Ponder packages; ceph.deb currently pulls in gceph (ceph.deb
   Recommends: ceph-client-tools ceph-fuse libceph1 librados2 librbd1
   btrfs-tools gceph) (other interesting: ceph-client-tools ceph-fuse
   libceph-dev librados-dev librbd-dev obsync python-ceph radosgw)


.. todo:: Other operating system support.


Creating a ``ceph.conf`` file
-----------------------------

On the `admin host`, create a file with a name like
``mycluster.conf``.

Here's a template for a 3-node cluster, where all three machines run a
:ref:`monitor <monitor>` and an :ref:`object store <rados>`, and the
first one runs the :ref:`Ceph filesystem daemon <cephfs>`. Replace the
hostnames and IP addresses with your own, and add/remove hosts as
appropriate. All hostnames *must* be short form (no domain).

.. literalinclude:: mycluster.conf
   :language: ini

Note how the ``host`` variables dictate which node runs which
services. See :doc:`/ops/config` for more information.
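
For example, a fragment along these lines (a hypothetical excerpt for
illustration, not a copy of the file included above) would place a
monitor and an OSD on ``myserver01``::

	[mon.alpha]
		host = myserver01
		mon addr = 192.168.0.101:6789

	[osd.0]
		host = myserver01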

.. todo:: More specific link for host= convention.

.. todo:: Point to cluster design docs, once they are ready.

.. todo:: At this point, either use 1 or 3 mons, point to :doc:`grow/mon`


Running ``mkcephfs``
--------------------

Verify that you can manage the nodes from the host you intend to run
``mkcephfs`` on:

- Make sure you can SSH_ from the `admin host` into all the nodes
  using the short hostnames (``myserver`` not
  ``myserver.mydept.example.com``), with no user specified
  [#ssh_config]_.
- Make sure you can SSH_ from the `admin host` into all the nodes
  as ``root`` using the short hostnames.
- Make sure you can run ``sudo`` without passphrase prompts on all
  nodes [#sudo]_.

.. _SSH: http://openssh.org/
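
A quick way to exercise all three checks at once is a small loop like
the following (an optional sketch; replace the hostnames with your own
nodes)::

	for host in myserver01 myserver02 myserver03; do
		ssh $host true && \
			ssh root@$host true && \
			ssh $host sudo -n true && \
			echo "$host ok"
	done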

If you are not using :ref:`Btrfs <btrfs>`, enable :ref:`extended
attributes <xattr>`.
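
On ext3 or ext4 that typically means mounting the filesystem holding
the object store with the ``user_xattr`` option, for example (a
sketch; adjust the mount point to your layout)::

	sudo mount -o remount,user_xattr /srv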

On each node, make sure the directory ``/srv/osd.N`` (with the
appropriate ``N``) exists, and the right filesystem is mounted. If you
are not using a separate filesystem for the file store, just run
``sudo mkdir /srv/osd.N`` (with the right ``N``).
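
If you do use a dedicated filesystem per object store, the per-node
preparation might look something like this (a sketch; the device name
is a placeholder)::

	sudo mkdir -p /srv/osd.0
	sudo mount /dev/sdb1 /srv/osd.0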

Then, using the right path to the ``mycluster.conf`` file you prepared
earlier, run::

	mkcephfs -a -c mycluster.conf -k mycluster.keyring

This places an `admin key` into ``mycluster.keyring``; that key is
used to manage the cluster. Treat it like a ``root`` password to your
filesystem.
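
Since that keyring grants full administrative access, it is worth
keeping its permissions tight (a simple precaution, not a required
step)::

	chmod 600 mycluster.keyring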

.. todo:: Link to explanation of `admin key`.

That command should SSH into all the nodes and set up Ceph for you.

It does **not** copy the configuration or start the services. Let's
do that::

	ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
	ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
	ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
	...

	ssh myserver01 sudo /etc/init.d/ceph start
	ssh myserver02 sudo /etc/init.d/ceph start
	ssh myserver03 sudo /etc/init.d/ceph start
	...
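
With more than a few nodes, the same steps can be written as a loop (a
sketch; substitute your own hostnames)::

	for host in myserver01 myserver02 myserver03; do
		ssh $host sudo tee /etc/ceph/ceph.conf <mycluster.conf
		ssh $host sudo /etc/init.d/ceph start
	done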

After a little while, the cluster should come up and reach a healthy
state. We can check that::

	ceph -k mycluster.keyring -c mycluster.conf health
	2011-09-06 12:33:51.561012 mon <- [health]
	2011-09-06 12:33:51.562164 mon2 -> 'HEALTH_OK' (0)
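
For a broader view than the one-line health check, the same tool can
also report overall cluster status; the exact output varies between
versions::

	ceph -k mycluster.keyring -c mycluster.conf -s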

.. todo:: Document "healthy"

.. todo:: Improve output.



.. rubric:: Footnotes

.. [#ssh_config] Something like this in your ``~/.ssh/config`` may
   help -- unfortunately you need an entry per node::

	Host myserverNN
	     Hostname myserverNN.dept.example.com
	     User ubuntu

.. [#sudo] The relevant ``sudoers`` syntax looks like this::

	%admin ALL=(ALL) NOPASSWD:ALL