author     Lin Jen-Shin <godfat@godfat.org>    2016-11-03 20:43:24 +0800
committer  Lin Jen-Shin <godfat@godfat.org>    2016-11-03 20:43:24 +0800
commit     b0af0ab62fa7b0b64443e510ed388cef83db996d (patch)
tree       4c8f383a9e79c8ad747962545a171f6a1c59c51f /doc
parent     9176a19e3d858a6d64a2254260febe000474af6d (diff)
parent     ca1096e77f1f44089cd8e37e2fe7fa392571542f (diff)
Merge remote-tracking branch 'upstream/master' into pipeline-notifications
* upstream/master: (26 commits)
  Add a `--force` option to bin/changelog
  Update examples in changelog docs to use single quotes around title
  Use the server's base URL without relative URL part when creating links in JIRA
  Make ESLint ignore instrumented files for coverage analysis (!7236)
  Check that JavaScript file names match convention (!7238)
  Removed z-index for filters on issue boards
  GitLab 8.13 not 13
  Replace MR Description Format links
  Fix gdb backtrace command
  Update gitlab.yml.example
  remove extra spaces from app/workers/post_receive.rb
  Add Rake task to create/repair GitLab Shell hooks symlinks
  Added guide for upgrading Postgres using Slony
  Ensure hook tokens are write-only in the API
  Add support for token attr in project hooks API
  Add a CHANGELOG entry
  Fix edit button wiki
  Updated Sortable JS plugin
  Allow owners to fetch source code in CI builds
  fixes milestone dropdown not select issue
  ...
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/raketasks/maintenance.md       220
-rw-r--r--  doc/administration/troubleshooting/debug.md         2
-rw-r--r--  doc/api/projects.md                                  2
-rw-r--r--  doc/development/README.md                            1
-rw-r--r--  doc/development/changelog.md                       182
-rw-r--r--  doc/development/testing.md                          36
-rw-r--r--  doc/integration/jira.md                              4
-rw-r--r--  doc/raketasks/maintenance.md                       189
-rw-r--r--  doc/update/README.md                                 2
-rw-r--r--  doc/update/upgrading_postgresql_using_slony.md     482
10 files changed, 930 insertions(+), 190 deletions(-)
diff --git a/doc/administration/raketasks/maintenance.md b/doc/administration/raketasks/maintenance.md
new file mode 100644
index 00000000000..f3c2e72341f
--- /dev/null
+++ b/doc/administration/raketasks/maintenance.md
@@ -0,0 +1,220 @@
+# Maintenance Rake Tasks
+
+## Gather information about GitLab and the system it runs on
+
+This command gathers information about your GitLab installation and the system it runs on. This information may be useful when asking for help or reporting issues.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:env:info
+```
+
+**Source Installation**
+
+```
+bundle exec rake gitlab:env:info RAILS_ENV=production
+```
+
+Example output:
+
+```
+System information
+System: Debian 7.8
+Current User: git
+Using RVM: no
+Ruby Version: 2.1.5p273
+Gem Version: 2.4.3
+Bundler Version: 1.7.6
+Rake Version: 10.3.2
+Sidekiq Version: 2.17.8
+
+GitLab information
+Version: 7.7.1
+Revision: 41ab9e1
+Directory: /home/git/gitlab
+DB Adapter: postgresql
+URL: https://gitlab.example.com
+HTTP Clone URL: https://gitlab.example.com/some-project.git
+SSH Clone URL: git@gitlab.example.com:some-project.git
+Using LDAP: no
+Using Omniauth: no
+
+GitLab Shell
+Version: 2.4.1
+Repositories: /home/git/repositories/
+Hooks: /home/git/gitlab-shell/hooks/
+Git: /usr/bin/git
+```
+
+## Check GitLab configuration
+
+Runs the following rake tasks:
+
+- `gitlab:gitlab_shell:check`
+- `gitlab:sidekiq:check`
+- `gitlab:app:check`
+
+It will check that each component was set up according to the installation guide and suggest fixes for any issues found.
+
+You may also have a look at our [Troubleshooting Guide](https://github.com/gitlabhq/gitlab-public-wiki/wiki/Trouble-Shooting-Guide).
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:check
+```
+
+**Source Installation**
+
+```
+bundle exec rake gitlab:check RAILS_ENV=production
+```
+
+NOTE: Use `SANITIZE=true` with `gitlab:check` if you want to omit project names from the output.
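+
+For example, to run the check with project names omitted from the output:
+
+```
+# Omnibus Installation
+sudo gitlab-rake gitlab:check SANITIZE=true
+
+# Source Installation
+bundle exec rake gitlab:check SANITIZE=true RAILS_ENV=production
+```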
+
+Example output:
+
+```
+Checking Environment ...
+
+Git configured for git user? ... yes
+Has python2? ... yes
+python2 is supported version? ... yes
+
+Checking Environment ... Finished
+
+Checking GitLab Shell ...
+
+GitLab Shell version? ... OK (1.2.0)
+Repo base directory exists? ... yes
+Repo base directory is a symlink? ... no
+Repo base owned by git:git? ... yes
+Repo base access is drwxrws---? ... yes
+post-receive hook up-to-date? ... yes
+post-receive hooks in repos are links: ... yes
+
+Checking GitLab Shell ... Finished
+
+Checking Sidekiq ...
+
+Running? ... yes
+
+Checking Sidekiq ... Finished
+
+Checking GitLab ...
+
+Database config exists? ... yes
+Database is SQLite ... no
+All migrations up? ... yes
+GitLab config exists? ... yes
+GitLab config outdated? ... no
+Log directory writable? ... yes
+Tmp directory writable? ... yes
+Init script exists? ... yes
+Init script up-to-date? ... yes
+Redis version >= 2.0.0? ... yes
+
+Checking GitLab ... Finished
+```
+
+## Rebuild authorized_keys file
+
+In some cases it is necessary to rebuild the `authorized_keys` file.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:shell:setup
+```
+
+**Source Installation**
+
+```
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake gitlab:shell:setup RAILS_ENV=production
+```
+
+```
+This will rebuild an authorized_keys file.
+You will lose any data stored in authorized_keys file.
+Do you want to continue (yes/no)? yes
+```
+
+## Clear redis cache
+
+If for some reason the dashboard shows the wrong information, you might want to
+clear the Redis cache.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake cache:clear
+```
+
+**Source Installation**
+
+```
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
+```
+
+## Precompile the assets
+
+Sometimes during version upgrades you might end up with broken CSS or
+missing icons. In that case, try precompiling the assets again.
+
+Note that this only applies to source installations and does NOT apply to
+Omnibus packages.
+
+**Source Installation**
+
+```
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production
+```
+
+For Omnibus versions, the unoptimized assets (JavaScript, CSS) are frozen at
+the release of upstream GitLab. The Omnibus version includes optimized versions
+of those assets. Unless you are modifying the JavaScript / CSS code on your
+production machine after installing the package, there should be no reason to rerun
+`rake assets:precompile` on the production machine. If you suspect that assets
+have been corrupted, you should reinstall the Omnibus package.
+
+## Tracking Deployments
+
+GitLab provides a Rake task that lets you track deployments in GitLab
+Performance Monitoring. This Rake task simply stores the current GitLab version
+in the GitLab Performance Monitoring database.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:track_deployment
+```
+
+**Source Installation**
+
+```
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake gitlab:track_deployment RAILS_ENV=production
+```
+
+## Create or repair repository hooks symlink
+
+If the GitLab Shell hooks directory location changes, or some other circumstance
+leaves the hooks symlinks missing or invalid, run this Rake task
+to create or repair them.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:shell:create_hooks
+```
+
+**Source Installation**
+
+```
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake gitlab:shell:create_hooks RAILS_ENV=production
+```
diff --git a/doc/administration/troubleshooting/debug.md b/doc/administration/troubleshooting/debug.md
index d8dce4388e1..6f1356ddf8f 100644
--- a/doc/administration/troubleshooting/debug.md
+++ b/doc/administration/troubleshooting/debug.md
@@ -107,7 +107,7 @@ downtime. Otherwise skip to the next section.
1. To see the current threads, run:
```
- apply all thread bt
+ thread apply all bt
```
1. Once you're done debugging with `gdb`, be sure to detach from the process and exit:
diff --git a/doc/api/projects.md b/doc/api/projects.md
index 8ebac57e612..4f4b20a1874 100644
--- a/doc/api/projects.md
+++ b/doc/api/projects.md
@@ -1139,6 +1139,7 @@ Parameters:
| `pipeline_events` | boolean | no | Trigger hook on pipeline events |
| `wiki_events` | boolean | no | Trigger hook on wiki events |
| `enable_ssl_verification` | boolean | no | Do SSL verification when triggering the hook |
+| `token` | string | no | Secret token to validate received payloads; this will not be returned in the response |
### Edit project hook
@@ -1164,6 +1165,7 @@ Parameters:
| `pipeline_events` | boolean | no | Trigger hook on pipeline events |
| `wiki_events` | boolean | no | Trigger hook on wiki events |
| `enable_ssl_verification` | boolean | no | Do SSL verification when triggering the hook |
+| `token` | string | no | Secret token to validate received payloads; this will not be returned in the response |
### Delete project hook
diff --git a/doc/development/README.md b/doc/development/README.md
index 3f2151bbe8e..bf1f054b7d5 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -21,6 +21,7 @@
## Process
+- [Generate a changelog entry with `bin/changelog`](changelog.md)
- [Code review guidelines](code_review.md) for reviewing code and having code reviewed.
- [Merge request performance guidelines](merge_request_performance_guidelines.md)
for ensuring merge requests do not negatively impact GitLab performance
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
new file mode 100644
index 00000000000..390eb549d0b
--- /dev/null
+++ b/doc/development/changelog.md
@@ -0,0 +1,182 @@
+# Generate a changelog entry
+
+This guide contains instructions for generating a changelog entry data file, as
+well as information and history about our changelog process.
+
+## Overview
+
+Each bullet point, or **entry**, in our [`CHANGELOG.md`][changelog.md] file is
+generated from a single data file in the [`changelogs/unreleased/`][unreleased]
+(or corresponding EE) folder. The file is expected to be a [YAML] file in the
+following format:
+
+```yaml
+---
+title: "Going through change[log]s"
+merge_request: 1972
+author: Ozzy Osbourne
+```
+
+The `merge_request` value is a reference to a merge request that adds this
+entry, and the `author` key is used to give attribution to community
+contributors. Both are optional.
+
+If you're working on the GitLab EE repository, the entry will be added to
+`changelogs/unreleased-ee/` instead.
+
+[changelog.md]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/CHANGELOG.md
+[unreleased]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/changelogs/
+[YAML]: https://en.wikipedia.org/wiki/YAML
+
+## Instructions
+
+A `bin/changelog` script is available to generate the changelog entry file
+automatically.
+
+Its simplest usage is to provide the value for `title`:
+
+```text
+$ bin/changelog 'Hey DZ, I added a feature to GitLab!'
+create changelogs/unreleased/my-feature.yml
+---
+title: Hey DZ, I added a feature to GitLab!
+merge_request:
+author:
+```
+
+The entry filename is based on the name of the current Git branch. If you run
+the command above on a branch called `feature/hey-dz`, it will generate a
+`changelogs/unreleased/feature-hey-dz.yml` file.
+
+### Arguments
+
+| Argument | Shorthand | Purpose |
+| ----------------- | --------- | --------------------------------------------- |
+| `--amend` | | Amend the previous commit |
+| `--force` | `-f` | Overwrite an existing entry |
+| `--merge-request` | `-m` | Merge Request ID |
+| `--dry-run` | `-n` | Don't actually write anything, just print |
+| `--git-username` | `-u` | Use Git user.name configuration as the author |
+| `--help` | `-h` | Print help message |
+
+#### `--amend`
+
+You can pass the **`--amend`** argument to automatically stage the generated
+file and amend it to the previous commit.
+
+If you use **`--amend`** and don't provide a title, it will automatically use
+the "subject" of the previous commit, which is the first line of the commit
+message:
+
+```text
+$ git show --oneline
+ab88683 Added an awesome new feature to GitLab
+
+$ bin/changelog --amend
+create changelogs/unreleased/feature-hey-dz.yml
+---
+title: Added an awesome new feature to GitLab
+merge_request:
+author:
+```
+
+#### `--force` or `-f`
+
+Use **`--force`** or **`-f`** to overwrite an existing changelog entry.
+
+```text
+$ bin/changelog 'Hey DZ, I added a feature to GitLab!'
+error changelogs/unreleased/feature-hey-dz.yml already exists! Use `--force` to overwrite.
+
+$ bin/changelog 'Hey DZ, I added a feature to GitLab!' --force
+create changelogs/unreleased/feature-hey-dz.yml
+---
+title: Hey DZ, I added a feature to GitLab!
+merge_request: 1983
+author:
+```
+
+#### `--merge-request` or `-m`
+
+Use the **`--merge-request`** or **`-m`** argument to provide the
+`merge_request` value:
+
+```text
+$ bin/changelog 'Hey DZ, I added a feature to GitLab!' -m 1983
+create changelogs/unreleased/feature-hey-dz.yml
+---
+title: Hey DZ, I added a feature to GitLab!
+merge_request: 1983
+author:
+```
+
+#### `--dry-run` or `-n`
+
+Use the **`--dry-run`** or **`-n`** argument to prevent actually writing or
+committing anything:
+
+```text
+$ bin/changelog --amend --dry-run
+create changelogs/unreleased/feature-hey-dz.yml
+---
+title: Added an awesome new feature to GitLab
+merge_request:
+author:
+
+$ ls changelogs/unreleased/
+```
+
+#### `--git-username` or `-u`
+
+Use the **`--git-username`** or **`-u`** argument to automatically fill in the
+`author` value with your configured Git `user.name` value:
+
+```text
+$ git config user.name
+Jane Doe
+
+$ bin/changelog -u 'Hey DZ, I added a feature to GitLab!'
+create changelogs/unreleased/feature-hey-dz.yml
+---
+title: Hey DZ, I added a feature to GitLab!
+merge_request:
+author: Jane Doe
+```
+
+## History and Reasoning
+
+Our `CHANGELOG` file was previously updated manually by each contributor who
+felt their change warranted an entry. When two merge requests added their own
+entries at the same spot in the list, it created a merge conflict in one as soon
+as the other was merged. When we had dozens of merge requests fighting for the
+same changelog entry location, this quickly became a major source of merge
+conflicts and delays in development.
+
+This led us to a [boring solution] of "add your entry in a random location in
+the list." This actually worked pretty well as we got further along in each
+monthly release cycle, but at the start of a new cycle, when a new version
+section was added and there were fewer places to "randomly" add an entry, the
+conflicts became a problem again until we had a sufficient number of entries.
+
+On top of all this, it created an entirely different headache for [release managers]
+when they cherry-picked a commit into a stable branch for a patch release. If
+the commit included an entry in the `CHANGELOG`, it would include the entire
+changelog for the latest version in `master`, so the release manager would have
+to manually remove the later entries. They often had to do this multiple
+times per patch release. This was compounded when we had to release
+multiple patches at once due to a security issue.
+
+We needed to automate all of this manual work. So we [started brainstorming].
+After much discussion we settled on the current solution of one file per entry,
+and then compiling the entries into the overall `CHANGELOG.md` file during the
+[release process].
+
+[boring solution]: https://about.gitlab.com/handbook/#boring-solutions
+[release managers]: https://gitlab.com/gitlab-org/release-tools/blob/master/doc/release-manager.md
+[started brainstorming]: https://gitlab.com/gitlab-org/gitlab-ce/issues/17826
+[release process]: https://gitlab.com/gitlab-org/release-tools
+
+---
+
+[Return to Development documentation](README.md)
diff --git a/doc/development/testing.md b/doc/development/testing.md
index 8e91ac5e3ba..b0b26ccf57a 100644
--- a/doc/development/testing.md
+++ b/doc/development/testing.md
@@ -132,6 +132,42 @@ Adding new Spinach scenarios is acceptable _only if_ the new scenario requires
no more than one new `step` definition. If more than that is required, the
test should be re-implemented using RSpec instead.
+## Testing Rake Tasks
+
+To make testing Rake tasks a little easier, there is a helper that can be included
+in lieu of the standard Spec helper. Instead of `require 'spec_helper'`, use
+`require 'rake_helper'`. The helper includes `spec_helper` for you, and configures
+a few other things to make testing Rake tasks easier.
+
+At a minimum, requiring the Rake helper will redirect `stdout`, include the
+runtime task helpers, and include the `RakeHelpers` Spec support module.
+
+The `RakeHelpers` module exposes a `run_rake_task(<task>)` method to make
+executing tasks simple. See `spec/support/rake_helpers.rb` for all available
+methods.
+
+Example:
+
+```ruby
+require 'rake_helper'
+
+describe 'gitlab:shell rake tasks' do
+ before do
+ Rake.application.rake_require 'tasks/gitlab/shell'
+
+ stub_warn_user_is_not_gitlab
+ end
+
+ describe 'install task' do
+ it 'invokes create_hooks task' do
+ expect(Rake::Task['gitlab:shell:create_hooks']).to receive(:invoke)
+
+ run_rake_task('gitlab:shell:install')
+ end
+ end
+end
+```
+
---
[Return to Development documentation](README.md)
diff --git a/doc/integration/jira.md b/doc/integration/jira.md
index cf1557ddc44..2e31fd994de 100644
--- a/doc/integration/jira.md
+++ b/doc/integration/jira.md
@@ -135,7 +135,7 @@ password as they will be needed when configuring GitLab in the next section.
JIRA configuration in GitLab is done via a project's **Services**.
-#### GitLab 13.0 with JIRA v1000.x
+#### GitLab 8.13.0 with JIRA v1000.x
To enable JIRA integration in a project, navigate to the project's
and open the context menu clicking on the top right gear icon, then go to
@@ -160,7 +160,7 @@ with the linked JIRA project.
#### GitLab 6.x-7.7 with JIRA v6.x
-_**Note:** GitLab versions 13.0 and up contain various integration improvements.
+_**Note:** GitLab versions 8.13.0 and up contain various integration improvements.
We strongly recommend upgrading._
In `gitlab.yml` enable the JIRA issue tracker section by
diff --git a/doc/raketasks/maintenance.md b/doc/raketasks/maintenance.md
index 315cb56a089..266aeb7d60e 100644
--- a/doc/raketasks/maintenance.md
+++ b/doc/raketasks/maintenance.md
@@ -1,188 +1,3 @@
-# Maintenance
+# Maintenance Rake Tasks
-## Gather information about GitLab and the system it runs on
-
-This command gathers information about your GitLab installation and the System it runs on. These may be useful when asking for help or reporting issues.
-
-```
-# omnibus-gitlab
-sudo gitlab-rake gitlab:env:info
-
-# installation from source
-bundle exec rake gitlab:env:info RAILS_ENV=production
-```
-
-Example output:
-
-```
-System information
-System: Debian 7.8
-Current User: git
-Using RVM: no
-Ruby Version: 2.1.5p273
-Gem Version: 2.4.3
-Bundler Version: 1.7.6
-Rake Version: 10.3.2
-Sidekiq Version: 2.17.8
-
-GitLab information
-Version: 7.7.1
-Revision: 41ab9e1
-Directory: /home/git/gitlab
-DB Adapter: postgresql
-URL: https://gitlab.example.com
-HTTP Clone URL: https://gitlab.example.com/some-project.git
-SSH Clone URL: git@gitlab.example.com:some-project.git
-Using LDAP: no
-Using Omniauth: no
-
-GitLab Shell
-Version: 2.4.1
-Repositories: /home/git/repositories/
-Hooks: /home/git/gitlab-shell/hooks/
-Git: /usr/bin/git
-```
-
-## Check GitLab configuration
-
-Runs the following rake tasks:
-
-- `gitlab:gitlab_shell:check`
-- `gitlab:sidekiq:check`
-- `gitlab:app:check`
-
-It will check that each component was setup according to the installation guide and suggest fixes for issues found.
-
-You may also have a look at our [Trouble Shooting Guide](https://github.com/gitlabhq/gitlab-public-wiki/wiki/Trouble-Shooting-Guide).
-
-```
-# omnibus-gitlab
-sudo gitlab-rake gitlab:check
-
-# installation from source
-bundle exec rake gitlab:check RAILS_ENV=production
-```
-
-NOTE: Use SANITIZE=true for gitlab:check if you want to omit project names from the output.
-
-Example output:
-
-```
-Checking Environment ...
-
-Git configured for git user? ... yes
-Has python2? ... yes
-python2 is supported version? ... yes
-
-Checking Environment ... Finished
-
-Checking GitLab Shell ...
-
-GitLab Shell version? ... OK (1.2.0)
-Repo base directory exists? ... yes
-Repo base directory is a symlink? ... no
-Repo base owned by git:git? ... yes
-Repo base access is drwxrws---? ... yes
-post-receive hook up-to-date? ... yes
-post-receive hooks in repos are links: ... yes
-
-Checking GitLab Shell ... Finished
-
-Checking Sidekiq ...
-
-Running? ... yes
-
-Checking Sidekiq ... Finished
-
-Checking GitLab ...
-
-Database config exists? ... yes
-Database is SQLite ... no
-All migrations up? ... yes
-GitLab config exists? ... yes
-GitLab config outdated? ... no
-Log directory writable? ... yes
-Tmp directory writable? ... yes
-Init script exists? ... yes
-Init script up-to-date? ... yes
-Redis version >= 2.0.0? ... yes
-
-Checking GitLab ... Finished
-```
-
-## Rebuild authorized_keys file
-
-In some case it is necessary to rebuild the `authorized_keys` file.
-
-For Omnibus-packages:
-```
-sudo gitlab-rake gitlab:shell:setup
-```
-
-For installations from source:
-```
-cd /home/git/gitlab
-sudo -u git -H bundle exec rake gitlab:shell:setup RAILS_ENV=production
-```
-
-```
-This will rebuild an authorized_keys file.
-You will lose any data stored in authorized_keys file.
-Do you want to continue (yes/no)? yes
-```
-
-## Clear redis cache
-
-If for some reason the dashboard shows wrong information you might want to
-clear Redis' cache.
-
-For Omnibus-packages:
-```
-sudo gitlab-rake cache:clear
-```
-
-For installations from source:
-```
-cd /home/git/gitlab
-sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
-```
-
-## Precompile the assets
-
-Sometimes during version upgrades you might end up with some wrong CSS or
-missing some icons. In that case, try to precompile the assets again.
-
-Note that this only applies to source installations and does NOT apply to
-omnibus packages.
-
-For installations from source:
-```
-cd /home/git/gitlab
-sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production
-```
-
-For omnibus versions, the unoptimized assets (JavaScript, CSS) are frozen at
-the release of upstream GitLab. The omnibus version includes optimized versions
-of those assets. Unless you are modifying the JavaScript / CSS code on your
-production machine after installing the package, there should be no reason to redo
-rake assets:precompile on the production machine. If you suspect that assets
-have been corrupted, you should reinstall the omnibus package.
-
-## Tracking Deployments
-
-GitLab provides a Rake task that lets you track deployments in GitLab
-Performance Monitoring. This Rake task simply stores the current GitLab version
-in the GitLab Performance Monitoring database.
-
-For Omnibus-packages:
-
-```
-sudo gitlab-rake gitlab:track_deployment
-```
-
-For installations from source:
-
-```
-cd /home/git/gitlab
-sudo -u git -H bundle exec rake gitlab:track_deployment RAILS_ENV=production
-```
+This document was moved to [administration/raketasks/maintenance](../administration/raketasks/maintenance.md).
diff --git a/doc/update/README.md b/doc/update/README.md
index 975d72164b4..837b31abb97 100644
--- a/doc/update/README.md
+++ b/doc/update/README.md
@@ -85,6 +85,8 @@ possible.
- [MySQL installation guide](../install/database_mysql.md) contains additional
information about configuring GitLab to work with a MySQL database.
- [Restoring from backup after a failed upgrade](restore_after_failure.md)
+- [Upgrading PostgreSQL Using Slony](upgrading_postgresql_using_slony.md), for
+ upgrading a PostgreSQL database with minimal downtime.
[omnidocker]: http://docs.gitlab.com/omnibus/docker/README.html
[source-ee]: https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc/update
diff --git a/doc/update/upgrading_postgresql_using_slony.md b/doc/update/upgrading_postgresql_using_slony.md
new file mode 100644
index 00000000000..f009906256e
--- /dev/null
+++ b/doc/update/upgrading_postgresql_using_slony.md
@@ -0,0 +1,482 @@
+# Upgrading PostgreSQL Using Slony
+
+This guide describes the steps one can take to upgrade their PostgreSQL database
+to the latest version without the need for hours of downtime. This guide assumes
+you have two database servers: one database server running an older version of
+PostgreSQL (e.g. 9.2.18) and one server running a newer version (e.g. 9.6.0).
+
+For this process we'll use a PostgreSQL replication tool called
+["Slony"](http://www.slony.info/). Slony allows replication between different
+PostgreSQL versions and as such can be used to upgrade a cluster with a minimal
+amount of downtime.
+
+In various places we'll refer to the user `gitlab-psql`. This user should be the
+user used to run the various PostgreSQL OS processes. If you're using a
+different user (e.g. `postgres`) you should replace `gitlab-psql` with the name
+of said user. This guide also assumes your database is called
+`gitlabhq_production`. If you happen to use a different database name you should
+change this accordingly.
+
+## Database Dumps
+
+Slony only replicates data and not any schema changes. As a result we must
+ensure that all databases have the same database structure.
+
+To do so we'll generate a dump of our current database. This dump will only
+contain the structure, not any data. To generate this dump run the following
+command on your active database server:
+
+```bash
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump -h /var/opt/gitlab/postgresql -p 5432 -U gitlab-psql -s -f /tmp/structure.sql gitlabhq_production
+```
+
+If you're not using GitLab's Omnibus package you may have to adjust the paths to
+`pg_dump` and the PostgreSQL installation directory to match the paths of your
+configuration.
+
+Once the structure dump is generated we also need to generate a dump for the
+`schema_migrations` table. This table doesn't have any primary keys and as such
+can't be replicated easily by Slony. To generate this dump run the following
+command on your active database server:
+
+```bash
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump -h /var/opt/gitlab/postgresql/ -p 5432 -U gitlab-psql -a -t schema_migrations -f /tmp/migrations.sql gitlabhq_production
+```
+
+Next we'll need to move these files somewhere accessible by the new database
+server. The easiest way is to simply download these files to your local system:
+
+```bash
+scp your-user@production-database-host:/tmp/*.sql /tmp
+```
+
+This will copy all the SQL files located in `/tmp` to your local system's
+`/tmp` directory. Once copied you can safely remove the files from the database
+server.
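+
+If you want to tidy up afterwards, the dumps can be removed over SSH; a minimal
+sketch, reusing the host placeholder from the `scp` command above:
+
+```bash
+ssh your-user@production-database-host 'rm /tmp/structure.sql /tmp/migrations.sql'
+```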
+
+## Installing Slony
+
+Slony will be used to upgrade the database without requiring long downtimes.
+Slony can be downloaded from http://www.slony.info/. If you have installed
+PostgreSQL using your operating system's package manager you may also be able to
+install Slony using said package manager.
+
+When compiling Slony from source you *must* use the following commands to do so:
+
+```bash
+./configure --prefix=/path/to/installation/directory --with-perltools --with-pgconfigdir=/path/to/directory/containing/pg_config/bin
+make
+make install
+```
+
+Omnibus users can use the following commands:
+
+```bash
+./configure --prefix=/opt/gitlab/embedded --with-perltools --with-pgconfigdir=/opt/gitlab/embedded/bin
+make
+make install
+```
+
+This assumes you have installed GitLab into `/opt/gitlab`.
+
+To test if Slony is installed properly, run the following commands:
+
+```bash
+test -f /opt/gitlab/embedded/bin/slonik && echo 'Slony installed' || echo 'Slony not installed'
+test -f /opt/gitlab/embedded/bin/slonik_init_cluster && echo 'Slony Perl tools are available' || echo 'Slony Perl tools are not available'
+/opt/gitlab/embedded/bin/slonik -v
+```
+
+This assumes Slony was installed to `/opt/gitlab/embedded`. If Slony was
+installed properly the output of these commands will be (the mentioned "slonik"
+version may be different):
+
+```
+Slony installed
+Slony Perl tools are available
+slonik version 2.2.5
+```
+
+## Slony User
+
+Next we must set up a PostgreSQL user that Slony can use to replicate your
+database. To do so, log in to your production database using `psql` using a
+super user account. Once done run the following SQL queries:
+
+```sql
+CREATE ROLE slony WITH SUPERUSER LOGIN REPLICATION ENCRYPTED PASSWORD 'password string here';
+ALTER ROLE slony SET statement_timeout TO 0;
+```
+
+Make sure you replace "password string here" with the actual password for the
+user. A password is *required*. This user must be created on _both_ the old and
+new database server using the same password.
+
+Once the user has been created make sure you note down the password as we will
+need it later on.
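+
+To double-check that the role exists on each server, you can query `pg_roles`;
+a quick sketch, assuming the Omnibus `gitlab-psql` wrapper:
+
+```bash
+sudo gitlab-psql gitlabhq_production -c "SELECT rolname, rolsuper, rolreplication FROM pg_roles WHERE rolname = 'slony';"
+```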
+
+## Configuring Slony
+
+Now we can finally start configuring Slony. Slony uses a configuration file for
+most of its work, so we'll need to set one up. This configuration file
+specifies where to put log files, how Slony should connect to the databases,
+etc.
+
+First we'll need to create some required directories and set the correct
+permissions. To do so, run the following commands on both the old and new
+database server:
+
+```bash
+sudo mkdir -p /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
+sudo chown gitlab-psql:root /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
+```
+
+Here `gitlab-psql` is the user used to run the PostgreSQL database processes. If
+you're using a different user you should replace this with the name of said
+user.
+
+Now that the directories are in place we can create the configuration file. For
+this we can use the following template:
+
+```perl
+if ($ENV{"SLONYNODES"}) {
+ require $ENV{"SLONYNODES"};
+} else {
+ $CLUSTER_NAME = 'slony_replication';
+ $LOGDIR = '/var/log/gitlab/slony';
+ $MASTERNODE = 1;
+ $DEBUGLEVEL = 2;
+
+ add_node(host => 'OLD_HOST', dbname => 'gitlabhq_production', port =>5432,
+ user=>'slony', password=>'SLONY_PASSWORD', node=>1);
+
+ add_node(host => 'NEW_HOST', dbname => 'gitlabhq_production', port =>5432,
+ user=>'slony', password=>'SLONY_PASSWORD', node=>2, parent=>1 );
+}
+
+$SLONY_SETS = {
+ "set1" => {
+ "set_id" => 1,
+ "table_id" => 1,
+ "sequence_id" => 1,
+ "pkeyedtables" => [
+ TABLES
+ ],
+ },
+};
+
+if ($ENV{"SLONYSET"}) {
+ require $ENV{"SLONYSET"};
+}
+
+# Please do not add or change anything below this point.
+1;
+```
+
+Before you can use this configuration file, you must replace the following
+placeholders:
+
+* `OLD_HOST`: the address of the old database server.
+* `NEW_HOST`: the address of the new database server.
+* `SLONY_PASSWORD`: the password of the Slony user created earlier.
+* `TABLES`: the tables to replicate.
+
+The list of tables to replicate can be generated by running the following
+command on your old PostgreSQL database:
+
+```
+sudo gitlab-psql gitlabhq_production -c "select concat('\"', schemaname, '.', tablename, '\",') from pg_catalog.pg_tables where schemaname = 'public' and tableowner = 'gitlab' and tablename != 'schema_migrations' order by tablename asc;" -t
+```
+
+If you're not using Omnibus you should replace `gitlab-psql` with the
+appropriate path to the `psql` executable.
+
+The above command outputs a list of tables in a format that can be copy-pasted
+directly into the above configuration file. Make sure to _replace_ `TABLES` with
+this output; don't just append it below. Once done you'll end up with
+something like this:
+
+```perl
+"pkeyedtables" => [
+ "public.abuse_reports",
+ "public.appearances",
+ "public.application_settings",
+ ... more rows here ...
+]
+```
+
+Once you have generated the configuration file, you must install it on both the
+old and new database servers. To do so, place it in
+`/var/opt/gitlab/postgresql/slony/slon_tools.conf` (inside the directory we
+created earlier on).
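+
+For example, assuming you saved the generated file as `slon_tools.conf` in your
+current directory, you could install it on each server like this (adjust the
+owner if you are not using the `gitlab-psql` user):
+
+```bash
+sudo cp slon_tools.conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
+sudo chown gitlab-psql:root /var/opt/gitlab/postgresql/slony/slon_tools.conf
+```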
+
+Now that the configuration file is in place we can _finally_ start replicating
+our database. First we must set up the schema in our new database. To do so make
+sure that the SQL files we generated earlier can be found in the `/tmp`
+directory of the new server. Once these files are in place start a `psql`
+session on this server:
+
+```
+sudo gitlab-psql gitlabhq_production
+```
+
+Now run the following commands:
+
+```
+\i /tmp/structure.sql
+\i /tmp/migrations.sql
+```
+
+To verify that the structure is in place, close the session, start it again, then
+run `\d`. If all went well you should see output along the lines of the
+following:
+
+```
+ List of relations
+ Schema | Name | Type | Owner
+--------+---------------------------------------------+----------+-------------
+ public | abuse_reports | table | gitlab
+ public | abuse_reports_id_seq | sequence | gitlab
+ public | appearances | table | gitlab
+ public | appearances_id_seq | sequence | gitlab
+ public | application_settings | table | gitlab
+ public | application_settings_id_seq | sequence | gitlab
+ public | approvals | table | gitlab
+ ... more rows here ...
+```
+
+Now we can initialize the tables and other objects that Slony will use for
+its replication process. To do so, run the following on the old database:
+
+```
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_init_cluster --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
+```
+
+If all went well this will produce something along the lines of:
+
+```
+<stdin>:10: Set up replication nodes
+<stdin>:13: Next: configure paths for each node/origin
+<stdin>:16: Replication nodes prepared
+<stdin>:17: Please start a slon replication daemon for each node
+```
+
+Next we need to start a replication node on every server. To do so, run the
+following on the old database:
+
+```
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_start 1 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
+```
+
+If all went well this will produce output such as:
+
+```
+Invoke slon for node 1 - /opt/gitlab/embedded/bin/slon -p /var/run/slony1/slony_replication_node1.pid -s 1000 -d2 slony_replication 'host=192.168.0.7 dbname=gitlabhq_production user=slony port=5432 password=hieng8ezohHuCeiqu0leeghai4aeyahp' > /var/log/gitlab/slony/node1/gitlabhq_production-2016-10-06.log 2>&1 &
+Slon successfully started for cluster slony_replication, node node1
+PID [26740]
+Start the watchdog process as well...
+```
+
+Next we need to run the following command on the _new_ database server:
+
+```
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_start 2 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
+```
+
+This will produce similar output if all went well.
+
+Next we need to tell the new database server what it should replicate. This can
+be done by running the following command on the _new_ database server:
+
+```
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_create_set 1 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
+```
+
+This should produce output along the lines of the following:
+
+```
+<stdin>:11: Subscription set 1 (set1) created
+<stdin>:12: Adding tables to the subscription set
+<stdin>:16: Add primary keyed table public.abuse_reports
+<stdin>:20: Add primary keyed table public.appearances
+<stdin>:24: Add primary keyed table public.application_settings
+... more rows here ...
+<stdin>:327: Adding sequences to the subscription set
+<stdin>:328: All tables added
+```
+
+Finally we can start the replication process by running the following on the
+_new_ database server:
+
+```
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slonik_subscribe_set 1 2 --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf | /opt/gitlab/embedded/bin/slonik
+```
+
+This should produce the following output:
+
+```
+<stdin>:6: Subscribed nodes to set 1
+```
+
+At this point the new database server will start replicating the data of the old
+database server. This process can take anywhere from a few minutes to hours, if
+not days. Unfortunately Slony itself doesn't really provide a way of knowing
+when the two databases are in sync. To get an estimate of the progress you can
+use the following shell script:
+
+```bash
+#!/usr/bin/env bash
+
+set -e
+
+user='slony'
+pass='SLONY_PASSWORD'
+
+function main {
+ while :
+ do
+ local source
+ local target
+
+ source=$(PGUSER="${user}" PGPASSWORD="${pass}" /opt/gitlab/embedded/bin/psql -h OLD_HOST gitlabhq_production -c "select pg_size_pretty(pg_database_size('gitlabhq_production'));" -t -A)
+ target=$(PGUSER="${user}" PGPASSWORD="${pass}" /opt/gitlab/embedded/bin/psql -h NEW_HOST gitlabhq_production -c "select pg_size_pretty(pg_database_size('gitlabhq_production'));" -t -A)
+
+ echo "$(date): ${target} of ${source}" >> progress.log
+ echo "$(date): ${target} of ${source}"
+
+ sleep 60
+ done
+}
+
+main
+```
+
+This script will compare the sizes of the old and new database every minute and
+print the result to STDOUT, as well as log it to a file. Make sure to replace
+`SLONY_PASSWORD`, `OLD_HOST`, and `NEW_HOST` with the correct values.
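+
+For example, assuming you saved the script above as `/tmp/replication_progress.sh`
+on a host that can reach both database servers:
+
+```bash
+bash /tmp/replication_progress.sh
+
+# The script appends each sample to progress.log in the current directory,
+# so you can also follow it from another terminal:
+tail -f progress.log
+```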
+
+## Stopping Replication
+
+At some point the two databases are in sync. Once this is the case you'll need
+to plan for a few minutes of downtime. This small downtime window is used to
+stop the replication process, remove any Slony data from both databases, restart
+GitLab so it can use the new database, etc.
+
+First, let's stop all of GitLab. Omnibus users can do so by running the
+following on their GitLab server(s):
+
+```
+sudo gitlab-ctl stop unicorn
+sudo gitlab-ctl stop sidekiq
+sudo gitlab-ctl stop mailroom
+```
+
+If you have any other processes that use PostgreSQL you should also stop those.
+
+Once everything has been stopped, you should update any configuration settings,
+DNS records, and so on, so they all point to the new database.
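+
+For example, Omnibus installations that connect to an external database would
+update `/etc/gitlab/gitlab.rb` on the GitLab server(s); a sketch, assuming the
+`NEW_HOST` placeholder from earlier (your setup may require additional settings):
+
+```bash
+# In /etc/gitlab/gitlab.rb, point the Rails application at the new server, e.g.:
+#   gitlab_rails['db_host'] = 'NEW_HOST'
+# The change is applied the next time you run:
+sudo gitlab-ctl reconfigure
+```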
+
+Once the settings have been taken care of we need to stop the replication
+process. It's crucial that no new data is written to the databases at this point
+as this data will be lost.
+
+To stop replication, run the following on both database servers:
+
+```bash
+sudo -u gitlab-psql /opt/gitlab/embedded/bin/slon_kill --conf /var/opt/gitlab/postgresql/slony/slon_tools.conf
+```
+
+This will stop all the Slony processes on the host the command was executed on.
+
+## Resetting Sequences
+
+The above setup does not replicate database sequences; as such, these must be
+reset manually in the target database. You can use the following script for
+this:
+
+```bash
+#!/usr/bin/env bash
+set -e
+
+function main {
+ local fix_sequences
+ local fix_owners
+
+ fix_sequences='/tmp/fix_sequences.sql'
+ fix_owners='/tmp/fix_owners.sql'
+
+ # The SQL queries were taken from
+ # https://wiki.postgresql.org/wiki/Fixing_Sequences
+ sudo gitlab-psql gitlabhq_production -t -c "
+ SELECT 'ALTER SEQUENCE '|| quote_ident(MIN(schema_name)) ||'.'|| quote_ident(MIN(seq_name))
+ ||' OWNED BY '|| quote_ident(MIN(TABLE_NAME)) ||'.'|| quote_ident(MIN(column_name)) ||';'
+ FROM (
+ SELECT
+ n.nspname AS schema_name,
+ c.relname AS TABLE_NAME,
+ a.attname AS column_name,
+ SUBSTRING(d.adsrc FROM E'^nextval\\(''([^'']*)''(?:::text|::regclass)?\\)') AS seq_name
+ FROM pg_class c
+ JOIN pg_attribute a ON (c.oid=a.attrelid)
+ JOIN pg_attrdef d ON (a.attrelid=d.adrelid AND a.attnum=d.adnum)
+ JOIN pg_namespace n ON (c.relnamespace=n.oid)
+ WHERE has_schema_privilege(n.oid,'USAGE')
+ AND n.nspname NOT LIKE 'pg!_%' escape '!'
+ AND has_table_privilege(c.oid,'SELECT')
+ AND (NOT a.attisdropped)
+ AND d.adsrc ~ '^nextval'
+ ) seq
+ GROUP BY seq_name HAVING COUNT(*)=1;
+ " > "${fix_owners}"
+
+ sudo gitlab-psql gitlabhq_production -t -c "
+ SELECT 'SELECT SETVAL(' ||
+ quote_literal(quote_ident(PGT.schemaname) || '.' || quote_ident(S.relname)) ||
+ ', COALESCE(MAX(' ||quote_ident(C.attname)|| '), 1) ) FROM ' ||
+ quote_ident(PGT.schemaname)|| '.'||quote_ident(T.relname)|| ';'
+ FROM pg_class AS S,
+ pg_depend AS D,
+ pg_class AS T,
+ pg_attribute AS C,
+ pg_tables AS PGT
+ WHERE S.relkind = 'S'
+ AND S.oid = D.objid
+ AND D.refobjid = T.oid
+ AND D.refobjid = C.attrelid
+ AND D.refobjsubid = C.attnum
+ AND T.relname = PGT.tablename
+ ORDER BY S.relname;
+ " > "${fix_sequences}"
+
+ sudo gitlab-psql gitlabhq_production -f "${fix_owners}"
+ sudo gitlab-psql gitlabhq_production -f "${fix_sequences}"
+
+ rm "${fix_owners}" "${fix_sequences}"
+}
+
+main
+```
+
+Upload this script to the _target_ server and execute it as follows:
+
+```bash
+bash path/to/the/script/above.sh
+```
+
+This will correct the ownership of sequences and reset the next value for the
+`id` column to the next available value.
+
+## Removing Slony
+
+Next we need to remove all Slony-related data. To do so, run the following
+command on the _target_ server:
+
+```bash
+sudo gitlab-psql gitlabhq_production -c "DROP SCHEMA _slony_replication CASCADE;"
+```
+
+Once done you can safely remove any Slony-related files (e.g. the log
+directory) and uninstall Slony if desired. At this point you can start your
+GitLab instance again, and if all went well it should be using your new database
+server.
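+
+For Omnibus installations, a minimal sketch of that cleanup and restart (run the
+removal on both database servers, and start GitLab on the application server(s)):
+
+```bash
+# Remove the Slony directories created earlier in this guide:
+sudo rm -rf /var/log/gitlab/slony /var/run/slony1 /var/opt/gitlab/postgresql/slony
+
+# Start GitLab again:
+sudo gitlab-ctl start
+```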