author    GitLab Bot <gitlab-bot@gitlab.com>  2020-03-06 03:08:08 +0000
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-03-06 03:08:08 +0000
commit    a6011c3d70e0e8ac318ba6629183c44f8614c4df (patch)
tree      a3d21394d63c47448998c89f01eb88e57c0ed8ce /doc/administration/troubleshooting
parent    ffc757a7a92535559c20eb706593f7358d9bf589 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/administration/troubleshooting')
-rw-r--r--  doc/administration/troubleshooting/debug.md                   | 16
-rw-r--r--  doc/administration/troubleshooting/kubernetes_cheat_sheet.md  |  6
-rw-r--r--  doc/administration/troubleshooting/postgresql.md              |  6
-rw-r--r--  doc/administration/troubleshooting/sidekiq.md                 | 18
4 files changed, 23 insertions, 23 deletions
diff --git a/doc/administration/troubleshooting/debug.md b/doc/administration/troubleshooting/debug.md
index db8d186db43..c1f2a5c92a3 100644
--- a/doc/administration/troubleshooting/debug.md
+++ b/doc/administration/troubleshooting/debug.md
@@ -33,7 +33,7 @@ an SMTP server, but you're not seeing mail delivered. Here's how to check the se
```ruby
irb(main):002:0> ActionMailer::Base.smtp_settings
- => {:address=>"localhost", :port=>25, :domain=>"localhost.localdomain", :user_name=>nil, :password=>nil, :authentication=>nil, :enable_starttls_auto=>true}```
+ => {:address=>"localhost", :port=>25, :domain=>"localhost.localdomain", :user_name=>nil, :password=>nil, :authentication=>nil, :enable_starttls_auto=>true}
```
In the example above, the SMTP server is configured for the local machine. If this is intended, you may need to check your local mail
@@ -56,13 +56,13 @@ For more advanced issues, `gdb` is a must-have tool for debugging issues.
To install on Ubuntu/Debian:
-```
+```shell
sudo apt-get install gdb
```
On CentOS:
-```
+```shell
sudo yum install gdb
```
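If attaching fails with an operation-not-permitted error even though `gdb` is installed, the kernel's Yama ptrace policy may be restricting attaches; running `gdb` under `sudo`, as the steps below do, side-steps this. A quick check of the current policy (a sketch; the file exists only where Yama is enabled):

```shell
cat /proc/sys/kernel/yama/ptrace_scope
# 0 = unrestricted, 1 = only parent processes may attach (common Ubuntu default)
```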
@@ -103,14 +103,14 @@ downtime. Otherwise skip to the next section.
1. Run `sudo gdb -p <PID>` to attach to the Unicorn process.
1. In the gdb window, type:
- ```
+ ```plaintext
call (void) rb_backtrace()
```
1. This forces the process to generate a Ruby backtrace. Check
`/var/log/gitlab/unicorn/unicorn_stderr.log` for the backtrace. For example, you may see:
- ```ruby
+ ```plaintext
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `loop'
from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:36:in `block (2 levels) in start'
@@ -124,13 +124,13 @@ downtime. Otherwise skip to the next section.
1. To see the current threads, run:
- ```
+ ```plaintext
thread apply all bt
```
1. Once you're done debugging with `gdb`, be sure to detach from the process and exit:
- ```
+ ```plaintext
detach
exit
```
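Putting the steps above together, a complete session looks roughly like this (the PID `4242` is illustrative; `(gdb)` is the debugger's own prompt):

```plaintext
sudo gdb -p 4242
(gdb) call (void) rb_backtrace()
(gdb) thread apply all bt
(gdb) detach
(gdb) exit
```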
@@ -162,7 +162,7 @@ separate Rails process to debug the issue:
1. Create a Personal Access Token for your user (Profile Settings -> Access Tokens).
1. Bring up the GitLab Rails console. For Omnibus users, run:
- ```
+ ```shell
sudo gitlab-rails console
```
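With the token from the first step you can also probe the API from the outside, which helps separate network and proxy problems from application errors. A minimal check (a sketch; the hostname and token are placeholders):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects"
```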
diff --git a/doc/administration/troubleshooting/kubernetes_cheat_sheet.md b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
index 4ffce11aed0..38c0661da06 100644
--- a/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
+++ b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
@@ -202,9 +202,9 @@ and they will assist you with any issues you are having.
- How to get the manifest for a release. This can be useful because it contains information about
all Kubernetes resources and dependent charts:
- ```shell
- helm get manifest <release name>
- ```
+ ```shell
+ helm get manifest <release name>
+ ```
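- Two related commands are often useful next to the manifest (a sketch; both are standard Helm subcommands):

  ```shell
  helm get values <release name>   # user-supplied values for the release
  helm history <release name>      # revision history, useful before a rollback
  ```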
## Installation of a minimal GitLab config via Minikube on macOS
diff --git a/doc/administration/troubleshooting/postgresql.md b/doc/administration/troubleshooting/postgresql.md
index ab302c919b2..b793f0a2ebc 100644
--- a/doc/administration/troubleshooting/postgresql.md
+++ b/doc/administration/troubleshooting/postgresql.md
@@ -99,13 +99,13 @@ References:
- [Customer ticket (internal) GitLab 12.1.6](https://gitlab.zendesk.com/agent/tickets/134307) and [Google doc (internal)](https://docs.google.com/document/d/19xw2d_D1ChLiU-MO1QzWab-4-QXgsIUcN5e_04WTKy4)
- [Issue #2 deadlocks can occur if an instance is flooded with pushes](https://gitlab.com/gitlab-org/gitlab/issues/33650). Provided for context about how GitLab code can have this sort of unanticipated effect in unusual situations.
-```
+```plaintext
ERROR: deadlock detected
```
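When this error appears, it can also help to see which sessions are waiting on locks at that moment. One way to look, assuming an Omnibus install where the `gitlab-psql` wrapper is available (the `pg_stat_activity` columns used here exist in PostgreSQL 9.6 and later):

```shell
sudo gitlab-psql -c "SELECT pid, state, wait_event_type, wait_event, query FROM pg_stat_activity WHERE wait_event_type = 'Lock';"
```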
Three applicable timeouts are identified in the issue [#1](https://gitlab.com/gitlab-org/gitlab/issues/30528); our recommended settings are as follows:
-```
+```ini
deadlock_timeout = 5s
statement_timeout = 15s
idle_in_transaction_session_timeout = 60s
@@ -128,7 +128,7 @@ Comments in issue [#1](https://gitlab.com/gitlab-org/gitlab/issues/30528) indica
See current settings with:
-```
+```shell
sudo gitlab-rails runner "c = ApplicationRecord.connection ; puts c.execute('SHOW statement_timeout').to_a ;
puts c.execute('SHOW lock_timeout').to_a ;
puts c.execute('SHOW idle_in_transaction_session_timeout').to_a ;"
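# A rough equivalent straight against the bundled database (an assumption:
# an Omnibus install, where the gitlab-psql wrapper is available):
#   sudo gitlab-psql -c 'SHOW statement_timeout;'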
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md
index 91361dddf02..b72bce5b3c6 100644
--- a/doc/administration/troubleshooting/sidekiq.md
+++ b/doc/administration/troubleshooting/sidekiq.md
@@ -31,7 +31,7 @@ Check in `/var/log/gitlab/sidekiq/current` or `$GITLAB_HOME/log/sidekiq.log` for
the backtrace output. The backtraces will be lengthy and generally start with
several `WARN` level messages. Here's an example of a single thread's backtrace:
-```
+```plaintext
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: ActiveRecord::RecordNotFound: Couldn't find Note with 'id'=3375386
2016-04-13T06:21:20.022Z 31517 TID-orn4urby0 WARN: /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/activerecord-4.2.5.2/lib/active_record/core.rb:155:in `find'
/opt/gitlab/embedded/service/gitlab-rails/app/workers/new_note_worker.rb:7:in `perform'
@@ -55,7 +55,7 @@ respond to the `TTIN` signal, this is a good next step.
If `perf` is not installed on your system, install it with `apt-get` or `yum`:
-```
+```shell
# Debian
sudo apt-get install linux-tools
@@ -68,13 +68,13 @@ sudo yum install perf
Run perf against the Sidekiq PID:
-```
+```shell
sudo perf record -p <sidekiq_pid>
```
Let this run for 30-60 seconds and then press Ctrl-C. Then view the perf report:
-```
+```shell
sudo perf report
# Sample output
@@ -102,13 +102,13 @@ of the process (Sidekiq will not process jobs while `gdb` is attached).
Start by attaching to the Sidekiq PID:
-```
+```shell
gdb -p <sidekiq_pid>
```
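If the PID isn't known yet, a quick process search will surface it first (a sketch; the process name can vary between installs):

```shell
pgrep -fl sidekiq
```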
Then gather information on all the threads:
-```
+```plaintext
info threads
# Example output
@@ -129,7 +129,7 @@ from /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/nokogiri-1.6.7.2/lib/nokog
If you see a suspicious thread, like the Nokogiri one above, you may want
to get more information:
-```
+```plaintext
thread 21
bt
@@ -147,7 +147,7 @@ bt
To output a backtrace from all threads at once:
-```
+```plaintext
set pagination off
thread apply all bt
```
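The combined output can be very long; `gdb` can write it to a file instead of the terminal (a sketch using gdb's classic logging commands; the path is arbitrary):

```plaintext
set logging file /tmp/sidekiq_backtraces.txt
set logging on
thread apply all bt
set logging off
```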
@@ -155,7 +155,7 @@ thread apply all bt
Once you're done debugging with `gdb`, be sure to detach from the process and
exit:
-```
+```plaintext
detach
exit
```