Diffstat (limited to 'doc/development')
doc/development/README.md | 2
doc/development/adding_database_indexes.md | 6
doc/development/architecture.md | 189
doc/development/build_test_package.md | 13
doc/development/changelog.md | 18
doc/development/code_review.md | 68
doc/development/diffs.md | 24
doc/development/ee_features.md | 153
doc/development/fe_guide/style_guide_scss.md | 6
doc/development/fe_guide/vue.md | 8
doc/development/fe_guide/vuex.md | 12
doc/development/feature_flags.md | 2
doc/development/github_importer.md | 18
doc/development/i18n/proofreader.md | 1
doc/development/instrumentation.md | 8
doc/development/logging.md | 144
doc/development/migration_style_guide.md | 7
doc/development/new_fe_guide/development/performance.md | 2
doc/development/new_fe_guide/development/testing.md | 295
doc/development/performance.md | 22
doc/development/post_deployment_migrations.md | 6
doc/development/query_count_limits.md | 4
doc/development/swapping_tables.md | 4
doc/development/switching_to_rails5.md | 27
doc/development/testing_guide/end_to_end_tests.md | 69
doc/development/testing_guide/review_apps.md | 4
doc/development/testing_guide/smoke.md | 9
doc/development/testing_guide/testing_levels.md | 93
doc/development/ui_guide.md | 10
doc/development/ux_guide/users.md | 16
doc/development/what_requires_downtime.md | 2
31 files changed, 962 insertions, 280 deletions
diff --git a/doc/development/README.md b/doc/development/README.md
index 14dfe8eb1f3..d2dd62ecac5 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -30,6 +30,7 @@ description: 'Learn how to contribute to GitLab.'
## Backend guides
- [GitLab utilities](utilities.md)
+- [Logging](logging.md)
- [API styleguide](api_styleguide.md) Use this styleguide if you are
contributing to the API.
- [GraphQL API styleguide](api_graphql_styleguide.md) Use this
@@ -51,6 +52,7 @@ description: 'Learn how to contribute to GitLab.'
- [Prometheus metrics](prometheus_metrics.md)
- [Guidelines for reusing abstractions](reusing_abstractions.md)
- [DeclarativePolicy framework](policies.md)
+- [Switching to Rails 5](switching_to_rails5.md)
## Performance guides
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index ea6f14da3b9..d1d2b8c4907 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -28,9 +28,9 @@ to filter data by. Instead one should ask themselves the following questions:
1. Can I write my query in such a way that it re-uses as many existing indexes
as possible?
-2. Is the data going to be large enough that using an index will actually be
+1. Is the data going to be large enough that using an index will actually be
faster than just iterating over the rows in the table?
-3. Is the overhead of maintaining the index worth the reduction in query
+1. Is the overhead of maintaining the index worth the reduction in query
timings?
We'll explore every question in detail below.
@@ -62,7 +62,7 @@ In short:
1. Try to write your query in such a way that it re-uses as many existing
indexes as possible.
-2. Run the query using `EXPLAIN ANALYZE` and study the output to find the most
+1. Run the query using `EXPLAIN ANALYZE` and study the output to find the most
ideal query.
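+
+For illustration, here is one way to inspect a query plan from a Rails console
+(the model and filter below are placeholders; note that `EXPLAIN ANALYZE`
+actually executes the query):
+
+```ruby
+relation = Project.where(visibility_level: 20)
+
+# EXPLAIN without executing the query:
+puts relation.explain
+
+# EXPLAIN ANALYZE executes the query and reports actual timings:
+plan = ActiveRecord::Base.connection.execute("EXPLAIN ANALYZE #{relation.to_sql}")
+puts plan.map { |row| row['QUERY PLAN'] }.join("\n")
+```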
## Data Size
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 66d8a4f2f6e..01d99c46f89 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -10,39 +10,182 @@ For information, see the [GitLab Release Process](https://gitlab.com/gitlab-org/
Both EE and CE require some add-on components called gitlab-shell and Gitaly. These components are available from the [gitlab-shell](https://gitlab.com/gitlab-org/gitlab-shell/tree/master) and [gitaly](https://gitlab.com/gitlab-org/gitaly/tree/master) repositories respectively. New versions are usually tags but staying on the master branch will give you the latest stable version. New releases are generally around the same time as GitLab CE releases with exception for informal security updates deemed critical.
-## Physical office analogy
+## GitLab Omnibus Component by Component
-You can imagine GitLab as a physical office.
+This document is designed to be consumed by systems administrators and GitLab Support Engineers who want to understand more about the internals of GitLab and how they work together.
-**The repositories** are the goods GitLab handles.
-They can be stored in a warehouse.
-This can be either a hard disk, or something more complex, such as a NFS filesystem;
+When deployed, GitLab should be considered the amalgamation of the below processes. When troubleshooting or debugging, be as specific as possible as to which component you are referencing. That should increase clarity and reduce confusion.
-**Nginx** acts like the front-desk.
-Users come to Nginx and request actions to be done by workers in the office;
+### GitLab Process Descriptions
-**The database** is a series of metal file cabinets with information on:
- - The goods in the warehouse (metadata, issues, merge requests etc);
- - The users coming to the front desk (permissions)
+As of this writing, a fresh GitLab 11.3.0 install will show the following processes with `gitlab-ctl status`:
-**Redis** is a communication board with “cubby holes” that can contain tasks for office workers;
+```
+run: alertmanager: (pid 30829) 14207s; run: log: (pid 13906) 2432044s
+run: gitaly: (pid 30771) 14210s; run: log: (pid 13843) 2432046s
+run: gitlab-monitor: (pid 30788) 14209s; run: log: (pid 13868) 2432045s
+run: gitlab-workhorse: (pid 30758) 14210s; run: log: (pid 13855) 2432046s
+run: logrotate: (pid 30246) 3407s; run: log: (pid 13825) 2432047s
+run: nginx: (pid 30849) 14207s; run: log: (pid 13856) 2432046s
+run: node-exporter: (pid 30929) 14206s; run: log: (pid 13877) 2432045s
+run: postgres-exporter: (pid 30935) 14206s; run: log: (pid 13931) 2432044s
+run: postgresql: (pid 13133) 2432214s; run: log: (pid 13848) 2432046s
+run: prometheus: (pid 30807) 14209s; run: log: (pid 13884) 2432045s
+run: redis: (pid 30560) 14274s; run: log: (pid 13807) 2432047s
+run: redis-exporter: (pid 30946) 14205s; run: log: (pid 13869) 2432045s
+run: sidekiq: (pid 30953) 14205s; run: log: (pid 13810) 2432047s
+run: unicorn: (pid 30960) 14204s; run: log: (pid 13809) 2432047s
+```
+
+### Layers
+
+GitLab can be considered to have two layers from a process perspective:
+
+- **Monitoring**: Anything from this layer is not required to deliver the GitLab application, but will give administrators more insight into their infrastructure and what the service as a whole is doing.
+- **Core**: Any process that is vital for the delivery of GitLab as a platform. If any of these processes halt, there will be a GitLab outage. The Core layer can be further divided into:
+ - **Processors**: These processes are responsible for actually performing operations and presenting the service.
+ - **Data**: These services store/expose structured data for the GitLab service.
+
+### alertmanager
+
+- Omnibus configuration options
+- Layer: Monitoring
+
+[Alert manager](https://prometheus.io/docs/alerting/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue gitlab-ce#45740](https://gitlab.com/gitlab-org/gitlab-ce/issues/45740) about what we will be alerting on.
+
+### gitaly
+
+- [Omnibus configuration options](https://gitlab.com/gitlab-org/gitaly/tree/master/doc/configuration)
+- Layer: Core Service (Data)
+
+Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability deployments). As of 11.3.0, this service handles all Git-level access in GitLab. You can read more about the project [in the project's readme](https://gitlab.com/gitlab-org/gitaly).
+
+### gitlab-monitor
+
+- Omnibus configuration options
+- Layer: Monitoring
+
+GitLab Monitor is a process designed in-house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's readme](https://gitlab.com/gitlab-org/gitlab-monitor).
+
+### gitlab-workhorse
+
+- Omnibus configuration options
+- Layer: Core Service (Processor)
+
+[GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) is a program designed at GitLab to help alleviate pressure from Unicorn. You can read more about the [historical reasons for developing it](https://about.gitlab.com/2016/04/12/a-brief-history-of-gitlab-workhorse/). It's designed to act as a smart reverse proxy to help speed up GitLab as a whole.
+
+### logrotate
+
+- [Omnibus configuration options](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate)
+- Layer: Core Service
+
+GitLab is made up of a large number of services that all log. We started bundling our own logrotate as of 7.4 to make sure we were logging responsibly. This is just a packaged version of the common open source offering.
+
+### nginx
+
+- [Omnibus configuration options](https://docs.gitlab.com/omnibus/settings/nginx.html)
+- Layer: Core Service (Processor)
+
+Nginx serves as an ingress point for all HTTP requests and routes them to the appropriate sub-systems within GitLab. We are bundling an unmodified version of the popular open source web server.
+
+### node-exporter
+
+- [Omnibus configuration options](https://docs.gitlab.com/ee/administration/monitoring/prometheus/node_exporter.html)
+- Layer: Monitoring
+
+[Node Exporter](https://github.com/prometheus/node_exporter) is a Prometheus tool that gives us metrics on the underlying machine. (Think CPU/Disk/Load) It's just a packaged version of the common open source offering from the Prometheus project.
+
+### postgres-exporter
+
+- [Omnibus configuration options](https://docs.gitlab.com/ee/administration/monitoring/prometheus/postgres_exporter.html)
+- Layer: Monitoring
+
+[Postgres-exporter](https://github.com/wrouesnel/postgres_exporter) is the community-provided Prometheus exporter that will deliver data about Postgres to Prometheus for use in Grafana dashboards.
+
+### postgresql
+
+- [Omnibus configuration options](https://docs.gitlab.com/omnibus/settings/database.html)
+- Layer: Core Service (Data)
+
+GitLab packages the popular database to provide storage for application metadata and user information.
+
+### prometheus
+
+- [Omnibus configuration options](https://docs.gitlab.com/ee/administration/monitoring/prometheus/)
+- Layer: Monitoring
+
+Prometheus is a time-series tool that helps GitLab administrators expose metrics about the individual processes used to provide the GitLab service.
+
+### redis
+
+- [Omnibus configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
+- Layer: Core Service (Data)
+
+Redis is packaged to provide a place to store:
+
+- session data
+- temporary cache information
+- background job queues
+
+### redis-exporter
+
+- [Omnibus configuration options](https://docs.gitlab.com/ee/administration/monitoring/prometheus/redis_exporter.html)
+- Layer: Monitoring
+
+[Redis Exporter](https://github.com/oliver006/redis_exporter) is designed to give specific metrics about the Redis process to Prometheus so that we can graph these metrics in Grafana.
+
+### sidekiq
+
+- Omnibus configuration options
+- Layer: Core Service (Processor)
+
+Sidekiq is a Ruby background job processor that pulls jobs from the Redis queue and processes them. Background jobs allow GitLab to provide a faster request/response cycle by moving work into the background.
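+
+As a rough illustration of how work reaches Sidekiq: application code enqueues a
+job class, and a Sidekiq thread later runs its `perform` method. The worker below
+is a simplified, hypothetical sketch rather than a real GitLab worker:
+
+```ruby
+class ExampleNotificationWorker # hypothetical worker
+  include Sidekiq::Worker
+
+  # Runs asynchronously inside the sidekiq process.
+  def perform(user_id)
+    user = User.find_by(id: user_id)
+    return unless user
+
+    # ... deliver the notification ...
+  end
+end
+
+# Called from the web (unicorn) process; the job is stored in Redis until a
+# Sidekiq thread picks it up.
+ExampleNotificationWorker.perform_async(user.id)
+```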
+
+### unicorn
+
+- [Omnibus configuration options](https://docs.gitlab.com/omnibus/settings/unicorn.html)
+- Layer: Core Service (Processor)
+
+[Unicorn](https://bogomips.org/unicorn/) is a Ruby application server that is used to run the core Rails application that provides the user-facing features in GitLab. In process output you will often see this as `bundle` or `config.ru`, depending on the GitLab version.
+
+### Additional Processes
+
+### GitLab Pages
+
+TODO
+
+### Mattermost
+
+TODO
+
+## GitLab by Request Type
+
+GitLab provides two "interfaces" for end users to access the service:
+
+- Web HTTP Requests (Viewing the UI/API)
+- Git HTTP/SSH Requests (Pushing/Pulling Git Data)
+
+It's important to understand the distinction as some processes are used in both and others are exclusive to a specific request type.
+
+### GitLab Web HTTP Request Cycle
+
+When making a request to an HTTP endpoint (think `/users/sign_in`), the request will take the following path through the GitLab service:
+
+- nginx - Acts as our first-line reverse proxy
+- gitlab-workhorse - This determines if it needs to go to the Rails application or somewhere else to reduce load on Unicorn.
+- unicorn - Since this is a web request that needs to access the application, it will go to Unicorn.
+- Postgres/Gitaly/Redis - Depending on the type of request, it may hit these services to store or retrieve data.
-**Sidekiq** is a worker that primarily handles sending out emails.
-It takes tasks from the Redis communication board;
-**A Unicorn worker** is a worker that handles quick/mundane tasks.
-They work with the communication board (Redis).
-Their job description:
- - check permissions by checking the user session stored in a Redis “cubby hole”;
- - make tasks for Sidekiq;
- - fetch stuff from the warehouse or move things around in there;
+### GitLab Git Request Cycle
-**GitLab-shell** is a third kind of worker that takes orders from a fax machine (SSH) instead of the front desk (HTTP).
-GitLab-shell communicates with Sidekiq via the “communication board” (Redis), and asks quick questions of the Unicorn workers either directly or via the front desk.
+Below we describe the different paths that HTTP and SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
-**Gitaly** is a back desk that is specialized on reaching the disks to perform git operations efficiently and keep a copy of the result of costly operations. All git operations go through Gitaly.
+### Web Request (80/443)
+TODO
-**GitLab Enterprise Edition (the application)** is the collection of processes and business practices that the office is run by.
+### SSH Request (22)
+TODO
## System Layout
diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md
index 439d228baef..c5f6adfeaeb 100644
--- a/doc/development/build_test_package.md
+++ b/doc/development/build_test_package.md
@@ -4,12 +4,13 @@ While developing a new feature or modifying an existing one, it is helpful if an
installable package (or a docker image) containing those changes is available
for testing. For this very purpose, a manual job is provided in the GitLab CI/CD
pipeline that can be used to trigger a pipeline in the omnibus-gitlab repository
-that will create
-1. A deb package for Ubuntu 16.04, available as a build artifact, and
-2. A docker image, which is pushed to [Omnibus GitLab's container
-registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
-(images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
-commit which triggered the pipeline).
+that will create:
+
+- A deb package for Ubuntu 16.04, available as a build artifact, and
+- A docker image, which is pushed to [Omnibus GitLab's container
+ registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
+ (images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
+ commit which triggered the pipeline).
When you push a commit to either the gitlab-ce or gitlab-ee project, the
pipeline for that commit will have a `build-package` manual action you can
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index f06d40d1dbb..cd0a1f46d27 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -133,15 +133,15 @@ If you're working on the GitLab EE repository, the entry will be added to
### Arguments
-| Argument | Shorthand | Purpose |
-| ----------------- | --------- | ---------------------------------------------------------------------------------------------------------- |
-| [`--amend`] | | Amend the previous commit |
-| [`--force`] | `-f` | Overwrite an existing entry |
-| [`--merge-request`] | `-m` | Set merge request ID |
-| [`--dry-run`] | `-n` | Don't actually write anything, just print |
-| [`--git-username`] | `-u` | Use Git user.name configuration as the author |
-| [`--type`] | `-t` | The category of the change, valid options are: added, fixed, changed, deprecated, removed, security, other |
-| [`--help`] | `-h` | Print help message |
+| Argument | Shorthand | Purpose |
+| ----------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| [`--amend`] | | Amend the previous commit |
+| [`--force`] | `-f` | Overwrite an existing entry |
+| [`--merge-request`] | `-m` | Set merge request ID |
+| [`--dry-run`] | `-n` | Don't actually write anything, just print |
+| [`--git-username`] | `-u` | Use Git user.name configuration as the author |
+| [`--type`] | `-t` | The category of the change, valid options are: `added`, `fixed`, `changed`, `deprecated`, `removed`, `security`, `performance`, `other` |
+| [`--help`] | `-h` | Print help message |
[`--amend`]: #-amend
[`--force`]: #-force-or-f
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 96f3861f8d7..52710e54e86 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -9,11 +9,11 @@ code is effective, understandable, and maintainable.
## Getting your merge request reviewed, approved, and merged
-You are strongly encouraged to get your code **reviewed** by a
+You are strongly encouraged to get your code **reviewed** by a
[reviewer](https://about.gitlab.com/handbook/engineering/#reviewer) as soon as
there is any code to review, to get a second opinion on the chosen solution and
implementation, and an extra pair of eyes looking for bugs, logic problems, or
-uncovered edge cases. The reviewer can be from a different team, but it is
+uncovered edge cases. The reviewer can be from a different team, but it is
recommended to pick someone who knows the domain well. You can read more about the
importance of involving reviewer(s) in the section on the responsibility of the author below.
@@ -23,7 +23,7 @@ one of the [Merge request coaches][team].
Depending on the areas your merge request touches, it must be **approved** by one
or more [maintainers](https://about.gitlab.com/handbook/engineering/#maintainer):
-For approvals, we use the approval functionality found in the merge request
+For approvals, we use the approval functionality found in the merge request
widget. Reviewers can add their approval by [approving additionally](https://docs.gitlab.com/ee/user/project/merge_requests/merge_request_approvals.html#adding-or-removing-an-approval).
1. If your merge request includes backend changes [^1], it must be
@@ -42,43 +42,43 @@ widget. Reviewers can add their approval by [approving additionally](https://doc
Getting your merge request **merged** also requires a maintainer. If it requires
more than one approval, the last maintainer to review and approve it will also merge it.
-As described in the section on the responsibility of the maintainer below, you
-are recommended to get your merge request approved and merged by maintainer(s)
+As described in the section on the responsibility of the maintainer below, you
+are recommended to get your merge request approved and merged by maintainer(s)
from other teams than your own.
### The responsibility of the merge request author
The responsibility to find the best solution and implement it lies with the
-merge request author.
+merge request author.
-Before assigning a merge request to a maintainer for approval and merge, they
-should be confident that it actually solves the problem it was meant to solve,
-that it does so in the most appropriate way, that it satisfies all requirements,
-and that there are no remaining bugs, logical problems, or uncovered edge cases.
-The merge request should also have a completed task list in its description and
+Before assigning a merge request to a maintainer for approval and merge, they
+should be confident that it actually solves the problem it was meant to solve,
+that it does so in the most appropriate way, that it satisfies all requirements,
+and that there are no remaining bugs, logical problems, or uncovered edge cases.
+The merge request should also have a completed task list in its description and
a passing CI pipeline to avoid unnecessary back and forth.
To reach the required level of confidence in their solution, an author is expected
-to involve other people in the investigation and implementation processes as
+to involve other people in the investigation and implementation processes as
appropriate.
-They are encouraged to reach out to domain experts to discuss different solutions
-or get an implementation reviewed, to product managers and UX designers to clear
-up confusion or verify that the end result matches what they had in mind, to
+They are encouraged to reach out to domain experts to discuss different solutions
+or get an implementation reviewed, to product managers and UX designers to clear
+up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
If an author is unsure if a merge request needs a domain expert's opinion, that's
-usually a pretty good sign that it does, since without it the required level of
+usually a pretty good sign that it does, since without it the required level of
confidence in their solution will not have been reached.
### The responsibility of the maintainer
Maintainers are responsible for the overall health, quality, and consistency of
-the GitLab codebase, across domains and product areas.
+the GitLab codebase, across domains and product areas.
-Consequently, their reviews will focus primarily on things like overall
-architecture, code organization, separation of concerns, tests, DRYness,
+Consequently, their reviews will focus primarily on things like overall
+architecture, code organization, separation of concerns, tests, DRYness,
consistency, and readability.
Since a maintainer's job only depends on their knowledge of the overall GitLab
@@ -87,12 +87,12 @@ merge requests from any team and in any product area.
In fact, authors are recommended to get their merge requests merged by maintainers
from other teams than their own, to ensure that all code across GitLab is consistent
-and can be easily understood by all contributors, from both inside and outside the
+and can be easily understood by all contributors, from both inside and outside the
company, without requiring team-specific expertise.
Maintainers will do their best to also review the specifics of the chosen solution
before merging, but as they are not necessarily domain experts, they may be poorly
-placed to do so without an unreasonable investment of time. In those cases, they
+placed to do so without an unreasonable investment of time. In those cases, they
will defer to the judgment of the author and earlier reviewers and involved domain
experts, in favor of focusing on their primary responsibilities.
@@ -100,7 +100,7 @@ If a developer who happens to also be a maintainer was involved in a merge reque
as a domain expert and/or reviewer, it is recommended that they are not also picked
as the maintainer to ultimately approve and merge it.
-Maintainers should check before merging if the merge request is approved by the
+Maintainers should check before merging if the merge request is approved by the
required approvers.
## Best practices
@@ -230,41 +230,41 @@ Enterprise Edition instance. This has some implications:
1. **Query changes** should be tested to ensure that they don't result in worse
performance at the scale of GitLab.com:
1. Generating large quantities of data locally can help.
- 2. Asking for query plans from GitLab.com is the most reliable way to validate
+ 1. Asking for query plans from GitLab.com is the most reliable way to validate
these.
-2. **Database migrations** must be:
+1. **Database migrations** must be:
1. Reversible.
- 2. Performant at the scale of GitLab.com - ask a maintainer to test the
+ 1. Performant at the scale of GitLab.com - ask a maintainer to test the
migration on the staging environment if you aren't sure.
- 3. Categorised correctly:
+ 1. Categorised correctly:
- Regular migrations run before the new code is running on the instance.
- [Post-deployment migrations](post_deployment_migrations.md) run _after_
the new code is deployed, when the instance is configured to do that.
- [Background migrations](background_migrations.md) run in Sidekiq, and
should only be done for migrations that would take an extreme amount of
time at GitLab.com scale.
-3. **Sidekiq workers**
+1. **Sidekiq workers**
[cannot change in a backwards-incompatible way](sidekiq_style_guide.md#removing-or-renaming-queues):
1. Sidekiq queues are not drained before a deploy happens, so there will be
workers in the queue from the previous version of GitLab.
- 2. If you need to change a method signature, try to do so across two releases,
+ 1. If you need to change a method signature, try to do so across two releases,
and accept both the old and new arguments in the first of those.
- 3. Similarly, if you need to remove a worker, stop it from being scheduled in
+ 1. Similarly, if you need to remove a worker, stop it from being scheduled in
one release, then remove it in the next. This will allow existing jobs to
execute.
- 4. Don't forget, not every instance will upgrade to every intermediate version
+ 1. Don't forget, not every instance will upgrade to every intermediate version
(some people may go from X.1.0 to X.10.0, or even try bigger upgrades!), so
try to be liberal in accepting the old format if it is cheap to do so.
-4. **Cached values** may persist across releases. If you are changing the type a
+1. **Cached values** may persist across releases. If you are changing the type a
cached value returns (say, from a string or nil to an array), change the
cache key at the same time.
-5. **Settings** should be added as a
+1. **Settings** should be added as a
[last resort](https://about.gitlab.com/handbook/product/#convention-over-configuration).
If you're adding a new setting in `gitlab.yml`:
1. Try to avoid that, and add to `ApplicationSetting` instead.
- 2. Ensure that it is also
+ 1. Ensure that it is also
[added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml.html#adding-a-new-setting-to-gitlab-yml).
-6. **Filesystem access** can be slow, so try to avoid
+1. **Filesystem access** can be slow, so try to avoid
[shared files](shared_files.md) when an alternative solution is available.
### Credits
diff --git a/doc/development/diffs.md b/doc/development/diffs.md
index 4adae5dabe2..43fc125c21d 100644
--- a/doc/development/diffs.md
+++ b/doc/development/diffs.md
@@ -17,20 +17,20 @@ The diffs fetching process _limits_ single file diff sizes and the overall size
then persisted on `merge_request_diff_files` table.
Even though diffs larger than 10% of the value of `ApplicationSettings#diff_max_patch_bytes` are collapsed,
-we still keep them on Postgres. However, diff files larger than defined _safety limits_
+we still keep them on Postgres. However, diff files larger than defined _safety limits_
(see the [Diff limits section](#diff-limits)) are _not_ persisted in the database.
In order to present diffs information on the Merge Request diffs page, we:
1. Fetch all diff files from database `merge_request_diff_files`
-2. Fetch the _old_ and _new_ file blobs in batch to:
- 1. Highlight old and new file content
- 2. Know which viewer it should use for each file (text, image, deleted, etc)
- 3. Know if the file content changed
- 4. Know if it was stored externally
- 5. Know if it had storage errors
-3. If the diff file is cacheable (text-based), it's cached on Redis
-using `Gitlab::Diff::FileCollection::MergeRequestDiff`
+1. Fetch the _old_ and _new_ file blobs in batch to:
+ - Highlight old and new file content
+ - Know which viewer it should use for each file (text, image, deleted, etc)
+ - Know if the file content changed
+ - Know if it was stored externally
+ - Know if it had storage errors
+1. If the diff file is cacheable (text-based), it's cached on Redis
+ using `Gitlab::Diff::FileCollection::MergeRequestDiff`
### Note diffs
@@ -39,9 +39,9 @@ on `NoteDiffFile` (which is associated with the actual `DiffNote`). So instead
of hitting the repository every time we need the diff of the file, we:
1. Check whether we have the `NoteDiffFile#diff` persisted and use it
-2. Otherwise, if it's a current MR revision, use the persisted
-`MergeRequestDiffFile#diff`
-3. In the last scenario, go the the repository and fetch the diff
+1. Otherwise, if it's a current MR revision, use the persisted
+ `MergeRequestDiffFile#diff`
+1. In the last scenario, go to the repository and fetch the diff
## Diff limits
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index b6f053ff0e9..9aea03139ee 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -119,10 +119,20 @@ This also applies to views.
### EE features based on CE features
-For features that build on existing CE features, write a module in the
-`EE` namespace and `prepend` it in the CE class. This makes conflicts
-less likely to happen during CE to EE merges because only one line is
-added to the CE class - the `prepend` line.
+For features that build on existing CE features, write a module in the `EE`
+namespace and `prepend` it in the CE class, on the last line of the file that
+the class resides in. This makes conflicts less likely to happen during CE to EE
+merges because only one line is added to the CE class - the `prepend` line. For
+example, to prepend a module into the `User` class you would use the following
+approach:
+
+```ruby
+class User < ActiveRecord::Base
+ # ... lots of code here ...
+end
+
+User.prepend(EE::User)
+```
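+
+For context, the EE module that gets prepended might be shaped roughly like this
+(an illustrative sketch only; the method and condition names are made up):
+
+```ruby
+# ee/app/models/ee/user.rb
+module EE
+  module User
+    # Because the module is prepended, `super` refers to the CE
+    # implementation of the same method.
+    def some_ce_method # hypothetical method name
+      super || some_ee_only_condition? # hypothetical EE-only check
+    end
+  end
+end
+```
+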
Since the module would require an `EE` namespace, the file should also be
put in an `ee/` sub-directory. For example, we want to extend the user model
@@ -231,7 +241,6 @@ the existing file:
```ruby
class ApplicationController < ActionController::Base
- prepend EE::ApplicationController
# ...
def after_sign_out_path_for(resource)
@@ -240,6 +249,8 @@ class ApplicationController < ActionController::Base
# ...
end
+
+ApplicationController.prepend(EE::ApplicationController)
```
And create a new file in the `ee/` sub-directory with the altered
@@ -533,8 +544,6 @@ module API
end
end
- prepend EE::API::MergeRequests
-
params :optional_params do
# CE specific params go here...
@@ -542,6 +551,8 @@ module API
end
end
end
+
+API::MergeRequests.prepend(EE::API::MergeRequests)
```
And then we could override it in EE module:
@@ -582,10 +593,10 @@ module API
authorize_read_builds!
end
end
-
- prepend EE::API::JobArtifacts
end
end
+
+API::JobArtifacts.prepend(EE::API::JobArtifacts)
```
And then we can follow regular object-oriented practices to override it:
@@ -626,8 +637,6 @@ module API
end
end
- prepend EE::API::MergeRequests
-
put ':id/merge_requests/:merge_request_iid/merge' do
merge_request = find_project_merge_request(params[:merge_request_iid])
@@ -639,6 +648,8 @@ module API
end
end
end
+
+API::MergeRequests.prepend(EE::API::MergeRequests)
```
Note that `update_merge_request_ee` doesn't do anything in CE, but
@@ -676,27 +687,37 @@ or not we really need to extend it from EE. For now we're not using it much.
Sometimes we need to use different arguments for a particular API route, and we
can't easily extend it with an EE module because Grape has different context in
-different blocks. In order to overcome this, we could use class methods from the
-API class.
+different blocks. In order to overcome this, we need to move the data to a class
+method that resides in a separate module or class. This allows us to extend that
+module or class before its data is used, without having to place a `prepend` in
+the middle of CE code.
For example, in one place we need to pass an extra argument to
`at_least_one_of` so that the API could consider an EE-only argument as the
-least argument. This is not quite beautiful but it's working:
+least argument. We would approach this as follows:
```ruby
+# api/merge_requests/parameters.rb
module API
class MergeRequests < Grape::API
- def self.update_params_at_least_one_of
- %i[
- assignee_id
- description
- ]
+ module Parameters
+ def self.update_params_at_least_one_of
+ %i[
+ assignee_id
+ description
+ ]
+ end
end
+ end
+end
- prepend EE::API::MergeRequests
+API::MergeRequests::Parameters.prepend(EE::API::MergeRequests::Parameters)
+# api/merge_requests.rb
+module API
+ class MergeRequests < Grape::API
params do
- at_least_one_of(*::API::MergeRequests.update_params_at_least_one_of)
+ at_least_one_of(*Parameters.update_params_at_least_one_of)
end
end
end
@@ -708,16 +729,18 @@ And then we could easily extend that argument in the EE class method:
module EE
module API
module MergeRequests
- extend ActiveSupport::Concern
+ module Parameters
+ extend ActiveSupport::Concern
- class_methods do
- extend ::Gitlab::Utils::Override
+ class_methods do
+ extend ::Gitlab::Utils::Override
- override :update_params_at_least_one_of
- def update_params_at_least_one_of
- super.push(*%i[
- squash
- ])
+ override :update_params_at_least_one_of
+ def update_params_at_least_one_of
+ super.push(*%i[
+ squash
+ ])
+ end
end
end
end
@@ -728,6 +751,78 @@ end
It could be annoying if we need this for a lot of routes, but it might be the
simplest solution right now.
+This approach can also be used when models define validations that depend on
+class methods. For example:
+
+```ruby
+# app/models/identity.rb
+class Identity < ActiveRecord::Base
+ def self.uniqueness_scope
+ [:provider]
+ end
+
+ prepend EE::Identity
+
+ validates :extern_uid,
+ allow_blank: true,
+ uniqueness: { scope: uniqueness_scope, case_sensitive: false }
+end
+
+# ee/app/models/ee/identity.rb
+module EE
+ module Identity
+ extend ActiveSupport::Concern
+
+ class_methods do
+ extend ::Gitlab::Utils::Override
+
+ def uniqueness_scope
+ [*super, :saml_provider_id]
+ end
+ end
+ end
+end
+```
+
+Instead of taking this approach, we would refactor our code into the following:
+
+```ruby
+# ee/app/models/ee/identity/uniqueness_scopes.rb
+module EE
+ module Identity
+ module UniquenessScopes
+ extend ActiveSupport::Concern
+
+ class_methods do
+ extend ::Gitlab::Utils::Override
+
+ def uniqueness_scope
+ [*super, :saml_provider_id]
+ end
+ end
+ end
+ end
+end
+
+# app/models/identity/uniqueness_scopes.rb
+class Identity < ActiveRecord::Base
+ module UniquenessScopes
+ def self.uniqueness_scope
+ [:provider]
+ end
+ end
+end
+
+Identity::UniquenessScopes.prepend(EE::Identity::UniquenessScopes)
+
+# app/models/identity.rb
+class Identity < ActiveRecord::Base
+ validates :extern_uid,
+ allow_blank: true,
+ uniqueness: { scope: Identity::UniquenessScopes.uniqueness_scope, case_sensitive: false }
+end
+```
+
### Code in `spec/`
When you're testing EE-only features, avoid adding examples to the
diff --git a/doc/development/fe_guide/style_guide_scss.md b/doc/development/fe_guide/style_guide_scss.md
index 48eb6d0a7d6..b09243598d5 100644
--- a/doc/development/fe_guide/style_guide_scss.md
+++ b/doc/development/fe_guide/style_guide_scss.md
@@ -183,9 +183,11 @@ Don't use ID selectors in CSS.
```
### Variables
+
Before adding a new variable for a color or a size, guarantee:
-1. There isn't already one
-2. There isn't a similar one we can use instead.
+
+- There isn't already one
+- There isn't a similar one we can use instead.
## Linting
diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md
index f6cbd11042c..ccfd465531a 100644
--- a/doc/development/fe_guide/vue.md
+++ b/doc/development/fe_guide/vue.md
@@ -221,6 +221,14 @@ const vm = mountComponent(Component, data);
The main return value of a Vue component is the rendered output. In order to test the component we
need to test the rendered output. [Vue][vue-test] guide's to unit test show us exactly that:
+## Vue.js Expert Role
+One should apply to be a Vue.js expert by opening an MR when the merge requests they create and review show:
+- Deep understanding of Vue and Vuex reactivity
+- Vue and Vuex code are structured according to both official and our guidelines
+- Full understanding of testing a Vue and Vuex application
+- Vuex code follows the [documented pattern](./vuex.md#actions-pattern-request-and-receive-namespaces)
+- Knowledge about the existing Vue and Vuex applications and existing reusable components
+
[vue-docs]: http://vuejs.org/guide/index.html
[issue-boards]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/app/assets/javascripts/boards
diff --git a/doc/development/fe_guide/vuex.md b/doc/development/fe_guide/vuex.md
index f582f5da323..0f57835fb87 100644
--- a/doc/development/fe_guide/vuex.md
+++ b/doc/development/fe_guide/vuex.md
@@ -114,19 +114,21 @@ When a request is made we often want to show a loading state to the user.
Instead of creating an action to toggle the loading state and dispatch it in the component,
create:
+
1. An action `requestSomething`, to toggle the loading state
1. An action `receiveSomethingSuccess`, to handle the success callback
1. An action `receiveSomethingError`, to handle the error callback
1. An action `fetchSomething` to make the request.
1. In case your application does more than a `GET` request you can use these as examples:
- 1. `PUT`: `createSomething`
- 2. `POST`: `updateSomething`
- 3. `DELETE`: `deleteSomething`
+ - `PUT`: `createSomething`
+ - `POST`: `updateSomething`
+ - `DELETE`: `deleteSomething`
The component MUST only dispatch the `fetchNamespace` action. Actions namespaced with `request` or `receive` should not be called from the component
The `fetch` action will be responsible to dispatch `requestNamespace`, `receiveNamespaceSuccess` and `receiveNamespaceError`
By following this pattern we guarantee:
+
1. All applications follow the same pattern, making it easier for anyone to maintain the code
1. All data in the application follows the same lifecycle pattern
1. Actions are contained and human friendly
@@ -297,12 +299,12 @@ export default {
```javascript
// component.vue
-
+
// bad
created() {
this.$store.commit('mutation');
}
-
+
// good
created() {
this.$store.dispatch('action');
diff --git a/doc/development/feature_flags.md b/doc/development/feature_flags.md
index 350593cc813..1019a1fd0e2 100644
--- a/doc/development/feature_flags.md
+++ b/doc/development/feature_flags.md
@@ -33,7 +33,7 @@ You can follow the progress on that [in the issue on our issue tracker](https://
In general, it's better to have a group- or user-based gate, and you should prefer
it over the use of percentage gates. This would make debugging easier, as you
-filter for example logs and errors based on actors too. Futhermore, this allows
+filter for example logs and errors based on actors too. Furthermore, this allows
for enabling for the `gitlab-org` group first, while the rest of the users
aren't impacted.
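+
+As a sketch of what an actor-based gate looks like in practice (the flag name
+here is made up):
+
+```ruby
+# In application code: check the flag for a specific actor, such as a project.
+if Feature.enabled?(:my_new_feature, project)
+  # new behavior
+else
+  # existing behavior
+end
+
+# From a Rails console: enable the flag for the `gitlab-org` group only.
+Feature.enable(:my_new_feature, Group.find_by_full_path('gitlab-org'))
+```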
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index 0d558583bb8..e860bde48dc 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -99,8 +99,8 @@ This worker will wrap up the import process by performing some housekeeping
Advancing stages is done in one of two ways:
-1. Scheduling the worker for the next stage directly.
-2. Scheduling a job for `Gitlab::GithubImport::AdvanceStageWorker` which will
+- Scheduling the worker for the next stage directly.
+- Scheduling a job for `Gitlab::GithubImport::AdvanceStageWorker` which will
advance the stage when all work of the current stage has been completed.
The first approach should only be used by workers that perform all their work in
@@ -147,7 +147,7 @@ We handle this by doing the following:
1. Once we hit the rate limit all jobs will automatically reschedule themselves
in such a way that they are not executed until the rate limit has been reset.
-2. We cache the mapping of GitHub users to GitLab users in Redis.
+1. We cache the mapping of GitHub users to GitLab users in Redis.
More information on user caching can be found below.
@@ -157,21 +157,21 @@ When mapping GitHub users to GitLab users we need to (in the worst case)
perform:
1. One API call to get the user's Email address.
-2. Two database queries to see if a corresponding GitLab user exists. One query
+1. Two database queries to see if a corresponding GitLab user exists. One query
will try to find the user based on the GitHub user ID, while the second query
is used to find the user using their GitHub Email address.
Because this process is quite expensive we cache the result of these lookups in
Redis. For every user looked up we store three keys:
-1. A Redis key mapping GitHub usernames to their Email addresses.
-2. A Redis key mapping a GitHub Email addresses to a GitLab user ID.
-3. A Redis key mapping a GitHub user ID to GitLab user ID.
+- A Redis key mapping GitHub usernames to their Email addresses.
+- A Redis key mapping a GitHub Email address to a GitLab user ID.
+- A Redis key mapping a GitHub user ID to GitLab user ID.
There are two types of lookups we cache:
-1. A positive lookup, meaning we found a GitLab user ID.
-2. A negative lookup, meaning we didn't find a GitLab user ID. Caching this
+- A positive lookup, meaning we found a GitLab user ID.
+- A negative lookup, meaning we didn't find a GitLab user ID. Caching this
prevents us from performing the same work for users that we know don't exist
in our GitLab database.
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index f58d79fccf1..c4ac53f45ac 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -41,6 +41,7 @@ are very appreciative of the work done by translators and proofreaders!
- Nikita Grylov - [GitLab](https://gitlab.com/nixel2007), [Crowdin](https://crowdin.com/profile/nixel2007)
- Alexy Lustin - [GitLab](https://gitlab.com/allustin), [Crowdin](https://crowdin.com/profile/lustin)
- Spanish
+ - Pedro Garcia - [GitLab](https://gitlab.com/pedgarrod), [Crowdin](https://crowdin.com/profile/breaking_pitt)
- Ukrainian
- Volodymyr Sobotovych - [GitLab](https://gitlab.com/wheleph), [Crowdin](https://crowdin.com/profile/wheleph)
- Andrew Vityuk - [GitLab](https://gitlab.com/3_1_3_u), [Crowdin](https://crowdin.com/profile/andruwa13)
diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md
index 7761f65d78a..bef166f2aec 100644
--- a/doc/development/instrumentation.md
+++ b/doc/development/instrumentation.md
@@ -117,11 +117,11 @@ The block is executed and the execution time is stored as a set of fields in the
currently running transaction. If no transaction is present the block is yielded
without measuring anything.
-3 values are measured for a block:
+Three values are measured for a block:
-1. The real time elapsed, stored in NAME_real_time.
-2. The CPU time elapsed, stored in NAME_cpu_time.
-3. The call count, stored in NAME_call_count.
+- The real time elapsed, stored in NAME_real_time.
+- The CPU time elapsed, stored in NAME_cpu_time.
+- The call count, stored in NAME_call_count.
Both the real and CPU timings are measured in milliseconds.
diff --git a/doc/development/logging.md b/doc/development/logging.md
new file mode 100644
index 00000000000..abd08c420da
--- /dev/null
+++ b/doc/development/logging.md
@@ -0,0 +1,144 @@
+# GitLab Developers Guide to Logging
+
+[GitLab Logs](../administration/logs.md) play a critical role for both
+administrators and GitLab team members to diagnose problems in the field.
+
+## Don't use `Rails.logger`
+
+Currently, all `Rails.logger` calls get saved into `production.log`, which contains
+a mix of Rails' logs and other calls developers have inserted in the code base.
+For example:
+
+```
+Started GET "/gitlabhq/yaml_db/tree/master" for 168.111.56.1 at 2015-02-12 19:34:53 +0200
+Processing by Projects::TreeController#show as HTML
+ Parameters: {"project_id"=>"gitlabhq/yaml_db", "id"=>"master"}
+
+ ...
+
+ Namespaces"."created_at" DESC, "namespaces"."id" DESC LIMIT 1 [["id", 26]]
+ CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members"."type" IN ('ProjectMember') AND "members"."source_id" = $1 AND "members"."source_type" = $2 AND "members"."user_id" = 1 ORDER BY "members"."created_at" DESC, "members"."id" DESC LIMIT 1 [["source_id", 18], ["source_type", "Project"]]
+ CACHE (0.0ms) SELECT "members".* FROM "members" WHERE "members"."source_type" = 'Project' AND "members".
+ (1.4ms) SELECT COUNT(*) FROM "merge_requests" WHERE "merge_requests"."target_project_id" = $1 AND ("merge_requests"."state" IN ('opened','reopened')) [["target_project_id", 18]]
+ Rendered layouts/nav/_project.html.haml (28.0ms)
+ Rendered layouts/_collapse_button.html.haml (0.2ms)
+ Rendered layouts/_flash.html.haml (0.1ms)
+ Rendered layouts/_page.html.haml (32.9ms)
+Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)
+```
+
+These logs suffer from a number of problems:
+
+1. They often lack timestamps or other contextual information (e.g. project ID, user).
+1. They may span multiple lines, which makes them hard to find via Elasticsearch.
+1. They lack a common structure, which makes them hard for log forwarders, such as
+Logstash or Fluentd, to parse. This also makes them hard to search.
+
+Note that currently on GitLab.com, any messages in `production.log` will
+NOT get indexed by Elasticsearch due to the sheer volume and noise. They
+do end up in Google Stackdriver, but it is still harder to search for
+logs there. See the [GitLab.com logging
+documentation](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/logging.md)
+for more details.
+
+## Use structured (JSON) logging
+
+Structured logging solves these problems. Consider the example from an API request:
+
+```json
+{"time":"2018-10-29T12:49:42.123Z","severity":"INFO","duration":709.08,"db":14.59,"view":694.49,"status":200,"method":"GET","path":"/api/v4/projects","params":[{"key":"action","value":"git-upload-pack"},{"key":"changes","value":"_any"},{"key":"key_id","value":"secret"},{"key":"secret_token","value":"[FILTERED]"}],"host":"localhost","ip":"::1","ua":"Ruby","route":"/api/:version/projects","user_id":1,"username":"root","queue_duration":100.31,"gitaly_calls":30}
+```
+
+In a single line, we've included all the information that a user needs
+to understand what happened: the timestamp, HTTP method and path, user
+ID, etc.
+
+### How to use JSON logging
+
+Suppose you want to log the events that happen in a project
+importer. You want to log issues created, merge requests, etc. as the
+importer progresses. Here's what to do:
+
+1. Look at [the list of GitLab Logs](../administration/logs.md) to see
+if your log message might belong with one of the existing log files.
+1. If there isn't a good place, consider creating a new filename, but
+check with a maintainer if it makes sense to do so. A log file should
+make it easy for people to search pertinent logs in one place. For
+example, `geo.log` contains all logs pertaining to GitLab Geo.
+To create a new file:
+ 1. Choose a filename (e.g. `importer_json.log`).
+ 1. Create a new subclass of `Gitlab::JsonLogger`:
+
+ ```ruby
+ module Gitlab
+ module Import
+ class Logger < ::Gitlab::JsonLogger
+ def self.file_name_noext
+ 'importer_json'
+ end
+ end
+ end
+ end
+ ```
+
+ 1. In your class where you want to log, you might initialize the logger as an instance variable:
+
+ ```ruby
+ attr_accessor :logger
+
+ def initialize
+ @logger = Gitlab::Import::Logger.build
+ end
+ ```
+
+ Note that it's useful to memoize this because creating a new logger
+ each time you log will open a file, adding unnecessary overhead.
+
+1. Now insert log messages into your code. When adding logs,
+ make sure to include all the context as key-value pairs:
+
+ ```ruby
+ # BAD
+ logger.info("Unable to create project #{project.id}")
+ ```
+
+ ```ruby
+ # GOOD
+ logger.info("Unable to create project", project_id: project.id)
+ ```
+
+1. Be sure to create a common base structure of your log messages. For example,
+   all messages might have `current_user_id` and `project_id` to make it easier
+   to search for activities by user for a given time (see the sketch after this list).
+
+1. Do NOT mix and match types. Elasticsearch won't be able to index your
+ logs properly if you [mix integer and string
+ types](https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping.html#_avoiding_type_gotchas):
+
+ ```ruby
+ # BAD
+ logger.info("Import error", error: 1)
+ logger.info("Import error", error: "I/O failure")
+ ```
+
+ ```ruby
+ # GOOD
+ logger.info("Import error", error_code: 1, error: "I/O failure")
+ ```
+
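+As a sketch of the two points above, a class doing the logging could memoize its
+logger and merge a common set of base fields into every entry. The class and
+method names below are hypothetical:
+
+```ruby
+class MyImporter
+  def initialize(project, current_user)
+    @project = project
+    @current_user = current_user
+  end
+
+  def import_issue(issue)
+    # ... do the work ...
+    logger.info("Imported issue", base_attributes.merge(issue_iid: issue.iid))
+  rescue StandardError => e
+    logger.error("Failed to import issue", base_attributes.merge(error: e.message))
+  end
+
+  private
+
+  # Memoized so the log file is opened only once per importer instance.
+  def logger
+    @logger ||= Gitlab::Import::Logger.build
+  end
+
+  # Common structure shared by every message from this class.
+  def base_attributes
+    { current_user_id: @current_user.id, project_id: @project.id }
+  end
+end
+```
+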
+## Additional steps with new log files
+
+1. Consider log retention settings. By default, Omnibus will rotate any
+logs in `/var/log/gitlab/gitlab-rails/*.log` every hour and [keep at
+most 30 compressed files](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate).
+On GitLab.com, that setting is only 6 compressed files. These settings should suffice
+for most users, but you may need to tweak them in [omnibus-gitlab](https://gitlab.com/gitlab-org/omnibus-gitlab).
+
+1. If you add a new file, submit an issue to the [production
+tracker](https://gitlab.com/gitlab-com/gl-infra/production/issues) or
+a merge request to the [gitlab_fluentd](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd)
+project. See [this example](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/merge_requests/51/diffs).
+
+1. Be sure to update the [GitLab CE/EE documentation](../administration/logs.md) and the [GitLab.com
+runbooks](https://gitlab.com/gitlab-com/runbooks/blob/master/howto/logging.md).
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 6f31e5b82e5..e4e532bb4ed 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -187,12 +187,7 @@ end
When adding a foreign-key constraint to either an existing or new
column, remember to also add an index on the column.
-This is _required_ if the foreign-key constraint specifies
-`ON DELETE CASCADE` or `ON DELETE SET NULL` behavior. On a cascading
-delete, the [corresponding record needs to be retrieved using an
-index](https://www.cybertec-postgresql.com/en/postgresql-indexes-and-foreign-keys/)
-(otherwise, we'd need to scan the whole table) for subsequent update or
-deletion.
+This is _required_ for all foreign keys.
Here's an example where we add a new column with a foreign key
constraint. Note it includes `index: true` to create an index for it.
diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md
index 244dfb3756f..5ccd5357314 100644
--- a/doc/development/new_fe_guide/development/performance.md
+++ b/doc/development/new_fe_guide/development/performance.md
@@ -2,7 +2,7 @@
## Monitoring
-We have a performance dashboard available in one of our [grafana instances](https://performance.gprd.gitlab.com/dashboard/db/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://sitespeed.io) every 6 hours. These changes are displayed after a set number of pages are aggregated.
+We have a performance dashboard available in one of our [grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://sitespeed.io) every 6 hours. These changes are displayed after a set number of pages are aggregated.
These pages can be found inside a text file in the gitlab-build-images [repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [gitlab.txt](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt)
Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing urls of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/team) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after the changes are merged into `master`.
diff --git a/doc/development/new_fe_guide/development/testing.md b/doc/development/new_fe_guide/development/testing.md
index a9223ac6b0f..082acbedcd2 100644
--- a/doc/development/new_fe_guide/development/testing.md
+++ b/doc/development/new_fe_guide/development/testing.md
@@ -1,30 +1,247 @@
# Overview of Frontend Testing
-## Types of tests in our codebase
+Tests relevant for frontend development can be found in two places:
+
+- `spec/javascripts/` which are run by Karma and contain
+ - [frontend unit tests](#frontend-unit-tests)
+ - [frontend component tests](#frontend-component-tests)
+ - [frontend integration tests](#frontend-integration-tests)
+- `spec/features/` which are run by RSpec and contain
+ - [feature tests](#feature-tests)
+
+In addition, there were feature tests in `features/` run by Spinach in the past.
+These were removed from our codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-ce/issues/23036)).
+
+See also:
-* **RSpec**
- * **[Ruby unit tests](#ruby-unit-tests-spec-rb)** for models, controllers, helpers, etc. (`/spec/**/*.rb`)
- * **[Full feature tests](#full-feature-tests-spec-features-rb)** (`/spec/features/**/*.rb`)
-* **[Karma](#karma-tests-spec-javascripts-js)** (`/spec/javascripts/**/*.js`)
-* <s>Spinach</s> — These have been removed from our codebase in May 2018. (`/features/`)
+- [old testing guide](../../testing_guide/frontend_testing.html)
+- [notes on testing Vue components](../../fe_guide/vue.html#testing-vue-components)
-## RSpec: Ruby unit tests `/spec/**/*.rb`
+## Frontend unit tests
-These tests are meant to unit test the ruby models, controllers and helpers.
+Unit tests are on the lowest abstraction level and typically test functionality that is not directly perceivable by a user.
-### When do we write/update these tests?
+### When to use unit tests
-Whenever we create or modify any Ruby models, controllers or helpers we add/update corresponding tests.
+<details>
+ <summary>exported functions and classes</summary>
+ Anything that is exported can be reused at various places in a way you have no control over.
+ Therefore it is necessary to document the expected behavior of the public interface with tests.
+</details>
----
+<details>
+ <summary>Vuex actions</summary>
+ Any Vuex action needs to work in a consistent way independent of the component it is triggered from.
+</details>
-## RSpec: Full feature tests `/spec/features/**/*.rb`
+<details>
+ <summary>Vuex mutations</summary>
+ For complex Vuex mutations it helps to identify the source of a problem by separating the tests from other parts of the Vuex store.
+</details>
-Full feature tests will load a full app environment and allow us to test things like rendering DOM, interacting with links and buttons, testing the outcome of those interactions through multiple pages if necessary. These are also called end-to-end tests but should not be confused with QA end-to-end tests (`package-and-qa` manual pipeline job).
+### When *not* to use unit tests
-### When do we write/update these tests?
+<details>
+ <summary>non-exported functions or classes</summary>
+ Anything that is not exported from a module can be considered private or an implementation detail and doesn't need to be tested.
+</details>
-When we add a new feature, we write at least two tests covering the success and the failure scenarios.
+<details>
+ <summary>constants</summary>
+ Testing the value of a constant would mean copying it.
+ This results in extra effort without additional confidence that the value is correct.
+</details>
+
+<details>
+ <summary>Vue components</summary>
+ Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components and don't need to be tested.
+ They are implicitly covered by component tests.
+ The <a href="https://vue-test-utils.vuejs.org/guides/#getting-started">official Vue guidelines</a> suggest the same.
+</details>
+
+### What to mock in unit tests
+
+<details>
+ <summary>state of the class under test</summary>
+ Modifying the state of the class under test directly rather than using methods of the class avoids side-effects in test setup.
+</details>
+
+<details>
+ <summary>other exported classes</summary>
+ Every class needs to be tested in isolation to prevent test scenarios from growing exponentially.
+</details>
+
+<details>
+ <summary>single DOM elements if passed as parameters</summary>
+ For tests that only operate on single DOM elements rather than a whole page, creating these elements is cheaper than loading a whole HTML fixture.
+</details>
+
+<details>
+ <summary>all server requests</summary>
+ When running frontend unit tests, the backend may not be reachable.
+ Therefore all outgoing requests need to be mocked.
+</details>
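+
+In Karma specs this can be done with an axios mock adapter, for example (a sketch;
+it assumes the code under test issues its requests through `axios`, that the
+`axios-mock-adapter` package is available, and that the `fetchUserName` module is
+made up for illustration):
+
+```javascript
+import axios from 'axios';
+import MockAdapter from 'axios-mock-adapter';
+// hypothetical module under test
+import { fetchUserName } from '~/my_feature/api';
+
+describe('fetchUserName', () => {
+  let mock;
+
+  beforeEach(() => {
+    mock = new MockAdapter(axios);
+  });
+
+  afterEach(() => {
+    // remove the mock so it does not leak into other specs
+    mock.restore();
+  });
+
+  it('returns the name from the API response', done => {
+    mock.onGet('/api/v4/user').reply(200, { name: 'Administrator' });
+
+    fetchUserName()
+      .then(name => expect(name).toBe('Administrator'))
+      .then(done)
+      .catch(done.fail);
+  });
+});
+```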
+
+<details>
+ <summary>asynchronous background operations</summary>
+ Background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
+</details>
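+
+Timer-based operations can be put under test control with Jasmine's mock clock (a
+sketch; `startPolling` is a made-up helper that calls its callback on an interval):
+
+```javascript
+// hypothetical module under test
+import { startPolling } from '~/my_feature/polling';
+
+describe('startPolling', () => {
+  beforeEach(() => {
+    jasmine.clock().install();
+  });
+
+  afterEach(() => {
+    jasmine.clock().uninstall();
+  });
+
+  it('invokes the callback on every interval', () => {
+    const callback = jasmine.createSpy('callback');
+
+    startPolling(callback, 1000);
+
+    // advance the mocked time by three intervals
+    jasmine.clock().tick(3000);
+
+    expect(callback).toHaveBeenCalledTimes(3);
+  });
+});
+```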
+
+### What *not* to mock in unit tests
+
+<details>
+ <summary>non-exported functions or classes</summary>
+ Everything that is not exported can be considered private to the module and will be implicitly tested via the exported classes / functions.
+</details>
+
+<details>
+ <summary>methods of the class under test</summary>
+  If you mock methods of the class under test, you end up testing the mocks instead of the real methods.
+</details>
+
+<details>
+ <summary>utility functions (pure functions, or those that only modify parameters)</summary>
+  If a function has no side effects because it has no state, it is safe not to mock it in tests.
+</details>
+
+<details>
+ <summary>full HTML pages</summary>
+ Loading the HTML of a full page slows down tests, so it should be avoided in unit tests.
+</details>
+
+## Frontend component tests
+
+Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state.
+
+### When to use component tests
+
+- Vue components
+
+### When *not* to use component tests
+
+<details>
+ <summary>Vue applications</summary>
+  Vue applications may contain many components.
+  Testing them at the component level requires too much effort.
+  Therefore they are tested at the frontend integration level.
+</details>
+
+<details>
+ <summary>HAML templates</summary>
+  HAML templates contain only markup and no frontend logic.
+ Therefore they are not complete components.
+</details>
+
+### What to mock in component tests
+
+<details>
+ <summary>DOM</summary>
+ Operating on the real DOM is significantly slower than on the virtual DOM.
+</details>
+
+<details>
+ <summary>properties and state of the component under test</summary>
+ Similarly to testing classes, modifying the properties directly (rather than relying on methods of the component) avoids side-effects.
+</details>
+
+<details>
+ <summary>Vuex store</summary>
+ To avoid side effects and keep component tests simple, Vuex stores are replaced with mocks.
+</details>
+
+<details>
+ <summary>all server requests</summary>
+ Similar to unit tests, when running component tests, the backend may not be reachable.
+ Therefore all outgoing requests need to be mocked.
+</details>
+
+<details>
+ <summary>asynchronous background operations</summary>
+ Similar to unit tests, background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
+</details>
+
+<details>
+ <summary>child components</summary>
+ Every component is tested individually, so child components are mocked.
+ See also <a href="https://vue-test-utils.vuejs.org/api/#shallowmount">shallowMount()</a>
+</details>
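+
+Combining the two points above, a component test might look roughly like this (a
+sketch, assuming `@vue/test-utils` is available; the component and store contents
+are made up for illustration):
+
+```javascript
+import Vuex from 'vuex';
+import { shallowMount, createLocalVue } from '@vue/test-utils';
+// hypothetical component under test
+import UserList from '~/my_feature/components/user_list.vue';
+
+const localVue = createLocalVue();
+localVue.use(Vuex);
+
+describe('UserList', () => {
+  let wrapper;
+
+  beforeEach(() => {
+    // mocked store: only the parts the component relies on
+    const store = new Vuex.Store({
+      state: { users: [{ name: 'Administrator' }] },
+      actions: { fetchUsers: jasmine.createSpy('fetchUsers') },
+    });
+
+    // shallowMount() stubs all child components automatically
+    wrapper = shallowMount(UserList, { localVue, store });
+  });
+
+  afterEach(() => {
+    wrapper.destroy();
+  });
+
+  it('renders one row per user in the store', () => {
+    expect(wrapper.findAll('li').length).toBe(1);
+  });
+});
+```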
+
+### What *not* to mock in component tests
+
+<details>
+ <summary>methods or computed properties of the component under test</summary>
+  If you mock part of the component under test, you end up testing the mocks instead of the real component.
+</details>
+
+<details>
+ <summary>functions and classes independent from Vue</summary>
+  All plain JavaScript code is already covered by unit tests and does not need to be mocked in component tests.
+</details>
+
+## Frontend integration tests
+
+Integration tests cover the interaction between all components on a single page.
+Their abstraction level is comparable to how a user would interact with the UI.
+
+### When to use integration tests
+
+<details>
+ <summary>page bundles (<code>index.js</code> files in <code>app/assets/javascripts/pages/</code>)</summary>
+ Testing the page bundles ensures the corresponding frontend components integrate well.
+</details>
+
+<details>
+ <summary>Vue applications outside of page bundles</summary>
+ Testing Vue applications as a whole ensures the corresponding frontend components integrate well.
+</details>
+
+### What to mock in integration tests
+
+<details>
+ <summary>HAML views (use fixtures instead)</summary>
+  Rendering HAML views requires a Rails environment, including a running database, which we cannot rely on in frontend tests.
+</details>
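+
+A minimal sketch of an integration test built on a fixture (the fixture name, page
+bundle, and selector are made up; it assumes the `preloadFixtures`/`loadFixtures`
+helpers provided by our Karma setup):
+
+```javascript
+// hypothetical page bundle under test
+import initMyPage from '~/pages/projects/my_page';
+
+describe('My page', () => {
+  const fixtureName = 'projects/my_page.html.raw'; // hypothetical fixture
+
+  preloadFixtures(fixtureName);
+
+  beforeEach(() => {
+    // replaces the HAML-rendered page with generated fixture markup
+    loadFixtures(fixtureName);
+    initMyPage();
+  });
+
+  it('renders the empty state', () => {
+    expect(document.querySelector('.empty-state')).not.toBe(null);
+  });
+});
+```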
+
+<details>
+ <summary>all server requests</summary>
+  Similar to unit and component tests, when running integration tests, the backend may not be reachable.
+ Therefore all outgoing requests need to be mocked.
+</details>
+
+<details>
+ <summary>asynchronous background operations that are not perceivable on the page</summary>
+  Background operations that affect the page need to be tested at this level.
+ All other background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
+</details>
+
+### What *not* to mock in integration tests
+
+<details>
+ <summary>DOM</summary>
+ Testing on the real DOM ensures our components work in the environment they are meant for.
+ Part of this will be delegated to <a href="https://gitlab.com/gitlab-org/quality/team-tasks/issues/45">cross-browser testing</a>.
+</details>
+
+<details>
+ <summary>properties or state of components</summary>
+  At this level, all tests can only perform actions a user would do.
+  For example, to change the state of a component, a click event would be fired.
+</details>
+
+<details>
+ <summary>Vuex stores</summary>
+ When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well.
+</details>
+
+## Feature tests
+
+In contrast to [frontend integration tests](#frontend-integration-tests), feature tests make requests against the real backend instead of using fixtures.
+This also implies that database queries are executed, which makes this category significantly slower.
+
+### When to use feature tests
+
+- use cases that require a backend and cannot be tested using fixtures
+- behavior that is not part of a page bundle but defined globally
### Relevant notes
@@ -48,33 +265,9 @@ wait_for_requests
expect(page).not_to have_selector('.card')
```
----
-
-## Karma tests `/spec/javascripts/**/*.js`
-
-These are the more frontend-focused, at the moment. They're **faster** than `rspec` and make for very quick testing of frontend components.
-
-### When do we write/update these tests?
-
-When we add/update a method/action/mutation to Vue or Vuex, we write karma tests to ensure the logic we wrote doesn't break. We should, however, refrain from writing tests that double-test Vue's internal features.
-
-### Relevant notes
-
-Karma tests are run against a virtual DOM.
+## Test helpers
-To populate the DOM, we can use fixtures to fake the generation of HTML instead of having Rails do that.
-
-Be sure to check the [best practices for karma tests](../../testing_guide/frontend_testing.html#best-practices).
-
-### Vue and Vuex
-
-Test as much as possible without double-testing Vue's internal features, as mentioned above.
-
-Make sure to test computedProperties, mutations, actions. Run the action and test that the proper mutations are committed.
-
-Also check these [notes on testing Vue components](../../fe_guide/vue.html#testing-vue-components).
-
-#### Vuex Helper: `testAction`
+### Vuex Helper: `testAction`
We have a helper available to make testing actions easier, as per [official documentation](https://vuex.vuejs.org/en/testing.html):
@@ -97,7 +290,7 @@ testAction(
 Check an example in [spec/javascripts/ide/stores/actions_spec.js](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/spec/javascripts/ide/stores/actions_spec.js).
-#### Vue Helper: `mountComponent`
+### Vue Helper: `mountComponent`
To make mounting a Vue component easier and more readable, we have a few helpers available in `spec/helpers/vue_mount_component_helper`.
@@ -133,6 +326,7 @@ afterEach(() => {
vm.$destroy();
});
```
+
## Testing with older browsers
Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or Browserstack using the following steps:
@@ -140,20 +334,21 @@ Some regressions only affect a specific browser version. We can install and test
### Browserstack
-[Browserstack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers.
+[Browserstack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers.
You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access.
You can find the credentials on 1Password, under `frontendteam@gitlab.com`.
### Firefox
#### macOS
+
You can download any older version of Firefox from the releases FTP server, https://ftp.mozilla.org/pub/firefox/releases/
1. From the website, select a version, in this case `50.0.1`.
-2. Go to the mac folder.
-3. Select your preferred language, you will find the dmg package inside, download it.
-4. Drag and drop the application to any other folder but the `Applications` folder.
-5. Rename the application to something like `Firefox_Old`.
-6. Move the application to the `Applications` folder.
-7. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version.
-8. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version.
+1. Go to the mac folder.
+1. Select your preferred language; you will find the dmg package inside. Download it.
+1. Drag and drop the application to any folder other than the `Applications` folder.
+1. Rename the application to something like `Firefox_Old`.
+1. Move the application to the `Applications` folder.
+1. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version.
+1. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version.
diff --git a/doc/development/performance.md b/doc/development/performance.md
index e738f2b4b66..4cc2fdc9a58 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -9,17 +9,17 @@ The process of solving performance problems is roughly as follows:
1. Make sure there's an issue open somewhere (e.g., on the GitLab CE issue
tracker), create one if there isn't. See [#15607][#15607] for an example.
-2. Measure the performance of the code in a production environment such as
+1. Measure the performance of the code in a production environment such as
GitLab.com (see the [Tooling](#tooling) section below). Performance should be
measured over a period of _at least_ 24 hours.
-3. Add your findings based on the measurement period (screenshots of graphs,
+1. Add your findings based on the measurement period (screenshots of graphs,
timings, etc) to the issue mentioned in step 1.
-4. Solve the problem.
-5. Create a merge request, assign the "Performance" label and assign it to
+1. Solve the problem.
+1. Create a merge request, assign the "Performance" label and assign it to
[@yorickpeterse][yorickpeterse] for reviewing.
-6. Once a change has been deployed make sure to _again_ measure for at least 24
+1. Once a change has been deployed make sure to _again_ measure for at least 24
hours to see if your changes have any impact on the production environment.
-7. Repeat until you're done.
+1. Repeat until you're done.
When providing timings make sure to provide:
@@ -94,14 +94,14 @@ result of this should be used instead of the `Benchmark` module.
In short:
-1. Don't trust benchmarks you find on the internet.
-2. Never make claims based on just benchmarks, always measure in production to
+- Don't trust benchmarks you find on the internet.
+- Never make claims based on just benchmarks, always measure in production to
confirm your findings.
-3. X being N times faster than Y is meaningless if you don't know what impact it
+- X being N times faster than Y is meaningless if you don't know what impact it
will actually have on your production environment.
-4. A production environment is the _only_ benchmark that always tells the truth
+- A production environment is the _only_ benchmark that always tells the truth
(unless your performance monitoring systems are not set up correctly).
-5. If you must write a benchmark use the benchmark-ips Gem instead of Ruby's
+- If you must write a benchmark use the benchmark-ips Gem instead of Ruby's
`Benchmark` module.
## Profiling
diff --git a/doc/development/post_deployment_migrations.md b/doc/development/post_deployment_migrations.md
index cfc91539bee..5986efa9974 100644
--- a/doc/development/post_deployment_migrations.md
+++ b/doc/development/post_deployment_migrations.md
@@ -57,13 +57,13 @@ depends on this column being present while it's running. Normally you'd follow
these steps in such a case:
1. Stop the GitLab instance
-2. Run the migration removing the column
-3. Start the GitLab instance again
+1. Run the migration removing the column
+1. Start the GitLab instance again
Using post deployment migrations we can instead follow these steps:
1. Deploy a new version of GitLab while ignoring post deployment migrations
-2. Re-run `rake db:migrate` but without the environment variable set
+1. Re-run `rake db:migrate` but without the environment variable set
Here we don't need any downtime as the migration takes place _after_ a new
version (which doesn't depend on the column anymore) has been deployed.
diff --git a/doc/development/query_count_limits.md b/doc/development/query_count_limits.md
index 310e3faf61b..b3ecaf30d8a 100644
--- a/doc/development/query_count_limits.md
+++ b/doc/development/query_count_limits.md
@@ -8,8 +8,8 @@ in test environments we'll raise an error when this threshold is exceeded.
When a test fails because it executes more than 100 SQL queries there are two
solutions to this problem:
-1. Reduce the number of SQL queries that are executed.
-2. Whitelist the controller or API endpoint.
+- Reduce the number of SQL queries that are executed.
+- Whitelist the controller or API endpoint.
You should only resort to whitelisting when an existing controller or endpoint
is to blame as in this case reducing the number of SQL queries can take a lot of
diff --git a/doc/development/swapping_tables.md b/doc/development/swapping_tables.md
index 6b990ece72c..29cd6a43aff 100644
--- a/doc/development/swapping_tables.md
+++ b/doc/development/swapping_tables.md
@@ -8,8 +8,8 @@ Let's say you want to swap the table "events" with "events_for_migration". In
this case you need to follow 3 steps:
1. Rename "events" to "events_temporary"
-2. Rename "events_for_migration" to "events"
-3. Rename "events_temporary" to "events_for_migration"
+1. Rename "events_for_migration" to "events"
+1. Rename "events_temporary" to "events_for_migration"
Rails allows you to do this using the `rename_table` method:
diff --git a/doc/development/switching_to_rails5.md b/doc/development/switching_to_rails5.md
new file mode 100644
index 00000000000..c9a4ce1a1d1
--- /dev/null
+++ b/doc/development/switching_to_rails5.md
@@ -0,0 +1,27 @@
+# Switching to Rails 5
+
+GitLab recently switched to Rails 5. This is a big change (especially for backend development) and it introduces a couple of temporary inconveniences.
+
+## After the switch, I found a broken feature. What do I do?
+
+Many fixes and tweaks were done to make our codebase compatible with Rails 5, but it's possible that not all issues were found. If you find a bug, please create an issue and assign it the ~rails5 label.
+
+## It takes much longer to run CI pipelines that build GitLab. Why?
+
+We are temporarily running CI pipelines with both Rails 4 and 5 to ensure we remain compatible with Rails 4 in case we need to revert (this can double the duration of CI pipelines).
+
+We might revert back to Rails 4 if we find a major issue we are unable to fix quickly.
+
+Once we are sure we can stay with Rails 5, we will stop running CI pipelines with Rails 4.
+
+## Can I skip running Rails 4 tests?
+
+If you are sure that your merge request doesn't introduce any incompatibility, you can just include `norails4` anywhere in your branch name and Rails 4 tests will be skipped.
+
+## CI is failing on my test with Rails 4. How can I debug it?
+
+You can run specs locally with Rails 4 using the following command:
+
+```sh
+BUNDLE_GEMFILE=Gemfile.rails4 RAILS5=0 bundle exec rspec ...
+```
diff --git a/doc/development/testing_guide/end_to_end_tests.md b/doc/development/testing_guide/end_to_end_tests.md
index 21ec926414d..e9f236c6b3a 100644
--- a/doc/development/testing_guide/end_to_end_tests.md
+++ b/doc/development/testing_guide/end_to_end_tests.md
@@ -1,32 +1,37 @@
-# End-to-End Testing
+# End-to-end Testing
-## What is End-to-End testing?
+## What is end-to-end testing?
-End-to-End testing is a strategy used to check whether your application works
-as expected across entire software stack and architecture, including
-integration of all microservices and components that are supposed to work
+End-to-end testing is a strategy used to check whether your application works
+as expected across the entire software stack and architecture, including
+integration of all micro-services and components that are supposed to work
together.
## How do we test GitLab?
We use [Omnibus GitLab][omnibus-gitlab] to build GitLab packages and then we
-test these packages using [GitLab QA][gitlab-qa] project, which is entirely
-black-box, click-driven testing framework.
+test these packages using the [GitLab QA orchestrator][gitlab-qa] tool, which is
+a black-box testing framework for the API and the UI.
### Testing nightly builds
 We run a scheduled pipeline each night to test nightly builds created by Omnibus.
-You can find these nightly pipelines at [GitLab QA pipelines page][gitlab-qa-pipelines].
+You can find these nightly pipelines at [gitlab-org/quality/nightly/pipelines][quality-nightly-pipelines].
+
+### Testing staging
+
+We run a scheduled pipeline each night to test staging.
+You can find these nightly pipelines at [gitlab-org/quality/staging/pipelines][quality-staging-pipelines].
### Testing code in merge requests
It is possible to run end-to-end tests (eventually being run within a
[GitLab QA pipeline][gitlab-qa-pipelines]) for a merge request by triggering
-the `package-and-qa` manual action, that should be present in a merge request
-widget.
+the `package-and-qa` manual action in the `test` stage, which should be present
+in a merge request widget (unless the merge request is from a fork).
 The manual action that starts end-to-end tests is also available in merge requests
-in Omnibus GitLab project.
+in [Omnibus GitLab][omnibus-gitlab].
 Below you can read more about how to use it and how it works.
@@ -35,46 +40,56 @@ Below you can read more about how to use it and how does it work.
Currently, we are using _multi-project pipeline_-like approach to run QA
pipelines.
-1. Developer triggers a manual action, that can be found in CE and EE merge
+1. A developer triggers a manual action that can be found in CE / EE merge
requests. This starts a chain of pipelines in multiple projects.
-1. The script being executed triggers a pipeline in GitLab Omnibus and waits
-for the resulting status. We call this a _status attribution_.
+1. The script being executed triggers a pipeline in [Omnibus GitLab][omnibus-gitlab]
+and waits for the resulting status. We call this a _status attribution_.
-1. GitLab packages are being built in Omnibus pipeline. Packages are going to be
-pushed to Container Registry.
+1. GitLab packages are being built in the [Omnibus GitLab][omnibus-gitlab]
+pipeline. Packages are then pushed to its Container Registry.
1. When packages are ready, and available in the registry, a final step in the
-pipeline, that is now running in Omnibus, triggers a new pipeline in the GitLab
-QA project. It also waits for a resulting status.
+[Omnibus GitLab][omnibus-gitlab] pipeline, triggers a new
+[GitLab QA pipeline][gitlab-qa-pipelines]. It also waits for a resulting status.
 1. GitLab QA pulls images from the registry, spins up containers, and runs tests
 against a test environment that has just been orchestrated by the `gitlab-qa`
tool.
-1. The result of the GitLab QA pipeline is being propagated upstream, through
-Omnibus, back to CE / EE merge request.
+1. The result of the [GitLab QA pipeline][gitlab-qa-pipelines] is being
+propagated upstream, through Omnibus, back to the CE / EE merge request.
#### How do I write tests?
In order to write new tests, you first need to learn more about GitLab QA
-architecture. See the [documentation about it][gitlab-qa-architecture] in
-GitLab QA project.
+architecture. See the [documentation about it][gitlab-qa-architecture].
-Once you decided where to put test environment orchestration scenarios and
-instance specs, take a look at the [relevant documentation][instance-qa-readme]
-and examples in [the `qa/` directory][instance-qa-examples].
+Once you have decided where to put [test environment orchestration scenarios] and
+[instance-level scenarios], take a look at the [GitLab QA README][instance-qa-readme],
+the [GitLab QA orchestrator README][gitlab-qa-readme], and [the already existing
+instance-level scenarios][instance-level scenarios].
## Where can I ask for help?
 You can ask questions in the `#quality` channel on Slack (GitLab internal) or
you can find an issue you would like to work on in
-[the issue tracker][gitlab-qa-issues] and start a new discussion there.
+[the `gitlab-ce` issue tracker][gitlab-ce-issues],
+[the `gitlab-ee` issue tracker][gitlab-ee-issues], or
+[the `gitlab-qa` issue tracker][gitlab-qa-issues].
[omnibus-gitlab]: https://gitlab.com/gitlab-org/omnibus-gitlab
[gitlab-qa]: https://gitlab.com/gitlab-org/gitlab-qa
+[gitlab-qa-readme]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md
[gitlab-qa-pipelines]: https://gitlab.com/gitlab-org/gitlab-qa/pipelines
+[quality-nightly-pipelines]: https://gitlab.com/gitlab-org/quality/nightly/pipelines
+[quality-staging-pipelines]: https://gitlab.com/gitlab-org/quality/staging/pipelines
[gitlab-qa-architecture]: https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/architecture.md
-[gitlab-qa-issues]: https://gitlab.com/gitlab-org/gitlab-qa/issues
+[gitlab-qa-issues]: https://gitlab.com/gitlab-org/gitlab-qa/issues?label_name%5B%5D=new+scenario
+[gitlab-ce-issues]: https://gitlab.com/gitlab-org/gitlab-ce/issues?label_name[]=QA&label_name[]=test
+[gitlab-ee-issues]: https://gitlab.com/gitlab-org/gitlab-ee/issues?label_name[]=QA&label_name[]=test
+[test environment orchestration scenarios]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/lib/gitlab/qa/scenario
+[instance-level scenarios]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/specs/features
+[Page objects documentation]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/page/README.md
[instance-qa-readme]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/README.md
[instance-qa-examples]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 36d150c8a5b..1830641431e 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -24,7 +24,7 @@ Review Apps are automatically deployed by each pipeline, both in
[`scripts/review_apps/review-apps.sh`][review-apps.sh]
- These scripts are basically
[our official Auto DevOps scripts][Auto-DevOps.gitlab-ci.yml] where the
- default CNG images are overriden with the images built and stored in the
+ default CNG images are overridden with the images built and stored in the
[`CNG-mirror` project's registry][cng-mirror-registry].
- Since we're using [the official GitLab Helm chart][helm-chart], this means
you get a dedicated environment for your branch that's very close to what it
@@ -33,7 +33,7 @@ Review Apps are automatically deployed by each pipeline, both in
thanks to the direct link to it from the MR widget. The default username is
`root` and its password can be found in the 1Password secure note named
**gitlab-{ce,ee} Review App's root password** (note that there's currently
- [a bug where the default password seems to be overriden][password-bug]).
+ [a bug where the default password seems to be overridden][password-bug]).
**Additional notes:**
diff --git a/doc/development/testing_guide/smoke.md b/doc/development/testing_guide/smoke.md
index 3f2843cba6e..3360031c220 100644
--- a/doc/development/testing_guide/smoke.md
+++ b/doc/development/testing_guide/smoke.md
@@ -1,8 +1,9 @@
# Smoke Tests
-It is imperative in any testing suite that we have Smoke Tests. In short, smoke tests are will run quick sanity
-end-to-end functional tests from GitLab QA and are designed to run against the specified environment to ensure that
-basic functionality is working.
+It is imperative in any testing suite that we have Smoke Tests. In short, smoke
+tests are quick sanity checks, run as end-to-end functional tests from GitLab QA,
+designed to run against the specified environment to ensure that basic
+functionality is working.
Currently, our suite consists of this basic functionality coverage:
@@ -11,6 +12,8 @@ Currently, our suite consists of this basic functionality coverage:
- Issue Creation
- Merge Request Creation
+Smoke tests have the `:smoke` RSpec metadata.
+
---
[Return to Testing documentation](index.md)
diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md
index 32ed22ca3ed..a8671fc3aa3 100644
--- a/doc/development/testing_guide/testing_levels.md
+++ b/doc/development/testing_guide/testing_levels.md
@@ -34,7 +34,11 @@ records should use stubs/doubles as much as possible.
Formal definition: https://en.wikipedia.org/wiki/Integration_testing
-These kind of tests ensure that individual parts of the application work well together, without the overhead of the actual app environment (i.e. the browser). These tests should assert at the request/response level: status code, headers, body. They're useful to test permissions, redirections, what view is rendered etc.
+These kinds of tests ensure that individual parts of the application work well
+together, without the overhead of the actual app environment (i.e. the browser).
+These tests should assert at the request/response level: status code, headers,
+body.
+They're useful to test permissions, redirections, what view is rendered etc.
| Code path | Tests path | Testing engine | Notes |
| --------- | ---------- | -------------- | ----- |
@@ -67,20 +71,40 @@ run JavaScript tests, so you can either run unit tests (e.g. test a single
JavaScript method), or integration tests (e.g. test a component that is composed
of multiple components).
-## System tests or feature tests
+## White-box tests at the system level (formerly known as System / Feature tests)
-Formal definition: https://en.wikipedia.org/wiki/System_testing.
+Formal definitions:
-These kind of tests ensure the application works as expected from a user point
-of view (aka black-box testing). These tests should test a happy path for a
-given page or set of pages, and a test case should be added for any regression
+- https://en.wikipedia.org/wiki/System_testing
+- https://en.wikipedia.org/wiki/White-box_testing
+
+These kinds of tests ensure the GitLab *Rails* application (i.e.
+`gitlab-ce`/`gitlab-ee`) works as expected from a *browser* point of view.
+
+Note that:
+
+- knowledge of the internals of the application is still required
+- data needed for the tests is usually created directly using RSpec factories
+- expectations are often set on the database or the objects' state
+
+These tests should only be used when:
+
+- the functionality/component being tested is small
+- the internal state of the objects/database *needs* to be tested
+- it cannot be tested at a lower level
+
+For instance, to test the breadcrumbs on a given page, writing a system test
+makes sense since it's a small component, which cannot be tested at the unit or
+controller level.
+
+Only test the happy path, but make sure to add a test case for any regression
that couldn't have been caught at lower levels with better tests (i.e. if a
regression is found, regression tests should be added at the lowest-level
possible).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
-| `spec/features/` | [Capybara] + [RSpec] | If your spec has the `:js` metadata, the browser driver will be [Poltergeist], otherwise it's using [RackTest]. |
+| `spec/features/` | [Capybara] + [RSpec] | If your test has the `:js` metadata, the browser driver will be [Poltergeist], otherwise it's using [RackTest]. |
### Consider **not** writing a system test!
@@ -89,7 +113,7 @@ we have enough Unit & Integration tests), we shouldn't need to duplicate their
thorough testing at the System test level.
It's very easy to add tests, but a lot harder to remove or improve tests, so one
-should take care of not introducing too many (slow and duplicated) specs.
+should take care not to introduce too many (slow and duplicated) tests.
The reasons why we should follow these best practices are as follows:
@@ -107,29 +131,33 @@ The reasons why we should follow these best practices are as follows:
[Poltergeist]: https://github.com/teamcapybara/capybara#poltergeist
[RackTest]: https://github.com/teamcapybara/capybara#racktest
-## Black-box tests or end-to-end tests
+## Black-box tests at the system level, aka end-to-end tests
+
+Formal definitions:
+
+- https://en.wikipedia.org/wiki/System_testing
+- https://en.wikipedia.org/wiki/Black-box_testing
GitLab consists of [multiple pieces] such as [GitLab Shell], [GitLab Workhorse],
 [Gitaly], [GitLab Pages], [GitLab Runner], and GitLab Rails. All these pieces
are configured and packaged by [GitLab Omnibus].
-[GitLab QA] is a tool that allows to test that all these pieces integrate well
-together by building a Docker image for a given version of GitLab Rails and
-running feature tests (i.e. using Capybara) against it.
+The QA framework and instance-level scenarios are [part of GitLab Rails] so that
+they're always in-sync with the codebase (especially the views).
-The actual test scenarios and steps are [part of GitLab Rails] so that they're
-always in-sync with the codebase.
+Note that:
-### Smoke tests
-
-Smoke tests are quick tests that may be run at any time (especially after the pre-deployment migrations).
+- knowledge of the internals of the application is not required
+- data needed for the tests can only be created using the GUI or the API
+- expectations can only be made against the browser page and API responses
-Much like feature tests - these tests run against the UI and ensure that basic functionality is working.
+Every new feature should come with a [test plan].
-> See [Smoke Tests](smoke.md) for more information.
+| Tests path | Testing engine | Notes |
+| ---------- | -------------- | ----- |
+| `qa/qa/specs/features/` | [Capybara] + [RSpec] + Custom QA framework | Tests should be placed under their corresponding [Product category] |
-Read a separate document about [end-to-end tests](end_to_end_tests.md) to
-learn more.
+> See [end-to-end tests](end_to_end_tests.md) for more information.
[multiple pieces]: ../architecture.md#components
[GitLab Shell]: https://gitlab.com/gitlab-org/gitlab-shell
@@ -138,8 +166,29 @@ learn more.
[GitLab Pages]: https://gitlab.com/gitlab-org/gitlab-pages
[GitLab Runner]: https://gitlab.com/gitlab-org/gitlab-runner
[GitLab Omnibus]: https://gitlab.com/gitlab-org/omnibus-gitlab
-[GitLab QA]: https://gitlab.com/gitlab-org/gitlab-qa
[part of GitLab Rails]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa
+[test plan]: https://gitlab.com/gitlab-org/gitlab-ce/tree/master/.gitlab/issue_templates/Test%20plan.md
+[Product category]: https://about.gitlab.com/handbook/product/categories/
+
+### Smoke tests
+
+Smoke tests are quick tests that may be run at any time (especially after the
+pre-deployment migrations).
+
+These tests run against the UI and ensure that basic functionality is working.
+
+> See [Smoke Tests](smoke.md) for more information.
+
+### GitLab QA orchestrator
+
+[GitLab QA orchestrator] is a tool that makes it possible to test that all these
+pieces integrate well together by building a Docker image for a given version of
+GitLab Rails and running end-to-end tests (i.e. using Capybara) against it.
+
+Learn more in the [GitLab QA orchestrator README][gitlab-qa-readme].
+
+[GitLab QA orchestrator]: https://gitlab.com/gitlab-org/gitlab-qa
+[gitlab-qa-readme]: https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md
## EE-specific tests
diff --git a/doc/development/ui_guide.md b/doc/development/ui_guide.md
index df6ac452300..dd206bb2ae9 100644
--- a/doc/development/ui_guide.md
+++ b/doc/development/ui_guide.md
@@ -54,12 +54,12 @@ information from database or file system
When exporting SVGs, be sure to follow the following guidelines:
-1. Convert all strokes to outlines.
-2. Use pathfinder tools to combine overlapping paths and create compound paths.
-3. SVGs that are limited to one color should be exported without a fill color so the color can be set using CSS.
-4. Ensure that exported SVGs have been run through an [SVG cleaner](https://github.com/RazrFalcon/SVGCleaner) to remove unused elements and attributes.
+- Convert all strokes to outlines.
+- Use pathfinder tools to combine overlapping paths and create compound paths.
+- SVGs that are limited to one color should be exported without a fill color so the color can be set using CSS.
+- Ensure that exported SVGs have been run through an [SVG cleaner](https://github.com/RazrFalcon/SVGCleaner) to remove unused elements and attributes.
-You can open your svg in a text editor to ensure that it is clean.
+You can open your SVG in a text editor to ensure that it is clean.
Incorrect files will look like this:
```xml
diff --git a/doc/development/ux_guide/users.md b/doc/development/ux_guide/users.md
index f9c395b2dff..30386e728c4 100644
--- a/doc/development/ux_guide/users.md
+++ b/doc/development/ux_guide/users.md
@@ -101,7 +101,7 @@ GitLab's interface initially attracted Nazim when he was comparing version contr
### Demographics
**Age**
-
+
42 years old
**Location**
@@ -148,11 +148,11 @@ Matthieu describes GitLab as:
>"the only tool that offers the real feeling of having everything you need in one place."
-He credits himself as being entirely responsible for moving his company to GitLab.
+He credits himself as being entirely responsible for moving his company to GitLab.
### Frustrations
#### Updating to the latest release
-Matthieu introduced his company to GitLab. He is responsible for maintaining and managing the company's installation in addition to his day job. He feels updates are too frequent and he doesn't always have sufficient time to update GitLab. As a result, he's not up to date with releases.
+Matthieu introduced his company to GitLab. He is responsible for maintaining and managing the company's installation in addition to his day job. He feels updates are too frequent and he doesn't always have sufficient time to update GitLab. As a result, he's not up to date with releases.
Matthieu tried to set up automatic updates, however, as he isn't a Systems Administrator, he wasn't confident in his setup. He feels he should be able to "upgrade without users even noticing" but hasn't figured out how to do this yet. Matthieu would like the "update process to be triggered from the Admin Panel, perhaps accompanied with a changelog and the option to skip updates."
@@ -173,11 +173,11 @@ It's Matthieu's responsibility to get teams across his organization up and runni
He states that there has been: "a sluggishness of others to adapt" and it's "a low-effort adaptation at that."
### Goals
-* To save time. One of the reasons Matthieu moved his company to GitLab was to reduce the effort it took him to manage and configure multiple tools, thus saving him time. He has to balance his day job in addition to managing the company's GitLab installation and onboarding new teams to GitLab.
-* To use a platform which is easy to manage. Matthieu isn't a Systems Administrator, and when updating GitLab, creating backups, etc. He would prefer to work within GitLab's UI. Explanations / guided instructions when configuring settings in GitLab's interface would really help Matthieu. He needs reassurance that what he is about to change is
+* To save time. One of the reasons Matthieu moved his company to GitLab was to reduce the effort it took him to manage and configure multiple tools, thus saving him time. He has to balance his day job in addition to managing the company's GitLab installation and onboarding new teams to GitLab.
+* To use a platform which is easy to manage. Matthieu isn't a Systems Administrator, and when updating GitLab, creating backups, etc., he would prefer to work within GitLab's UI. Explanations / guided instructions when configuring settings in GitLab's interface would really help Matthieu. He needs reassurance that what he is about to change is:
-1. the right setting
-2. will provide him with the desired result he wants.
+- the right setting
+- going to provide the desired result he wants
* Matthieu needs to educate his colleagues about GitLab. Matthieu's colleagues won't adopt GitLab as they're unaware of its capabilities and the positive impact it could have on their work. Matthieu needs support in getting this message across to them.
@@ -307,4 +307,4 @@ Karolina has an interest in UX and therefore has strong opinions about how GitLa
### Goals
* To develop her programming experience and to learn from other developers.
* To contribute to both her own and other open source projects.
-* To use a fast and intuitive version control platform. \ No newline at end of file
+* To use a fast and intuitive version control platform.
diff --git a/doc/development/what_requires_downtime.md b/doc/development/what_requires_downtime.md
index b668c9de6a0..3630a28fae9 100644
--- a/doc/development/what_requires_downtime.md
+++ b/doc/development/what_requires_downtime.md
@@ -300,7 +300,7 @@ The same applies to `rename_column_using_background_migration`:
1. Create a migration using the helper, which will schedule background
migrations to spread the writes over a longer period of time.
-2. In the next monthly release, create a clean-up migration to steal from the
+1. In the next monthly release, create a clean-up migration to steal from the
Sidekiq queues, migrate any missing rows, and cleanup the rename. This
migration should skip the steps after stealing from the Sidekiq queues if the
column has already been renamed.