Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/README.md  9
-rw-r--r--  doc/development/adding_database_indexes.md  3
-rw-r--r--  doc/development/application_slis/index.md  130
-rw-r--r--  doc/development/application_slis/rails_request_apdex.md  234
-rw-r--r--  doc/development/architecture.md  2
-rw-r--r--  doc/development/cascading_settings.md  4
-rw-r--r--  doc/development/changelog.md  6
-rw-r--r--  doc/development/cicd/index.md  2
-rw-r--r--  doc/development/cicd/templates.md  6
-rw-r--r--  doc/development/code_review.md  3
-rw-r--r--  doc/development/contributing/design.md  129
-rw-r--r--  doc/development/contributing/issue_workflow.md  23
-rw-r--r--  doc/development/contributing/merge_request_workflow.md  8
-rw-r--r--  doc/development/dangerbot.md  2
-rw-r--r--  doc/development/database/database_migration_pipeline.md  2
-rw-r--r--  doc/development/database/database_reviewer_guidelines.md  1
-rw-r--r--  doc/development/database/efficient_in_operator_queries.md  61
-rw-r--r--  doc/development/database/keyset_pagination.md  2
-rw-r--r--  doc/development/database/multiple_databases.md  118
-rw-r--r--  doc/development/database/transaction_guidelines.md  58
-rw-r--r--  doc/development/database_debugging.md  4
-rw-r--r--  doc/development/database_review.md  13
-rw-r--r--  doc/development/distributed_tracing.md  2
-rw-r--r--  doc/development/documentation/feature_flags.md  30
-rw-r--r--  doc/development/documentation/img/manual_build_docs_v14_3.png (renamed from doc/development/documentation/img/manual_build_docs.png)  bin 14855 -> 14855 bytes
-rw-r--r--  doc/development/documentation/index.md  51
-rw-r--r--  doc/development/documentation/redirects.md  25
-rw-r--r--  doc/development/documentation/review_apps.md  2
-rw-r--r--  doc/development/documentation/site_architecture/index.md  2
-rw-r--r--  doc/development/documentation/structure.md  2
-rw-r--r--  doc/development/documentation/styleguide/img/admin_access_level.png  bin 0 -> 9821 bytes
-rw-r--r--  doc/development/documentation/styleguide/index.md  141
-rw-r--r--  doc/development/documentation/styleguide/word_list.md  109
-rw-r--r--  doc/development/documentation/testing.md  1
-rw-r--r--  doc/development/documentation/workflow.md  24
-rw-r--r--  doc/development/ee_features.md  2
-rw-r--r--  doc/development/elasticsearch.md  10
-rw-r--r--  doc/development/experiment_guide/index.md  2
-rw-r--r--  doc/development/fe_guide/accessibility.md  2
-rw-r--r--  doc/development/fe_guide/content_editor.md  20
-rw-r--r--  doc/development/fe_guide/droplab/droplab.md  281
-rw-r--r--  doc/development/fe_guide/droplab/plugins/ajax.md  46
-rw-r--r--  doc/development/fe_guide/droplab/plugins/filter.md  55
-rw-r--r--  doc/development/fe_guide/droplab/plugins/index.md  14
-rw-r--r--  doc/development/fe_guide/droplab/plugins/input_setter.md  72
-rw-r--r--  doc/development/fe_guide/editor_lite.md  9
-rw-r--r--  doc/development/fe_guide/haml.md  15
-rw-r--r--  doc/development/fe_guide/index.md  8
-rw-r--r--  doc/development/fe_guide/logging.md  86
-rw-r--r--  doc/development/fe_guide/storybook.md  16
-rw-r--r--  doc/development/fe_guide/style/scss.md  2
-rw-r--r--  doc/development/feature_categorization/index.md  5
-rw-r--r--  doc/development/feature_flags/index.md  2
-rw-r--r--  doc/development/file_storage.md  3
-rw-r--r--  doc/development/filtering_by_label.md  22
-rw-r--r--  doc/development/go_guide/go_upgrade.md  187
-rw-r--r--  doc/development/go_guide/index.md  74
-rw-r--r--  doc/development/graphql_guide/pagination.md  6
-rw-r--r--  doc/development/image_scaling.md  6
-rw-r--r--  doc/development/import_project.md  8
-rw-r--r--  doc/development/index.md  8
-rw-r--r--  doc/development/instrumentation.md  161
-rw-r--r--  doc/development/integrations/secure.md  17
-rw-r--r--  doc/development/kubernetes.md  2
-rw-r--r--  doc/development/merge_request_performance_guidelines.md  10
-rw-r--r--  doc/development/migration_style_guide.md  8
-rw-r--r--  doc/development/packages.md  8
-rw-r--r--  doc/development/pipelines.md  631
-rw-r--r--  doc/development/profiling.md  6
-rw-r--r--  doc/development/rails_update.md  110
-rw-r--r--  doc/development/redis.md  7
-rw-r--r--  doc/development/redis/new_redis_instance.md  132
-rw-r--r--  doc/development/repository_mirroring.md  2
-rw-r--r--  doc/development/reusing_abstractions.md  2
-rw-r--r--  doc/development/ruby_upgrade.md  275
-rw-r--r--  doc/development/scalability.md  2
-rw-r--r--  doc/development/service_ping/dictionary.md  2
-rw-r--r--  doc/development/service_ping/implement.md  8
-rw-r--r--  doc/development/service_ping/index.md  10
-rw-r--r--  doc/development/service_ping/metrics_dictionary.md  4
-rw-r--r--  doc/development/service_ping/metrics_lifecycle.md  5
-rw-r--r--  doc/development/service_ping/review_guidelines.md  2
-rw-r--r--  doc/development/sidekiq_style_guide.md  66
-rw-r--r--  doc/development/snowplow/dictionary.md  44
-rw-r--r--  doc/development/snowplow/implementation.md  543
-rw-r--r--  doc/development/snowplow/index.md  723
-rw-r--r--  doc/development/snowplow/review_guidelines.md  12
-rw-r--r--  doc/development/snowplow/schemas.md  166
-rw-r--r--  doc/development/sql.md  2
-rw-r--r--  doc/development/stage_group_dashboards.md  12
-rw-r--r--  doc/development/testing_guide/best_practices.md  2
-rw-r--r--  doc/development/testing_guide/ci.md  46
-rw-r--r--  doc/development/testing_guide/end_to_end/best_practices.md  38
-rw-r--r--  doc/development/testing_guide/end_to_end/dynamic_element_validation.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/feature_flags.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/page_objects.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/rspec_metadata_tests.md  3
-rw-r--r--  doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md  12
-rw-r--r--  doc/development/testing_guide/flaky_tests.md  4
-rw-r--r--  doc/development/testing_guide/frontend_testing.md  23
-rw-r--r--  doc/development/testing_guide/img/review-app-parent-pipeline.png  bin 0 -> 136842 bytes
-rw-r--r--  doc/development/testing_guide/index.md  2
-rw-r--r--  doc/development/testing_guide/review_apps.md  174
-rw-r--r--  doc/development/usage_ping/dictionary.md  2
-rw-r--r--  doc/development/usage_ping/index.md  9
-rw-r--r--  doc/development/usage_ping/metrics_dictionary.md  9
-rw-r--r--  doc/development/usage_ping/metrics_instrumentation.md  9
-rw-r--r--  doc/development/usage_ping/product_intelligence_review.md  9
-rw-r--r--  doc/development/usage_ping/review_guidelines.md  9
-rw-r--r--  doc/development/windows.md  6
111 files changed, 3281 insertions, 2226 deletions
diff --git a/doc/development/README.md b/doc/development/README.md
deleted file mode 100644
index 0e6c2f63f9e..00000000000
--- a/doc/development/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: 'index.md'
-remove_date: '2021-09-28'
----
-
-This document was moved to [another location](index.md).
-
-<!-- This redirect file can be deleted after 2021-09-28. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index 16dd581113c..9ca08ab1dc2 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -275,7 +275,8 @@ You can verify if the MR was deployed to GitLab.com by executing
`/chatops run auto_deploy status <merge_sha>`. To verify existence of
the index, you can:
-- Use a meta-command in #database-lab, such as: `\di <index_name>`
+- Use a meta-command in #database-lab, such as: `\d <index_name>`
+ - Ensure that the index is not [`invalid`](https://www.postgresql.org/docs/12/sql-createindex.html#:~:text=The%20psql%20%5Cd%20command%20will%20report%20such%20an%20index%20as%20INVALID)
- Ask someone in #database to check if the index exists
- With proper access, you can also verify directly on production or in a
production clone
diff --git a/doc/development/application_slis/index.md b/doc/development/application_slis/index.md
new file mode 100644
index 00000000000..c1d7ac9fa0c
--- /dev/null
+++ b/doc/development/application_slis/index.md
@@ -0,0 +1,130 @@
+---
+stage: Platforms
+group: Scalability
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# GitLab Application Service Level Indicators (SLIs)
+
+> [Introduced](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/525) in GitLab 14.4
+
+It is possible to define [Service Level Indicators
+(SLIs)](https://en.wikipedia.org/wiki/Service_level_indicator)
+directly in the Ruby codebase. This keeps the definition of operations
+and their success close to the implementation and allows the people
+building features to easily define how these features should be
+monitored.
+
+Defining an SLI causes two
+[Prometheus
+counters](https://prometheus.io/docs/concepts/metric_types/#counter)
+to be emitted from the Rails application:
+
+- `gitlab_sli:<sli_name>:total`: incremented for each operation.
+- `gitlab_sli:<sli_name>:success_total`: incremented for successful
+ operations.
+
+## Existing SLIs
+
+1. [`rails_request_apdex`](rails_request_apdex.md)
+
+## Defining a new SLI
+
+An SLI can be defined using the `Gitlab::Metrics::Sli` class.
+
+Before the first scrape, it is important to have [initialized the SLI
+with all possible
+label combinations](https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics). This
+avoids confusing results when using these counters in calculations.
+
+To initialize an SLI, use the `.initialize_sli` class method, for
+example:
+
+```ruby
+Gitlab::Metrics::Sli.initialize_sli(:received_email, [
+ {
+ feature_category: :issue_tracking,
+ email_type: :create_issue
+ },
+ {
+ feature_category: :service_desk,
+ email_type: :service_desk
+ },
+ {
+ feature_category: :code_review,
+ email_type: :create_merge_request
+ }
+])
+```
+
+Metrics must be initialized before they get scraped for the first
+time. This could be done at the start time of the process that emits
+them, in which case we need to pay attention not to increase the
+application's boot time too much. This approach is preferable if
+possible.
+
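+For example, a minimal sketch of initializing at boot time (the
+initializer file path is illustrative):
+
+```ruby
+# config/initializers/application_slis.rb (hypothetical location)
+# Runs once at boot, so the counters exist with all label combinations
+# before Prometheus scrapes the process for the first time.
+Gitlab::Metrics::Sli.initialize_sli(:received_email, [
+  { feature_category: :issue_tracking, email_type: :create_issue },
+  { feature_category: :service_desk, email_type: :service_desk }
+])
+```
+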
+Alternatively, if initializing would take too long, this can be done
+during the first scrape. We need to make sure we don't do it for every
+scrape. This can be done as follows:
+
+```ruby
+def initialize_request_slis_if_needed!
+ return if Gitlab::Metrics::Sli.initialized?(:rails_request_apdex)
+ Gitlab::Metrics::Sli.initialize_sli(:rails_request_apdex, possible_request_labels)
+end
+```
+
+Also make sure the initialization happens for each of the metrics
+endpoints we have. Currently these are the
+[`WebExporter`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/metrics/exporter/web_exporter.rb)
+and the
+[`HealthController`](https://gitlab.com/gitlab-org/gitlab/blob/master/app/controllers/health_controller.rb)
+for Rails, and the
+[`SidekiqExporter`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/metrics/exporter/sidekiq_exporter.rb)
+for Sidekiq.
+
+## Tracking operations for an SLI
+
+Tracking an operation in the newly defined SLI can be done like this:
+
+```ruby
+Gitlab::Metrics::Sli[:received_email].increment(
+ labels: {
+ feature_category: :service_desk,
+ email_type: :service_desk
+ },
+ success: issue_created?
+)
+```
+
+Calling `#increment` on this SLI increments the total Prometheus counter:
+
+```prometheus
+gitlab_sli:received_email:total{ feature_category='service_desk', email_type='service_desk' }
+```
+
+If the `success:` argument passed is truthy, then the success counter
+will also be incremented:
+
+```prometheus
+gitlab_sli:received_email:success_total{ feature_category='service_desk', email_type='service_desk' }
+```
+
+## Using the SLI in service monitoring and alerts
+
+When the application is emitting metrics for the new SLI, those
+metrics need to be consumed in the service catalog to result in
+alerts, and be included in the error budget for stage groups and
+GitLab.com's overall availability.
+
+This is currently being worked on in [this
+project](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/573). As
+part of [this
+issue](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1307)
+we will update the documentation.
+
+For any question, please don't hesitate to create an issue in [the
+Scalability issue
+tracker](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues)
+or come find us in
+[#g_scalability](https://gitlab.slack.com/archives/CMMF8TKR9) on Slack.
diff --git a/doc/development/application_slis/rails_request_apdex.md b/doc/development/application_slis/rails_request_apdex.md
new file mode 100644
index 00000000000..e1ab5368578
--- /dev/null
+++ b/doc/development/application_slis/rails_request_apdex.md
@@ -0,0 +1,234 @@
+---
+stage: Platforms
+group: Scalability
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Rails request apdex SLI
+
+> [Introduced](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/525) in GitLab 14.4
+
+NOTE:
+This SLI is not yet used in [error budgets for stage
+groups](../stage_group_dashboards.md#error-budget) or service
+monitoring. This is being worked on in [this
+project](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/573).
+
+The request apdex SLI is [an SLI defined in the application](index.md)
+that measures the duration of successful requests as an indicator for
+application performance. This includes the REST and GraphQL API, and the
+regular controller endpoints. It consists of these counters:
+
+1. `gitlab_sli:rails_request_apdex:total`: This counter gets
+ incremented for every request that did not result in a response
+ with a 5xx status code. This means that slow failures don't get
+   counted twice: the request is already counted in the error SLI.
+
+1. `gitlab_sli:rails_request_apdex:success_total`: This counter gets
+ incremented for every successful request that performed faster than
+ the [defined target duration depending on the endpoint's
+ urgency](#adjusting-request-urgency).
+
+Both these counters are labeled with:
+
+1. `endpoint_id`: The identification of the Rails controller or the
+   Grape API endpoint.
+
+1. `feature_category`: The feature category specified for that
+ controller or API endpoint.
+
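+For example, with illustrative label values, the emitted series look
+like this:
+
+```prometheus
+gitlab_sli:rails_request_apdex:total{ endpoint_id='ProjectsController#show', feature_category='projects' }
+gitlab_sli:rails_request_apdex:success_total{ endpoint_id='ProjectsController#show', feature_category='projects' }
+```
+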
+## Request Apdex SLO
+
+These counters can be combined into a success ratio. The objective for
+this ratio is defined per service in the service catalog:
+
+1. [Web: 0.998](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/web.jsonnet#L19)
+1. [API: 0.995](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/api.jsonnet#L19)
+1. [Git: 0.998](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/git.jsonnet#L22)
+
+This means that for this SLI to meet its SLO, the recorded ratio needs
+to be higher than the objectives defined above.
+
+For example, for the web service, we want at least 99.8% of requests
+to be faster than their target duration.
+
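+As a sketch, the ratio could be computed with a PromQL expression like
+the following (illustrative only; not the exact recording rule used in
+production):
+
+```prometheus
+sum(rate(gitlab_sli:rails_request_apdex:success_total[5m]))
+/
+sum(rate(gitlab_sli:rails_request_apdex:total[5m]))
+```
+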
+These are the targets we use for alerting and service monitoring, so
+target durations should be set with those objectives in mind, to avoid
+causing alerts. The goal, however, should be to set the urgency to a
+target duration that users would be satisfied with.
+
+Both successful measurements and unsuccessful ones have an impact on the
+error budget for stage groups.
+
+## Adjusting request urgency
+
+Not all endpoints perform the same type of work, so it is possible to
+define different urgencies for different endpoints. An endpoint with a
+lower urgency can have a longer request duration than endpoints that
+are high urgency.
+
+Long-running requests are more expensive for our
+infrastructure: while one request is being served, the thread remains
+occupied for the duration of that request. So nothing else can be handled by that
+thread. Because of Ruby's Global VM Lock, the thread might keep the
+lock and stall other requests handled by the same Puma worker
+process. The request is in fact a noisy neighbor for other requests
+handled by the worker. This is why the upper bound for a target
+duration is capped at 5 seconds.
+
+## Decreasing the urgency (setting a higher target duration)
+
+Decreasing the urgency on an existing endpoint can be done on
+a case-by-case basis. Please take the following into account:
+
+1. Apdex is about perceived performance. If a user is actively waiting
+   for the result of a request, waiting 5 seconds might not be
+   acceptable, while if the endpoint is used by an automation
+   requiring a lot of data, 5 seconds could be okay.
+
+ A product manager can help to identify how an endpoint is used.
+
+1. The workload for some endpoints can sometimes differ greatly
+ depending on the parameters specified by the caller. The urgency
+   needs to accommodate that. In some cases, it might be interesting to
+ define a separate [application SLI](index.md#defining-a-new-sli)
+ for what the endpoint is doing.
+
+ When the endpoints in certain cases turn into no-ops, making them
+ very fast, we should ignore these fast requests when setting the
+ target. For example, if the `MergeRequests::DraftsController` is
+ hit for every merge request being viewed, but doesn't need to
+ render anything in most cases, then we should pick the target that
+   would still accommodate the endpoint performing work.
+
+1. Consider the dependent resources consumed by the endpoint. If the endpoint
+   loads a lot of data from Gitaly or the database and this is causing
+   it to not perform satisfactorily, it could be better to optimize the
+   way the data is loaded rather than to increase the target duration
+   by lowering the urgency.
+
+   In cases like this, it might be appropriate to temporarily decrease
+   the urgency to make the endpoint meet its SLO, if this is bearable
+   for the infrastructure. In such cases, please link an issue from a
+   code comment, as shown in the sketch after this list.
+
+ If the endpoint consumes a lot of CPU time, we should also consider
+ this: these kinds of requests are the kind of noisy neighbors we
+ should try to keep as short as possible.
+
+1. Traffic characteristics should also be taken into account: if the
+   traffic to the endpoint is bursty, like CI traffic spinning up a
+ big batch of jobs hitting the same endpoint, then having these
+ endpoints take 5s is not acceptable from an infrastructure point of
+   view. We cannot scale up the fleet fast enough to accommodate the
+   incoming slow requests alongside the regular traffic.
+
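+A minimal sketch of such a temporary urgency decrease with a linking
+comment (the controller name and issue link are illustrative):
+
+```ruby
+class Projects::SlowReportsController < ApplicationController # hypothetical controller
+  # Temporarily lowered urgency while the slow Gitaly data loading is optimized.
+  # Tracking issue: https://gitlab.com/gitlab-org/gitlab/-/issues/<issue-number>
+  urgency :low, [:show]
+end
+```
+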
+When lowering the urgency for an existing endpoint, please involve a
+[Scalability team member](https://about.gitlab.com/handbook/engineering/infrastructure/team/scalability/#team-members)
+in the review. We can use request rates and durations available in the
+logs to come up with a recommendation. Picking a threshold can be done
+using the same process as for [increasing
+urgency](#increasing-urgency-setting-a-lower-target-duration), picking
+a duration that is higher than the SLO for the service.
+
+We shouldn't set the longest durations on endpoints in the merge
+requests that introduce them, since we don't yet have data to support
+the decision.
+
+## Increasing urgency (setting a lower target duration)
+
+When decreasing the target duration, we need to make sure the endpoint
+still meets SLO for the fleet that handles the request. You can use the
+information in the logs to determine this:
+
+1. Open [this table in
+ Kibana](https://log.gprd.gitlab.net/goto/bbb6465c68eb83642269e64a467df3df)
+
+1. The table loads information for the busiest endpoints by
+ default. You can speed things up by adding a filter for
+   `json.caller_id.keyword` and adding the identifier you're interested
+ in (for example: `Projects::RawController#show`).
+
+1. Check the [appropriate percentile duration](#request-apdex-slo) for
+ the service the endpoint is handled by. The overall duration should
+ be lower than the target you intend to set.
+
+1. If the overall duration is below the intended target, also
+ check the peaks over time in [this
+ graph](https://log.gprd.gitlab.net/goto/9319c4a402461d204d13f3a4924a89fc)
+ in Kibana. Here, the percentile in question should not peak above
+ the target duration we want to set.
+
+Since decreasing a threshold too much could result in alerts for
+apdex degradation, please also involve a Scalability team member in
+the merge request.
+
+## How to adjust the urgency
+
+The urgency can be specified similarly to how endpoints [get a feature
+category](../feature_categorization/index.md).
+
+For endpoints that don't have a specific target, the default urgency (1s duration) is used.
+
+The following configurations are available:
+
+| Urgency | Duration in seconds | Notes |
+|----------|---------------------|-----------------------------------------------|
+| :high | 0.25s | |
+| :medium | 0.5s | |
+| :default | 1s | This is the default when nothing is specified |
+| :low | 5s | |
+
+### Rails controller
+
+An urgency can be specified for all actions in a controller like this:
+
+```ruby
+class Boards::ListsController < ApplicationController
+ urgency :high
+end
+```
+
+To specify the urgency for only certain actions in a controller, list
+them like this:
+
+```ruby
+class Boards::ListsController < ApplicationController
+ urgency :high, [:index, :show]
+end
+```
+
+### Grape endpoints
+
+To specify the urgency for an entire API class, this can be done as
+follows:
+
+```ruby
+module API
+ class Issues < ::API::Base
+ urgency :low
+ end
+end
+```
+
+To specify the urgency for only certain actions in an API class, list
+them like this:
+
+```ruby
+module API
+ class Issues < ::API::Base
+ urgency :medium, [
+ '/groups/:id/issues',
+ '/groups/:id/issues_statistics'
+ ]
+ end
+end
+```
+
+Or, we can specify the urgency per endpoint:
+
+```ruby
+get 'client/features', urgency: :low do
+ # endpoint logic
+end
+```
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index fe2b621da29..9accd4a3595 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -813,7 +813,7 @@ Starting with GitLab 13.0, Puma is the default web server.
- [Project page](https://gitlab.com/gitlab-org/gitlab/-/blob/master/README.md)
- Configuration:
- - [Omnibus](https://docs.gitlab.com/omnibus/settings/puma.html)
+ - [Omnibus](../administration/operations/puma.md)
- [Charts](https://docs.gitlab.com/charts/charts/gitlab/webservice/)
- [Source](../install/installation.md#configure-it)
- [GDK](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/gitlab.yml.example)
diff --git a/doc/development/cascading_settings.md b/doc/development/cascading_settings.md
index 0fa0e220ba9..a85fc52d303 100644
--- a/doc/development/cascading_settings.md
+++ b/doc/development/cascading_settings.md
@@ -135,7 +135,7 @@ Renders the enforcement checkbox.
| `attribute` | Name of the setting. For example, `:delayed_project_removal`. | `String` or `Symbol` | `true` |
| `group` | Current group. | [`Group`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/group.rb) | `true` |
| `form` | [Rails FormBuilder object](https://apidock.com/rails/ActionView/Helpers/FormBuilder). | [`ActionView::Helpers::FormBuilder`](https://apidock.com/rails/ActionView/Helpers/FormBuilder) | `true` |
-| `setting_locked` | If the setting is locked by an ancestor group or admin setting. Can be calculated with [`cascading_namespace_setting_locked?`](https://gitlab.com/gitlab-org/gitlab/-/blob/c2736823b8e922e26fd35df4f0cd77019243c858/app/helpers/namespaces_helper.rb#L86). | `Boolean` | `true` |
+| `setting_locked` | If the setting is locked by an ancestor group or administrator setting. Can be calculated with [`cascading_namespace_setting_locked?`](https://gitlab.com/gitlab-org/gitlab/-/blob/c2736823b8e922e26fd35df4f0cd77019243c858/app/helpers/namespaces_helper.rb#L86). | `Boolean` | `true` |
| `help_text` | Text shown below the checkbox. | `String` | `false` (Subgroups cannot change this setting.) |
[`_setting_label_checkbox.html.haml`](https://gitlab.com/gitlab-org/gitlab/-/blob/c2736823b8e922e26fd35df4f0cd77019243c858/app/views/shared/namespaces/cascading_settings/_setting_label_checkbox.html.haml)
@@ -147,7 +147,7 @@ Renders the label for a checkbox setting.
| `attribute` | Name of the setting. For example, `:delayed_project_removal`. | `String` or `Symbol` | `true` |
| `group` | Current group. | [`Group`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/group.rb) | `true` |
| `form` | [Rails FormBuilder object](https://apidock.com/rails/ActionView/Helpers/FormBuilder). | [`ActionView::Helpers::FormBuilder`](https://apidock.com/rails/ActionView/Helpers/FormBuilder) | `true` |
-| `setting_locked` | If the setting is locked by an ancestor group or admin setting. Can be calculated with [`cascading_namespace_setting_locked?`](https://gitlab.com/gitlab-org/gitlab/-/blob/c2736823b8e922e26fd35df4f0cd77019243c858/app/helpers/namespaces_helper.rb#L86). | `Boolean` | `true` |
+| `setting_locked` | If the setting is locked by an ancestor group or administrator setting. Can be calculated with [`cascading_namespace_setting_locked?`](https://gitlab.com/gitlab-org/gitlab/-/blob/c2736823b8e922e26fd35df4f0cd77019243c858/app/helpers/namespaces_helper.rb#L86). | `Boolean` | `true` |
| `settings_path_helper` | Lambda function that generates a path to the ancestor setting. For example, `settings_path_helper: -> (locked_ancestor) { edit_group_path(locked_ancestor, anchor: 'js-permissions-settings') }` | `Lambda` | `true` |
| `help_text` | Text shown below the checkbox. | `String` | `false` (`nil`) |
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index be46d61eb4c..2753257c941 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -62,8 +62,8 @@ The value must be the full URL of the merge request.
### GitLab Enterprise changes
-If a change is for GitLab Enterprise Edition, you must also add the trailer `EE:
-true`:
+If a change is exclusively for GitLab Enterprise Edition, **you must add** the
+trailer `EE: true`:
```plaintext
Update git vendor to gitlab
@@ -77,6 +77,8 @@ MR: https://gitlab.com/foo/bar/-/merge_requests/123
EE: true
```
+**Do not** add the trailer for changes that apply to both EE and CE.
+
## What warrants a changelog entry?
- Any change that introduces a database migration, whether it's regular, post,
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index b4e32066ba8..82fd37eacaf 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -157,7 +157,7 @@ On top of that, we have the following types of jobs:
- `Ci::Build` ... The job to be executed by runners.
- `Ci::Bridge` ... The job to trigger a downstream pipeline.
-- `GenericCommitStatus` ... The job to be executed in an external CI/CD system e.g. Jenkins.
+- `GenericCommitStatus` ... The job to be executed in an external CI/CD system, for example Jenkins.
When you use the "Job" terminology in codebase, readers would
assume that the class/object is any type of above.
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index 3fc464e661f..b74a1d0d58a 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -325,8 +325,14 @@ projects on `gitlab.com`:
After you're confident the latest template can be moved to stable:
1. Update the stable template with the content of the latest version.
+1. Remove the migration template from the `Gitlab::Template::GitlabCiYmlTemplate::TEMPLATES_WITH_LATEST_VERSION` constant.
1. Remove the corresponding feature flag.
+NOTE:
+Feature flags are enabled by default in RSpec, so all tests are performed
+against the latest templates. You should also test the stable templates
+with `stub_feature_flags(redirect_to_latest_template_<name>: false)`.
+
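+For example, a sketch of such a test (the template name in the flag is
+illustrative):
+
+```ruby
+context 'with the stable template' do
+  before do
+    # Disable the redirect so the stable template is tested instead of the latest one.
+    stub_feature_flags(redirect_to_latest_template_terraform: false)
+  end
+
+  it 'behaves like the stable template' do
+    # assertions against the stable template go here
+  end
+end
+```
+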
### Further reading
There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/17716) about
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 12cc63ef56d..89516c2168b 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -106,6 +106,7 @@ with [domain expertise](#domain-experts).
1. If your merge request includes user-facing changes (*3*), it must be
**approved by a [Product Designer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_reviewers_UX)**,
based on assignments in the appropriate [DevOps stage group](https://about.gitlab.com/handbook/product/categories/#devops-stages).
+ See the [design and user interface guidelines](contributing/design.md) for details.
1. If your merge request includes adding a new JavaScript library (*1*)...
- If the library significantly increases the
[bundle size](https://gitlab.com/gitlab-org/frontend/playground/webpack-memory-metrics/-/blob/master/doc/report.md), it must
@@ -156,7 +157,7 @@ See the [test engineering process](https://about.gitlab.com/handbook/engineering
1. I have confirmed that this change is [backwards compatible across updates](multi_version_compatibility.md), or I have decided that this does not apply.
1. I have properly separated EE content from FOSS, or this MR is FOSS only.
- [Where should EE code go?](ee_features.md#separation-of-ee-code)
-1. If I am introducing a new expectation for existing data, I have confirmed that existing data meets this expectation or I have made this expectation optional rather than required.
+1. I have considered that existing data may be surprisingly varied. For example, a new model validation can break existing records. Consider making validation on existing data optional rather than required if you haven't confirmed that existing data will pass validation.
##### Performance, reliability, and availability
diff --git a/doc/development/contributing/design.md b/doc/development/contributing/design.md
index 9e8375fcbdd..e85f5dd8349 100644
--- a/doc/development/contributing/design.md
+++ b/doc/development/contributing/design.md
@@ -5,34 +5,119 @@ group: Development
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Implement design & UI elements
+# Design and user interface changes
-For guidance on UX implementation at GitLab, please refer to our [Design System](https://design.gitlab.com/).
+Follow these guidelines when contributing or reviewing design and user interface
+(UI) changes. Refer to our [code review guide](../code_review.md) for broader
+advice and best practices for code review in general.
-The UX team uses labels to manage their workflow.
+The basis for most of these guidelines is [Pajamas](https://design.gitlab.com/),
+the GitLab design system. We encourage you to [contribute to Pajamas](https://design.gitlab.com/get-started/contribute/)
+with additions and improvements.
-The `~UX` label on an issue is a signal to the UX team that it will need UX attention.
-To better understand the priority by which UX tackles issues, see the [UX section](https://about.gitlab.com/handbook/engineering/ux/) of the handbook.
+## Merge request reviews
-Once an issue has been worked on and is ready for development, a UXer removes the `~UX` label and applies the `~"UX ready"` label to that issue.
+As a merge request (MR) author, you must include _Before_ and _After_
+screenshots (or videos) of your changes in the description, as explained in our
+[MR workflow](merge_request_workflow.md). These screenshots/videos are very helpful
+for all reviewers and can speed up the review process, especially if the changes
+are small.
-There is a special type label called `~"product discovery"` intended for UX (user experience),
-PM (product manager), FE (frontend), and BE (backend). It represents a discovery issue to discuss the problem and
-potential solutions. The final output for this issue could be a doc of
-requirements, a design artifact, or even a prototype. The solution will be
-developed in a subsequent milestone.
+## Checklist
-`~"product discovery"` issues are like any other issue and should contain a milestone label, `~Deliverable` or `~Stretch`, when scheduled in the current milestone.
+Check these aspects both when _designing_ and _reviewing_ UI changes.
-The initial issue should be about the problem we are solving. If a separate [product discovery issue](https://about.gitlab.com/handbook/engineering/ux/ux-department-workflow/#how-we-use-labels)
-is needed for additional research and design work, it will be created by a PM or UX person.
-Assign the `~UX`, `~"product discovery"` and `~Deliverable` labels, add a milestone and
-use a title that makes it clear that the scheduled issue is product discovery
-(for example, `Product discovery for XYZ`).
+### Writing
-In order to complete a product discovery issue in a release, you must complete the following:
+- Follow [Pajamas](https://design.gitlab.com/content/punctuation/) as the primary
+  guidelines for UI text and the [documentation style guide](../documentation/styleguide/index.md)
+ as the secondary.
+- Use clear and consistent [terminology](https://design.gitlab.com/content/terminology/).
+- Check grammar and spelling.
+- Consider help content and follow its [guidelines](https://design.gitlab.com/usability/helping-users/).
+- Request review from the [appropriate Technical Writer](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers),
+ indicating any specific files or lines they should review, and how to preview
+ or understand the location/context of the text from the user's perspective.
-1. UXer removes the `~UX` label, adds the `~"UX ready"` label.
-1. Modify the issue description in the product discovery issue to contain the final design. If it makes sense, the original information indicating the need for the design can be moved to a lower "Original Information" section.
-1. Copy the design to the description of the delivery issue for which the product discovery issue was created. Do not simply refer to the product discovery issue as a separate source of truth.
-1. In some cases, a product discovery issue also identifies future enhancements that will not go into the issue that originated the product discovery issue. For these items, create new issues containing the designs to ensure they are not lost. Put the issues in the backlog if they are agreed upon as good ideas. Otherwise leave them for triage.
+### Patterns
+
+- Consider similar patterns used in the product and justify in the issue when diverging
+ from them.
+- Use appropriate [components](https://design.gitlab.com/components/overview/)
+ and [data visualizations](https://design.gitlab.com/data-visualization/overview/).
+
+### Visual design
+
+Check visual design properties using your browser's _elements inspector_ ([Chrome](https://developer.chrome.com/docs/devtools/css/),
+[Firefox](https://developer.mozilla.org/en-US/docs/Tools/Page_Inspector/How_to/Open_the_Inspector)).
+
+- Use recommended [colors](https://design.gitlab.com/product-foundations/colors/)
+ and [typography](https://design.gitlab.com/product-foundations/type-fundamentals/).
+- Follow [layout guidelines](https://design.gitlab.com/layout/grid/).
+- Use existing [icons](http://gitlab-org.gitlab.io/gitlab-svgs/) and [illustrations](http://gitlab-org.gitlab.io/gitlab-svgs/illustrations/)
+ or propose new ones according to [iconography](https://design.gitlab.com/product-foundations/iconography/)
+ and [illustration](https://design.gitlab.com/product-foundations/illustration/)
+ guidelines.
+- _Optionally_ consider [dark mode](../../user/profile/preferences.md#dark-mode). [^1]
+
+ [^1]: You're not required to design for [dark mode](../../user/profile/preferences.md#dark-mode) while the feature is in [alpha](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha). The [UX Foundations team](https://about.gitlab.com/direction/ecosystem/foundations/) plans to improve the dark mode in the future. Until we integrate [Pajamas](https://design.gitlab.com/) components into the product and the underlying design strategy is in place to support dark mode, we cannot guarantee that we won't introduce bugs and debt to this mode. At your discretion, evaluate the need to create dark mode patches.
+
+### States
+
+Check states using your browser's _styles inspector_ to toggle CSS pseudo-classes
+like `:hover` and others ([Chrome](https://developer.chrome.com/docs/devtools/css/reference/#pseudo-class),
+[Firefox](https://developer.mozilla.org/en-US/docs/Tools/Page_Inspector/How_to/Examine_and_edit_CSS#viewing_common_pseudo-classes)).
+
+- Account for all applicable states ([error](https://design.gitlab.com/content/error-messages),
+ rest, loading, focus, hover, selected, disabled).
+- Account for states dependent on data size ([empty](https://design.gitlab.com/regions/empty-states),
+ some data, and lots of data).
+- Account for states dependent on user role, user preferences, and subscription.
+- Consider animations and transitions, and follow their [guidelines](https://design.gitlab.com/product-foundations/motion/).
+
+### Responsive
+
+Check responsive behavior using your browser's _responsive mode_ ([Chrome](https://developer.chrome.com/docs/devtools/device-mode/#viewport),
+[Firefox](https://developer.mozilla.org/en-US/docs/Tools/Responsive_Design_Mode)).
+
+- Account for resizing, collapsing, moving, or wrapping of elements across
+ all breakpoints (even if larger viewports are prioritized).
+- Provide the same information and actions in all breakpoints.
+
+### Accessibility
+
+Check accessibility using your browser's _accessibility inspector_ ([Chrome](https://developer.chrome.com/docs/devtools/accessibility/reference/),
+[Firefox](https://developer.mozilla.org/en-US/docs/Tools/Accessibility_inspector#accessing_the_accessibility_inspector)).
+
+- Conform to level AA of the World Wide Web Consortium (W3C) [Web Content Accessibility Guidelines 2.1](https://www.w3.org/TR/WCAG21/),
+ according to our [statement of compliance](https://design.gitlab.com/accessibility/a11y/).
+- Follow accessibility [best practices](https://design.gitlab.com/accessibility/best-practices/)
+ and [checklist](../fe_guide/accessibility.md#quick-checklist).
+
+### Handoff
+
+When the design is ready, _before_ starting its implementation:
+
+- Share design specifications in the related issue, preferably through a [Figma link](https://help.figma.com/hc/en-us/articles/360040531773-Share-Files-with-anyone-using-Link-Sharing#Copy_links)
+  or the [GitLab Designs feature](../../user/project/issues/design_management.md#the-design-management-section).
+ See [when you should use each tool](https://about.gitlab.com/handbook/engineering/ux/product-designer/#deliver).
+- Document user flow and states (for example, using [Mermaid flowcharts in Markdown](../../user/markdown.md#mermaid)).
+- Document animations and transitions.
+- Document responsive behaviors.
+- Document non-evident behaviors (for example, field is auto-focused).
+- Document accessibility behaviors (for example, using [accessibility annotations in Figma](https://www.figma.com/file/g7QtDbfxF3pCdWiyskIr0X/Accessibility-bluelines)).
+- Contribute new icons or illustrations to the [GitLab SVGs](https://gitlab.com/gitlab-org/gitlab-svgs)
+ project.
+
+### Follow-ups
+
+At any moment, but usually _during_ or _after_ the design's implementation:
+
+- Contribute [issues to Pajamas](https://design.gitlab.com/get-started/contribute#contribute-an-issue)
+ for additions or enhancements to the design system.
+- Create issues with the [`~UX debt`](issue_workflow.md#technical-and-ux-debt)
+ label for intentional deviations from the agreed-upon UX requirements due to
+ time or feasibility challenges, linking back to the corresponding issue(s) or
+ MR(s).
+- Create issues for [feature additions or enhancements](issue_workflow.md#feature-proposals)
+ outside the agreed-upon UX requirements to avoid scope creep.
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index 29f6eb57160..d0f107ba98a 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -151,7 +151,7 @@ From the handbook's
page:
> Categories are high-level capabilities that may be a standalone product at
-another company. e.g. Portfolio Management.
+another company, such as Portfolio Management.
It's highly recommended to add a category label, as it's used by our triage
automation to
@@ -182,7 +182,7 @@ From the handbook's
[Product stages, groups, and categories](https://about.gitlab.com/handbook/product/categories/#hierarchy)
page:
-> Features: Small, discrete functionalities. e.g. Issue weights. Some common
+> Features: Small, discrete functionalities, for example Issue weights. Some common
features are listed within parentheses to facilitate finding responsible PMs by keyword.
It's highly recommended to add a feature label if no category label applies, as
@@ -303,7 +303,7 @@ We automatically add the ~"Accepting merge requests" label to issues
that match the [triage policy](https://about.gitlab.com/handbook/engineering/quality/triage-operations/#accepting-merge-requests).
We recommend people that have never contributed to any open source project to
-look for issues labeled `~"Accepting merge requests"` with a [weight of 1](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight&weight=1) or the `~"Good for new contributors"` [label](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&state=opened&label_name[]=good%20for%20new%20contributors&assignee_id=None) attached to it.
+look for issues labeled `~"Accepting merge requests"` with a [weight of 1](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight&weight=1) or the `~"good for new contributors"` [label](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&state=opened&label_name[]=good%20for%20new%20contributors&assignee_id=None) attached to it.
More experienced contributors are very welcome to tackle
[any of them](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None).
@@ -342,19 +342,22 @@ To create a feature proposal, open an issue on the
[issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
In order to help track the feature proposals, we have created a
-[`feature`](https://gitlab.com/gitlab-org/gitlab/-/issues?label_name=feature) label. For the time being, users that are not members
-of the project cannot add labels. You can instead ask one of the [core team](https://about.gitlab.com/community/core-team/)
-members to add the label ~feature to the issue or add the following
+[`feature`](https://gitlab.com/gitlab-org/gitlab/-/issues?label_name=feature) label.
+For the time being, users that are not members of the project cannot add labels.
+You can instead ask one of the [core team](https://about.gitlab.com/community/core-team/)
+members to add the label `~feature` to the issue or add the following
code snippet right after your description in a new line: `~feature`.
Please keep feature proposals as small and simple as possible, complex ones
might be edited to make them small and simple.
-Please submit Feature Proposals using the ['Feature Proposal' issue template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20proposal%20-%20detailed.md) provided on the issue tracker.
+Please submit feature proposals using the ['Feature Proposal' issue template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20proposal%20-%20detailed.md) provided on the issue tracker.
-For changes in the interface, it is helpful to include a mockup. Issues that add to, or change, the interface should
-be given the ~"UX" label. This will allow the UX team to provide input and guidance. You may
-need to ask one of the [core team](https://about.gitlab.com/community/core-team/) members to add the label, if you do not have permissions to do it by yourself.
+For changes to the user interface (UI), follow our [design and UI guidelines](design.md),
+and include a visual example (screenshot, wireframe, or mockup). Such issues should
+be given the `~"UX"` label for the Product Design team to provide input and guidance.
+You may need to ask one of the [core team](https://about.gitlab.com/community/core-team/)
+members to add the label, if you do not have permissions to do it by yourself.
If you want to create something yourself, consider opening an issue first to
discuss whether it is interesting to include this in GitLab.
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index 25561764bd6..a521d89db2b 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -18,8 +18,8 @@ in order to ensure the work is finished before the release date.
If you want to add a new feature that is not labeled, it is best to first create
an issue (if there isn't one already) and leave a comment asking for it
-to be marked as `Accepting Merge Requests`. Please include screenshots or
-wireframes of the proposed feature if it will also change the UI.
+to be marked as `Accepting merge requests`. See the [feature proposals](issue_workflow.md#feature-proposals)
+section.
Merge requests should be submitted to the appropriate project at GitLab.com, for example
[GitLab](https://gitlab.com/gitlab-org/gitlab/-/merge_requests),
@@ -255,14 +255,14 @@ requirements.
1. The change is tested in a review app where possible and if appropriate.
1. The new feature does not degrade the user experience of the product.
1. The change is evaluated to [limit the impact of far-reaching work](https://about.gitlab.com/handbook/engineering/development/#reducing-the-impact-of-far-reaching-work).
-1. An agreed-upon rollout plan.
+1. An agreed-upon [rollout plan](https://about.gitlab.com/handbook/engineering/development/processes/rollout-plans/).
1. Merged by a project maintainer.
### Production use
1. Confirmed to be working in staging before implementing the change in production, where possible.
1. Confirmed to be working in the production with no new [Sentry](https://about.gitlab.com/handbook/engineering/#sentry) errors after the contribution is deployed.
-1. Confirmed that the rollout plan has been completed.
+1. Confirmed that the [rollout plan](https://about.gitlab.com/handbook/engineering/development/processes/rollout-plans) has been completed.
1. If there is a performance risk in the change, I have analyzed the performance of the system before and after the change.
1. *If the merge request uses feature flags, per-project or per-group enablement, and a staged rollout:*
- Confirmed to be working on GitLab projects.
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index d9b922cb60e..aca37e2182a 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -141,7 +141,7 @@ at GitLab so far:
- Their availability:
- No "OOO"/"PTO"/"Parental Leave" in their GitLab or Slack status.
- No `:red_circle:`/`:palm_tree:`/`:beach:`/`:beach_umbrella:`/`:beach_with_umbrella:` emojis in GitLab or Slack status.
- - (Experimental) Their timezone: people for which the local hour is between
+ - (Experimental) Their time zone: people for which the local hour is between
6 AM and 2 PM are eligible to be picked. This is to ensure they have a good
chance to get to perform a review during their current work day. The experimentation is tracked in
[this issue](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/563)
diff --git a/doc/development/database/database_migration_pipeline.md b/doc/development/database/database_migration_pipeline.md
index 5a8ce89a362..ce7e1801abc 100644
--- a/doc/development/database/database_migration_pipeline.md
+++ b/doc/development/database/database_migration_pipeline.md
@@ -50,6 +50,6 @@ Some additional information is included at the bottom of the comment:
| Result | Description |
|----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Migrations pending on GitLab.com | A summary of migrations not deployed yet to GitLab.com. This info is useful when testing a migration that was merged but not deployed yet. |
+| Migrations pending on GitLab.com | A summary of migrations not deployed yet to GitLab.com. This information is useful when testing a migration that was merged but not deployed yet. |
| Clone details | A link to the `Postgres.ai` thin clone created for this testing pipeline, along with information about its expiry. This can be used to further explore the results of running the migration. Only accessible by database maintainers or with an access request. |
| Artifacts | A link to the pipeline's artifacts. Full query logs for each migration (ending in `.log`) are available there and only accessible by database maintainers or with an access request. |
diff --git a/doc/development/database/database_reviewer_guidelines.md b/doc/development/database/database_reviewer_guidelines.md
index 59653c6dde3..bc18e606f21 100644
--- a/doc/development/database/database_reviewer_guidelines.md
+++ b/doc/development/database/database_reviewer_guidelines.md
@@ -71,6 +71,7 @@ topics and use cases. The most frequently required during database reviewing are
- [Migrations style guide](../migration_style_guide.md) for creating safe SQL migrations.
- [Avoiding downtime in migrations](../avoiding_downtime_in_migrations.md).
- [SQL guidelines](../sql.md) for working with SQL queries.
+- [Guidelines for JiHu contributions with database migrations](https://about.gitlab.com/handbook/ceo/chief-of-staff-team/jihu-support/jihu-database-change-process.html)
## How to apply to become a database maintainer
diff --git a/doc/development/database/efficient_in_operator_queries.md b/doc/development/database/efficient_in_operator_queries.md
index bc72bce30bf..1e706890f64 100644
--- a/doc/development/database/efficient_in_operator_queries.md
+++ b/doc/development/database/efficient_in_operator_queries.md
@@ -66,9 +66,9 @@ The execution of the query can be largely broken down into three steps:
1. The database sorts the `issues` rows in memory by `created_at` and returns `LIMIT 20` rows to
the end-user. For large groups, this final step requires both large memory and CPU resources.
-<details>
-<summary>Expand this sentence to see the execution plan for this DB query.</summary>
-<pre><code>
+Execution plan for this DB query:
+
+```sql
Limit (cost=90170.07..90170.12 rows=20 width=1329) (actual time=967.597..967.607 rows=20 loops=1)
Buffers: shared hit=239127 read=3060
I/O Timings: read=336.879
@@ -106,8 +106,7 @@ The execution of the query can be largely broken down into three steps:
Planning Time: 7.750 ms
Execution Time: 967.973 ms
(36 rows)
-</code></pre>
-</details>
+```
The performance of the query depends on the number of rows in the database.
On average, we can say the following:
@@ -226,7 +225,12 @@ Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
- `finder_query` loads the actual record row from the database. It must also be a lambda, where
the order by column expressions is available for locating the record. In this example, the
yielded values are `created_at` and `id` SQL expressions. Finding a record is very fast via the
- primary key, so we don't use the `created_at` value.
+ primary key, so we don't use the `created_at` value. Providing the `finder_query` lambda is optional.
+ If it's not given, the IN operator optimization will only make the ORDER BY columns available to
+ the end-user and not the full database row.
+
The following database index on the `issues` table must be present
to make the query execute efficiently:
@@ -235,9 +239,9 @@ to make the query execute efficiently:
"idx_issues_on_project_id_and_created_at_and_id" btree (project_id, created_at, id)
```
-<details>
-<summary>Expand this sentence to see the SQL query.</summary>
-<pre><code>
+The SQL query:
+
+```sql
SELECT "issues".*
FROM
(WITH RECURSIVE "array_cte" AS MATERIALIZED
@@ -348,8 +352,7 @@ SELECT (records).*
FROM "recursive_keyset_cte" AS "issues"
WHERE (COUNT <> 0)) issues -- filtering out the initializer row
LIMIT 20
-</code></pre>
-</details>
+```
### Using the `IN` query optimization
@@ -461,9 +464,9 @@ Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
).execute.limit(20)
```
-<details>
-<summary>Expand this sentence to see the SQL query.</summary>
-<pre><code>
+The SQL query:
+
+```sql
SELECT "issues".*
FROM
(WITH RECURSIVE "array_cte" AS MATERIALIZED
@@ -581,9 +584,7 @@ FROM
FROM "recursive_keyset_cte" AS "issues"
WHERE (COUNT <> 0)) issues
LIMIT 20
-</code>
-</pre>
-</details>
+```
NOTE:
To make the query efficient, the following columns need to be covered with an index: `project_id`, `issue_type`, `created_at`, and `id`.
@@ -611,6 +612,32 @@ Gitlab::Pagination::Keyset::Iterator.new(scope: scope, **opts).each_batch(of: 10
end
```
+NOTE:
+The query loads complete database rows from the disk. This may cause increased I/O and slower
+database queries. Depending on the use case, the primary key is often only
+needed for the batch query to invoke additional statements, for example `UPDATE` or `DELETE`. The
+`id` column is included in the `ORDER BY` columns (`created_at` and `id`) and is already
+loaded. In this case, you can omit the `finder_query` parameter.
+
+Example for loading the `ORDER BY` columns only:
+
+```ruby
+scope = Issue.order(:created_at, :id)
+array_scope = Group.find(9970).all_projects.select(:id)
+array_mapping_scope = -> (id_expression) { Issue.where(Issue.arel_table[:project_id].eq(id_expression)) }
+
+opts = {
+ in_operator_optimization_options: {
+ array_scope: array_scope,
+ array_mapping_scope: array_mapping_scope
+ }
+}
+
+Gitlab::Pagination::Keyset::Iterator.new(scope: scope, **opts).each_batch(of: 100) do |records|
+ puts records.select(:id).map { |r| [r.id] } # only id and created_at are available
+end
+```
+
#### Keyset pagination
The optimization works out of the box with GraphQL and the `keyset_paginate` helper method.
diff --git a/doc/development/database/keyset_pagination.md b/doc/development/database/keyset_pagination.md
index fd62c36b753..4f0b353a37f 100644
--- a/doc/development/database/keyset_pagination.md
+++ b/doc/development/database/keyset_pagination.md
@@ -169,7 +169,7 @@ Consider the following scope:
scope = Issue.where(project_id: 10).order(Gitlab::Database.nulls_last_order('relative_position', 'DESC'))
# SELECT "issues".* FROM "issues" WHERE "issues"."project_id" = 10 ORDER BY relative_position DESC NULLS LAST
-scope.keyset_paginate # raises: Gitlab::Pagination::Keyset::Paginator::UnsupportedScopeOrder: The order on the scope does not support keyset pagination
+scope.keyset_paginate # raises: Gitlab::Pagination::Keyset::UnsupportedScopeOrder: The order on the scope does not support keyset pagination
```
The `keyset_paginate` method raises an error because the order value on the query is a custom SQL string and not an [`Arel`](https://www.rubydoc.info/gems/arel) AST node. The keyset library cannot automatically infer configuration values from these kinds of queries.
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 0fd9f821fab..0ba752ba3a6 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -6,16 +6,14 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Multiple Databases
-In order to scale GitLab, the GitLab application database
-will be [decomposed into multiple
-databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
+To scale GitLab, we are
+[decomposing the GitLab application database into multiple databases](https://gitlab.com/groups/gitlab-org/-/epics/6168).
-## CI Database
+## CI/CD Database
-Support for configuring the GitLab Rails application to use a distinct
-database for CI tables was added in [GitLab
-14.1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64289). This
-feature is still under development, and is not ready for production use.
+> Support for configuring the GitLab Rails application to use a distinct
+> database for CI/CD tables was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64289)
+> in GitLab 14.1. This feature is still under development, and is not ready for production use.
By default, GitLab is configured to use only one main database. To
opt-in to use a main database, and CI database, modify the
@@ -92,8 +90,8 @@ test: &test
### Migrations
-Any migrations that affect `Ci::CiDatabaseRecord` models
-and their tables must be placed in two directories for now:
+Place any migrations that affect `Ci::CiDatabaseRecord` models
+and their tables in two directories:
- `db/migrate`
- `db/ci_migrate`
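+
+For example, a migration that adds a column to a CI table would be placed, with the same contents,
+under both directories. This is only a sketch; the file name, class, and column below are illustrative:
+
+```ruby
+# db/migrate/20211001000000_add_example_flag_to_ci_builds.rb
+# db/ci_migrate/20211001000000_add_example_flag_to_ci_builds.rb
+class AddExampleFlagToCiBuilds < Gitlab::Database::Migration[1.0]
+  def change
+    # Illustrative column only: any change to a CI table follows the same placement rule.
+    add_column :ci_builds, :example_flag, :boolean
+  end
+end
+```
+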
@@ -394,7 +392,8 @@ You can see a real example of using this method for fixing a cross-join in
#### Allowlist for existing cross-joins
A cross-join across databases can be explicitly allowed by wrapping the code in the
-`::Gitlab::Database.allow_cross_joins_across_databases` helper method.
+`::Gitlab::Database.allow_cross_joins_across_databases` helper method. Alternatively,
+you can mark a given relation with `relation.allow_cross_joins_across_databases`.
This method should only be used:
@@ -405,16 +404,113 @@ This method should only be used:
The `allow_cross_joins_across_databases` helper method can be used as follows:
```ruby
+# Allow the queries executed inside the block to cross-join databases
::Gitlab::Database.allow_cross_joins_across_databases(url: 'https://gitlab.com/gitlab-org/gitlab/-/issues/336590') do
subject.perform(1, 4)
end
```
+
+```ruby
+# Mark a relation as allowed to cross-join databases
+def find_actual_head_pipeline
+ all_pipelines
+ .allow_cross_joins_across_databases(url: 'https://gitlab.com/gitlab-org/gitlab/-/issues/336891')
+ .for_sha_or_source_sha(diff_head_sha)
+ .first
+end
+```
+
The `url` parameter should point to an issue with a milestone for when we intend
to fix the cross-join. If the cross-join is being used in a migration, we do not
need to fix the code. See <https://gitlab.com/gitlab-org/gitlab/-/issues/340017>
for more details.
+### Removing cross-database transactions
+
+When dealing with multiple databases, it's important to pay close attention to data modification
+that affects more than one database.
+[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/339811) in GitLab 14.4, an automated check
+prevents cross-database modifications.
+
+When at least two different databases are modified during a transaction initiated on any database
+server, the application triggers a cross-database modification error (in the test environment only).
+
+Example:
+
+```ruby
+# Open transaction on Main DB
+ApplicationRecord.transaction do
+ ci_build.update!(updated_at: Time.current) # UPDATE on CI DB
+ ci_build.project.update!(updated_at: Time.current) # UPDATE on Main DB
+end
+# raises error: Cross-database data modification of 'main, ci' were detected within
+# a transaction modifying the 'ci_build, projects' tables
+```
+
+The code example above updates the timestamps of two records within a transaction. With the
+ongoing work on the CI database decomposition, we cannot guarantee the usual semantics of a
+database transaction.
+If the second update query fails, the first update query will not be
+rolled back because the `ci_build` record is located on a different database server. For
+more information, look at the
+[transaction guidelines](transaction_guidelines.md#dangerous-example-third-party-api-calls)
+page.
+
+#### Fixing cross-database errors
+
+##### Removing the transaction block
+
+Without an open transaction, the cross-database modification check cannot raise an error.
+By making this change, we sacrifice consistency. In case of an application failure after the
+first `UPDATE` query, the second `UPDATE` query will never execute.
+
+The same code without the `transaction` block:
+
+```ruby
+ci_build.update!(updated_at: Time.current) # CI DB
+ci_build.project.update!(updated_at: Time.current) # Main DB
+```
+
+##### Async processing
+
+If we need a stronger guarantee that an operation finishes its work consistently, we can execute it
+within a background job. A background job is scheduled asynchronously and retried several times
+in case of an error. There is still a very small chance of introducing inconsistency.
+
+Example:
+
+```ruby
+current_time = Time.current
+
+MyAsyncConsistencyJob.perform_async(ci_build.id)
+
+ci_build.update!(updated_at: current_time)
+ci_build.project.update!(updated_at: current_time)
+```
+
+The `MyAsyncConsistencyJob` would also attempt to update the timestamps if they differ.
+
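+A minimal sketch of such a job, assuming a standard Sidekiq worker; the class body is illustrative
+and not defined by this page:
+
+```ruby
+class MyAsyncConsistencyJob
+  include ApplicationWorker
+
+  # Sidekiq retries the job on failure, so the two timestamps eventually converge
+  # even when one of the UPDATE statements above did not run.
+  def perform(ci_build_id)
+    build = Ci::Build.find_by_id(ci_build_id)
+    return unless build
+    return if build.updated_at == build.project.updated_at
+
+    build.project.update!(updated_at: build.updated_at)
+  end
+end
+```
+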
+##### Aiming for perfect consistency
+
+At this point, we don't have the tooling (we might not even need it) to ensure similar consistency
+characteristics as we had with one database. If you think that the code you're working on requires
+these properties, then you can disable the cross-database modification check by wrapping the
+offending database queries with a block and creating a follow-up issue mentioning the sharding group
+(`gitlab-org/sharding-group`).
+
+```ruby
+Gitlab::Database.allow_cross_joins_across_databases(url: 'gitlab issue URL') do
+ ApplicationRecord.transaction do
+ ci_build.update!(updated_at: Time.current) # UPDATE on CI DB
+ ci_build.project.update!(updated_at: Time.current) # UPDATE on Main DB
+ end
+end
+```
+
+Don't hesitate to reach out to the
+[sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding/)
+for advice.
+
## `config/database.yml`
GitLab will support running multiple databases in the future, for example to [separate tables for the continuous integration features](https://gitlab.com/groups/gitlab-org/-/epics/6167) from the main database. In order to prepare for this change, we [validate the structure of the configuration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67877) in `database.yml` to ensure that only known databases are used.
diff --git a/doc/development/database/transaction_guidelines.md b/doc/development/database/transaction_guidelines.md
index 4c586135015..2806bd217db 100644
--- a/doc/development/database/transaction_guidelines.md
+++ b/doc/development/database/transaction_guidelines.md
@@ -8,17 +8,21 @@ info: To determine the technical writer assigned to the Stage/Group associated w
This document gives a few examples of the usage of database transactions in application code.
-For further reference please check PostgreSQL documentation about [transactions](https://www.postgresql.org/docs/current/tutorial-transactions.html).
+For further reference, check the PostgreSQL documentation about [transactions](https://www.postgresql.org/docs/current/tutorial-transactions.html).
## Database decomposition and sharding
-The [sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding/) plans to split the main GitLab database and move some of the database tables to other database servers.
+The [sharding group](https://about.gitlab.com/handbook/engineering/development/enablement/sharding/) plans
+to split the main GitLab database and move some of the database tables to other database servers.
-The group will start decomposing the `ci_*` related database tables first. To maintain the current application development experience, tooling and static analyzers will be added to the codebase to ensure correct data access and data modification methods. By using the correct form for defining database transactions, we can save significant refactoring work in the future.
+We'll start decomposing the `ci_*`-related database tables first. To maintain the current application
+development experience, we'll add tooling and static analyzers to the codebase to ensure correct
+data access and data modification methods. By using the correct form for defining database transactions,
+we can save significant refactoring work in the future.
## The transaction block
-The `ActiveRecord` library provides a convenient way to group database statements into a transaction.
+The `ActiveRecord` library provides a convenient way to group database statements into a transaction:
```ruby
issue = Issue.find(10)
@@ -30,16 +34,19 @@ ApplicationRecord.transaction do
end
```
-This transaction involves two database tables, in case of an error, each `UPDATE` statement will be rolled back to the previous, consistent state.
+This transaction involves two database tables. In case of an error, each `UPDATE`
+statement rolls back to the previous consistent state.
NOTE:
Avoid referencing the `ActiveRecord::Base` class and use `ApplicationRecord` instead.
## Transaction and database locks
-When a transaction block is opened, the database will try to acquire the necessary locks on the resources. The type of locks will depend on the actual database statements.
+When a transaction block is opened, the database tries to acquire the necessary
+locks on the resources. The type of locks depends on the actual database statements.
-Consider a concurrent update scenario where the following code is executed at the same time from two different processes:
+Consider a concurrent update scenario where the following code is executed at the
+same time from two different processes:
```ruby
issue = Issue.find(10)
@@ -51,15 +58,22 @@ ApplicationRecord.transaction do
end
```
-The database will try to acquire the `FOR UPDATE` lock for the referenced `issue` and `project` records. In our case, we have two competing transactions for these locks, one of them will successfully acquire them. The other transaction will have to wait in the lock queue until the first transaction finishes. The execution of the second transaction is blocked at this point.
+The database tries to acquire the `FOR UPDATE` lock for the referenced `issue` and
+`project` records. In our case, we have two competing transactions for these locks,
+and only one of them will successfully acquire them. The other transaction will have
+to wait in the lock queue until the first transaction finishes. The execution of the
+second transaction is blocked at this point.
## Transaction speed
-To prevent lock contention and maintain stable application performance, the transaction block should finish as fast as possible. When a transaction acquires locks, it will hold on to them until the transaction finishes.
+To prevent lock contention and maintain stable application performance, the transaction
+block should finish as fast as possible. When a transaction acquires locks, it holds
+on to them until the transaction finishes.
-Apart from application performance, long-running transactions can also affect the application upgrade processes by blocking database migrations.
+Apart from application performance, long-running transactions can also affect application
+upgrade processes by blocking database migrations.
-### Dangerous example: 3rd party API calls
+### Dangerous example: third-party API calls
Consider the following example:
@@ -73,20 +87,29 @@ Member.transaction do
end
```
-Here, we ensure that the `notification_email_sent` column is updated only when the `send_notification_email` method succeeds. The `send_notification_email` method executes a network request to an email sending service. If the underlying infrastructure does not specify timeouts or the network call takes too long time, the database transaction will stay open.
+Here, we ensure that the `notification_email_sent` column is updated only when the
+`send_notification_email` method succeeds. The `send_notification_email` method
+executes a network request to an email sending service. If the underlying infrastructure
+does not specify timeouts or the network call takes too long, the database transaction
+stays open.
Ideally, a transaction should only contain database statements.
Avoid doing in a `transaction` block:
-- External network requests such as: triggering Sidekiq jobs, sending emails, HTTP API calls and running database statements using a different connection.
+- External network requests such as:
+ - Triggering Sidekiq jobs.
+ - Sending emails.
+ - HTTP API calls.
+ - Running database statements using a different connection.
- File system operations.
- Long, CPU intensive computation.
- Calling `sleep(n)`.
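+
+For the earlier email example, one way to keep only database statements inside the transaction is
+to move the network call after the transaction commits. This is a sketch reusing the method names
+from the example above; the record lookup is illustrative:
+
+```ruby
+member = Member.find(5)
+
+Member.transaction do
+  member.update!(notification_email_sent: true) # database statement only
+end
+
+# The network request runs outside the transaction, so a slow email service
+# can no longer keep the database locks and the transaction open.
+member.send_notification_email
+```
+
+With this ordering the column is updated even if the email later fails, so pick the ordering, or an
+asynchronous job, that matches the consistency your feature needs.
+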
## Explicit model referencing
-If a transaction modifies records from the same database table, it's advised to use the `Model.transaction` block:
+If a transaction modifies records from the same database table, we advise using the
+`Model.transaction` block:
```ruby
build_1 = Ci::Build.find(1)
@@ -98,7 +121,8 @@ Ci::Build.transaction do
end
```
-The transaction above will use the same database connection for the transaction as the models in the `transaction` block. In a multi-database environment the following example would be dangerous:
+The transaction above uses the same database connection for the transaction as the models
+in the `transaction` block. In a multi-database environment, the following example is dangerous:
```ruby
# `ci_builds` table is located on another database
@@ -114,4 +138,6 @@ ActiveRecord::Base.transaction do
end
```
-The `ActiveRecord::Base` class uses a different database connection than the `Ci::Build` records. The two statements in the transaction block will not be part of the transaction and will not be rolled back in case something goes wrong. They act as 3rd part calls.
+The `ActiveRecord::Base` class uses a different database connection than the `Ci::Build` records.
+The two statements in the transaction block will not be part of the transaction and will not be
+rolled back in case something goes wrong. They act as third-party calls.
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index b1c8508c884..7c17a39746e 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -72,8 +72,8 @@ bundle exec rails db -e development
## Access the database with a GUI
-Most GUIs (DataGrid, RubyMine, DBeaver) require a TCP connection to the database, but by default
-the database runs on a UNIX socket. To be able to access the database from these tools, some steps
+Most GUIs (DataGrid, RubyMine, DBeaver) require a TCP connection to the database, but by default
+the database runs on a UNIX socket. To be able to access the database from these tools, some steps
are needed:
1. On the GDK root directory, run:
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index 42bfa656a61..dcd5baab177 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -38,13 +38,24 @@ migration only.
### Required
-The following artifacts are required prior to submitting for a ~database review.
+You must provide the following artifacts when you request a ~database review.
If your merge request description does not include these items, the review will be reassigned back to the author.
+#### Migrations
+
If new migrations are introduced, in the MR **you are required to provide**:
- The output of both migrating (`db:migrate`) and rolling back (`db:rollback`) for all migrations.
+We have automated tooling for
+[GitLab](https://gitlab.com/gitlab-org/gitlab) (provided by the
+`db:check-migrations` pipeline job) that provides this output for migrations on
+~database merge requests. You do not need to provide this information manually
+if the bot can do it for you. The bot also checks that migrations are correctly
+reversible.
+
+#### Queries
+
If new queries have been introduced or existing queries have been updated, **you are required to provide**:
- [Query plans](#query-plans) for each raw SQL query included in the merge request along with the link to the query plan following each raw SQL snippet.
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index f8184a562ec..1e85abf585c 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -57,7 +57,7 @@ on non-Go GitLab subsystems.
## Enabling distributed tracing
GitLab uses the `GITLAB_TRACING` environment variable to configure distributed tracing. The same
-configuration is used for all components (e.g., Workhorse, Rails, etc).
+configuration is used for all components (for example, Workhorse and Rails).
When `GITLAB_TRACING` is not set, the application isn't instrumented, meaning that there is
no overhead at all.
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index 5a4d365ed20..c169af1958e 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -37,14 +37,14 @@ FLAG:
| If the feature is... | Use this text |
|--------------------------|---------------|
-| Available | `On self-managed GitLab, by default this feature is available. To hide the feature, ask an administrator to [disable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Unavailable | `On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Available, per-group | `On self-managed GitLab, by default this feature is available. To hide the feature per group, ask an administrator to [disable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Unavailable, per-group | `On self-managed GitLab, by default this feature is not available. To make it available per group, ask an administrator to [enable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Available, per-project | `On self-managed GitLab, by default this feature is available. To hide the feature per project or for your entire instance, ask an administrator to [disable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Unavailable, per-project | `On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, ask an administrator to [enable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Available, per-user | `On self-managed GitLab, by default this feature is available. To hide the feature per user, ask an administrator to [disable the <flag name> flag](<path to>/administration/feature_flags.md).` |
-| Unavailable, per-user | `On self-managed GitLab, by default this feature is not available. To make it available per user, ask an administrator to [enable the <flag name> flag](<path to>/administration/feature_flags.md).` |
+| Available | `On self-managed GitLab, by default this feature is available. To hide the feature, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Unavailable | `On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Available, per-group | `On self-managed GitLab, by default this feature is available. To hide the feature per group, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Unavailable, per-group | `On self-managed GitLab, by default this feature is not available. To make it available per group, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Available, per-project | `On self-managed GitLab, by default this feature is available. To hide the feature per project or for your entire instance, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Unavailable, per-project | `On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Available, per-user | `On self-managed GitLab, by default this feature is available. To hide the feature per user, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Unavailable, per-user | `On self-managed GitLab, by default this feature is not available. To make it available per user, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
### GitLab.com availability information
@@ -67,7 +67,7 @@ When the state of a flag changes (for example, disabled by default to enabled by
Possible version history entries are:
```markdown
-> - [Introduced](issue-link) in GitLab X.X. [Deployed behind the <flag name> flag](../../administration/feature_flags.md), disabled by default.
+> - [Introduced](issue-link) in GitLab X.X [with a flag](../../administration/feature_flags.md) named <flag name>. Disabled by default.
> - [Enabled on GitLab.com](issue-link) in GitLab X.X.
> - [Enabled on GitLab.com](issue-link) in GitLab X.X. Available to GitLab.com administrators only.
> - [Enabled on self-managed](issue-link) in GitLab X.X.
@@ -80,31 +80,31 @@ Possible version history entries are:
The following examples show the progression of a feature flag.
```markdown
-> Introduced in GitLab 13.7. [Deployed behind the `forti_token_cloud` flag](../../administration/feature_flags.md), disabled by default.
+> Introduced in GitLab 13.7 [with a flag](../../administration/feature_flags.md) named `forti_token_cloud`. Disabled by default.
FLAG:
On self-managed GitLab, by default this feature is not available. To make it available,
-ask an administrator to [enable the `forti_token_cloud` flag](../administration/feature_flags.md).`
+ask an administrator to [enable the feature flag](../administration/feature_flags.md) named `forti_token_cloud`.
The feature is not ready for production use.
```
When the feature is enabled in production, you can update the version history:
```markdown
-> - Introduced in GitLab 13.7. [Deployed behind the `forti_token_cloud` flag](../../administration/feature_flags.md), disabled by default.
+> - Introduced in GitLab 13.7 [with a flag](../../administration/feature_flags.md) named `forti_token_cloud`. Disabled by default.
> - [Enabled on self-managed](https://gitlab.com/issue/etc) GitLab 13.8.
FLAG:
On self-managed GitLab, by default this feature is available. To hide the feature per user,
-ask an administrator to [disable the `forti_token_cloud` flag](../administration/feature_flags.md).
+ask an administrator to [disable the feature flag](../administration/feature_flags.md) named `forti_token_cloud`.
```
And, when the feature is done and fully available to all users:
```markdown
-> - Introduced in GitLab 13.7. [Deployed behind the `forti_token_cloud` flag](../../administration/feature_flags.md), disabled by default.
+> - Introduced in GitLab 13.7 [with a flag](../../administration/feature_flags.md) named `forti_token_cloud`. Disabled by default.
> - [Enabled on self-managed](https://gitlab.com/issue/etc) GitLab 13.8.
> - [Enabled on GitLab.com](https://gitlab.com/issue/etc) in GitLab 13.9.
-> - [Feature flag `forti_token_cloud`](https://gitlab.com/issue/etc) removed in GitLab 14.0.
+> - [Feature flag removed](https://gitlab.com/issue/etc) in GitLab 14.0.
> - [Generally available](issue-link) in GitLab 14.0.
```
diff --git a/doc/development/documentation/img/manual_build_docs.png b/doc/development/documentation/img/manual_build_docs_v14_3.png
index e366a2f7ec4..e366a2f7ec4 100644
--- a/doc/development/documentation/img/manual_build_docs.png
+++ b/doc/development/documentation/img/manual_build_docs_v14_3.png
Binary files differ
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index a597ea512c6..75538fe1fe7 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -108,23 +108,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
If you need help determining the correct stage, read [Ask for help](workflow.md#ask-for-help).
-### Document type metadata
-
-Originally discussed in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/1280),
-each page should have a metadata tag called `type`. It can be one or more of the
-following:
-
-- `index`: It consists mostly of a list of links to other pages.
- [Example page](../../index.md).
-- `concepts`: The background or context of a subject.
- [Example page](../../topics/autodevops/index.md).
-- `howto`: Specific use case instructions.
- [Example page](../../ssh/index.md).
-- `tutorial`: Learn a process/concept by doing.
- [Example page](../../gitlab-basics/start-using-git.md).
-- `reference`: A collection of information used as a reference to use a feature
- or a functionality. [Example page](../../ci/yaml/index.md).
-
### Redirection metadata
The following metadata should be added when a page is moved to another location:
@@ -154,7 +137,12 @@ Each page can have additional, optional metadata (set in the
[default.html](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/fc3577921343173d589dfa43d837b4307e4e620f/layouts/default.html#L30-52)
Nanoc layout), which is displayed at the top of the page if defined.
-## Move or rename a page
+### Deprecated metadata
+
+The `type` metadata parameter is deprecated but still exists in documentation
+pages. You can safely remove the `type` metadata parameter and its values.
+
+## Move, rename, or delete a page
See [redirects](redirects.md).
@@ -214,8 +202,10 @@ with the following conventions:
The help text follows the [Pajamas guidelines](https://design.gitlab.com/usability/helping-users/#formatting-help-content).
-Use the following special cases depending on the context, ensuring all links
-are inside `_()` so they can be translated:
+#### Linking to `/help` in HAML
+
+Use the following special cases depending on the context, ensuring all link text
+is inside `_()` so it can be translated:
- Linking to a doc page. In its most basic form, the HAML code to generate a
link to the `/help` page is:
@@ -260,6 +250,27 @@ helpPagePath('user/permissions', { anchor: 'anchor-link' })
This is preferred over static paths, as the helper also works on instances installed under a [relative URL](../../install/relative_url.md).
+#### Linking to `/help` in Ruby
+
+To link to the documentation from within Ruby code, use the following code block as a guide, ensuring all link text is inside `_()` so it can
+be translated:
+
+```ruby
+docs_link = link_to _('Learn more.'), help_page_url('user/permissions', anchor: 'anchor-link'), target: '_blank', rel: 'noopener noreferrer'
+_('This is a text describing the option/feature in a sentence. %{docs_link}').html_safe % { docs_link: docs_link.html_safe }
+```
+
+In cases where you need to generate a link from outside of views or helpers, where the `link_to` and `help_page_url` methods are not available, use the following code block
+as a guide, with the methods fully qualified:
+
+```ruby
+docs_link = ActionController::Base.helpers.link_to _('Learn more.'), Rails.application.routes.url_helpers.help_page_url('user/permissions', anchor: 'anchor-link'), target: '_blank', rel: 'noopener noreferrer'
+_('This is a text describing the option/feature in a sentence. %{docs_link}').html_safe % { docs_link: docs_link.html_safe }
+```
+
+Do not use `include ActionView::Helpers::UrlHelper` just to make the `link_to` method available as you might see in some existing code. Read more in
+[issue 340567](https://gitlab.com/gitlab-org/gitlab/-/issues/340567).
+
### GitLab `/help` tests
Several [RSpec tests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/features/help_pages_spec.rb)
diff --git a/doc/development/documentation/redirects.md b/doc/development/documentation/redirects.md
index eb6878f5870..ef94c33a276 100644
--- a/doc/development/documentation/redirects.md
+++ b/doc/development/documentation/redirects.md
@@ -15,13 +15,26 @@ description: Learn how to contribute to GitLab Documentation.
# Redirects in GitLab documentation
-Moving or renaming a document is the same as changing its location. Be sure
-to assign a technical writer to any merge request that renames or moves a page.
-Technical Writers can help with any questions and can review your change.
+When you move, rename, or delete a page, you must add a redirect. Redirects reduce
+how often users get 404s when visiting the documentation site from out-of-date links, like:
+
+- Bookmarks
+- Links from external sites
+- Links from old blog posts
+- Links in the documentation site global navigation
+
+Add a redirect to ensure:
-When moving or renaming a page, you must redirect browsers to the new page.
-This ensures users find the new page, and have the opportunity to update their
-bookmarks.
+- Users see the new page and can update or delete their bookmark.
+- External sites can update their links, especially sites that have automation that
+  checks for redirecting links.
+- The documentation site global navigation does not link to a missing page.
+
+ The links in the global navigation are already tested in the `gitlab-docs` project.
+ If the redirect is missing, the `gitlab-docs` project's `main` branch might break.
+
+Be sure to assign a technical writer to any merge request that moves, renames, or deletes a page.
+Technical Writers can help with any questions and can review your change.
There are two types of redirects:
diff --git a/doc/development/documentation/review_apps.md b/doc/development/documentation/review_apps.md
index 2b8c412f165..a5094ea87f0 100644
--- a/doc/development/documentation/review_apps.md
+++ b/doc/development/documentation/review_apps.md
@@ -26,7 +26,7 @@ to render and preview the documentation locally.
If a merge request has documentation changes, use the `review-docs-deploy` manual job
to deploy the documentation review app for your merge request.
-![Manual trigger a documentation review app](img/manual_build_docs.png)
+![Manual trigger a documentation review app](img/manual_build_docs_v14_3.png)
The `review-docs-deploy*` job triggers a cross project pipeline and builds the
docs site with your changes. When the pipeline finishes, the review app URL
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index cd69154217c..d1736e10000 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -230,7 +230,7 @@ reports.
## Monthly release process (versions)
The docs website supports versions and each month we add the latest one to the list.
-For more information, read about the [monthly release process](https://about.gitlab.com/handbook/engineering/ux/technical-writing/workflow/#monthly-documentation-releases).
+For more information, read about the [monthly release process](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/releases.md).
## Review Apps for documentation merge requests
diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md
index a9b93997906..229dbb077fe 100644
--- a/doc/development/documentation/structure.md
+++ b/doc/development/documentation/structure.md
@@ -149,6 +149,8 @@ For the heading:
If you do not put the full error in the title, include it in the body text.
+If multiple causes or workarounds exist, consider putting them into a table format.
+
## Other types of content
There are other types of content in the GitLab documentation that don't
diff --git a/doc/development/documentation/styleguide/img/admin_access_level.png b/doc/development/documentation/styleguide/img/admin_access_level.png
new file mode 100644
index 00000000000..191ba78cd6c
--- /dev/null
+++ b/doc/development/documentation/styleguide/img/admin_access_level.png
Binary files differ
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 2cbecc91b20..72491ab3a33 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -1091,47 +1091,51 @@ However, they should be used sparingly because:
- They are difficult and expensive to localize.
- They cannot be read by screen readers.
-If you do include an image in the documentation, ensure it provides value.
-Don't use `lorem ipsum` text. Try to replicate how the feature would be
-used in a real-world scenario, and [use realistic text](#fake-user-information).
+When needed, use images to help the reader understand:
-### Capture the image
+- Where they are in a complicated process.
+- How they should interact with the application.
-Use images to help the reader understand where they are in a process, or how
-they need to interact with the application.
+### Capture the image
When you take screenshots:
-- **Capture the most relevant area of the page.** Don't include unnecessary white
- space or areas of the page that don't help illustrate the point. The left
- sidebar of the GitLab user interface can change, so don't include the sidebar
- if it's not necessary.
+- **Ensure it provides value.** Don't use `lorem ipsum` text.
+ Try to replicate how the feature would be used in a real-world scenario, and
+ [use realistic text](#fake-user-information).
+- **Capture only the relevant UI.** Don't include unnecessary white
+ space or areas of the UI that don't help illustrate the point. The
+ sidebars in GitLab can change, so don't include
+ them in screenshots unless absolutely necessary.
- **Keep it small.** If you don't need to show the full width of the screen, don't.
- A value of 1000 pixels is a good maximum width for your screenshot image.
+ Reduce the size of your browser window as much as possible to keep elements close
+ together and reduce empty space. Try to keep the screenshot dimensions as small as possible.
+- **Review how the image renders on the page.** Preview the image locally or use the
+  review app in the merge request. Make sure the image isn't blurry or overwhelming.
- **Be consistent.** Coordinate screenshots with the other screenshots already on
- a documentation page. For example, if other screenshots include the left
- sidebar, include the sidebar in all screenshots.
+ a documentation page for a consistent reading experience.
### Save the image
+- Resize any wide or tall screenshots if needed, but make sure the screenshot is
+ still clear after being resized and compressed.
+- All images **must** be [compressed](#compress-images) to 100KB or less.
+  In many cases, 25-50KB or less is possible without reducing image quality.
- Save the image with a lowercase filename that's descriptive of the feature
- or concept in the image. If the image is of the GitLab interface, append the
- GitLab version to the filename, based on this format:
- `image_name_vX_Y.png`. For example, for a screenshot taken from the pipelines
- page of GitLab 11.1, a valid name is `pipelines_v11_1.png`. If you're adding an
- illustration that doesn't include parts of the user interface, add the release
- number corresponding to the release the image was added to; for an MR added to
- 11.1's milestone, a valid name for an illustration is `devops_diagram_v11_1.png`.
+ or concept in the image:
+ - If the image is of the GitLab interface, append the GitLab version to the filename,
+ based on this format: `image_name_vX_Y.png`. For example, for a screenshot taken
+ from the pipelines page of GitLab 11.1, a valid name is `pipelines_v11_1.png`.
+ - If you're adding an illustration that doesn't include parts of the user interface,
+ add the release number corresponding to the release the image was added to.
+ For an MR added to 11.1's milestone, a valid name for an illustration is `devops_diagram_v11_1.png`.
- Place images in a separate directory named `img/` in the same directory where
the `.md` document that you're working on is located.
- Consider using PNG images instead of JPEG.
-- [Compress all PNG images](#compress-images).
- Compress GIFs with <https://ezgif.com/optimize> or similar tool.
- Images should be used (only when necessary) to illustrate the description
of a process, not to replace it.
-- Max image size: 100KB (GIFs included).
-- See also how to link and embed [videos](#videos) to illustrate the
- documentation.
+- See also how to link and embed [videos](#videos) to illustrate the documentation.
### Add the image link to content
@@ -1152,8 +1156,11 @@ known tool is [`pngquant`](https://pngquant.org/), which is cross-platform and
open source. Install it by visiting the official website and following the
instructions for your OS.
+If you use macOS and want all screenshots to be compressed automatically, read
+[One simple trick to make your screenshots 80% smaller](https://about.gitlab.com/blog/2020/01/30/simple-trick-for-smaller-screenshots/).
+
GitLab has a [Ruby script](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/pngquant)
-that you can use to automate the process. In the root directory of your local
+that you can use to simplify the manual process. In the root directory of your local
copy of `https://gitlab.com/gitlab-org/gitlab`, run in a terminal:
- Before compressing, if you want, check that all documentation PNG images have
@@ -1360,13 +1367,15 @@ Do not use words to describe the icon:
## Alert boxes
-Use alert boxes to call attention to information.
+Use alert boxes to call attention to information. Use them sparingly, and never have an alert box immediately follow another alert box.
Alert boxes are generated when one of these words is followed by a line break:
- `FLAG:`
- `NOTE:`
- `WARNING:`
+- `INFO:` (Marketing only)
+- `DISCLAIMER:`
For example:
@@ -1423,6 +1432,58 @@ It renders on the GitLab documentation site as:
WARNING:
This is something to be warned about.
+### Info
+
+The Marketing team uses the `INFO` alert to add information relating
+to sales and marketing efforts.
+
+The text in an `INFO:` alert always renders in a floating text box to the right of the text around it.
+To preview how it renders on the GitLab docs site, check the review app in the MR. You might need to move the text up or down
+in the surrounding text, depending on where you'd like the floating box to appear.
+
+For example, if your page has text like this:
+
+```markdown
+This is an introductory paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+
+INFO:
+Here is some information. This information is an important addition to how you
+work with GitLab and you might want to consider it.
+
+And here is another paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+
+And here is another paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+```
+
+It renders on the GitLab documentation site as:
+
+This is an introductory paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+
+INFO:
+Here is some information. This information is an important addition to how you
+work with GitLab and you might want to consider it.
+
+And here is another paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+
+And here is another paragraph. GitLab uses the SSH protocol to securely communicate with Git.
+When you use SSH keys to authenticate to the GitLab remote server,
+you don't need to supply your username and password each time.
+
+### Disclaimer
+
+Use a disclaimer to describe future functionality only.
+For more information, see [Legal disclaimer for future features](#legal-disclaimer-for-future-features).
+
## Blockquotes
For highlighting a text inside a blockquote, use this format:
@@ -1616,6 +1677,34 @@ For example:
You can say that we plan to remove a feature.
+#### Legal disclaimer for future features
+
+If you **must** write about features we have not yet delivered, put this exact disclaimer near the content it applies to.
+
+```markdown
+DISCLAIMER:
+This page contains information related to upcoming products, features, and functionality.
+It is important to note that the information presented is for informational purposes only.
+Please do not rely on this information for purchasing or planning purposes.
+As with all projects, the items mentioned on this page are subject to change or delay.
+The development, release, and timing of any products, features, or functionality remain at the
+sole discretion of GitLab Inc.
+```
+
+It renders on the GitLab documentation site as:
+
+DISCLAIMER:
+This page contains information related to upcoming products, features, and functionality.
+It is important to note that the information presented is for informational purposes only.
+Please do not rely on this information for purchasing or planning purposes.
+As with all projects, the items mentioned on this page are subject to change or delay.
+The development, release, and timing of any products, features, or functionality remain at the
+sole discretion of GitLab Inc.
+
+If the entire page describes functionality that is not yet available, use the disclaimer once at the top of the page.
+
+If the content in a topic is not ready, use the disclaimer in the topic.
+
### Removing versions after each major release
Whenever a major GitLab release occurs, we remove all version references
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index eafe0e7a1c2..f1e6a147571 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -29,9 +29,39 @@ Try to avoid using **above** when referring to an example or table in a document
- In the previous example, the dog had fleas.
-## admin, admin area
+Do not use **above** when referring to versions of the product. Use [**later**](#later) instead.
-Use **administration**, **administrator**, **administer**, or **Admin Area** instead. ([Vale](../testing.md#vale) rule: [`Admin.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/Admin.yml))
+- Do: In GitLab 14.4 and later...
+- Do not: In GitLab 14.4 and above...
+- Do not: In GitLab 14.4 and higher...
+
+## access level
+
+Access levels are different from [roles](#roles) or [permissions](#permissions).
+When you create a user, you choose an access level: **Regular**, **Auditor**, or **Admin**.
+
+Capitalize these words when you refer to the UI. Otherwise use lowercase.
+
+## administrator
+
+Use **administrator** instead of **admin** when talking about a user's access level.
+Use lowercase unless you are referring to the **Admin** access level you select in the UI.
+
+To view the administrator access level, in the GitLab UI, go to the Admin Area and select
+**Users**. Then select **New user**.
+
+![admin access level](img/admin_access_level.png)
+
+An **administrator** is not a [role](#roles) or [permission](#permissions).
+
+- Do: To do this thing, you must be an administrator.
+- Do: To do this thing, you must have the administrator access level.
+- Do not: To do this thing, you must have the Admin role.
+
+## Admin Area
+
+Use title case **Admin Area** to refer to the area of the UI that you access when you select **Menu > Admin**.
+This area of the UI says **Admin Area** at the top of the page and on the menu.
## allow, enable
@@ -54,9 +84,14 @@ in the handbook when writing about Alpha features.
Instead of **and/or**, use **or** or rewrite the sentence to spell out both options.
+## and so on
+
+Do not use **and so on**. Instead, be more specific. For details, see
+[the Microsoft style guide](https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/a/and-so-on).
+
## area
-Use [**section**](#section) instead of **area**. The only exception is [the Admin Area](#admin-admin-area).
+Use [**section**](#section) instead of **area**. The only exception is [the Admin Area](#admin-area).
## below
@@ -150,8 +185,8 @@ When writing about the Developer role:
- Do not use the phrase, **if you are a developer** to mean someone who is assigned the Developer
role. Instead, write it out. For example, **if you are assigned the Developer role**.
- To describe a situation where the Developer role is the minimum required:
- - Avoid: the Developer role or higher
- - Use instead: at least the Developer role
+ - Do: at least the Developer role
+ - Do not: the Developer role or higher
Do not use **Developer permissions**. A user who is assigned the Developer role has a set of associated permissions.
@@ -207,7 +242,8 @@ Use lowercase for **epic board**.
## etc.
-Try to avoid **etc.**. Be as specific as you can. Do not use **and so on** as a replacement.
+Try to avoid **etc.**. Be as specific as you can. Do not use
+[**and so on**](#and-so-on) as a replacement.
- Do: You can update objects, like merge requests and issues.
- Do not: You can update objects, like merge requests, issues, etc.
@@ -220,8 +256,8 @@ Use **expand** instead of **open** when you are talking about expanding or colla
Use **box** instead of **field** or **text box**.
-- Avoid: In the **Variable name** field, enter `my text`.
-- Use instead: In the **Variable name** box, enter `my text`.
+- Do: In the **Variable name** box, enter `my text`.
+- Do not: In the **Variable name** field, enter `my text`.
## foo
@@ -265,8 +301,8 @@ When writing about the Guest role:
- Do not use the phrase, **if you are a guest** to mean someone who is assigned the Guest
role. Instead, write it out. For example, **if you are assigned the Guest role**.
- To describe a situation where the Guest role is the minimum required:
- - Avoid: the Guest role or higher
- - Use instead: at least the Guest role
+ - Do: at least the Guest role
+ - Do not: the Guest role or higher
Do not use **Guest permissions**. A user who is assigned the Guest role has a set of associated permissions.
@@ -282,15 +318,16 @@ Do not use **high availability** or **HA**. Instead, direct readers to the GitLa
Do not use **higher** when talking about version numbers.
-- Do: In GitLab 14.1 and later.
-- Do not: In GitLab 14.1 and higher.
+- Do: In GitLab 14.4 and later...
+- Do not: In GitLab 14.4 and higher...
+- Do not: In GitLab 14.4 and above...
## hit
Don't use **hit** to mean **press**.
-- Avoid: Hit the **ENTER** button.
-- Use instead: Press **ENTER**.
+- Do: Press **ENTER**.
+- Do not: Hit the **ENTER** button.
## I
@@ -326,8 +363,9 @@ If you want to use **CI** with the word **job**, use **CI/CD job** rather than *
Use **later** when talking about version numbers.
-- Avoid: In GitLab 14.1 and higher.
-- Use instead: In GitLab 14.1 and later.
+- Do: In GitLab 14.1 and later...
+- Do not: In GitLab 14.1 and higher...
+- Do not: In GitLab 14.1 and above...
## list
@@ -354,8 +392,8 @@ When writing about the Maintainer role:
- Do not use the phrase, **if you are a maintainer** to mean someone who is assigned the Maintainer
role. Instead, write it out. For example, **if you are assigned the Maintainer role**.
- To describe a situation where the Maintainer role is the minimum required:
- - Avoid: the Maintainer role or higher
- - Use instead: at least the Maintainer role
+ - Do: at least the Maintainer role
+ - Do not: the Maintainer role or higher
Do not use **Maintainer permissions**. A user who is assigned the Maintainer role has a set of associated permissions.
@@ -416,6 +454,13 @@ Do not use **note that** because it's wordy.
- Do: You can change the settings.
- Do not: Note that you can change the settings.
+## once
+
+The word **once** means **one time**. Don't use it to mean **after** or **when**.
+
+- Do: When the process is complete...
+- Do not: Once the process is complete...
+
## Owner
When writing about the Owner role:
@@ -429,7 +474,9 @@ Do not use **Owner permissions**. A user who is assigned the Owner role has a se
## permissions
-Do not use **roles** and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
+Do not use [**roles**](#roles) and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
+
+Permissions are not the same as [**access levels**](#access-level).
## please
@@ -454,8 +501,8 @@ When writing about the Reporter role:
- Do not use the phrase, **if you are a reporter** to mean someone who is assigned the Reporter
role. Instead, write it out. For example, **if you are assigned the Reporter role**.
- To describe a situation where the Reporter role is the minimum required:
- - Avoid: the Reporter role or higher
- - Use instead: at least the Reporter role
+ - Do: at least the Reporter role
+ - Do not: the Reporter role or higher
Do not use **Reporter permissions**. A user who is assigned the Reporter role has a set of associated permissions.
@@ -465,12 +512,23 @@ Use title case for **Repository Mirroring**.
## roles
-Do not use **roles** and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
+Do not use **roles** and [**permissions**](#permissions) interchangeably. Each user is assigned a role. Each role includes a set of permissions.
+
+Roles are not the same as [**access levels**](#access-level).
## runner, runners
Use lowercase for **runners**. These are the agents that run CI/CD jobs. See also [GitLab Runner](#gitlab-runner) and [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529).
+## (s)
+
+Do not use **(s)** to make a word optionally plural. It can slow down comprehension. For example:
+
+- Do: Select the jobs you want.
+- Do not: Select the job(s) you want.
+
+If you can select multiples of something, then write the word as plural.
+
## sanity check
Do not use **sanity check**. Use **check for completeness** instead. ([Vale](../testing.md#vale) rule: [`InclusionAbleism.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/InclusionAbleism.yml))
@@ -514,6 +572,13 @@ You can use **single sign-on**.
Do not use **simply** or **simple**. If the user doesn't find the process to be simple, we lose their trust. ([Vale](../testing.md#vale) rule: [`Simplicity.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/Simplicity.yml))
+## since
+
+The word **since** indicates a timeframe. For example, **Since 1984, Bon Jovi has existed**. Don't use **since** to mean **because**.
+
+- Do: Because you have the Developer role, you can delete the widget.
+- Do not: Since you have the Developer role, you can delete the widget.
+
## slashes
Instead of **and/or**, use **or** or re-write the sentence. This rule also applies to other slashes, like **follow/unfollow**. Some exceptions (like **CI/CD**) are allowed.
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index dfa2f3ed55a..13648a7c710 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -206,7 +206,6 @@ Vale returns three types of results: `suggestion`, `warning`, and `error`:
(after the Technical Writing team completes its cleanup). Warnings don't break CI. See a list of
[warning-level rules](https://gitlab.com/search?utf8=✓&snippets=false&scope=&repository_ref=master&search=path%3Adoc%2F.vale%2Fgitlab+Warning%3A&group_id=9970&project_id=278964).
- **Error**-level results are Style Guide violations, and should contain clear explanations
- about how to resolve the error. Errors break CI and are displayed in CI job output.
of how to resolve the error. Errors break CI and are displayed in CI job output. See a list of
[error-level rules](https://gitlab.com/search?utf8=✓&snippets=false&scope=&repository_ref=master&search=path%3Adoc%2F.vale%2Fgitlab+Error%3A&group_id=9970&project_id=278964).
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index 31c38bc1446..90c1137e5c5 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -99,7 +99,7 @@ The process involves the following:
- Primary Reviewer. Review by a [code reviewer](https://about.gitlab.com/handbook/engineering/projects/)
or other appropriate colleague to confirm accuracy, clarity, and completeness. This can be skipped
for minor fixes without substantive content changes.
-- Technical Writer (Optional). If not completed for a merge request prior to merging, must be scheduled
+- Technical Writer (Optional). If not completed for a merge request before merging, must be scheduled
post-merge. Schedule post-merge reviews only if an urgent merge is required. To request a:
- Pre-merge review, assign the Technical Writer listed for the applicable
[DevOps stage group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments).
@@ -111,7 +111,7 @@ The process involves the following:
- Ensure the appropriate labels are applied, including any required to pick a merge request into
a release.
- Ensure that, if there has not been a Technical Writer review completed or scheduled, they
- [create the required issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review), assign to the Technical Writer of the given stage group,
+ [create the required issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review), assign it to the Technical Writer of the given stage group,
and link it from the merge request.
The process is reflected in the **Documentation**
@@ -130,10 +130,10 @@ immediately after merge by the developer or maintainer. For this,
create an issue using the [Doc Review description template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review)
and link to it from the merged merge request that introduced the documentation change.
-Circumstances where a regular pre-merge Technical Writer review might be skipped include:
+Circumstances where a regular pre-merge Technical Writer review might be skipped include:
-- There is a short amount of time left before the milestone release. If there are less than three days
- remaining, seek a post-merge review and ping the writer via Slack to ensure the review is
+- There is a short amount of time left before the milestone release. If less than three
+ days are remaining, seek a post-merge review and ping the writer via Slack to ensure the review is
completed as soon as possible.
- The size of the change is small and you have a high degree of confidence
that early users of the feature (for example, GitLab.com users) can easily
@@ -156,15 +156,15 @@ Remember:
Ensure the following if skipping an initial Technical Writer review:
-- That [product badges](styleguide/index.md#product-tier-badges) are applied.
-- That the GitLab [version](styleguide/index.md#gitlab-versions) that
- introduced the feature has been included.
-- That changes to headings don't affect in-app hyperlinks.
+- [Product badges](styleguide/index.md#product-tier-badges) are applied.
+- The GitLab [version](styleguide/index.md#gitlab-versions) that
+ introduced the feature is included.
+- Changes to headings don't affect in-app hyperlinks.
- Specific [user permissions](../../user/permissions.md) are documented.
-- That new documents are linked from higher-level indexes, for discoverability.
-- Style guide is followed:
+- New documents are linked from higher-level indexes, for discoverability.
+- The style guide is followed:
- For [directories and files](styleguide/index.md#work-with-directories-and-files).
- For [images](styleguide/index.md#images).
Merge requests that change the location of documentation must always be reviewed by a Technical
-Writer prior to merging.
+Writer before merging.
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 42fb9fd42fc..7f74d9660e9 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -40,7 +40,7 @@ By default, merge request pipelines for development run in an EE-context only. I
developing features that differ between FOSS and EE, you may wish to run pipelines in a
FOSS context as well.
-To run pipelines in both contexts, include `RUN AS-IF-FOSS` in the merge request title.
+To run pipelines in both contexts, add the `~"pipeline:run-as-if-foss"` label to the merge request.
See the [As-if-FOSS jobs](pipelines.md#as-if-foss-jobs) pipelines documentation for more information.
diff --git a/doc/development/elasticsearch.md b/doc/development/elasticsearch.md
index bba4e1cda23..68d8b424331 100644
--- a/doc/development/elasticsearch.md
+++ b/doc/development/elasticsearch.md
@@ -233,11 +233,11 @@ Any data or index cleanup needed to support migration retries should be handled
will re-enqueue itself with a delay which is set using the `throttle_delay` option described below. The batching
must be handled within the `migrate` method; this setting controls the re-enqueuing only.
-- `batch_size` - Sets the number of documents modified during a `batched!` migration run. This size should be set to a value which allows the updates
- enough time to finish. This can be tuned in combination with the `throttle_delay` option described below. The batching
- must be handled within a custom `migrate` method or by using the [`Elastic::MigrationBackfillHelper`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/concerns/elastic/migration_backfill_helper.rb)
- `migrate` method which uses this setting. Default value is 1000 documents.
-
+- `batch_size` - Sets the number of documents modified during a `batched!` migration run. This size should be set to a value which allows the updates
+enough time to finish. This can be tuned in combination with the `throttle_delay` option described below. The batching
+must be handled within a custom `migrate` method or by using the [`Elastic::MigrationBackfillHelper`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/concerns/elastic/migration_backfill_helper.rb)
+`migrate` method which uses this setting. Default value is 1000 documents.
+
- `throttle_delay` - Sets the wait time in between batch runs. This time should be set high enough to allow each migration batch
enough time to finish. Additionally, the time should be less than 30 minutes since that is how often the
[`Elastic::MigrationWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/elastic/migration_worker.rb)
diff --git a/doc/development/experiment_guide/index.md b/doc/development/experiment_guide/index.md
index 4de272fec20..1b1f756d4c0 100644
--- a/doc/development/experiment_guide/index.md
+++ b/doc/development/experiment_guide/index.md
@@ -61,7 +61,7 @@ Therefore, you should postpone this effort until the [experiment cleanup process
We recommend the following workflow:
-1. Review the Pajamas guidelines for [icons](https://design.gitlab.com/product-foundations/iconography) and [illustrations](https://design.gitlab.com/product-foundations/illustration).
+1. Review the Pajamas guidelines for [icons](https://design.gitlab.com/product-foundations/iconography/) and [illustrations](https://design.gitlab.com/product-foundations/illustration/).
1. Add an icon or illustration as an `.svg` file in the `/app/assets/images` (or EE) path in the GitLab repository.
1. Use `image_tag` or `image_path` to render it via the asset pipeline.
1. **If the experiment is a success**, designers add the new icon or illustration to the Pajamas UI kit as part of the cleanup process.
diff --git a/doc/development/fe_guide/accessibility.md b/doc/development/fe_guide/accessibility.md
index 7c870de9a6c..c4ebef4c289 100644
--- a/doc/development/fe_guide/accessibility.md
+++ b/doc/development/fe_guide/accessibility.md
@@ -334,7 +334,7 @@ Keep in mind that:
- When you add `:hover` styles, in most cases you should add `:focus` styles too so that the styling is applied for both mouse **and** keyboard users.
- If you remove an interactive element's `outline`, make sure you maintain visual focus state in another way such as with `box-shadow`.
-See the [Pajamas Keyboard-only page](https://design.gitlab.com/accessibility-audits/keyboard-only) for more detail.
+See the [Pajamas Keyboard-only page](https://design.gitlab.com/accessibility-audits/keyboard-only/) for more detail.
## Tabindex
diff --git a/doc/development/fe_guide/content_editor.md b/doc/development/fe_guide/content_editor.md
index 956e7d0d56e..139825655e9 100644
--- a/doc/development/fe_guide/content_editor.md
+++ b/doc/development/fe_guide/content_editor.md
@@ -11,7 +11,7 @@ experience for [GitLab Flavored Markdown](../../user/markdown.md) in the GitLab
It also serves as the foundation for implementing Markdown-focused editors
that target other engines, like static site generators.
-We use [tiptap 2.0](https://www.tiptap.dev/) and [ProseMirror](https://prosemirror.net/)
+We use [tiptap 2.0](https://tiptap.dev/) and [ProseMirror](https://prosemirror.net/)
to build the Content Editor. These frameworks provide a level of abstraction on top of
the native
[`contenteditable`](https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Editable_content) web technology.
@@ -143,7 +143,7 @@ The Content Editor is composed of three main layers:
### Editing tools UI
The editing tools UI are Vue components that display the editor's state and
-dispatch [commands](https://www.tiptap.dev/api/commands/#commands) to mutate it.
+dispatch [commands](https://tiptap.dev/api/commands/#commands) to mutate it.
They are located in the `~/content_editor/components` directory. For example,
the **Bold** toolbar button displays the editor's state by becoming active when
the user selects bold text. This button also dispatches the `toggleBold` command
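For illustration, a toolbar component wired to the editor typically reads state with `isActive` and dispatches commands through the chained command API. The following is a sketch only, assuming an `editor` (Tiptap `Editor` instance) is available in scope:

```javascript
// Sketch: reflect and mutate editor state from a toolbar button.
// `editor` is assumed to be the shared Tiptap Editor instance.
const isBoldActive = editor.isActive('bold'); // drives the button's "active" styling

function onBoldButtonClick() {
  // Dispatch the `toggleBold` command and return focus to the editor.
  editor.chain().focus().toggleBold().run();
}
```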
@@ -159,7 +159,7 @@ sequenceDiagram
#### Node views
-We implement [node views](https://www.tiptap.dev/guide/node-views/vue/#node-views-with-vue)
+We implement [node views](https://tiptap.dev/guide/node-views/vue/#node-views-with-vue)
to provide inline editing tools for some content types, like tables and images. Node views
allow separating the presentation of a content type from its
[model](https://prosemirror.net/docs/guide/#doc.data_structures). Using a Vue component in
@@ -209,7 +209,7 @@ the following events:
- `blur`
- `error`.
-Learn more about these events in [Tiptap's event guide](https://www.tiptap.dev/api/events/).
+Learn more about these events in [Tiptap's event guide](https://tiptap.dev/api/events/).
```html
<script>
@@ -246,7 +246,7 @@ export default {
### The Tiptap editor object
-The Tiptap [Editor](https://www.tiptap.dev/api/editor) class manages
+The Tiptap [Editor](https://tiptap.dev/api/editor) class manages
the editor's state and encapsulates all the business logic that powers
the Content Editor. The Content Editor constructs a new instance of this class and
provides all the necessary extensions to support
@@ -255,9 +255,9 @@ provides all the necessary extensions to support
#### Implement new extensions
Extensions are the building blocks of the Content Editor. You can learn how to implement
-new ones by reading [Tiptap's guide](https://www.tiptap.dev/guide/custom-extensions).
-We recommend checking the list of built-in [nodes](https://www.tiptap.dev/api/nodes) and
-[marks](https://www.tiptap.dev/api/marks) before implementing a new extension
+new ones by reading [Tiptap's guide](https://tiptap.dev/guide/custom-extensions).
+We recommend checking the list of built-in [nodes](https://tiptap.dev/api/nodes) and
+[marks](https://tiptap.dev/api/marks) before implementing a new extension
from scratch.
Store the Content Editor extensions in the `~/content_editor/extensions` directory.
@@ -326,8 +326,8 @@ sequenceDiagram
```
Deserializers live in the extension modules. Read Tiptap's
-[parseHTML](https://www.tiptap.dev/guide/custom-extensions#parse-html) and
-[addAttributes](https://www.tiptap.dev/guide/custom-extensions#attributes) documentation to
+[parseHTML](https://tiptap.dev/guide/custom-extensions#parse-html) and
+[addAttributes](https://tiptap.dev/guide/custom-extensions#attributes) documentation to
learn how to implement them. Tiptap's API is a wrapper around ProseMirror's
[schema spec API](https://prosemirror.net/docs/ref/#model.SchemaSpec).
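As a rough sketch (the node name, tag, and attribute below are invented for illustration, not an actual GitLab extension), a deserializer in an extension module could look like this:

```javascript
// Illustrative only: a hypothetical extension using parseHTML and addAttributes.
import { Node } from '@tiptap/core';

export default Node.create({
  name: 'exampleNode',
  group: 'block',
  content: 'inline*',

  addAttributes() {
    // Read the attribute from the parsed HTML element.
    return {
      identifier: {
        default: null,
        parseHTML: (element) => element.getAttribute('data-identifier'),
      },
    };
  },

  parseHTML() {
    // Map matching HTML elements to this node type.
    return [{ tag: 'div[data-example-node]' }];
  },

  renderHTML({ HTMLAttributes }) {
    return ['div', HTMLAttributes, 0];
  },
});
```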
diff --git a/doc/development/fe_guide/droplab/droplab.md b/doc/development/fe_guide/droplab/droplab.md
deleted file mode 100644
index 8f1ecc115fe..00000000000
--- a/doc/development/fe_guide/droplab/droplab.md
+++ /dev/null
@@ -1,281 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
----
-
-# DropLab
-
-A generic dropdown for all of your custom dropdown needs.
-
-## Usage
-
-DropLab can be used by adding a `data-dropdown-trigger` HTML attribute. This
-attribute allows us to find the "trigger" _(toggle)_ for the dropdown, whether
-it's a button, link or input.
-
-The value of the `data-dropdown-trigger` should be a CSS selector that DropLab
-can use to find the trigger's dropdown list.
-
-You should also add the `data-dropdown` attribute to declare the dropdown list.
-The value is irrelevant.
-
-The DropLab class has no side effects, so you must always call `.init` when the
-DOM is ready. `DropLab.prototype.init` takes the same arguments as `DropLab.prototype.addHook`.
-If you don't provide any arguments, it globally queries and instantiates all
-DropLab-compatible dropdowns.
-
-```html
-<a href="#" data-dropdown-trigger="#list">Toggle</a>
-
-<ul id="list" data-dropdown>
- <!-- ... -->
-<ul>
-```
-
-```javascript
-const droplab = new DropLab();
-droplab.init();
-```
-
-As noted, we have a "Toggle" link that's declared as a trigger. It provides a
-selector to find the dropdown list it should control.
-
-### Static data
-
-You can add static list items.
-
-```html
-<a href="#" data-dropdown-trigger="#list">Toggle</a>
-
-<ul id="list" data-dropdown>
- <li>Static value 1</li>
- <li>Static value 2</li>
-<ul>
-```
-
-```javascript
-const droplab = new DropLab();
-droplab.init();
-```
-
-### Explicit instantiation
-
-You can pass the trigger and list elements as constructor arguments to return a
-non-global instance of DropLab using the `DropLab.prototype.init` method.
-
-```html
-<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a>
-
-<ul id="list" data-dropdown>
- <!-- ... -->
-<ul>
-```
-
-```javascript
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-const droplab = new DropLab();
-droplab.init(trigger, list);
-```
-
-You can also add hooks to an existing DropLab instance using `DropLab.prototype.addHook`.
-
-```html
-<a href="#" data-dropdown-trigger="#auto-dropdown">Toggle</a>
-<ul id="auto-dropdown" data-dropdown><!-- ... --><ul>
-
-<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a>
-<ul id="list" data-dropdown><!-- ... --><ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-droplab.init();
-
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-droplab.addHook(trigger, list);
-```
-
-### Dynamic data
-
-Adding `data-dynamic` to your dropdown element enables dynamic list
-rendering.
-
-You can template a list item using the keys of the data object provided. Use the
-handlebars syntax `{{ value }}` to HTML escape the value. Use the `<%= value %>`
-syntax to interpolate the value. Use the `<%= value %>` syntax to evaluate the
-value.
-
-Passing an array of objects to `DropLab.prototype.addData` renders that data
-for all `data-dynamic` dropdown lists tracked by that DropLab instance.
-
-```html
-<a href="#" data-dropdown-trigger="#list">Toggle</a>
-
-<ul id="list" data-dropdown data-dynamic>
- <li><a href="#" data-id="{{id}}">{{text}}</a></li>
-</ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-droplab.init().addData([{
- id: 0,
- text: 'Jacob',
-}, {
- id: 1,
- text: 'Jeff',
-}]);
-```
-
-Alternatively, you can specify a specific dropdown to add this data to by
-passing the data as the second argument and the `id` of the trigger element as
-the first argument.
-
-```html
-<a href="#" data-dropdown-trigger="#list" id="trigger">Toggle</a>
-
-<ul id="list" data-dropdown data-dynamic>
- <li><a href="#" data-id="{{id}}">{{text}}</a></li>
-</ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-droplab.init().addData('trigger', [{
- id: 0,
- text: 'Jacob',
-}, {
- id: 1,
- text: 'Jeff',
-}]);
-```
-
-This allows you to mix static and dynamic content, even with one trigger.
-
-Note the use of scoping regarding the `data-dropdown` attribute to capture both
-dropdown lists, one of which is dynamic.
-
-```html
-<input id="trigger" data-dropdown-trigger="#list">
-<div id="list" data-dropdown>
- <ul>
- <li><a href="#">Static item 1</a></li>
- <li><a href="#">Static item 2</a></li>
- </ul>
- <ul data-dynamic>
- <li><a href="#" data-id="{{id}}">{{text}}</a></li>
- </ul>
-</div>
-```
-
-```javascript
-const droplab = new DropLab();
-
-droplab.init().addData('trigger', [{
- id: 0,
- text: 'Jacob',
-}, {
- id: 1,
- text: 'Jeff',
-}]);
-```
-
-## Internal selectors
-
-DropLab adds some CSS classes to help lower the barrier to integration.
-
-For example:
-
-- The `droplab-item-selected` CSS class is added to items that have been
- selected either by a mouse click or by enter key selection.
-- The `droplab-item-active` CSS class is added to items that have been selected
- using arrow key navigation.
-- You can add the `droplab-item-ignore` CSS class to any item that you don't
- want to be selectable. For example, an `<li class="divider"></li>` list
- divider element that shouldn't be interactive.
-
-## Internal events
-
-DropLab uses some custom events to help lower the barrier to integration.
-
-For example:
-
-- The `click.dl` event is fired when an `li` list item has been clicked. It's
- also fired when a list item has been selected with the keyboard. It's also
- fired when a `HookButton` button is clicked (a registered `button` tag or `a`
- tag trigger).
-- The `input.dl` event is fired when a `HookInput` (a registered `input` tag
- trigger) triggers an `input` event.
-- The `mousedown.dl` event is fired when a `HookInput` triggers a `mousedown`
- event.
-- The `keyup.dl` event is fired when a `HookInput` triggers a `keyup` event.
-- The `keydown.dl` event is fired when a `HookInput` triggers a `keydown` event.
-
-These custom events add a `detail` object to the vanilla `Event` object that
-provides some potentially useful data.
-
-## Plugins
-
-Plugins are objects that are registered to be executed when a hook is added (when
-a DropLab trigger and dropdown are instantiated).
-
-If no modules API is detected, the library falls back as it does with
-`window.DropLab` and adds `window.DropLab.plugins.PluginName`.
-
-### Usage
-
-To use plugins, you can pass them in an array as the third argument of
-`DropLab.prototype.init` or `DropLab.prototype.addHook`. Some plugins require
-configuration values; the configuration object can be passed as the fourth argument.
-
-```html
-<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a>
-<ul id="list" data-dropdown><!-- ... --><ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-droplab.init(trigger, list, [droplabAjax], {
- droplabAjax: {
- endpoint: '/some-endpoint',
- method: 'setData',
- },
-});
-```
-
-### Documentation
-
-Refer to the list of available [DropLab plugins](plugins/index.md) for
-information about their use.
-
-### Development
-
-When plugins are initialised for a DropLab trigger+dropdown, DropLab calls the
-plugins' `init` function, so this must be implemented in the plugin.
-
-```javascript
-class MyPlugin {
- static init() {
- this.someProp = 'someProp';
- this.someMethod();
- }
-
- static someMethod() {
- this.otherProp = 'otherProp';
- }
-}
-
-export default MyPlugin;
-```
diff --git a/doc/development/fe_guide/droplab/plugins/ajax.md b/doc/development/fe_guide/droplab/plugins/ajax.md
deleted file mode 100644
index f12f8f260c7..00000000000
--- a/doc/development/fe_guide/droplab/plugins/ajax.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
----
-
-# Ajax plugin
-
-`Ajax` is a DropLab plugin that allows for retrieving and rendering list data
-from a server.
-
-## Usage
-
-Add the `Ajax` object to the plugins array of a `DropLab.prototype.init` or
-`DropLab.prototype.addHook` call.
-
-`Ajax` requires 2 configuration values: the `endpoint` and `method`.
-
-- `endpoint`: Should be a URL to the request endpoint.
-- `method`: Should be `setData` or `addData`.
-- `setData`: Completely replaces the dropdown with the response data.
-- `addData`: Appends the response data to the current dropdown list.
-
-```html
-<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a>
-<ul id="list" data-dropdown><!-- ... --><ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-droplab.addHook(trigger, list, [Ajax], {
- Ajax: {
- endpoint: '/some-endpoint',
- method: 'setData',
- },
-});
-```
-
-Optionally, you can set `loadingTemplate` to a HTML string. This HTML string
-replaces the dropdown list while the request is pending.
-
-Additionally, you can set `onError` to a function to catch any XHR errors.
diff --git a/doc/development/fe_guide/droplab/plugins/filter.md b/doc/development/fe_guide/droplab/plugins/filter.md
deleted file mode 100644
index 79f10cdb6c1..00000000000
--- a/doc/development/fe_guide/droplab/plugins/filter.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
----
-
-# Filter plugin
-
-`Filter` is a DropLab plugin that allows for filtering data that has been added
-to the dropdown using a simple fuzzy string search of an input value.
-
-## Usage
-
-Add the `Filter` object to the plugins array of a `DropLab.prototype.init` or
-`DropLab.prototype.addHook` call.
-
-- `Filter`: Requires a configuration value for `template`.
-- `template`: Should be the key of the objects within your data array that you
- want to compare to the user input string, for filtering.
-
-```html
-<input href="#" id="trigger" data-dropdown-trigger="#list">
-<ul id="list" data-dropdown data-dynamic>
- <li><a href="#" data-id="{{id}}">{{text}}</a></li>
-<ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-droplab.init(trigger, list, [Filter], {
- Filter: {
- template: 'text',
- },
-});
-
-droplab.addData('trigger', [{
- id: 0,
- text: 'Jacob',
-}, {
- id: 1,
- text: 'Jeff',
-}]);
-```
-
-In the previous code, the input string is compared against the `test` key of the
-passed data objects.
-
-Optionally you can set `filterFunction` to a function. This function is then
-used instead of `Filter`'s built-in string search. `filterFunction` is passed
-two arguments: the first is one of the data objects, and the second is the
-current input value.
diff --git a/doc/development/fe_guide/droplab/plugins/index.md b/doc/development/fe_guide/droplab/plugins/index.md
deleted file mode 100644
index c7a2865ca83..00000000000
--- a/doc/development/fe_guide/droplab/plugins/index.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
-description: A list of DropLab plugins.
----
-
-# DropLab plugins
-
-The following plugins are available for use with [DropLab](../droplab.md):
-
-- [Ajax plugin](ajax.md)
-- [Filter plugin](filter.md)
-- [InputSetter plugin](input_setter.md)
diff --git a/doc/development/fe_guide/droplab/plugins/input_setter.md b/doc/development/fe_guide/droplab/plugins/input_setter.md
deleted file mode 100644
index a3c073520cb..00000000000
--- a/doc/development/fe_guide/droplab/plugins/input_setter.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
----
-
-# InputSetter plugin
-
-`InputSetter` is a DropLab plugin that allows for updating DOM out of the scope
-of DropLab when a list item is clicked.
-
-## Usage
-
-Add the `InputSetter` object to the plugins array of a `DropLab.prototype.init`
-or `DropLab.prototype.addHook` call.
-
-- `InputSetter`: Requires a configuration value for `input` and `valueAttribute`.
-- `input`: The DOM element that you want to manipulate.
-- `valueAttribute`: A string that's the name of an attribute on your list items
- that's used to get the value to update the `input` element with.
-
-You can also set the `InputSetter` configuration to an array of objects, which
-allows you to update multiple elements.
-
-```html
-<input id="input" value="">
-<div id="div" data-selected-id=""></div>
-
-<input href="#" id="trigger" data-dropdown-trigger="#list">
-<ul id="list" data-dropdown data-dynamic>
- <li><a href="#" data-id="{{id}}">{{text}}</a></li>
-<ul>
-```
-
-```javascript
-const droplab = new DropLab();
-
-const trigger = document.getElementById('trigger');
-const list = document.getElementById('list');
-
-const input = document.getElementById('input');
-const div = document.getElementById('div');
-
-droplab.init(trigger, list, [InputSetter], {
- InputSetter: [{
- input: input,
- valueAttribute: 'data-id',
- } {
- input: div,
- valueAttribute: 'data-id',
- inputAttribute: 'data-selected-id',
- }],
-});
-
-droplab.addData('trigger', [{
- id: 0,
- text: 'Jacob',
-}, {
- id: 1,
- text: 'Jeff',
-}]);
-```
-
-In the previous code, if the second list item was clicked, it would update the
-`#input` element to have a `value` of `1`, it would also update the `#div`
-element's `data-selected-id` to `1`.
-
-Optionally, you can set `inputAttribute` to a string that's the name of an
-attribute on your `input` element that you want to update. If you don't provide
-an `inputAttribute`, `InputSetter` updates the `value` of the `input`
-element if it's an `INPUT` element, or the `textContent` of the `input` element
-if it isn't an `INPUT` element.
diff --git a/doc/development/fe_guide/editor_lite.md b/doc/development/fe_guide/editor_lite.md
deleted file mode 100644
index 5020bf9eeeb..00000000000
--- a/doc/development/fe_guide/editor_lite.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: 'source_editor.md'
-remove_date: '2021-09-19'
----
-
-This document was moved to [another location](source_editor.md).
-
-<!-- This redirect file can be deleted after <2021-09-19>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/fe_guide/haml.md b/doc/development/fe_guide/haml.md
index 8f501007755..f905fdad77e 100644
--- a/doc/development/fe_guide/haml.md
+++ b/doc/development/fe_guide/haml.md
@@ -57,7 +57,7 @@ For example:
When using the GitLab UI form builder, the following components are available for use in HAML.
NOTE:
-Currently only `gitlab_ui_checkbox_component` is available but more components are planned.
+Only the components listed below are available, but more components are planned.
#### gitlab_ui_checkbox_component
@@ -72,3 +72,16 @@ Currently only `gitlab_ui_checkbox_component` is available but more components a
| `checked_value` | Value when checkbox is checked. | `String` | `false` (`'1'`) |
| `unchecked_value` | Value when checkbox is unchecked. | `String` | `false` (`'0'`) |
| `label_options` | Options that are passed to [Rails `label` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-label). | `Hash` | `false` (`{}`) |
+
+#### gitlab_ui_radio_component
+
+[GitLab UI Docs](https://gitlab-org.gitlab.io/gitlab-ui/?path=/story/base-form-form-radio--default)
+
+| Argument | Description | Type | Required (default value) |
+|---|---|---|---|
+| `method` | Attribute on the object passed to `gitlab_ui_form_for`. | `Symbol` | `true` |
+| `value` | The value of the radio tag. | `Symbol` | `true` |
+| `label` | Radio label. | `String` | `true` |
+| `help_text` | Help text displayed below the radio button. | `String` | `false` (`nil`) |
+| `radio_options` | Options that are passed to [Rails `radio_button` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-radio_button). | `Hash` | `false` (`{}`) |
+| `label_options` | Options that are passed to [Rails `label` method](https://api.rubyonrails.org/classes/ActionView/Helpers/FormBuilder.html#method-i-label). | `Hash` | `false` (`{}`) |
diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md
index a6b49394733..9ef4375d795 100644
--- a/doc/development/fe_guide/index.md
+++ b/doc/development/fe_guide/index.md
@@ -133,9 +133,13 @@ Best practices for monitoring and maximizing frontend performance.
Frontend security practices.
-## [Accessibility](accessibility.md)
+## Accessibility
-Our accessibility standards and resources.
+Our [accessibility standards and resources](accessibility.md).
+
+## Logging
+
+Best practices for [client-side logging](logging.md) in GitLab frontend development.
## [Internationalization (i18n) and Translations](../i18n/externalization.md)
diff --git a/doc/development/fe_guide/logging.md b/doc/development/fe_guide/logging.md
new file mode 100644
index 00000000000..26633eade43
--- /dev/null
+++ b/doc/development/fe_guide/logging.md
@@ -0,0 +1,86 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Client-side logging for frontend development
+
+This guide contains best practices for client-side logging in GitLab
+frontend development.
+
+## When to log to the browser console
+
+We do not want to log unnecessarily to the browser console, as excessively
+noisy console logs are not easy to read, parse, or process. We **do** want to
+give visibility to unintended events in the system. If a possible but unexpected
+exception occurs during runtime, we want to log the details of this exception.
+These logs can give valuable context to end users creating issues, or
+contributors diagnosing problems.
+
+Whenever a `catch(e)` exists, and `e` is something unexpected, log the details.
+
+### What makes an error unexpected?
+
+Sometimes a caught exception can be part of normal operations. For instance, third-party
+libraries might throw an exception based on certain inputs. If we can gracefully
+handle these exceptions, then they are expected. Don't log them noisily.
+For example:
+
+```javascript
+try {
+ // Here, we call a method based on some user input.
+ // `doAThing` will throw an exception if the input is invalid.
+ const userInput = getUserInput();
+ doAThing(userInput);
+} catch (e) {
+ if (e instanceof FooSyntaxError) {
+ // To handle a `FooSyntaxError`, we just need to instruct the user to change their input.
+ // This isn't unexpected, and is part of normal operations.
+ setUserMessage(`Try writing better code. ${e.message}`);
+ } else {
+ // We're not sure what `e` is, so something unexpected and bad happened...
+ logError(e);
+ setUserMessage('Something unexpected happened...');
+ }
+}
+```
+
+## How to log an error
+
+We have a helpful `~/lib/logger` module which encapsulates how we can
+consistently log runtime errors in GitLab. Import `logError` from this
+module, and use it as you normally would `console.error`. Pass the actual `Error`
+object, so the stack trace and other details can be captured in the log:
+
+```javascript
+// 1. Import the logger module.
+import { logError } from '~/lib/logger';
+
+export const doThing = () => {
+ return foo()
+ .then(() => {
+ // ...
+ })
+ .catch(e => {
+ // 2. Use `logError` like you would `console.error`.
+ logError('An unexpected error occurred while doing the thing', e);
+
+ // We may or may not want to present that something bad happened to the end user.
+ showThingFailed();
+ });
+};
+```
+
+## Relation to frontend observability
+
+Client-side logging is strongly related to
+[Frontend observability](https://about.gitlab.com/company/team/structure/working-groups/frontend-observability/).
+We want unexpected errors to be observed by our monitoring systems, so
+we can quickly react to user-facing issues. For a number of reasons, it is
+unfeasible to send every log to the monitoring system. Don't shy away from using
+`~/lib/logger`, but consider controlling which messages passed to `~/lib/logger`
+are actually sent to the monitoring systems.
+
+A cohesive logging module helps us control these side effects consistently
+across the various entry points.
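+
+For illustration only, gating could look roughly like the following sketch. This is not the actual `~/lib/logger` implementation; `shouldReport` and `reportToMonitoring` are hypothetical helpers defined here to keep the example self-contained.
+
+```javascript
+// Hypothetical sketch: always log locally, forward only selected errors.
+const shouldReport = (error) => !(error instanceof RangeError); // illustrative rule only
+
+const reportToMonitoring = (message, error) => {
+  // Placeholder: a real implementation would call the monitoring client here.
+};
+
+export const logError = (message, error) => {
+  // eslint-disable-next-line no-console
+  console.error(message, error);
+
+  if (shouldReport(error)) {
+    reportToMonitoring(message, error);
+  }
+};
+```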
diff --git a/doc/development/fe_guide/storybook.md b/doc/development/fe_guide/storybook.md
index 15225cc1deb..a46157d2cad 100644
--- a/doc/development/fe_guide/storybook.md
+++ b/doc/development/fe_guide/storybook.md
@@ -33,19 +33,25 @@ Stories can be added for any Vue component in the `gitlab` repository.
To add a story:
1. Create a new `.stories.js` file in the same directory as the Vue component.
- The file name should have the same prefix as the Vue component.
+ The filename should have the same prefix as the Vue component.
```txt
vue_shared/
├─ components/
│ ├─ sidebar
- │ │ ├─ todo_button.vue
- │ │ ├─ todo_button.stories.js
+ │ │ ├─ todo_toggle
+ │ │ │ ├─ todo_button.vue
+ │ │ │ ├─ todo_button.stories.js
```
1. Write the story as per the [official Storybook instructions](https://storybook.js.org/docs/vue/writing-stories/introduction/)
Notes:
- Specify the `title` field of the story as the component's file path from the `javascripts/` directory,
- e.g. if the component is located at `app/assets/javascripts/vue_shared/components/sidebar/todo_toggle/todo_button.vue`, specify the `title` as
- `vue_shared/components/To-do Button`. This will ensure the Storybook navigation maps closely to our internal directory structure.
+ for example, if the component is located at `app/assets/javascripts/vue_shared/components/sidebar/todo_toggle/todo_button.vue`, specify the story `title` as `vue_shared/components/sidebar/todo_toggle/todo_button`. This ensures the Storybook navigation maps closely to our internal directory structure.
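+
+A minimal story for the example component above could look like the following sketch (assuming a Vue component with no required props; adjust `args` to the component's actual props):
+
+```javascript
+// vue_shared/components/sidebar/todo_toggle/todo_button.stories.js (sketch)
+import TodoButton from './todo_button.vue';
+
+export default {
+  component: TodoButton,
+  title: 'vue_shared/components/sidebar/todo_toggle/todo_button',
+};
+
+const Template = (args, { argTypes }) => ({
+  components: { TodoButton },
+  props: Object.keys(argTypes),
+  template: '<todo-button v-bind="$props" />',
+});
+
+export const Default = Template.bind({});
+Default.args = {};
+```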
+
+## Mock backend APIs
+
+GitLab's Storybook uses [Mirage JS](https://miragejs.com/) to mock REST and GraphQL APIs. Storybook shares the Mirage JS server
+with the [frontend integration tests](../testing_guide/testing_levels.md#frontend-integration-tests). You can find the Mirage JS
+configuration files in `spec/frontend_integration/mock_server`.
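+
+As a rough illustration (the endpoint and payload below are invented, not GitLab's actual mock configuration), a Mirage JS route handler looks like this:
+
+```javascript
+import { createServer } from 'miragejs';
+
+// Illustrative only: GitLab's real handlers live in spec/frontend_integration/mock_server.
+createServer({
+  routes() {
+    this.get('/api/v4/projects', () => [{ id: 1, name: 'Example project' }]);
+  },
+});
+```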
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index 6d9bbdd3f2d..ffaaa3e87c7 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -45,7 +45,7 @@ result (such as `ml-1` becoming `gl-ml-2`).
If a class you need has not been added to GitLab UI, you get to add it! Follow the naming patterns documented in the [utility files](https://gitlab.com/gitlab-org/gitlab-ui/-/tree/main/src/scss/utility-mixins) and refer to [GitLab UI's CSS documentation](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/main/doc/contributing/adding_css.md#adding-utility-mixins) for more details, especially about adding responsive and stateful rules.
-If it is not possible to wait for a GitLab UI update (generally one day), add the class to [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) following the same naming conventions documented in GitLab UI. A follow—up issue to backport the class to GitLab UI and delete it from GitLab should be opened.
+If it is not possible to wait for a GitLab UI update (generally one day), add the class to [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) following the same naming conventions documented in GitLab UI. A follow-up issue to backport the class to GitLab UI and delete it from GitLab should be opened.
#### When should I create component classes?
diff --git a/doc/development/feature_categorization/index.md b/doc/development/feature_categorization/index.md
index 2f0f8101b53..20325facc75 100644
--- a/doc/development/feature_categorization/index.md
+++ b/doc/development/feature_categorization/index.md
@@ -72,6 +72,11 @@ class SomeCrossCuttingConcernWorker
end
```
+When possible, workers marked as "not owned" use their caller's
+category (worker or HTTP endpoint) in metrics and logs.
+For instance, `ReactiveCachingWorker` can have multiple feature
+categories in metrics and logs.
+
## Rails controllers
Specifying feature categories on controller actions can be done using
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 1962d5262ce..987ff7c9fe8 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -21,7 +21,7 @@ All newly-introduced feature flags should be [disabled by default](https://about
NOTE:
This document is the subject of continued work as part of an epic to [improve internal usage of Feature Flags](https://gitlab.com/groups/gitlab-org/-/epics/3551). Raise any suggestions as new issues and attach them to the epic.
-For an [overview of the feature flag lifecycle](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#feature-flag-lifecycle), or if you need help deciding [if you should use a feature flag](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) or not, please see the [feature flag lifecycle](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle) handbook page.
+For an [overview of the feature flag lifecycle](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#feature-flag-lifecycle), or if you need help deciding [if you should use a feature flag](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags) or not, please see the [feature flag lifecycle](https://about.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/) handbook page.
## When to use feature flags
diff --git a/doc/development/file_storage.md b/doc/development/file_storage.md
index 71fc81a6ea3..d161206f44d 100644
--- a/doc/development/file_storage.md
+++ b/doc/development/file_storage.md
@@ -28,6 +28,8 @@ There are many places where file uploading is used, according to contexts:
- LFS Objects
- Merge request diffs
- Design Management design thumbnails
+- Topic
+ - Topic avatars
## Disk storage
@@ -42,6 +44,7 @@ they are still not 100% standardized. You can see them below:
| User avatars | yes | `uploads/-/system/user/avatar/:id/:filename` | `AvatarUploader` | User |
| User snippet attachments | yes | `uploads/-/system/personal_snippet/:id/:random_hex/:filename` | `PersonalFileUploader` | Snippet |
| Project avatars | yes | `uploads/-/system/project/avatar/:id/:filename` | `AvatarUploader` | Project |
+| Topic avatars | yes | `uploads/-/system/projects/topic/avatar/:id/:filename` | `AvatarUploader` | Topic |
| Issues/MR/Notes Markdown attachments | yes | `uploads/:project_path_with_namespace/:random_hex/:filename` | `FileUploader` | Project |
| Issues/MR/Notes Legacy Markdown attachments | no | `uploads/-/system/note/attachment/:id/:filename` | `AttachmentUploader` | Note |
| Design Management design thumbnails | yes | `uploads/-/system/design_management/action/image_v432x230/:id/:filename` | `DesignManagement::DesignV432x230Uploader` | DesignManagement::Action |
diff --git a/doc/development/filtering_by_label.md b/doc/development/filtering_by_label.md
index 2b9c7efc087..6f9811f7e05 100644
--- a/doc/development/filtering_by_label.md
+++ b/doc/development/filtering_by_label.md
@@ -82,6 +82,19 @@ AND (EXISTS (
While this worked without schema changes, and did improve readability somewhat,
it did not improve query performance.
+### Attempt A2: use label IDs in the WHERE EXISTS clause
+
+In [merge request #34503](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34503), we followed a similar approach to A1. But this time, we
+did a separate query to fetch the IDs of the labels used in the filter so that we avoid the `JOIN` in the `EXISTS` clause and filter directly by
+`label_links.label_id`. We also added a new index on `label_links` for the `target_id`, `label_id`, and `target_type` columns to speed up this query.
+
+Finding the label IDs wasn't straightforward because there could be multiple labels with the same title within a single root namespace. We solved
+this by grouping the label IDs by title and then using the array of IDs in the `EXISTS` clauses.
+
+This resulted in a significant performance improvement. However, this optimization could not be applied to the dashboard pages
+where we do not have a project or group context. We could not easily search for the label IDs here because that would mean searching across all
+projects and groups that the user has access to.
+
## Attempt B: Denormalize using an array column
Having [removed MySQL support in GitLab 12.1](https://about.gitlab.com/blog/2019/06/27/removing-mysql-support/),
@@ -159,9 +172,8 @@ However, at present, the disadvantages outweigh the advantages.
## Conclusion
-We have yet to find a method that is demonstrably better than the current
-method, when considering:
+Method A2 does not need denormalization and improves query performance significantly. It
+did not apply to all cases, but we were able to apply method A1 to the remaining cases, so
+the `GROUP BY` and `HAVING` clauses are removed in all scenarios.
-1. Query performance.
-1. Readability.
-1. Ease of maintaining schema consistency.
+This simplified the query and improved the performance in the most common cases.
diff --git a/doc/development/go_guide/go_upgrade.md b/doc/development/go_guide/go_upgrade.md
new file mode 100644
index 00000000000..75a093a55ac
--- /dev/null
+++ b/doc/development/go_guide/go_upgrade.md
@@ -0,0 +1,187 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Managing Go versions
+
+## Overview
+
+All Go binaries, with the exception of
+[GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) and [Security Projects](https://gitlab.com/gitlab-org/security-products), are built in
+projects managed by the [Distribution team](https://about.gitlab.com/handbook/product/categories/#distribution-group).
+
+The [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) project creates a
+single, monolithic operating system package containing all the binaries, while
+the [Cloud-Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG) project
+publishes a set of Docker images deployed and configured by Helm Charts or
+the GitLab Operator.
+
+Testing matrices for all projects using Go must include the version shipped
+by Distribution:
+
+- [Check the Go version shipping with Omnibus GitLab](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/blob/master/docker/VERSIONS#L6).
+- [Check the Go version shipping with Cloud-Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/cng/blob/master/ci_files/variables.yml#L12).
+
+## Supporting multiple Go versions
+
+Individual Go projects need to support multiple Go versions because:
+
+- When a new version of Go is released, we should start integrating it into the CI pipelines to verify compatibility with the new compiler.
+- We must support the [official Omnibus GitLab Go version](#updating-go-version), which may be behind the latest minor release.
+- When Omnibus switches Go version, we still may need to support the old one for security backports.
+
+These three requirements can be satisfied by keeping support for the [three latest minor versions of Go](https://golang.org/dl/).
+
+It is acceptable to drop support for the oldest Go version and support only the two latest releases
+if this is enough to support backports to the last three minor GitLab releases.
+
+For example, if we want to drop support for `go 1.11` in GitLab `12.10`, we need
+to verify which Go versions we are using in `12.9`, `12.8`, and `12.7`. We do not
+consider the active milestone, `12.10`, because a backport for `12.7` is required
+in case of a critical security release.
+
+- If both [Omnibus GitLab and Cloud-Native GitLab (CNG)](#updating-go-version) were using Go `1.12` in GitLab `12.7` and later,
+ then we can safely drop support for `1.11`.
+- If Omnibus GitLab or Cloud-Native GitLab (CNG) were using `1.11` in GitLab `12.7`, then we still need to keep
+ support for Go `1.11` for easier backporting of security fixes.
+
+## Updating Go version
+
+We should always:
+
+- Use the same Go version for Omnibus GitLab and Cloud Native GitLab.
+- Use a [supported version](https://golang.org/doc/devel/release#policy).
+- Use the most recent patch-level for that version to keep up with security fixes.
+
+Changing the version affects every project being compiled, so it's important to
+ensure that all projects have been updated to test against the new Go version
+before changing the package builders to use it. Despite [Go's compatibility promise](https://golang.org/doc/go1compat),
+changes between minor versions can expose bugs or cause problems in our projects.
+
+### Upgrade process
+
+The upgrade process involves several key steps:
+
+- [Track component updates and validation](#tracking-work).
+- [Track component integration for release](#tracking-work).
+- [Communication with stakeholders](#communication-plan).
+
+#### Tracking work
+
+Use [the product categories page](https://about.gitlab.com/handbook/product/categories/)
+if you need help finding the correct person or labels:
+
+1. Create the epic in the `gitlab-org` group:
+ - Title the epic `Update Go version to <VERSION_NUMBER>`.
+ - Ping the engineering managers responsible for [the projects listed below](#known-dependencies-using-go).
+
+1. Create an upgrade issue for each dependency in the [location indicated below](#known-dependencies-using-go)
+ titled `Support building with Go <VERSION_NUMBER>`. Add the proper label to each issue for easier triage.
+
+ NOTE:
+ The upgrade issues must include [upgrade validation items](#upgrade-validation)
+ in their definition of done. Creating a second [performance testing issue](#upgrade-validation)
+ titled `Validate operation and performance at scale with Go <VERSION_NUMBER>`
+ is strongly recommended to help with scheduling tasks and managing workloads.
+
+1. Schedule an update with the [GitLab Development Kit](https://gitlab.com/gitlab-org/gitlab-development-kit/-/issues):
+ - Title the issue `Support using Go version <VERSION_NUMBER>`.
+ - Set the issue as related to every issue created in the previous step.
+1. Schedule one issue per Secure Stage team and add the `devops::secure` label to each:
+ - [Static Analysis tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
+ - [Composition Analysis tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
+ - [Container Security tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
+1. Schedule builder updates with Distribution projects:
+ - Dependency and GitLab Development Kit issues created in previous steps should be set as blockers.
+   - Each issue should have the title `Support building with Go <VERSION_NUMBER>` and the description noted below:
+ - [Cloud-Native GitLab](https://gitlab.com/gitlab-org/charts/gitlab/-/issues)
+
+ ```plaintext
+ Update the `GO_VERSION` in `ci_files/variables.yml`.
+ ```
+
+ - [Omnibus GitLab Builder](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/issues)
+
+ ```plaintext
+ Update `GO_VERSION` in `docker/VERSIONS`.
+ ```
+
+ - [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues)
+
+ ```plaintext
+ Update `BUILDER_IMAGE_REVISION` in `.gitlab-ci.yml` to match tag from builder.
+ ```
+
+ NOTE:
+ If the component is not automatically upgraded for [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues)
+ and [Cloud Native GitLab](https://gitlab.com/gitlab-org/charts/gitlab/-/issues),
+   issues should be opened in their respective trackers titled `Update bundled version of COMPONENT_NAME`
+ and set as blocked by the component's upgrade issue.
+
+#### Known dependencies using Go
+
+| Component Name | Where to track work |
+|-------------------------------|---------------------|
+| [Alertmanager](https://github.com/prometheus/alertmanager) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| Docker Distribution Pruner | [Issue Tracker](https://gitlab.com/gitlab-org/docker-distribution-pruner) |
+| Gitaly | [Issue Tracker](https://gitlab.com/gitlab-org/gitaly/-/issues) |
+| GitLab Compose Kit | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-compose-kit/-/issues) |
+| GitLab Container Registry | [Issue Tracker](https://gitlab.com/gitlab-org/container-registry) |
+| GitLab Elasticsearch Indexer | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer/-/issues) |
+| GitLab Kubernetes Agent (KAS) | [Issue Tracker](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues) |
+| GitLab Pages | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-pages/-/issues) |
+| GitLab Quality Images | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-build-images/-/issues) |
+| GitLab Shell | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-shell/-/issues) |
+| GitLab Workhorse | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| Labkit | [Issue Tracker](https://gitlab.com/gitlab-org/labkit/-/issues) |
+| [Node Exporter](https://github.com/prometheus/node_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| [PgBouncer Exporter](https://github.com/prometheus-community/pgbouncer_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| [Postgres Exporter](https://github.com/prometheus-community/postgres_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| [Prometheus](https://github.com/prometheus/prometheus) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+| [Redis Exporter](https://github.com/oliver006/redis_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
+
+#### Communication plan
+
+Communication is required at several key points throughout the process and should
+be included in the relevant issues as part of the definition of done:
+
+1. Immediately after creating the epic, it should be posted to Slack. Community members must ask the pinged engineering managers for assistance with this step. The responsible GitLab team member should share a link to the epic in the following Slack channels:
+ - `#backend`
+ - `#development`
+1. Immediately after merging the GitLab Development Kit update, the same maintainer should add an entry to the engineering week-in-review sync and
+ announce the change in the following Slack channels:
+ - `#backend`
+ - `#development`
+1. Immediately upon merge of the updated Go versions in
+ [Cloud-Native GitLab](https://gitlab.com/gitlab-org/build/CNG) and
+   [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab), add the
+   change to the engineering week-in-review sync and announce in the following
+ Slack channels:
+ - `#backend`
+ - `#development`
+ - `#releases`
+
+#### Upgrade validation
+
+Upstream component maintainers must validate their Go-based projects using:
+
+- Established unit tests in the codebase.
+- Procedures established in [Merge Request Performance Guidelines](../merge_request_performance_guidelines.md).
+- Procedures established in [Performance, Reliability, and Availability guidelines](../code_review.md#performance-reliability-and-availability).
+
+Upstream component maintainers should consider validating their Go-based
+projects with:
+
+- Isolated component operation performance tests.
+
+  Integration tests are costly and should test inter-component
+ operational issues. Isolated component testing reduces mean time to
+ feedback on updates and decreases resource burn across the organization.
+
+- End-to-end test coverage in the GitLab Performance Test tool.
+- Integration validation through installation of fresh packages **_and_** upgrade from previous versions for:
+ - Single GitLab Node
+ - Reference Architecture Deployment
+ - Geo Deployment
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index 0ee73da48db..44f8924be04 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -23,6 +23,16 @@ experiences. Several projects were started with different standards and they
can still have specifics. They are described in their respective
`README.md` or `PROCESS.md` files.
+## Go language versions
+
+The Go upgrade documentation [provides an overview](go_upgrade.md#overview)
+of how GitLab manages and ships Go binary support.
+
+If a GitLab component requires a newer version of Go,
+follow the [upgrade process](go_upgrade.md#updating-go-version) to ensure no customer, team, or component is adversely impacted.
+
+Sometimes, individual projects must also [manage builds with multiple versions of Go](go_upgrade.md#supporting-multiple-go-versions).
+
## Dependency Management
Go uses a source-based strategy for dependency management. Dependencies are
@@ -417,70 +427,6 @@ Generated Docker images should have the program at their `Entrypoint` to create
portable commands. That way, anyone can run the image, and without parameters
it displays its help message (if `cli` has been used).
-## Distributing Go binaries
-
-With the exception of [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner),
-which publishes its own binaries, our Go binaries are created by projects
-managed by the [Distribution group](https://about.gitlab.com/handbook/product/categories/#distribution-group).
-
-The [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) project creates a
-single, monolithic operating system package containing all the binaries, while
-the [Cloud-Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG) project
-publishes a set of Docker images and Helm charts to glue them together.
-
-Both approaches use the same version of Go for all projects, so it's important
-to ensure all our Go-using projects have at least one Go version in common in
-their test matrices. You can check the version of Go currently being used by
-[Omnibus](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/blob/master/docker/Dockerfile_debian_10#L59),
-and the version being used for [CNG](https://gitlab.com/gitlab-org/build/cng/blob/master/ci_files/variables.yml#L12).
-
-### Updating Go version
-
-We should always use a [supported version](https://golang.org/doc/devel/release#policy)
-of Go, that is, one of the three most recent minor releases, and should always use
-the most recent patch-level for that version, as it may contain security fixes.
-
-Changing the version affects every project being compiled, so it's important to
-ensure that all projects have been updated to test against the new Go version
-before changing the package builders to use it. Despite [Go's compatibility promise](https://golang.org/doc/go1compat),
-changes between minor versions can expose bugs or cause problems in our projects.
-
-Once you've picked a new Go version to use, the steps to update Omnibus and CNG
-are:
-
-- [Create a merge request in the CNG project](https://gitlab.com/gitlab-org/build/CNG/-/edit/master/ci_files/variables.yml?branch_name=update-go-version),
- update the `GO_VERSION` in `ci_files/variables.yml`.
-- [Create a merge request in the `gitlab-omnibus-builder` project](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/edit/master/docker/VERSIONS?branch_name=update-go-version),
- update the `GO_VERSION` in `docker/VERSIONS`.
-- Tag a new release of `gitlab-omnibus-builder` containing the change.
-- [Create a merge request in the `omnibus-gitlab` project](https://gitlab.com/gitlab-org/omnibus-gitlab/edit/master/.gitlab-ci.yml?branch_name=update-gitlab-omnibus-builder-version),
- update the `BUILDER_IMAGE_REVISION` to match the newly-created tag.
-
-To reduce unnecessary differences between two distribution methods, Omnibus and
-CNG **should always use the same Go version**.
-
-### Supporting multiple Go versions
-
-Individual Golang-projects need to support multiple Go versions for the following reasons:
-
-1. When a new Go release is out, we should start integrating it into the CI pipelines to verify compatibility with the new compiler.
-1. We must support the [Omnibus official Go version](#updating-go-version), which may be behind the latest minor release.
-1. When Omnibus switches Go version, we still may need to support the old one for security backports.
-
-These 3 requirements may easily be satisfied by keeping support for the 3 latest minor versions of Go.
-
-It's ok to drop support for the oldest Go version and support only 2 latest releases,
-if this is enough to support backports to the last 3 GitLab minor releases.
-
-Example:
-
-In case we want to drop support for `go 1.11` in GitLab `12.10`, we need to verify which Go versions we are using in `12.9`, `12.8`, and `12.7`.
-
-We do not consider the active milestone, `12.10`, because a backport for `12.7` is required in case of a critical security release.
-
-1. If both [Omnibus and CNG](#updating-go-version) were using Go `1.12` in GitLab `12.7` and later, then we safely drop support for `1.11`.
-1. If Omnibus or CNG were using `1.11` in GitLab `12.7`, then we still need to keep support for Go `1.11` for easier backporting of security fixes.
-
## Secure Team standards and style guidelines
The following are some style guidelines that are specific to the Secure Team.
diff --git a/doc/development/graphql_guide/pagination.md b/doc/development/graphql_guide/pagination.md
index a37c47f1b11..1f40a605cfe 100644
--- a/doc/development/graphql_guide/pagination.md
+++ b/doc/development/graphql_guide/pagination.md
@@ -338,9 +338,9 @@ describe 'sorting and pagination' do
let(:ordered_issues) { issues.sort_by(&:weight) }
it_behaves_like 'sorted paginated query' do
- let(:sort_param) { :WEIGHT_ASC }
- let(:first_param) { 2 }
- let(:expected_results) { ordered_issues.map(&:iid) }
+ let(:sort_param) { :WEIGHT_ASC }
+ let(:first_param) { 2 }
+ let(:all_records) { ordered_issues.map(&:iid) }
end
end
end
diff --git a/doc/development/image_scaling.md b/doc/development/image_scaling.md
index 79687b66711..82ca8cf8e83 100644
--- a/doc/development/image_scaling.md
+++ b/doc/development/image_scaling.md
@@ -31,8 +31,8 @@ The hard-coded rules only permit:
Furthermore, configuration in Workhorse can lead to the image scaler rejecting a request if:
-- The image file is too large (controlled by [`max_filesize`](- we only rescale images that do not exceed a configured size in bytes (see [`max_filesize`](https://gitlab.com/gitlab-org/gitlab-workhorse/-/blob/67ab3a2985d2097392f93523ae1cffe0dbf01b31/config.toml.example#L17)))).
-- Too many image scalers are already running (controlled by [`max_scaler_procs`](https://gitlab.com/gitlab-org/gitlab-workhorse/-/blob/67ab3a2985d2097392f93523ae1cffe0dbf01b31/config.toml.example#L16)).
+- The image file is too large (controlled by [`max_filesize`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/config.toml.example#L22)); we only rescale images that do not exceed a configured size in bytes.
+- Too many image scalers are already running (controlled by [`max_scaler_procs`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/config.toml.example#L21)).
For instance, here are two different URLs that serve the GitLab project avatar both in its
original size and scaled down to 64 pixels. Only the second request will trigger the image scaler:
@@ -73,7 +73,7 @@ we simply follow the path we take to serve any ordinary upload.
### Workhorse
Assuming Rails decided the request to be valid, Workhorse will take over. Upon receiving the `send-scaled-image`
-instruction through the Rails response, a [special response injector](https://gitlab.com/gitlab-org/gitlab-workhorse/-/blob/master/internal/imageresizer/image_resizer.go)
+instruction through the Rails response, a [special response injector](https://gitlab.com/gitlab-org/gitlab/-/blob/master/workhorse/internal/imageresizer/image_resizer.go)
will be invoked that knows how to rescale images. The only inputs it requires are the location of the image
(a path if the image resides in block storage, or a URL to remote storage otherwise) and the desired width.
Workhorse will handle the location transparently so Rails does not need to be concerned with where the image
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index d021126c8eb..9872aa239dc 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -111,9 +111,9 @@ public folder (for example `/tmp/`) fixes the issue.
##### `Name can contain only letters, digits, emojis ...`
```plaintext
-Name can contain only letters, digits, emojis, '_', '.', dash, space. It must start with letter,
-digit, emoji or '_'. and Path can contain only letters, digits, '_', '-' and '.'. Cannot start
-with '-', end in '.git' or end in '.atom'
+Name can contain only letters, digits, emojis, '_', '.', '+', dashes, or spaces. It must start with a letter,
+digit, emoji, or '_', and Path can contain only letters, digits, '_', '-', or '.'. It cannot start
+with '-', end in '.git', or end in '.atom'.
```
The project name specified in `project_path` is not valid for one of the specified reasons.
@@ -216,6 +216,6 @@ This is due to a [n+1 calls limit being set for development setups](gitaly.md#to
Many of the tests also require a GitLab Personal Access Token. This is due to numerous endpoints themselves requiring authentication.
-[The official GitLab docs detail how to create this token](../user/profile/personal_access_tokens.md#create-a-personal-access-token). The tests require that the token is generated by an admin user and that it has the `API` and `read_repository` permissions.
+[The official GitLab docs detail how to create this token](../user/profile/personal_access_tokens.md#create-a-personal-access-token). The tests require that the token is generated by an administrator and that it has the `API` and `read_repository` permissions.
Details on how to use the Access Token with each type of test are found in their respective documentation.
diff --git a/doc/development/index.md b/doc/development/index.md
index e8e7369f6c5..3780c986367 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -18,7 +18,7 @@ the [Handbook](https://about.gitlab.com/handbook/).
For information on using GitLab to work on your own software projects, see the
[GitLab user documentation](../user/index.md).
-For information on working with the GitLab APIs, see the [API documentation](../api/README.md).
+For information on working with the GitLab APIs, see the [API documentation](../api/index.md).
For information about how to install, configure, update, and upgrade your own
GitLab instance, see the [administration documentation](../administration/index.md).
@@ -173,6 +173,7 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
- [Windows Development on GCP](windows.md)
- [FIPS compliance](fips_compliance.md)
- [`Gemfile` guidelines](gemfile.md)
+- [Ruby upgrade guidelines](ruby_upgrade.md)
### Things to be aware of
@@ -189,6 +190,7 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
- [Issuable-like Rails models](issuable-like-models.md)
- [Issue types vs first-class types](issue_types.md)
- [DeclarativePolicy framework](policies.md)
+- [Rails update guidelines](rails_update.md)
### Debugging
@@ -215,6 +217,7 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
- [How to dump production data to staging](db_dump.md)
- [Geo development](geo.md)
- [Redis guidelines](redis.md)
+ - [Adding a new Redis instance](redis/new_redis_instance.md)
- [Sidekiq guidelines](sidekiq_style_guide.md) for working with Sidekiq workers
- [Working with Gitaly](gitaly.md)
- [Elasticsearch integration docs](elasticsearch.md)
@@ -253,8 +256,6 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
## Performance guides
-- [Instrumentation](instrumentation.md) for Ruby code running in production
- environments.
- [Performance guidelines](performance.md) for writing code, benchmarks, and
certain patterns to avoid.
- [Caching guidelines](caching.md) for using caching in Rails under a GitLab environment.
@@ -334,6 +335,7 @@ See [database guidelines](database/index.md).
- [Features inside `.gitlab/`](features_inside_dot_gitlab.md)
- [Dashboards for stage groups](stage_group_dashboards.md)
- [Preventing transient bugs](transient/prevention-patterns.md)
+- [GitLab Application SLIs](application_slis/index.md)
## Other GitLab Development Kit (GDK) guides
diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md
deleted file mode 100644
index 83e7444bb1f..00000000000
--- a/doc/development/instrumentation.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-stage: Monitor
-group: Monitor
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
----
-
-# Instrumenting Ruby code **(FREE)**
-
-[GitLab Performance Monitoring](../administration/monitoring/performance/index.md) allows instrumenting of both methods and custom
-blocks of Ruby code. Method instrumentation is the primary form of
-instrumentation with block-based instrumentation only being used when we want to
-drill down to specific regions of code within a method.
-
-Please refer to [Product Intelligence](https://about.gitlab.com/handbook/product/product-intelligence-guide/) if you are tracking product usage patterns.
-
-## Instrumenting Methods
-
-Instrumenting methods is done by using the `Gitlab::Metrics::Instrumentation`
-module. This module offers a few different methods that can be used to
-instrument code:
-
-- `instrument_method`: Instruments a single class method.
-- `instrument_instance_method`: Instruments a single instance method.
-- `instrument_class_hierarchy`: Given a Class, this method recursively
- instruments all sub-classes (both class and instance methods).
-- `instrument_methods`: Instruments all public and private class methods of a
- Module.
-- `instrument_instance_methods`: Instruments all public and private instance
- methods of a Module.
-
-To remove the need for typing the full `Gitlab::Metrics::Instrumentation`
-namespace you can use the `configure` class method. This method simply yields
-the supplied block while passing `Gitlab::Metrics::Instrumentation` as its
-argument. An example:
-
-```ruby
-Gitlab::Metrics::Instrumentation.configure do |conf|
- conf.instrument_method(Foo, :bar)
- conf.instrument_method(Foo, :baz)
-end
-```
-
-Using this method is in general preferred over directly calling the various
-instrumentation methods.
-
-Method instrumentation should be added in the initializer
-`config/initializers/zz_metrics.rb`.
-
-### Examples
-
-Instrumenting a single method:
-
-```ruby
-Gitlab::Metrics::Instrumentation.configure do |conf|
- conf.instrument_method(User, :find_by)
-end
-```
-
-Instrumenting an entire class hierarchy:
-
-```ruby
-Gitlab::Metrics::Instrumentation.configure do |conf|
- conf.instrument_class_hierarchy(ActiveRecord::Base)
-end
-```
-
-Instrumenting all public class methods:
-
-```ruby
-Gitlab::Metrics::Instrumentation.configure do |conf|
- conf.instrument_methods(User)
-end
-```
-
-### Checking Instrumented Methods
-
-The easiest way to check if a method has been instrumented is to check its
-source location. For example:
-
-```ruby
-method = Banzai::Renderer.method(:render)
-
-method.source_location
-```
-
-If the source location points to `lib/gitlab/metrics/instrumentation.rb` you
-know the method has been instrumented.
-
-If you're using Pry you can use the `$` command to display the source code of a
-method (along with its source location), this is easier than running the above
-Ruby code. In case of the above snippet you'd run the following:
-
-- `$ Banzai::Renderer.render`
-
-This prints a result similar to:
-
-```plaintext
-From: /path/to/your/gitlab/lib/gitlab/metrics/instrumentation.rb @ line 148:
-Owner: #<Module:0x0055f0865c6d50>
-Visibility: public
-Number of lines: 21
-
-def #{name}(#{args_signature})
- if trans = Gitlab::Metrics::Instrumentation.transaction
- trans.measure_method(#{label.inspect}) { super }
- else
- super
- end
-end
-```
-
-## Instrumenting Ruby Blocks
-
-Measuring blocks of Ruby code is done by calling `Gitlab::Metrics.measure` and
-passing it a block. For example:
-
-```ruby
-Gitlab::Metrics.measure(:foo) do
- ...
-end
-```
-
-The block is executed and the execution time is stored as a set of fields in the
-currently running transaction. If no transaction is present the block is yielded
-without measuring anything.
-
-Three values are measured for a block:
-
-- The real time elapsed, stored in `NAME_real_time`.
-- The CPU time elapsed, stored in `NAME_cpu_time`.
-- The call count, stored in `NAME_call_count`.
-
-Both the real and CPU timings are measured in milliseconds.
-
-Multiple calls to the same block results in the final values being the sum
-of all individual values. Take this code for example:
-
-```ruby
-3.times do
- Gitlab::Metrics.measure(:sleep) do
- sleep 1
- end
-end
-```
-
-Here, the final value of `sleep_real_time` is `3`, and not `1`.
-
-## Tracking Custom Events
-
-Besides instrumenting code GitLab Performance Monitoring also supports tracking
-of custom events. This is primarily intended to be used for tracking business
-metrics such as the number of Git pushes, repository imports, and so on.
-
-To track a custom event simply call `Gitlab::Metrics.add_event` passing it an
-event name and a custom set of (optional) tags. For example:
-
-```ruby
-Gitlab::Metrics.add_event(:user_login, email: current_user.email)
-```
-
-Event names should be verbs such as `push_repository` and `remove_branch`.
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index d37ce29e353..34293845d17 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -534,15 +534,24 @@ affecting version `2.50.3-2+deb9u1` of Debian package `glib2.0`:
},
"version": "2.50.3-2+deb9u1",
"operating_system": "debian:9",
- "image": "index.docker.io/library/nginx:1.18"
+ "image": "index.docker.io/library/nginx:1.18",
+ "kubernetes_resource": {
+ "namespace": "production",
+ "kind": "Deployment",
+ "name": "nginx-ingress",
+ "container_name": "nginx",
+ "agent_id": "1"
+ }
}
```
-The affected package is found when scanning the image of the pod `index.docker.io/library/nginx:1.18`.
+The affected package is found when scanning a deployment using the `index.docker.io/library/nginx:1.18` image.
The location fingerprint of a Cluster Image Scanning vulnerability combines the
-`operating_system` and the package `name`, so these attributes are mandatory. The `image` is also
-mandatory. All other attributes are optional.
+`namespace`, `kind`, `name`, and `container_name` fields from the `kubernetes_resource`,
+as well as the package `name`, so these fields are required. The `image` field is also mandatory.
+The `cluster_id` and `agent_id` are mutually exclusive, and one of them must be present.
+All other fields are optional.
#### SAST
diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md
index 9e67227ec7f..45c94019c63 100644
--- a/doc/development/kubernetes.md
+++ b/doc/development/kubernetes.md
@@ -7,7 +7,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Kubernetes integration - development guidelines **(FREE)**
This document provides various guidelines when developing for the GitLab
-[Kubernetes integration](../user/project/clusters/index.md).
+[Kubernetes integration](../user/infrastructure/clusters/index.md).
## Development
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index d87b7bcb5af..cbf3c09b28b 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -377,16 +377,6 @@ comment. Instead of always rendering these kind of elements they should only be
rendered when actually needed. This ensures we don't spend time generating
Haml/HTML when it's not used.
-## Instrumenting New Code
-
-**Summary:** always add instrumentation for new classes, modules, and methods.
-
-Newly added classes, modules, and methods must be instrumented. This ensures
-we can track the performance of this code over time.
-
-For more information see [Instrumentation](instrumentation.md). This guide
-describes how to add instrumentation and where to add it.
-
## Use of Caching
**Summary:** cache data in memory or in Redis when it's needed multiple times in
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index ce564551fbf..e03b96a0e14 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -879,7 +879,7 @@ See the [text data type](database/strings_and_the_text_data_type.md) style guide
## Timestamp column type
By default, Rails uses the `timestamp` data type that stores timestamp data
-without timezone information. The `timestamp` data type is used by calling
+without time zone information. The `timestamp` data type is used by calling
either the `add_timestamps` or the `timestamps` method.
Also, Rails converts the `:datetime` data type to the `timestamp` one.
@@ -904,15 +904,15 @@ end
```
Instead of using these methods, one should use the following methods to store
-timestamps with timezones:
+timestamps with time zones:
- `add_timestamps_with_timezone`
- `timestamps_with_timezone`
- `datetime_with_timezone`
This ensures all timestamps have a time zone specified. This, in turn, means
-existing timestamps don't suddenly use a different timezone when the system's
-timezone changes. It also makes it very clear which timezone was used in the
+existing timestamps don't suddenly use a different time zone when the system's
+time zone changes. It also makes it very clear which time zone was used in the
first place.
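For context, a minimal sketch of a migration that uses one of these helpers. The table, column, and migration superclass are hypothetical, and it assumes GitLab's migration helpers make `timestamps_with_timezone` available on the table definition:
```ruby
# Hypothetical migration; names and Rails version are for illustration only.
class CreateExampleRecords < ActiveRecord::Migration[6.1]
  def change
    create_table :example_records do |t|
      t.text :name
      # Creates created_at/updated_at as `timestamp with time zone` columns.
      t.timestamps_with_timezone null: false
    end
  end
end
```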
## Storing JSON in database
diff --git a/doc/development/packages.md b/doc/development/packages.md
index 869a1755d8f..38c1b941eaf 100644
--- a/doc/development/packages.md
+++ b/doc/development/packages.md
@@ -30,9 +30,9 @@ The existing database model requires the following:
### API endpoints
-Package systems work with GitLab via API. For example `lib/api/npm_packages.rb`
+Package systems work with GitLab via API. For example `lib/api/npm_project_packages.rb`
implements API endpoints to work with npm clients. So, the first thing to do is to
-add a new `lib/api/your_name_packages.rb` file with API endpoints that are
+add a new `lib/api/your_name_project_packages.rb` file with API endpoints that are
necessary to make the package system client to work. Usually that means having
endpoints like:
@@ -48,7 +48,7 @@ GET https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
PUT https://gitlab.com/api/v4/projects/<your_project_id>/packages/npm/
```
-Group-level and instance-level endpoints are good to have but are optional.
+Group-level and instance-level endpoints should only be considered after the project-level endpoint is available in production.
#### Remote hierarchy
@@ -168,7 +168,7 @@ The implementation of the different Merge Requests varies between different pack
The MVC must support [Personal Access Tokens](../user/profile/personal_access_tokens.md) right from the start. We currently support two options for these tokens: OAuth and Basic Access.
-OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/npm_packages.rb).
+OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/npm_project_packages.rb).
[Basic Access authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication)
support is done by overriding a specific function in the API helpers, like
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index dd45091a31b..45982d6075b 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -6,8 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Pipelines for the GitLab project
-Pipelines for [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) and [`gitlab-org/gitlab-foss`](https://gitlab.com/gitlab-org/gitlab-foss) (as well as the
-`dev` instance's mirrors) are configured in the usual
+Pipelines for [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) (as well as the `dev` instance's mirror) are configured in the usual
[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml)
which itself includes files under
[`.gitlab/ci/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/.gitlab/ci)
@@ -17,29 +16,206 @@ We're striving to [dogfood](https://about.gitlab.com/handbook/engineering/#dogfo
GitLab [CI/CD features and best-practices](../ci/yaml/index.md)
as much as possible.
-## Overview
+## Minimal test jobs before a merge request is approved
-Pipelines for the GitLab project are created using the [`workflow:rules` keyword](../ci/yaml/index.md#workflow)
-feature of the GitLab CI/CD.
+**To reduce the pipeline cost and shorten the job duration, before a merge request is approved, the pipeline will run a minimal set of RSpec & Jest tests that are related to the merge request changes.**
-Pipelines are always created for the following scenarios:
+After a merge request has been approved, the pipeline contains the full RSpec & Jest tests. This ensures that all tests
+have been run before the merge request is merged.
-- `main` branch, including on schedules, pushes, merges, and so on.
-- Merge requests.
-- Tags.
-- Stable, `auto-deploy`, and security branches.
+### RSpec minimal jobs
-Pipeline creation is also affected by the following CI/CD variables:
+#### Determining related RSpec test files in a merge request
-- If `$FORCE_GITLAB_CI` is set, pipelines are created.
-- If `$GITLAB_INTERNAL` is not set, pipelines are not created.
+To identify the minimal set of tests needed, we use the [`test_file_finder` gem](https://gitlab.com/gitlab-org/ci-cd/test_file_finder), with two strategies:
-No pipeline is created in any other cases (for example, when pushing a branch with no
-MR for it).
+- dynamic mapping from test coverage tracing (generated via the [Crystalball gem](https://github.com/toptal/crystalball))
+ ([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/47d507c93779675d73a05002e2ec9c3c467cd698/tooling/bin/find_tests#L15))
+- static mapping maintained in the [`tests.yml` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tests.yml) for special cases that cannot
+ be mapped via coverage tracing ([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/47d507c93779675d73a05002e2ec9c3c467cd698/tooling/bin/find_tests#L12))
-The source of truth for these workflow rules is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
+The test mapping maps each source file to a list of test files that depend on it.
+
+In the `detect-tests` job, we use this mapping to identify the minimal tests needed for the current merge request.
+
+#### Exceptional cases
+
+In addition, there are a few circumstances where we would always run the full RSpec tests:
+
+- when the `pipeline:run-all-rspec` label is set on the merge request
+- when the merge request is created by an automation (e.g. Gitaly update or MR targeting a stable branch)
+- when any CI config file is changed (i.e. `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
+
+### Jest minimal jobs
+
+#### Determining related Jest test files in a merge request
+
+To identify the minimal set of tests needed, we pass a list of all the changed files into `jest` using the [`--findRelatedTests`](https://jestjs.io/docs/cli#--findrelatedtests-spaceseparatedlistofsourcefiles) option.
+In this mode, `jest` resolves all the dependencies of the changed files, which includes test files that have these files in their dependency chain.
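+
+For example, the equivalent local invocation looks like the following sketch (the file path is illustrative; in CI the list of changed files is passed in automatically):
+
+```shell
+yarn jest --findRelatedTests app/assets/javascripts/lib/utils/url_utility.js
+```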
+
+#### Exceptional cases
+
+In addition, there are a few circumstances where we would always run the full Jest tests:
+
+- when the `pipeline:run-all-rspec` label is set on the merge request
+- when the merge request is created by an automation (e.g. Gitaly update or MR targeting a stable branch)
+- when any CI config file is changed (i.e. `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
+- when any frontend "core" file is changed (i.e. `package.json`, `yarn.lock`, `babel.config.js`, `jest.config.*.js`, `config/helpers/**/*.js`)
+- when any vendored JavaScript file is changed (i.e. `vendor/assets/javascripts/**/*`)
+- when any backend file is changed ([see the patterns list for details](https://gitlab.com/gitlab-org/gitlab/-/blob/3616946936c1adbd9e754c1bd06f86ba670796d8/.gitlab/ci/rules.gitlab-ci.yml#L205-216))
+
+## Fail-fast job in merge request pipelines
+
+To provide faster feedback when a merge request breaks existing tests, we are experimenting with a
+fail-fast mechanism.
+
+An `rspec fail-fast` job is added in parallel to all other `rspec` jobs in a merge
+request pipeline. This job runs the tests that are directly related to the changes
+in the merge request.
+
+If any of these tests fail, the `rspec fail-fast` job fails, triggering a
+`fail-pipeline-early` job to run. The `fail-pipeline-early` job:
+
+- Cancels the currently running pipeline and all in-progress jobs.
+- Sets pipeline to have status `failed`.
+
+For example:
+
+```mermaid
+graph LR
+ subgraph "prepare stage";
+ A["detect-tests"]
+ end
+
+ subgraph "test stage";
+ B["jest"];
+ C["rspec migration"];
+ D["rspec unit"];
+ E["rspec integration"];
+ F["rspec system"];
+ G["rspec fail-fast"];
+ end
+
+ subgraph "post-test stage";
+ Z["fail-pipeline-early"];
+ end
+
+ A --"artifact: list of test files"--> G
+ G --"on failure"--> Z
+```
+
+The `rspec fail-fast` job is a no-op if there are more than 10 test files related to the
+merge request. This prevents `rspec fail-fast` duration from exceeding the average
+`rspec` job duration and defeating its purpose.
+
+This number can be overridden by setting a CI/CD variable named `RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD`.
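+
+For example, a minimal sketch of overriding the threshold through a pipeline-level variable (the value `20` is arbitrary and only for illustration):
+
+```yaml
+variables:
+  RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD: "20"
+```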
+
+## Test jobs
+
+We have dedicated jobs for each [testing level](testing_guide/testing_levels.md) and each job runs depending on the
+changes made in your merge request.
+If you want to force all the RSpec jobs to run regardless of your changes, you can add the `pipeline:run-all-rspec` label to the merge request.
+
+WARNING:
+Forcing all jobs to run on a docs-only MR means the prerequisite jobs are missing, which leads to errors.
+
+### Test suite parallelization
+
+Our current RSpec test parallelization setup is as follows:
+
+1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
+ `knapsack/report-master.json` file:
+ - The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
+ (for now it's the 2-hourly scheduled master pipeline); if it's not available, we initialize the file with `{}`.
+1. Each `[rspec|rspec-ee] [unit|integration|system|geo] n m` job is run with
+ `knapsack rspec` and should have an evenly distributed share of tests:
+ - It works because the jobs have access to the `knapsack/report-master.json`
+ since the "artifacts from all previous stages are passed by default".
+ - the jobs set their own report path to
+ `"knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"`.
+ - if knapsack is doing its job, test files that are run should be listed under
+ `Report specs`, not under `Leftover specs`.
+1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
+ [the canonical project](https://gitlab.com/gitlab-org/gitlab)) takes all the
+ `knapsack/rspec*_pg_*.json` files and merges them all together into a single
+ `knapsack/report-master.json` file that is saved as an artifact.
+
+After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
+
+### Monitoring
+
+The GitLab test suite is [monitored](performance.md#rspec-profiling) for the `main` branch, and any branch
+that includes `rspec-profile` in its name.
+
+### Logging
+
+- Rails logging to `log/test.log` is disabled by default in CI [for
+ performance reasons](https://jtway.co/speed-up-your-rails-test-suite-by-6-in-1-line-13fedb869ec4). To override this setting, provide the
+ `RAILS_ENABLE_TEST_LOG` environment variable.
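+
+For example, a minimal local sketch of enabling the test log; the spec path is illustrative, and the assumption here is that any value enables it, with `1` used for clarity:
+
+```shell
+# Assumption: any value enables the log; `1` is only for illustration.
+RAILS_ENABLE_TEST_LOG=1 bundle exec rspec spec/models/user_spec.rb
+```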
+
+## Review app jobs
+
+Consult the [Review Apps](testing_guide/review_apps.md) dedicated page for more information.
+
+## As-if-FOSS jobs
+
+The `* as-if-foss` jobs run the GitLab test suite "as if FOSS", meaning as if the jobs would run in the context
+of the `gitlab-org/gitlab-foss` project. These jobs are only created in the following cases:
+
+- when the `pipeline:run-as-if-foss` label is set on the merge request
+- when the merge request is created in the `gitlab-org/security/gitlab` project
+- when any CI config file is changed (i.e. `.gitlab-ci.yml` or `.gitlab/ci/**/*`)
+
+The `* as-if-foss` jobs are run in addition to the regular EE-context jobs. They have the `FOSS_ONLY='1'` variable
+set and get the `ee/` folder removed before the tests start running.
+
+The intent is to ensure that a change doesn't introduce a failure after the `gitlab-org/gitlab` project is synced to
+the `gitlab-org/gitlab-foss` project.
+
+## As-if-JH jobs
+
+The `* as-if-jh` jobs run the GitLab test suite "as if JiHu", meaning as if the jobs would run in the context
+of [the `gitlab-jh/gitlab` project](jh_features_review.md). These jobs are only created in the following cases:
+
+- when the `pipeline:run-as-if-jh` label is set on the merge request
+- when the `pipeline:run-all-rspec` label is set on the merge request
+- when any code or backstage file is changed
+- when any startup CSS file is changed
+
+The `* as-if-jh` jobs are run in addition to the regular EE-context jobs. The `jh/` folder is added before the tests start running.
+
+The intent is to ensure that a change doesn't introduce a failure after the `gitlab-org/gitlab` project is synced to
+the `gitlab-jh/gitlab` project.
+
+## PostgreSQL versions testing
+
+Our test suite runs against PG12 as GitLab.com runs on PG12 and
+[Omnibus defaults to PG12 for new installs and upgrades](../administration/package_information/postgresql_versions.md).
+
+We do run our test suite against PG11 and PG13 on nightly scheduled pipelines.
+
+We also run our test suite against PG11 upon specific database library changes in MRs and `main` pipelines (with the `rspec db-library-code pg11` job).
+
+### Current versions testing
+
+| Where? | PostgreSQL version |
+| ------ | ------------------ |
+| MRs | 12, 11 for DB library changes |
+| `main` (non-scheduled pipelines) | 12, 11 for DB library changes |
+| 2-hourly scheduled pipelines | 12, 11 for DB library changes |
+| `nightly` scheduled pipelines | 12, 11, 13 |
+
+### Long-term plan
+
+We follow the [PostgreSQL versions shipped with Omnibus GitLab](../administration/package_information/postgresql_versions.md):
+
+| PostgreSQL version | 14.1 (July 2021) | 14.2 (August 2021) | 14.3 (September 2021) | 14.4 (October 2021) | 14.5 (November 2021) | 14.6 (December 2021) |
+| -------------------| ---------------------- | ---------------------- | ---------------------- | ---------------------- | ---------------------- | ---------------------- |
+| PG12 | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` |
+| PG11 | `nightly` | `nightly` | `nightly` | `nightly` | `nightly` | `nightly` |
+| PG13 | `nightly` | `nightly` | `nightly` | `nightly` | `nightly` | `nightly` |
-### Pipelines for Merge Requests
+## Pipelines types for merge requests
In general, pipelines for an MR fall into one or more of the following types,
depending on the changes made in the MR:
@@ -53,7 +229,7 @@ We use the [`rules:`](../ci/yaml/index.md#rules) and [`needs:`](../ci/yaml/index
to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would
have a pipelines that include jobs from multiple types (for example, a combination of docs-only and code-only pipelines).
-#### Documentation only MR pipeline
+### Documentation only MR pipeline
[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/250546928):
@@ -71,7 +247,7 @@ graph LR
end
```
-#### Code-only MR pipeline
+### Code-only MR pipeline
[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/136295694)
@@ -102,7 +278,6 @@ graph RL;
1-16["brakeman-sast"];
1-17["eslint-sast"];
1-18["kubesec-sast"];
- 1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
1-21["static-analysis (14 minutes)"];
click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
@@ -123,7 +298,7 @@ graph RL;
2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
end
- 2_2-2["rspec frontend_fixture/rspec-ee frontend_fixture (7 minutes)"];
+ 2_2-2["rspec-all frontend_fixture (7 minutes)"];
class 2_2-2 criticalPath;
click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
2_2-4["memory-on-boot (3.5 minutes)"];
@@ -155,7 +330,7 @@ graph RL;
3_1-1["jest (14.5 minutes)"];
class 3_1-1 criticalPath;
click 3_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914204&udv=0"
- subgraph "Needs `rspec frontend_fixture/rspec-ee frontend_fixture`";
+ subgraph "Needs `rspec-all frontend_fixture`";
3_1-1 --> 2_2-2;
end
@@ -173,7 +348,7 @@ graph RL;
end
```
-#### Frontend-only MR pipeline
+### Frontend-only MR pipeline
[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/134661039):
@@ -204,7 +379,6 @@ graph RL;
1-16["brakeman-sast"];
1-17["eslint-sast"];
1-18["kubesec-sast"];
- 1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
1-21["static-analysis (14 minutes)"];
click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
@@ -226,7 +400,7 @@ graph RL;
2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
end
- 2_2-2["rspec frontend_fixture/rspec-ee frontend_fixture (7 minutes)"];
+ 2_2-2["rspec-all frontend_fixture (7 minutes)"];
class 2_2-2 criticalPath;
click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
2_2-4["memory-on-boot (3.5 minutes)"];
@@ -266,7 +440,7 @@ graph RL;
3_1-1["jest (14.5 minutes)"];
class 3_1-1 criticalPath;
click 3_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914204&udv=0"
- subgraph "Needs `rspec frontend_fixture/rspec-ee frontend_fixture`";
+ subgraph "Needs `rspec-all frontend_fixture`";
3_1-1 --> 2_2-2;
end
@@ -299,7 +473,7 @@ graph RL;
end
```
-#### QA-only MR pipeline
+### QA-only MR pipeline
[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/134645109):
@@ -330,7 +504,6 @@ graph RL;
1-16["brakeman-sast"];
1-17["eslint-sast"];
1-18["kubesec-sast"];
- 1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
1-21["static-analysis (14 minutes)"];
click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
@@ -358,261 +531,53 @@ graph RL;
end
```
-### Fail-fast pipeline in Merge Requests
-
-To provide faster feedback when a Merge Request breaks existing tests, we are experimenting with a
-fail-fast mechanism.
-
-An `rspec fail-fast` job is added in parallel to all other `rspec` jobs in a Merge
-Request pipeline. This job runs the tests that are directly related to the changes
-in the Merge Request.
-
-If any of these tests fail, the `rspec fail-fast` job fails, triggering a
-`fail-pipeline-early` job to run. The `fail-pipeline-early` job:
-
-- Cancels the currently running pipeline and all in-progress jobs.
-- Sets pipeline to have status `failed`.
-
-For example:
-
-```mermaid
-graph LR
- subgraph "prepare stage";
- A["detect-tests"]
- end
-
- subgraph "test stage";
- B["jest"];
- C["rspec migration"];
- D["rspec unit"];
- E["rspec integration"];
- F["rspec system"];
- G["rspec fail-fast"];
- end
-
- subgraph "post-test stage";
- Z["fail-pipeline-early"];
- end
-
- A --"artifact: list of test files"--> G
- G --"on failure"--> Z
-```
-
-A Merge Request author may choose to opt-out of the fail fast mechanism by doing one of the following:
-
-- Adding the `pipeline:skip-rspec-fail-fast` label to the merge request
-- Starting the `dont-interrupt-me` job found in the `sync` stage of a Merge Request pipeline.
-
-The `rspec fail-fast` is a no-op if there are more than 10 test files related to the
-Merge Request. This prevents `rspec fail-fast` duration from exceeding the average
-`rspec` job duration and defeating its purpose.
-
-This number can be overridden by setting a CI/CD variable named `RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD`.
-
-NOTE:
-This experiment is only enabled when the CI/CD variable `RSPEC_FAIL_FAST_ENABLED=true` is set.
-
-#### Determining related test files in a Merge Request
-
-The test files related to the Merge Request are determined using the [`test_file_finder`](https://gitlab.com/gitlab-org/ci-cd/test_file_finder) gem.
-We are using a custom mapping between source file to test files, maintained in the `tests.yml` file.
-
-### RSpec minimal jobs
-
-Before a merge request is approved, the pipeline will run a minimal set of RSpec tests that are related to the merge request changes.
-This is to reduce the pipeline cost and shorten the job duration.
-
-To identify the minimal set of tests needed, we use [Crystalball gem](https://github.com/toptal/crystalball) to create a test mapping.
-The test mapping contains a map of each source files to a list of test files which is dependent of the source file.
-This mapping is currently generated using a combination of test coverage tracing and a static mapping.
-In the `detect-tests` job, we use this mapping to identify the minimal tests needed for the current Merge Request.
-
-After a merge request has been approved, the pipeline would contain the full RSpec tests. This will ensure that all tests
-have been run before a merge request is merged.
-
-### Jest minimal jobs
-
-Before a merge request is approved, the pipeline will run a minimal set of Jest tests that are related to the merge request changes.
-This is to reduce the pipeline cost and shorten the job duration.
-
-To identify the minimal set of tests needed, we pass a list of all the changed files into `jest` using the [`--findRelatedTests`](https://jestjs.io/docs/cli#--findrelatedtests-spaceseparatedlistofsourcefiles) option.
-In this mode, `jest` would resolve all the dependencies of related to the changed files, which include test files that have these files in the dependency chain.
-
-After a merge request has been approved, the pipeline would contain the full Jest tests. This will ensure that all tests
-have been run before a merge request is merged.
-
-In addition, there are a few circumstances where we would always run the full Jest tests:
-
-- when `package.json`, `yarn.lock`, `jest` config changes
-- when vendored JavaScript is changed
-- when `.graphql` files are changed
-
-### PostgreSQL versions testing
-
-Our test suite runs against PG12 as GitLab.com runs on PG12 and
-[Omnibus defaults to PG12 for new installs and upgrades](../administration/package_information/postgresql_versions.md),
-Our test suite is currently running against PG11, since GitLab.com still runs on PG11.
-
-We do run our test suite against PG11 on nightly scheduled pipelines as well as upon specific
-database library changes in MRs and `main` pipelines (with the `rspec db-library-code pg11` job).
-
-#### Current versions testing
-
-| Where? | PostgreSQL version |
-| ------ | ------------------ |
-| MRs | 12, 11 for DB library changes |
-| `main` (non-scheduled pipelines) | 12, 11 for DB library changes |
-| 2-hourly scheduled pipelines | 12, 11 for DB library changes |
-| `nightly` scheduled pipelines | 12, 11 |
-
-#### Long-term plan
-
-We follow the [PostgreSQL versions shipped with Omnibus GitLab](../administration/package_information/postgresql_versions.md):
-
-| PostgreSQL version | 13.11 (April 2021) | 13.12 (May 2021) | 14.0 (June 2021?) |
-| -------------------| ---------------------- | ---------------------- | ---------------------- |
-| PG12 | `nightly` | MRs/`2-hour`/`nightly` | MRs/`2-hour`/`nightly` |
-| PG11 | MRs/`2-hour`/`nightly` | `nightly` | `nightly` |
-
-### Test jobs
-
-Consult [GitLab tests in the Continuous Integration (CI) context](testing_guide/ci.md)
-for more information.
-
-We have dedicated jobs for each [testing level](testing_guide/testing_levels.md) and each job runs depending on the
-changes made in your merge request.
-If you want to force all the RSpec jobs to run regardless of your changes, you can add the `pipeline:run-all-rspec` label to the merge request.
-
-> Forcing all jobs on docs only related MRs would not have the prerequisite jobs and would lead to errors
-
-### Review app jobs
-
-Consult the [Review Apps](testing_guide/review_apps.md) dedicated page for more information.
-
-### As-if-FOSS jobs
-
-The `* as-if-foss` jobs allows the GitLab test suite "as-if-FOSS", meaning as if the jobs would run in the context
-of the `gitlab-org/gitlab-foss` project. These jobs are only created in the following cases:
-
-- `gitlab-org/security/gitlab` merge requests.
-- Merge requests with the `pipeline:run-as-if-foss` label
-- Merge requests that changes the CI configuration.
-
-The `* as-if-foss` jobs are run in addition to the regular EE-context jobs. They have the `FOSS_ONLY='1'` variable
-set and get their EE-specific folders removed before the tests start running.
-
-The intent is to ensure that a change doesn't introduce a failure after the `gitlab-org/gitlab` project is synced to
-the `gitlab-org/gitlab-foss` project.
-
-## Performance
-
-### Interruptible pipelines
-
-By default, all jobs are [interruptible](../ci/yaml/index.md#interruptible), except the
-`dont-interrupt-me` job which runs automatically on `main`, and is `manual`
-otherwise.
-
-If you want a running pipeline to finish even if you push new commits to a merge
-request, be sure to start the `dont-interrupt-me` job before pushing.
-
-### Caching strategy
-
-1. All jobs must only pull caches by default.
-1. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
-1. We currently have several different cache definitions defined in
- [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml),
- with fixed keys:
- - `.setup-test-env-cache`
- - `.rails-cache`
- - `.static-analysis-cache`
- - `.coverage-cache`
- - `.danger-review-cache`
- - `.qa-cache`
- - `.yarn-cache`
- - `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
-1. These cache definitions are composed of [multiple atomic caches](../ci/caching/index.md#use-multiple-caches).
-1. Only the following jobs, running in 2-hourly scheduled pipelines, are pushing (that is, updating) to the caches:
- - `update-setup-test-env-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- - `update-gitaly-binaries-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- - `update-static-analysis-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- - `update-qa-cache`, defined in [`.gitlab/ci/qa.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/qa.gitlab-ci.yml).
- - `update-assets-compile-production-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- - `update-assets-compile-test-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- - `update-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- - `update-storybook-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
-1. These jobs can also be forced to run in merge requests with the `pipeline:update-cache` label (this can be useful to warm the caches in a MR that updates the cache keys).
-
-### Artifacts strategy
+## CI configuration internals
-We limit the artifacts that are saved and retrieved by jobs to the minimum in order to reduce the upload/download time and costs, as well as the artifacts storage.
+### Workflow rules
-### Pre-clone step
+Pipelines for the GitLab project are created using the [`workflow:rules` keyword](../ci/yaml/index.md#workflow)
+feature of GitLab CI/CD.
-The `gitlab-org/gitlab` project on GitLab.com uses a [pre-clone step](https://gitlab.com/gitlab-org/gitlab/-/issues/39134)
-to seed the project with a recent archive of the repository. This is done for
-several reasons:
+Pipelines are always created for the following scenarios:
-- It speeds up builds because a 800 MB download only takes seconds, as opposed to a full Git clone.
-- It significantly reduces load on the file server, as smaller deltas mean less time spent in `git pack-objects`.
+- `main` branch, including on schedules, pushes, merges, and so on.
+- Merge requests.
+- Tags.
+- Stable, `auto-deploy`, and security branches.
-The pre-clone step works by using the `CI_PRE_CLONE_SCRIPT` variable
-[defined by GitLab.com shared runners](../ci/runners/build_cloud/linux_build_cloud.md#pre-clone-script).
+Pipeline creation is also affected by the following CI/CD variables:
-The `CI_PRE_CLONE_SCRIPT` is currently defined as a project CI/CD variable:
+- If `$FORCE_GITLAB_CI` is set, pipelines are created.
+- If `$GITLAB_INTERNAL` is not set, pipelines are not created.
-```shell
-(
- echo "Downloading archived master..."
- wget -O /tmp/gitlab.tar.gz https://storage.googleapis.com/gitlab-ci-git-repo-cache/project-278964/gitlab-master-shallow.tar.gz
+No pipeline is created in any other cases (for example, when pushing a branch with no
+MR for it).
- if [ ! -f /tmp/gitlab.tar.gz ]; then
- echo "Repository cache not available, cloning a new directory..."
- exit
- fi
+The source of truth for these workflow rules is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
- rm -rf $CI_PROJECT_DIR
- echo "Extracting tarball into $CI_PROJECT_DIR..."
- mkdir -p $CI_PROJECT_DIR
- cd $CI_PROJECT_DIR
- tar xzf /tmp/gitlab.tar.gz
- rm -f /tmp/gitlab.tar.gz
- chmod a+w $CI_PROJECT_DIR
-)
-```
+### Default image
-The first step of the script downloads `gitlab-master.tar.gz` from
-Google Cloud Storage. There is a [GitLab CI job named `cache-repo`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/cache-repo.gitlab-ci.yml#L5)
-that is responsible for keeping that archive up-to-date. Every two hours
-on a scheduled pipeline, it does the following:
+The default image is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
-1. Creates a fresh clone of the `gitlab-org/gitlab` repository on GitLab.com.
-1. Saves the data as a `.tar.gz`.
-1. Uploads it into the Google Cloud Storage bucket.
+<!-- vale gitlab.Spelling = NO -->
-When a CI job runs with this configuration, the output looks something like this:
+It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.
-```shell
-$ eval "$CI_PRE_CLONE_SCRIPT"
-Downloading archived master...
-Extracting tarball into /builds/group/project...
-Fetching changes...
-Reinitialized existing Git repository in /builds/group/project/.git/
-```
+<!-- vale gitlab.Spelling = YES -->
-Note that the `Reinitialized existing Git repository` message shows that
-the pre-clone step worked. The runner runs `git init`, which
-overwrites the Git configuration with the appropriate settings to fetch
-from the GitLab repository.
+The images used in our pipelines are configured in the
+[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
+project, which is push-mirrored to [`gitlab/gitlab-build-images`](https://dev.gitlab.org/gitlab/gitlab-build-images)
+for redundancy.
-`CI_REPO_CACHE_CREDENTIALS` contains the Google Cloud service account
-JSON for uploading to the `gitlab-ci-git-repo-cache` bucket. (If you're a
-GitLab Team Member, find credentials in the
-[GitLab shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
+The current version of the build images can be found in the
+["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
-Note that this bucket should be located in the same continent as the
-runner, or [you can incur network egress charges](https://cloud.google.com/storage/pricing).
+### Default variables
-## CI configuration internals
+In addition to the [predefined CI/CD variables](../ci/variables/predefined_variables.md),
+each pipeline includes default variables defined in
+[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
### Stages
@@ -644,24 +609,6 @@ that is deployed in stage `review`.
[an issue with the deployment](https://gitlab.com/gitlab-org/gitlab/-/issues/233458)).
- `notify`: This stage includes jobs that notify various failures to Slack.
-### Default image
-
-The default image is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
-
-<!-- vale gitlab.Spelling = NO -->
-
-It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.
-
-<!-- vale gitlab.Spelling = YES -->
-
-The images used in our pipelines are configured in the
-[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
-project, which is push-mirrored to [`gitlab/gitlab-build-images`](https://dev.gitlab.org/gitlab/gitlab-build-images)
-for redundancy.
-
-The current version of the build images can be found in the
-["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
-
### Dependency Proxy
Some of the jobs are using images from Docker Hub, where we also use
@@ -681,12 +628,6 @@ Projects in the `gitlab-org` group pull from the Dependency Proxy, while
forks that reside on any other personal namespaces or groups fall back to
Docker Hub unless `${GITLAB_DEPENDENCY_PROXY}` is also defined there.
-### Default variables
-
-In addition to the [predefined CI/CD variables](../ci/variables/predefined_variables.md),
-each pipeline includes default variables defined in
-[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
-
### Common job definitions
Most of the jobs [extend from a few CI definitions](../ci/yaml/index.md#extends)
@@ -756,8 +697,6 @@ and included in `rules` definitions via [YAML anchors](../ci/yaml/index.md#ancho
| `if-dot-com-gitlab-org-and-security-tag` | Limit jobs creation to tags for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
| `if-cache-credentials-schedule` | Limits jobs to scheduled pipelines with the `$CI_REPO_CACHE_CREDENTIALS` variable set. | |
-| `if-rspec-fail-fast-disabled` | Limits jobs to pipelines with `$RSPEC_FAIL_FAST_ENABLED` CI/CD variable not set to `"true"`. | |
-| `if-rspec-fail-fast-skipped` | Matches if the pipeline is for a merge request and the MR has label ~"pipeline:skip-rspec-fail-fast". | |
| `if-security-pipeline-merge-result` | Matches if the pipeline is for a security merge request triggered by `@gitlab-release-tools-bot`. | |
<!-- vale gitlab.Substitutions = YES -->
@@ -783,6 +722,114 @@ and included in `rules` definitions via [YAML anchors](../ci/yaml/index.md#ancho
| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
+## Performance
+
+### Interruptible pipelines
+
+By default, all jobs are [interruptible](../ci/yaml/index.md#interruptible), except the
+`dont-interrupt-me` job which runs automatically on `main`, and is `manual`
+otherwise.
+
+If you want a running pipeline to finish even if you push new commits to a merge
+request, be sure to start the `dont-interrupt-me` job before pushing.
+
+### Caching strategy
+
+1. All jobs must only pull caches by default.
+1. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
+1. We currently have several different cache definitions defined in
+ [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/global.gitlab-ci.yml),
+ with fixed keys:
+ - `.setup-test-env-cache`
+ - `.rails-cache`
+ - `.static-analysis-cache`
+ - `.coverage-cache`
+ - `.danger-review-cache`
+ - `.qa-cache`
+ - `.yarn-cache`
+ - `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
+1. These cache definitions are composed of [multiple atomic caches](../ci/caching/index.md#use-multiple-caches).
+1. Only the following jobs, running in 2-hourly scheduled pipelines, are pushing (that is, updating) to the caches:
+ - `update-setup-test-env-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-gitaly-binaries-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-static-analysis-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-qa-cache`, defined in [`.gitlab/ci/qa.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/qa.gitlab-ci.yml).
+ - `update-assets-compile-production-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+ - `update-assets-compile-test-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+ - `update-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+ - `update-storybook-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+1. These jobs can also be forced to run in merge requests with the `pipeline:update-cache` label (this can be useful to warm the caches in a MR that updates the cache keys).
+
+### Artifacts strategy
+
+We limit the artifacts that are saved and retrieved by jobs to the minimum, to reduce upload and download times, costs, and artifact storage.
+
+### Pre-clone step
+
+The `gitlab-org/gitlab` project on GitLab.com uses a [pre-clone step](https://gitlab.com/gitlab-org/gitlab/-/issues/39134)
+to seed the project with a recent archive of the repository. This is done for
+several reasons:
+
+- It speeds up builds because an 800 MB download only takes seconds, as opposed to a full Git clone.
+- It significantly reduces load on the file server, as smaller deltas mean less time spent in `git pack-objects`.
+
+The pre-clone step works by using the `CI_PRE_CLONE_SCRIPT` variable
+[defined by GitLab.com shared runners](../ci/runners/build_cloud/linux_build_cloud.md#pre-clone-script).
+
+The `CI_PRE_CLONE_SCRIPT` is currently defined as a project CI/CD variable:
+
+```shell
+(
+ echo "Downloading archived master..."
+ wget -O /tmp/gitlab.tar.gz https://storage.googleapis.com/gitlab-ci-git-repo-cache/project-278964/gitlab-master-shallow.tar.gz
+
+ if [ ! -f /tmp/gitlab.tar.gz ]; then
+ echo "Repository cache not available, cloning a new directory..."
+ exit
+ fi
+
+ rm -rf $CI_PROJECT_DIR
+ echo "Extracting tarball into $CI_PROJECT_DIR..."
+ mkdir -p $CI_PROJECT_DIR
+ cd $CI_PROJECT_DIR
+ tar xzf /tmp/gitlab.tar.gz
+ rm -f /tmp/gitlab.tar.gz
+ chmod a+w $CI_PROJECT_DIR
+)
+```
+
+The first step of the script downloads `gitlab-master-shallow.tar.gz` from
+Google Cloud Storage. There is a [GitLab CI job named `cache-repo`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/cache-repo.gitlab-ci.yml#L5)
+that is responsible for keeping that archive up-to-date. Every two hours
+on a scheduled pipeline, it does the following:
+
+1. Creates a fresh clone of the `gitlab-org/gitlab` repository on GitLab.com.
+1. Saves the data as a `.tar.gz`.
+1. Uploads it into the Google Cloud Storage bucket.
+
+When a CI job runs with this configuration, the output looks something like this:
+
+```shell
+$ eval "$CI_PRE_CLONE_SCRIPT"
+Downloading archived master...
+Extracting tarball into /builds/group/project...
+Fetching changes...
+Reinitialized existing Git repository in /builds/group/project/.git/
+```
+
+Note that the `Reinitialized existing Git repository` message shows that
+the pre-clone step worked. The runner runs `git init`, which
+overwrites the Git configuration with the appropriate settings to fetch
+from the GitLab repository.
+
+`CI_REPO_CACHE_CREDENTIALS` contains the Google Cloud service account
+JSON for uploading to the `gitlab-ci-git-repo-cache` bucket. (If you're a
+GitLab Team Member, find credentials in the
+[GitLab shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).)
+
+Note that this bucket should be located in the same continent as the
+runner, or [you can incur network egress charges](https://cloud.google.com/storage/pricing).
+
---
[Return to Development documentation](index.md)
diff --git a/doc/development/profiling.md b/doc/development/profiling.md
index a58e1d60cc5..656b30402a6 100644
--- a/doc/development/profiling.md
+++ b/doc/development/profiling.md
@@ -98,11 +98,13 @@ profile and log output to S3.
## Speedscope flamegraphs
-You can generate a flamegraph for a particular URL by adding the `performance_bar=flamegraph` parameter to the request.
+You can generate a flamegraph for a particular URL by selecting a flamegraph sampling mode button in the performance bar or by adding the `performance_bar=flamegraph` parameter to the request.
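+
+For example, a hypothetical request with the parameter appended:
+
+```plaintext
+https://gitlab.example.com/gitlab-org/gitlab/-/issues?performance_bar=flamegraph
+```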
![Speedscope](img/speedscope_v13_12.png)
-More information about the views can be found in the [Speedscope docs](https://github.com/jlfwong/speedscope#views)
+Find more information about the views in the [Speedscope docs](https://github.com/jlfwong/speedscope#views).
+
+Find more information about different sampling modes in the [Stackprof docs](https://github.com/tmm1/stackprof#sampling).
This is enabled for all users that can access the performance bar.
diff --git a/doc/development/rails_update.md b/doc/development/rails_update.md
new file mode 100644
index 00000000000..f25d68a8900
--- /dev/null
+++ b/doc/development/rails_update.md
@@ -0,0 +1,110 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Rails update guidelines
+
+We strive to run GitLab using the latest Rails releases to benefit from performance improvements, security updates, and new features.
+
+## Rails update approach
+
+1. [Prepare an MR for GitLab](#prepare-an-mr-for-gitlab).
+1. [Prepare an MR for Gitaly](#prepare-an-mr-for-gitaly).
+1. [Create patch releases and backports for security patches](#create-patch-releases-and-backports-for-security-patches).
+
+### Prepare an MR for GitLab
+
+1. Check the [Upgrading Ruby on Rails](https://guides.rubyonrails.org/upgrading_ruby_on_rails.html) guide and prepare the application for the upcoming changes.
+1. Update the `rails` gem version in `Gemfile`.
+1. Run `bundle update rails`.
+1. Run the update task `rake rails:update`.
+1. Update the `activesupport` version in `qa/Gemfile`.
+1. Run `bundle update --conservative activesupport` in the `qa` folder.
+1. Resolve any Bundler conflicts.
+1. Ensure that the `@rails/ujs` and `@rails/actioncable` npm packages match the new Rails version in [`package.json`](https://gitlab.com/gitlab-org/gitlab/blob/master/package.json).
+1. Create an MR with the `pipeline:run-all-rspec` label and see if the pipeline breaks.
+1. To resolve and debug spec failures, use `git bisect` against the Rails repository. See the [debugging section](#git-bisect-against-rails) below.
+1. Include links to the gem diffs between the two versions in the merge request description. For example, this is the gem diff for [`activerecord` 6.1.3.2 to
+6.1.4.1](https://my.diffend.io/gems/activerecord/6.1.3.2/6.1.4.1).
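+
+For illustration only, here is a hedged sketch of the `Gemfile` change from the list above
+(the version number is an example, not a prescribed target):
+
+```ruby
+# Gemfile: bump the Rails version, then run `bundle update rails`.
+gem 'rails', '~> 6.1.4.1'
+```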
+
+### Prepare an MR for Gitaly
+
+1. Update the `activesupport` gem version in [Gitaly Ruby's Gemfile](https://gitlab.com/gitlab-org/gitaly/-/blob/master/ruby/Gemfile).
+1. Run `bundle update --conservative activesupport`.
+1. Create an MR against the Gitaly project with these changes.
+1. Make this MR dependent on the MR created in the GitLab project.
+1. Merge this MR only **after** merging the GitLab project's MR.
+
+### Create patch releases and backports for security patches
+
+If the Rails update was a patch release and contains important security fixes,
+make sure to release it in a
+GitLab patch release to self-managed customers. Consult with our [release managers](https://about.gitlab.com/community/release-managers/)
+for how to proceed.
+
+### Deprecation Logger
+
+We also log Ruby and Rails deprecation warnings into a dedicated log file, `log/deprecation_json.log`. It provides
+clues when there is code that is not adequately covered by tests and hence would slip past `DeprecationToolkitEnv`.
+
+For GitLab SaaS, GitLab team members can inspect these log events in Kibana (`https://log.gprd.gitlab.net/goto/f7cebf1ff05038d901ba2c45925c7e01`).
+
+## Git bisect against Rails
+
+Knowing which Rails change caused a spec to fail adds context and
+helps to find a fix for the failure.
+To quickly find which Rails change caused the spec failure, you can use the
+[`git bisect`](https://git-scm.com/docs/git-bisect) command against the Rails repository:
+
+1. Clone the `rails` project in a folder of your choice. For example, it might be the GDK root dir:
+
+ ```shell
+ cd <GDK_FOLDER>
+ git clone https://github.com/rails/rails.git
+ ```
+
+1. Replace the `gem 'rails'` line in GitLab `Gemfile` with:
+
+ ```ruby
+ gem 'rails', ENV['RAILS_VERSION'], path: ENV['RAILS_FOLDER']
+ ```
+
+1. Set the `RAILS_FOLDER` environment variable to the folder you cloned Rails into:
+
+ ```shell
+ export RAILS_FOLDER="<GDK_FOLDER>/rails"
+ ```
+
+1. Change the directory to `RAILS_FOLDER` and set the range for the `git bisect` command:
+
+ ```shell
+ cd $RAILS_FOLDER
+ git bisect start <NEW_VERSION_TAG> <OLD_VERSION_TAG>
+ ```
+
+   Replace `<NEW_VERSION_TAG>` with the tag where the spec is red and `<OLD_VERSION_TAG>` with the tag where the spec is green. For example, `git bisect start v6.1.4.1 v6.1.3.2` if we're upgrading from version 6.1.3.2 to 6.1.4.1.
+   The output shows approximately how many steps it takes to find the offending commit.
+1. Start the `git bisect` process and pass the spec file name(s) to `scripts/rails-update-bisect` as arguments. It can be faster to pick only one example instead of an entire spec file.
+
+ ```shell
+ git bisect run <GDK_FOLDER>/gitlab/scripts/rails-update-bisect spec/models/ability_spec.rb
+ # OR
+ git bisect run <GDK_FOLDER>/gitlab/scripts/rails-update-bisect spec/models/ability_spec.rb:7
+ ```
+
+1. When the process is completed, `git bisect` prints the commit hash, which you can use to find the corresponding MR in the [`rails/rails`](https://github.com/rails/rails) repository.
+1. Execute `git bisect reset` to exit the `bisect` mode.
+1. Revert the changes to `Gemfile`:
+
+ ```shell
+ git checkout -- Gemfile
+ ```
+
+### Follow-up reading material
+
+- [Upgrading Ruby on Rails guide](https://guides.rubyonrails.org/upgrading_ruby_on_rails.html)
+- [Rails releases page](https://github.com/rails/rails/releases)
diff --git a/doc/development/redis.md b/doc/development/redis.md
index e631a6ec80c..fa07cebdc61 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -6,12 +6,17 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Redis guidelines
+## Redis instances
+
GitLab uses [Redis](https://redis.io) for the following distinct purposes:
- Caching (mostly via `Rails.cache`).
- As a job processing queue with [Sidekiq](sidekiq_style_guide.md).
- To manage the shared application state.
+- To store CI trace chunks.
- As a Pub/Sub queue backend for ActionCable.
+- To store rate limiting state.
+- To store sessions.
In most environments (including the GDK), all of these point to the same
Redis instance.
@@ -29,6 +34,8 @@ more often than it is read.
If [Geo](geo.md) is enabled, each Geo node gets its own, independent Redis
database.
+We have [development documentation on adding a new Redis instance](redis/new_redis_instance.md).
+
## Key naming
Redis is a flat namespace with no hierarchy, which means we must pay attention
diff --git a/doc/development/redis/new_redis_instance.md b/doc/development/redis/new_redis_instance.md
new file mode 100644
index 00000000000..37ee51ebb82
--- /dev/null
+++ b/doc/development/redis/new_redis_instance.md
@@ -0,0 +1,132 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Add a new Redis instance
+
+GitLab can make use of multiple [Redis instances](../redis.md#redis-instances).
+These instances are functionally partitioned so that, for example, we
+can store [CI trace chunks](../../administration/job_logs.md#incremental-logging-architecture)
+from one Redis instance while storing sessions in another.
+
+From time to time we might want to add a new Redis instance. Typically this will
+be a functional partition split from one of the existing instances such as the
+cache or shared state. This document describes an approach
+for adding a new Redis instance that handles existing data, based on
+prior examples:
+
+- [Dedicated Redis instance for Trace Chunk storage](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/462).
+- [Create dedicated Redis instance for Rate Limiting data](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/526).
+
+This document does not cover the operational side of preparing and configuring
+the new Redis instance in detail, but the example epics do contain information
+on previous approaches to this.
+
+## Step 1: Support configuring the new instance
+
+Before we can switch any features to using the new instance, we have to support
+configuring it and referring to it in the codebase. We must support the
+main installation types:
+
+- Source installs (including development environments) - [example MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/62767)
+- Omnibus - [example MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5316)
+- Helm charts - [example MR](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2031)
+
+### Fallback instance
+
+In the application code, we need to define a fallback instance in case the new
+instance is not configured. For example, if a GitLab instance has already
+configured a separate shared state Redis, and we are partitioning data from the
+shared state Redis, our new instance's configuration should default to that of
+the shared state Redis when it's not present. Otherwise we could break instances
+that don't configure the new Redis instance as soon as it's available.
+
+You can [define a `.config_fallback` method](https://gitlab.com/gitlab-org/gitlab/-/blob/a75471dd744678f1a59eeb99f71fca577b155acd/lib/gitlab/redis/wrapper.rb#L69-87)
+in `Gitlab::Redis::Wrapper` (the base class for all Redis instances)
+that defines the instance to be used if this one is not configured. If we were
+adding a `Foo` instance that should fall back to `SharedState`, we can do that
+like this:
+
+```ruby
+module Gitlab
+ module Redis
+ class Foo < ::Gitlab::Redis::Wrapper
+ # The data we store on Foo used to be stored on SharedState.
+ def self.config_fallback
+ SharedState
+ end
+ end
+ end
+end
+```
+
+We should also add specs like those in
+[`trace_chunks_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/trace_chunks_spec.rb)
+to ensure that this fallback works correctly.
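+
+As a minimal sketch, a fallback spec for the hypothetical `Foo` class from the example
+above could assert the wiring directly (the real specs use shared examples and cover more behavior):
+
+```ruby
+# frozen_string_literal: true
+
+require 'spec_helper'
+
+RSpec.describe Gitlab::Redis::Foo do
+  describe '.config_fallback' do
+    it 'falls back to the shared state instance' do
+      expect(described_class.config_fallback).to eq(Gitlab::Redis::SharedState)
+    end
+  end
+end
+```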
+
+## Step 2: Support writing to and reading from the new instance
+
+When migrating to the new instance, we must account for cases where data is
+either on:
+
+- The 'old' (original) instance.
+- The new one that we have just added support for.
+
+As a result we may need to support reading from and writing to both
+instances, depending on some condition.
+
+The exact condition to use varies depending on the data to be migrated. For
+the trace chunks case above, there was already a database column indicating where the
+data was stored (as there are other storage options than Redis).
+
+This step may not apply if the data has a very short lifetime (a few minutes at most)
+and is not critical. In that case, we
+may decide that it is OK to incur a small amount of data loss and switch
+over through configuration only.
+
+If there is not a more natural way to mark where the data is stored, using a
+[feature flag](../feature_flags/index.md) may be convenient (see the sketch after this list):
+
+- It does not require an application restart to take effect.
+- It applies to all application instances (Sidekiq, API, web, etc.) at
+ the same time.
+- It supports incremental rollout - ideally by actor (project, group,
+ user, etc.) - so that we can monitor for errors and roll back easily.
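+
+As a rough sketch (not the actual migration code), reads and writes could be routed
+through a small helper guarded by a hypothetical feature flag:
+
+```ruby
+# `use_dedicated_foo_redis` and `Gitlab::Redis::Foo` are hypothetical names.
+def redis_class_for_foo(project)
+  if Feature.enabled?(:use_dedicated_foo_redis, project)
+    Gitlab::Redis::Foo
+  else
+    Gitlab::Redis::SharedState
+  end
+end
+
+# Example usage, assuming `project` and `cache_key` are in scope:
+#   redis_class_for_foo(project).with { |redis| redis.get(cache_key) }
+```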
+
+## Step 3: Migrate the data
+
+We then need to configure the new instance for GitLab.com's production and
+staging environments. Hopefully it will be possible to test this change
+effectively on staging, to at least make sure that basic usage continues to
+work.
+
+After that is done, we can roll out the change to production. Ideally this would
+be in an incremental fashion, following the
+[standard incremental rollout](../feature_flags/controls.md#rolling-out-changes)
+documentation for feature flags.
+
+When we have been using the new instance 100% of the time in production for a
+while and there are no issues, we can proceed.
+
+## Step 4: Clean up after the migration
+
+<!-- markdownlint-disable MD044 -->
+We may choose to keep the migration paths or remove them, depending on whether
+or not we expect self-managed instances to perform this migration.
+[gitlab-com/gl-infra/scalability#1131](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1131#note_603354746)
+contains a discussion on this topic for the trace chunks feature flag. As in
+that case, we may decide that the maintenance costs of supporting
+the migration code are higher than the benefits of allowing self-managed
+instances to perform this migration seamlessly, if we expect self-managed
+instances to cope without this functional partition.
+<!-- markdownlint-enable MD044 -->
+
+If we decide to keep the migration code:
+
+- We should document the migration steps.
+- If we used a feature flag, we should ensure it's an [ops type feature
+ flag](../feature_flags/index.md#ops-type), as these are long-lived flags.
+
+Otherwise, we can remove the flags and conclude the project.
diff --git a/doc/development/repository_mirroring.md b/doc/development/repository_mirroring.md
index bb4c62d70ee..573ffaccaf9 100644
--- a/doc/development/repository_mirroring.md
+++ b/doc/development/repository_mirroring.md
@@ -11,7 +11,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
<!-- vale gitlab.Spelling = NO -->
In December 2018, Tiago Botelho hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`)
-on the GitLab [Pull Repository Mirroring functionality](../user/project/repository/repository_mirroring.md#pull-from-a-remote-repository)
+on the GitLab [Pull Repository Mirroring functionality](../user/project/repository/mirror/pull.md)
to share his domain specific knowledge with anyone who may work in this part of the
codebase in the future. You can find the <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [recording on YouTube](https://www.youtube.com/watch?v=sSZq0fpdY-Y),
and the slides in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/8693404888a941fd851f8a8ecdec9675/Gitlab_Create_-_Pull_Mirroring_Deep_Dive.pdf).
diff --git a/doc/development/reusing_abstractions.md b/doc/development/reusing_abstractions.md
index ded6b074324..568e8a9d123 100644
--- a/doc/development/reusing_abstractions.md
+++ b/doc/development/reusing_abstractions.md
@@ -183,6 +183,8 @@ queries they produce.
Everything in `app/presenters`, used for exposing complex data to a Rails view,
without having to create many instance variables.
+See [the documentation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/presenters/README.md) for more information.
+
### Serializers
Everything in `app/serializers`, used for presenting the response to a request,
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
new file mode 100644
index 00000000000..ad6bff8499a
--- /dev/null
+++ b/doc/development/ruby_upgrade.md
@@ -0,0 +1,275 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Ruby upgrade guidelines
+
+We strive to run GitLab using the latest Ruby MRI releases to benefit from performance and
+security updates and new Ruby APIs. When upgrading Ruby across GitLab, we should do
+so in a way that:
+
+- Is least disruptive to contributors.
+- Optimizes for GitLab SaaS availability.
+- Maintains Ruby version parity across all parts of GitLab.
+
+Before making changes to Ruby versions, read through this document carefully and entirely to get a high-level
+understanding of what changes may be necessary. It is likely that every Ruby upgrade is a little
+different than the one before it, so assess the order and necessity of the documented
+steps.
+
+## Scope of a Ruby upgrade
+
+The first thing to consider when upgrading Ruby is scope. In general, we consider
+the following areas in which Ruby updates may have to occur:
+
+- The main GitLab Rails repository.
+- Any ancillary Ruby system repositories.
+- Any third-party libraries used by systems in these repositories.
+- Any GitLab libraries used by systems in these repositories.
+
+We may not always have to touch all of these. For instance, a patch-level Ruby update is
+unlikely to require updates in third-party gems.
+
+### Patch, minor, and major upgrades
+
+When assessing scope, the Ruby version level matters. For instance, it is harder and riskier
+to upgrade GitLab from Ruby 2.x to 3.x than it is to upgrade from Ruby 2.7.2 to 2.7.4, as
+patch releases are typically restricted to security or bug fixes.
+Be aware of this when preparing an upgrade and plan accordingly.
+
+To help you estimate the scope of future upgrades, see the efforts required for the following upgrades:
+
+- [Patch upgrade 2.7.2 -> 2.7.4](https://gitlab.com/gitlab-org/gitlab/-/issues/335890)
+- [Minor upgrade 2.6.x -> 2.7.x](https://gitlab.com/groups/gitlab-org/-/epics/2380)
+- [Major upgrade 2.x.x -> 3.x.x](https://gitlab.com/groups/gitlab-org/-/epics/5149)
+
+## Affected audiences and targets
+
+Before any upgrade, consider all audiences and targets, ordered by how immediately they are affected by Ruby upgrades:
+
+1. **Developers.** We have many contributors to GitLab and related projects both inside and outside the company. Changing files such as `.ruby-version` affects everyone using tooling that interprets these files.
+The developers are affected as soon as they pull from the repository containing the merged changes.
+1. **GitLab CI/CD.** We heavily lean on CI/CD for code integration and testing. CI/CD jobs do not interpret files such as `.ruby-version`.
+Instead, they use the Ruby installed in the Docker container they execute in, which is defined in `.gitlab-ci.yml`.
+The container images used in these jobs are maintained in the [`gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images) repository.
+When we merge an update to an image, CI/CD jobs are affected as soon as the [image is built](https://gitlab.com/gitlab-org/gitlab-build-images/#pushing-a-rebuild-image).
+1. **GitLab SaaS**. GitLab.com is deployed from customized Helm charts that use Docker images from [Cloud Native GitLab (CNG)](https://gitlab.com/gitlab-org/build/CNG).
+Just like CI/CD, `.ruby-version` is meaningless in this environment. Instead, those Docker images must be patched to upgrade Ruby.
+GitLab SaaS is affected with the next deployment.
+1. **Self-managed GitLab.** Customers installing GitLab via [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab) use none of the above.
+Instead, their Ruby version is defined by the [Ruby software bundle](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/config/software/ruby.rb) in Omnibus.
+Self-managed customers are affected as soon as they upgrade to the release containing this change.
+
+## Ruby upgrade approach
+
+Timing all steps in a Ruby upgrade correctly is critical. As a general guideline, consider the following:
+
+- For smaller upgrades where production behavior is unlikely to change, aim to keep the version gap between
+repositories and production minimal. Coordinate with stakeholders to merge all changes closely together
+(within a day or two) to avoid drift. In this scenario the likely order is to upgrade developer tooling and
+environments first, production second.
+- For larger changes, the risk of going to production with a new Ruby is significant. In this case, try to get into a
+position where all known incompatibilities with the new Ruby version are already fixed, then work
+with production engineers to deploy the new Ruby to a subset of the GitLab production fleet. In this scenario
+the likely order is to update production first, developer tooling and environments second. This makes rollbacks
+easier in case of critical regressions in production.
+
+Either way, we found from past experience that the following approach works well, with some steps likely only
+necessary for minor and major upgrades. Some of these steps can happen in parallel or may have their
+order reversed, as described above.
+
+### Create an epic
+
+Tracking this work in an epic is useful to get a sense of progress. For larger upgrades, include a
+timeline in the epic description so stakeholders know when the final switch is expected to go live.
+
+Break changes to individual repositories into separate issues under this epic.
+
+### Communicate the intent to upgrade
+
+Especially for upgrades that introduce or deprecate features,
+communicate early that an upgrade is due, ideally with an associated timeline. Provide links to important or
+noteworthy changes, so developers can start to familiarize themselves with
+changes ahead of time.
+
+GitLab team members should announce the intent in relevant Slack channels (`#backend` and `#development` at minimum)
+and Engineering Week In Review (EWIR). Include a link to the upgrade epic in your
+[communication](https://about.gitlab.com/handbook/engineering/#communication).
+
+### Add new Ruby to CI/CD and development environments
+
+To build and run Ruby gems and the GitLab Rails application with a new Ruby, you must first prepare CI/CD
+and developer environments to include the new Ruby version.
+At this stage, you *must not make it the default Ruby yet*, but make it optional instead. This allows
+for a smoother transition by supporting both old and new Ruby versions for a period of time.
+
+There are two places that require changes:
+
+1. **[GitLab Build Images](https://gitlab.com/gitlab-org/gitlab-build-images).** These are Docker images
+we use for runners and other Docker-based pre-production environments. The kind of change necessary
+depends on the scope.
+ - For [patch level updates](https://gitlab.com/gitlab-org/gitlab-build-images/-/merge_requests/418), it should suffice to increment the patch level of `RUBY_VERSION`.
+All projects building against the same minor release automatically download the new patch release.
+ - For [major and minor updates](https://gitlab.com/gitlab-org/gitlab-build-images/-/merge_requests/320), create a new set of Docker images that can be used side-by-side with existing images during the upgrade process. **Important:** Make sure to copy over all Ruby patch files
+in the `/patches` directory to a new folder matching the Ruby version you upgrade to, or they aren't applied.
+1. **[GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit).**
+Update GDK to add the new Ruby as an additional option for
+developers to choose from. This typically only requires appending the new version to `.tool-versions`, so `asdf`
+users benefit from it automatically. Other users have to install it manually
+([example](https://gitlab.com/gitlab-org/gitlab-development-kit/-/merge_requests/2136).)
+
+For larger version upgrades, consider working with [Quality Engineering](https://about.gitlab.com/handbook/engineering/quality/)
+to identify and set up a test plan.
+
+### Update third-party gems
+
+For patch releases this is unlikely to be necessary, but
+for minor and major releases, there could be breaking changes or Bundler dependency issues when gems
+pin Ruby to a particular version. A good way to find out is to create a merge request in `gitlab-org/gitlab`
+and see what breaks.
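+
+For example, a gem can pin the Ruby versions it supports in its gemspec; a hedged,
+illustrative snippet (the gem name and constraint are made up):
+
+```ruby
+# example.gemspec (hypothetical)
+Gem::Specification.new do |spec|
+  spec.name    = 'example'
+  spec.version = '1.0.0'
+  # Bundler refuses to resolve this gem under Ruby 3.x.
+  spec.required_ruby_version = ['>= 2.6', '< 3.0']
+end
+```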
+
+### Update GitLab gems and related systems
+
+This is typically necessary, since gems or Ruby applications that we maintain ourselves contain the build setup such as
+`.ruby-version`, `.tool-versions`, or `.gitlab-ci.yml` files. While there isn't always a technical necessity to
+update these repositories for the GitLab Rails application to work with a new Ruby,
+it is good practice to keep Ruby versions in lock-step across all our repositories. For minor and major
+upgrades, add new CI/CD jobs to these repositories using the new Ruby.
+A [build matrix definition](../ci/yaml/index.md#parallel-matrix-jobs) can do this efficiently.
+
+#### Decide which repositories to update
+
+When upgrading Ruby, consider updating the following repositories:
+
+- [Gitaly](https://gitlab.com/gitlab-org/gitaly) ([example](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3771))
+- [GitLab Labkit](https://gitlab.com/gitlab-org/labkit-ruby) ([example](https://gitlab.com/gitlab-org/labkit-ruby/-/merge_requests/79))
+- [GitLab Exporter](https://gitlab.com/gitlab-org/gitlab-exporter) ([example](https://gitlab.com/gitlab-org/gitlab-exporter/-/merge_requests/150))
+- [GitLab Experiment](https://gitlab.com/gitlab-org/gitlab-experiment) ([example](https://gitlab.com/gitlab-org/gitlab-experiment/-/merge_requests/128))
+- [Gollum Lib](https://gitlab.com/gitlab-org/gollum-lib) ([example](https://gitlab.com/gitlab-org/gollum-lib/-/merge_requests/21))
+- [GitLab Helm Chart](https://gitlab.com/gitlab-org/charts/gitlab) ([example](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2162))
+- [GitLab Sidekiq fetcher](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch) ([example](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/-/merge_requests/33))
+- [Prometheus Ruby Mmap Client](https://gitlab.com/gitlab-org/prometheus-client-mmap) ([example](https://gitlab.com/gitlab-org/prometheus-client-mmap/-/merge_requests/59))
+- [GitLab-mail_room](https://gitlab.com/gitlab-org/gitlab-mail_room) ([example](https://gitlab.com/gitlab-org/gitlab-mail_room/-/merge_requests/16))
+
+To assess which of these repositories are critical to update alongside the main GitLab application, consider:
+
+- The Ruby version scope.
+- The role that the service or library plays in the overall functioning of GitLab.
+
+Refer to the [list of GitLab projects](https://about.gitlab.com/handbook/engineering/projects/) for a complete
+account of which repositories could be affected.
+For smaller version upgrades, it can be acceptable to delay updating libraries that are non-essential or where
+we are certain that the main application test suite would catch regressions under a new Ruby version.
+
+NOTE:
+Consult with the respective code owners whether it is acceptable to merge these changes ahead
+of updating the GitLab application. It might be best to get the necessary approvals
+but wait to merge the change until everything is ready.
+
+### Prepare the GitLab application MR
+
+With the dependencies updated and the new gem versions released, you can update the main Rails
+application with any necessary changes, similar to the gems and related systems.
+On top of that, update the documentation to reflect the version change in the installation
+and update instructions ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/68363)).
+
+NOTE:
+Be especially careful with timing this merge request, since as soon as it is merged, all GitLab contributors
+will be affected by it and the changes will be deployed. You must ensure that this MR remains
+open until everything else is ready, but it can be useful to get approval early to reduce lead time.
+
+### Give developers time to upgrade (grace period)
+
+With the new Ruby made available as an option, and all merge requests either ready or merged,
+there should be a grace period (1 week at minimum) during which developers can
+install the new Ruby on their machines. For GDK and `asdf` users this should happen automatically
+via `gdk update`.
+
+This pause is a good time to assess the risk of this upgrade for GitLab SaaS.
+For Ruby upgrades that are high risk, such as major version upgrades, it is recommended to
+coordinate the changes with the infrastructure team through a [change management request](https://about.gitlab.com/handbook/engineering/infrastructure/change-management/).
+Create this issue early to give everyone enough time to schedule and prepare changes.
+
+### Make it the default Ruby
+
+If there are no known version compatibility issues left, and the grace
+period has passed, all affected repositories and developer tools should be updated to make the new Ruby
+default.
+
+At this point, update the [GitLab Compose Kit (GCK)](https://gitlab.com/gitlab-org/gitlab-compose-kit).
+This is an alternative development environment for users that prefer to run GitLab in `docker-compose`.
+This project relies on the same Docker images as our runners, so it should maintain parity with changes
+in that repository. This change is only necessary when the minor or major version changes
+([example](https://gitlab.com/gitlab-org/gitlab-compose-kit/-/merge_requests/176).)
+
+As mentioned above, if the impact of the Ruby upgrade on SaaS availability is uncertain, it is
+prudent to skip this step until you have verified that it runs smoothly in production via a staged
+rollout. In this case, go to the next step first, and then, after the verification period has passed, promote
+the new Ruby to be the new default.
+
+### Update CNG and Omnibus, merge the GitLab MR
+
+The last step is to use the new Ruby in production. This
+requires updating Omnibus and production Docker images to use the new version.
+Helm charts may also have to be updated if there were changes to related systems that maintain
+their own charts (such as `gitlab-exporter`.)
+
+To use the new Ruby in production, update the following projects:
+
+- [Cloud-native GitLab Docker Images (CNG)](https://gitlab.com/gitlab-org/build/CNG) ([example](https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/739))
+- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) ([example](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5545))
+
+If you submit a change management request, coordinate the rollout with infrastructure
+engineers. When dealing with larger upgrades, involve [Release Managers](https://about.gitlab.com/community/release-managers/)
+in the rollout plan.
+
+### Create patch releases and backports for security patches
+
+If the upgrade was a patch release and contains important security fixes, it should be released as a
+GitLab patch release to self-managed customers. Consult our [release managers](https://about.gitlab.com/community/release-managers/)
+for how to proceed.
+
+## Ruby upgrade tooling
+
+There are several tools that ease the upgrade process.
+
+### Deprecation Toolkit
+
+A common problem with Ruby upgrades is that deprecation warnings turn into errors. This means that every single
+deprecation warning must be resolved before making the switch. To avoid new warnings from making it into the
+main application branch, we use [`DeprecationToolkitEnv`](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/deprecation_toolkit_env.rb).
+This module observes deprecation warnings emitted from spec runs and turns them into test failures. This prevents
+developers from checking in new code that would fail under a new Ruby.
+
+Sometimes new warnings cannot be avoided, for example when a Ruby gem we use emits them
+and we have no control over it. In these cases, add silences, as [this merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/68865) did.
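+
+As a rough sketch of what such a silence can look like (the exact mechanism in
+`spec/deprecation_toolkit_env.rb` may differ, and the warning text below is only an example):
+
+```ruby
+# Allowlist a deprecation warning emitted by a third-party gem we cannot fix ourselves.
+DeprecationToolkit::Configuration.allowed_deprecations = [
+  /Using the last argument as keyword parameters is deprecated/
+]
+```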
+
+### Deprecation Logger
+
+We also log Ruby and Rails deprecation warnings to a dedicated log file, `log/deprecation_json.log`
+(see [GitLab Developers Guide to Logging](logging.md) for where to find GitLab log files),
+which can provide clues when there is code that is not adequately covered by tests and hence would slip past `DeprecationToolkitEnv`.
+
+For GitLab SaaS, GitLab team members can inspect these log events in Kibana
+(`https://log.gprd.gitlab.net/goto/f7cebf1ff05038d901ba2c45925c7e01`).
+
+## Recommendations
+
+During the upgrade process, consider the following recommendations:
+
+- **Front-load as many changes as possible.** Especially for minor and major releases, it is likely that application
+code will break or change. Any changes that are backward compatible should be merged into the main branch and
+released independently ahead of the Ruby version upgrade. This ensures that we move in small increments and
+get feedback from production environments early.
+- **Create an experimental branch for larger updates.** We generally try to avoid long-running topic branches,
+but for purposes of feedback and experimentation, it can be useful to have such a branch to get regular
+feedback from CI/CD when running a newer Ruby. This can be helpful when first assessing what problems
+we might run into, as [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/50640) demonstrates.
+These experimental branches are not intended to be merged; they can be closed once all required changes have been broken out
+and merged back independently.
+- **Give yourself enough time to fix problems ahead of a milestone release.** GitLab moves fast.
+As a Ruby upgrade requires many MRs to be sent and reviewed, make sure all changes are merged at least a week
+before the 22nd. This gives us extra time to act if something breaks. If in doubt, it is better to
+postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/#prioritizing-technical-decisions).
diff --git a/doc/development/scalability.md b/doc/development/scalability.md
index 824c98b4b03..fdae66b7abc 100644
--- a/doc/development/scalability.md
+++ b/doc/development/scalability.md
@@ -45,7 +45,7 @@ many groups or projects, and the access level (including guest, developer, or
maintainer) to groups and projects determines what users can see and
what they can access.
-Users with admin access can access all projects and even impersonate
+Users with the administrator role can access all projects and even impersonate
users.
#### Sharding and partitioning
diff --git a/doc/development/service_ping/dictionary.md b/doc/development/service_ping/dictionary.md
index e7e8464ff7a..810c789bc03 100644
--- a/doc/development/service_ping/dictionary.md
+++ b/doc/development/service_ping/dictionary.md
@@ -1,4 +1,4 @@
---
-redirect_to: 'https://gitlab-org.gitlab.io/growth/product-intelligence/metric-dictionary'
+redirect_to: 'https://metrics.gitlab.com/index.html'
remove_date: '2021-11-10'
---
diff --git a/doc/development/service_ping/implement.md b/doc/development/service_ping/implement.md
index d86b06a6965..34a845147dc 100644
--- a/doc/development/service_ping/implement.md
+++ b/doc/development/service_ping/implement.md
@@ -340,6 +340,9 @@ WARNING:
HyperLogLog (HLL) is a probabilistic algorithm and its **results always includes some small error**. According to [Redis documentation](https://redis.io/commands/pfcount), data from
used HLL implementation is "approximated with a standard error of 0.81%".
+NOTE:
+A user's consent for `usage_stats` (`User.single_user&.requires_usage_stats_consent?`) is not checked during the data tracking stage for performance reasons. Keys corresponding to those counters are present in Redis even if `usage_stats_consent` is still required. However, no metric is collected from Redis and reported back to GitLab as long as `usage_stats_consent` is required.
+
With `Gitlab::UsageDataCounters::HLLRedisCounter` we have available data structures used to count unique values.
Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PFCOUNT](https://redis.io/commands/pfcount).
@@ -939,7 +942,6 @@ Aggregated metrics collected in `7d` and `28d` time frames are added into Servic
:packages => 155,
:personal_snippets => 2106,
:project_snippets => 407,
- :promoted_issues => 719,
:aggregated_metrics => {
:example_metrics_union => 7,
:example_metrics_intersection => 2
@@ -1028,9 +1030,9 @@ Example metrics persistence:
class UsageData
def count_secure_pipelines(time_period)
...
- relation = ::Security::Scan.latest_successful_by_build.by_scan_types(scan_type).where(security_scans: time_period)
+ relation = ::Security::Scan.by_scan_types(scan_type).where(time_period)
- pipelines_with_secure_jobs['dependency_scanning_pipeline'] = estimate_batch_distinct_count(relation, :commit_id, batch_size: 1000, start: start_id, finish: finish_id) do |result|
+ pipelines_with_secure_jobs['dependency_scanning_pipeline'] = estimate_batch_distinct_count(relation, :pipeline_id, batch_size: 1000, start: start_id, finish: finish_id) do |result|
::Gitlab::Usage::Metrics::Aggregates::Sources::PostgresHll
.save_aggregated_metrics(metric_name: 'dependency_scanning_pipeline', recorded_at_timestamp: recorded_at, time_period: time_period, data: result)
end
diff --git a/doc/development/service_ping/index.md b/doc/development/service_ping/index.md
index 0a94fa2ff6c..19bf7446da9 100644
--- a/doc/development/service_ping/index.md
+++ b/doc/development/service_ping/index.md
@@ -68,9 +68,9 @@ We use the following terminology to describe the Service Ping components:
Starting with GitLab version 14.1, free self-managed users running [GitLab EE](../ee_features.md) can receive paid features by registering with GitLab and sending us activity data via [Service Ping](#what-is-service-ping). Features introduced here do not remove the feature from its paid tier. Users can continue to access the features in a paid tier without sharing usage data.
-The paid feature available in this offering is [Email from GitLab](../../tools/email.md).
-Administrators can use this [Premium](https://about.gitlab.com/pricing/premium/) feature to streamline
-their workflow by emailing all or some instance users directly from the Admin Area.
+##### Features available in 14.1 and later
+
+1. [Email from GitLab](../../tools/email.md).
NOTE:
Registration is not yet required for participation, but will be added in a future milestone.
@@ -110,7 +110,7 @@ To disable Service Ping in the GitLab UI:
1. On the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Settings > Metrics and profiling**.
1. Expand the **Usage statistics** section.
-1. Clear the **Enable service ping** checkbox.
+1. Clear the **Enable Service Ping** checkbox.
1. Select **Save changes**.
### Disable Service Ping using the configuration file
@@ -554,5 +554,5 @@ To work around this bug, you have two options:
1. In GitLab, on the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Settings > Metrics and profiling**.
1. Expand **Usage Statistics**.
- 1. Clear the **Enable service ping** checkbox.
+ 1. Clear the **Enable Service Ping** checkbox.
1. Select **Save Changes**.
diff --git a/doc/development/service_ping/metrics_dictionary.md b/doc/development/service_ping/metrics_dictionary.md
index 8dc2d2255d1..c1478e6290e 100644
--- a/doc/development/service_ping/metrics_dictionary.md
+++ b/doc/development/service_ping/metrics_dictionary.md
@@ -6,7 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Metrics Dictionary Guide
-This guide describes the [Metrics Dictionary](https://gitlab-org.gitlab.io/growth/product-intelligence/metric-dictionary) and how it's implemented.
+[Service Ping](index.md) metrics are defined in the
+[Metrics Dictionary](https://metrics.gitlab.com/index.html).
+This guide describes the dictionary and how it's implemented.
## Metrics Definition and validation
diff --git a/doc/development/service_ping/metrics_lifecycle.md b/doc/development/service_ping/metrics_lifecycle.md
index c0446aece8b..46040146de2 100644
--- a/doc/development/service_ping/metrics_lifecycle.md
+++ b/doc/development/service_ping/metrics_lifecycle.md
@@ -136,7 +136,10 @@ Product Intelligence team as inactive and is assigned to the group owner for rev
We are working on automating this process. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/338466) for details.
-Only deprecated metrics can be removed from Service Ping.
+Metrics can be removed from Service Ping if they:
+
+- Were previously [deprecated](#deprecate-a-metric).
+- Are not used in any Sisense dashboard.
For an example of the metric removal process take a look at this [example issue](https://gitlab.com/gitlab-org/gitlab/-/issues/297029)
diff --git a/doc/development/service_ping/review_guidelines.md b/doc/development/service_ping/review_guidelines.md
index 048b705636f..eb64d460b5a 100644
--- a/doc/development/service_ping/review_guidelines.md
+++ b/doc/development/service_ping/review_guidelines.md
@@ -14,7 +14,7 @@ general best practices for code reviews, refer to our [code review guide](../cod
## Resources for reviewers
- [Service Ping Guide](index.md)
-- [Metrics Dictionary](https://gitlab-org.gitlab.io/growth/product-intelligence/metric-dictionary)
+- [Metrics Dictionary](https://metrics.gitlab.com/index.html)
## Review process
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
index 04b7e2f5c45..d45e2073fe7 100644
--- a/doc/development/sidekiq_style_guide.md
+++ b/doc/development/sidekiq_style_guide.md
@@ -154,12 +154,6 @@ A good example of that would be a cache expiration worker.
A job scheduled for an idempotent worker is [deduplicated](#deduplication) when
an unstarted job with the same arguments is already in the queue.
-WARNING:
-For [data consistency jobs](#job-data-consistency-strategies), the deduplication is not compatible with the
-`data_consistency` attribute set to `:sticky` or `:delayed`.
-The reason for this is that deduplication always takes into account the latest binary replication pointer into account, not the first one.
-There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/325291) to improve this.
-
### Ensuring a worker is idempotent
Make sure the worker tests pass using the following shared example:
@@ -285,6 +279,55 @@ module AuthorizedProjectUpdate
end
```
+### Deduplication with load balancing
+
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/6763) in GitLab 14.4.
+
+Jobs that declare either `:sticky` or `:delayed` data consistency
+are eligible for database load-balancing.
+In both cases, jobs are [scheduled in the future](#scheduling-jobs-in-the-future) with a short delay (1 second).
+This minimizes the chance of replication lag after a write.
+
+If you really want to deduplicate jobs eligible for load balancing,
+specify the `including_scheduled: true` argument when defining the deduplication strategy:
+
+```ruby
+class DelayedIdempotentWorker
+ include ApplicationWorker
+ data_consistency :delayed
+
+ deduplicate :until_executing, including_scheduled: true
+ idempotent!
+
+ # ...
+end
+```
+
+#### Preserve the latest WAL location for idempotent jobs
+
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/69372) in GitLab 14.3.
+> - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/338350) in GitLab 14.4.
+
+The deduplication always takes into account the latest binary replication pointer, not the first one.
+This happens because we drop the same job scheduled for the second time and the Write-Ahead Log (WAL) is lost.
+This could lead to comparing the old WAL location and reading from a stale replica.
+
+To support both deduplication and maintaining data consistency with load balancing,
+we are preserving the latest WAL location for idempotent jobs in Redis.
+This way we are always comparing the latest binary replication pointer,
+making sure that we read from the replica that is fully caught up.
+
+FLAG:
+On self-managed GitLab, by default this feature is not available.
+To make it available,
+ask an administrator to [enable the `preserve_latest_wal_locations_for_idempotent_jobs` flag](../administration/feature_flags.md).
+This feature flag is related to GitLab development and is not intended to be used by GitLab administrators.
+On GitLab.com, this feature is available but can be configured by GitLab.com administrators only.
+
## Limited capacity worker
It is possible to limit the number of concurrent running jobs for a worker class
@@ -553,11 +596,6 @@ class DelayedWorker
end
```
-For [idempotent jobs](#idempotent-jobs), the deduplication is not compatible with the
-`data_consistency` attribute set to `:sticky` or `:delayed`.
-The reason for this is that deduplication always takes into account the latest binary replication pointer into account, not the first one.
-There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/325291) to improve this.
-
### `feature_flag` property
The `feature_flag` property allows you to toggle a job's `data_consistency`,
@@ -583,6 +621,12 @@ class DelayedWorker
end
```
+### Data consistency with idempotent jobs
+
+For [idempotent jobs](#idempotent-jobs) that declare either `:sticky` or `:delayed` data consistency, we are
+[preserving the latest WAL location](#preserve-the-latest-wal-location-for-idempotent-jobs) while deduplicating,
+ensuring that we read from the replica that is fully caught up.
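+
+A sketch of such a worker (the class name is hypothetical; the declarations are the ones described above):
+
+```ruby
+class SyncStatusWorker # hypothetical worker
+  include ApplicationWorker
+
+  data_consistency :delayed
+  idempotent!
+
+  def perform(project_id)
+    # ...
+  end
+end
+```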
+
## Jobs with External Dependencies
Most background jobs in the GitLab application communicate with other GitLab
diff --git a/doc/development/snowplow/dictionary.md b/doc/development/snowplow/dictionary.md
index 589d6f6fb9f..02e9ba5ce20 100644
--- a/doc/development/snowplow/dictionary.md
+++ b/doc/development/snowplow/dictionary.md
@@ -1,44 +1,4 @@
---
-stage: Growth
-group: Product Intelligence
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+redirect_to: 'https://metrics.gitlab.com/snowplow.html'
+remove_date: '2021-12-28'
---
-
-<!---
- This documentation is auto generated by a script.
-
- Please do not edit this file directly, check generate_event_dictionary task on lib/tasks/gitlab/snowplow.rake.
---->
-
-<!-- vale gitlab.Spelling = NO -->
-
-# Event Dictionary
-
-This file is autogenerated, please do not edit it directly.
-
-To generate these files from the GitLab repository, run:
-
-```shell
-bundle exec rake gitlab:snowplow:generate_event_dictionary
-```
-
-The Event Dictionary is based on the following event definition YAML files:
-
-- [`config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/f9a404301ca22d038e7b9a9eb08d9c1bbd6c4d84/config/events)
-- [`ee/config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/f9a404301ca22d038e7b9a9eb08d9c1bbd6c4d84/ee/config/events)
-
-## Event definitions
-
-### `epics promote`
-
-| category | action | label | property | value |
-|---|---|---|---|---|
-| `epics` | `promote` | `` | `The string "issue_id"` | `ID of the issue` |
-
-Issue promoted to epic
-
-YAML definition: `/ee/config/events/epics_promote.yml`
-
-Owner: `group::product planning`
-
-Tiers: `premium`, `ultimate`
diff --git a/doc/development/snowplow/implementation.md b/doc/development/snowplow/implementation.md
new file mode 100644
index 00000000000..0d81b442850
--- /dev/null
+++ b/doc/development/snowplow/implementation.md
@@ -0,0 +1,543 @@
+---
+stage: Growth
+group: Product Intelligence
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Implement Snowplow tracking
+
+This page describes how to:
+
+- Implement Snowplow frontend and backend tracking
+- Test Snowplow events
+
+## Snowplow JavaScript frontend tracking
+
+GitLab provides a `Tracking` interface that wraps the [Snowplow JavaScript tracker](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/)
+to track custom events.
+
+For the recommended frontend tracking implementation, see [Usage recommendations](#usage-recommendations).
+
+Tracking implementations must have an `action` and a `category`. You can provide additional
+categories from the [structured event taxonomy](index.md#structured-event-taxonomy) with an `extra` object
+that accepts key-value pairs.
+
+| Field | Type | Default value | Description |
+|:-----------|:-------|:---------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `category` | string | `document.body.dataset.page` | Page or subsection of a page in which events are captured. |
+| `action` | string | generic | Action the user is taking. Clicks must be `click` and activations must be `activate`. For example, focusing a form field is `activate_form_input`, and clicking a button is `click_button`. |
+| `data` | object | `{}` | Additional data such as `label`, `property`, `value`, `context` as described in [Structured event taxonomy](index.md#structured-event-taxonomy), and `extra` (key-value pairs object). |
+
+### Usage recommendations
+
+- Use [data attributes](#implement-data-attribute-tracking) on HTML elements that emit `click`, `show.bs.dropdown`, or `hide.bs.dropdown` events.
+- Use the [Vue mixin](#implement-vue-component-tracking) for tracking custom events, or if the supported events for data attributes are not propagating.
+- Use the [tracking class](#implement-raw-javascript-tracking) when tracking raw JavaScript files.
+
+### Implement data attribute tracking
+
+To implement tracking for HAML or Vue templates, add a [`data-track` attribute](#data-track-attributes) to the element.
+
+The following example shows `data-track-*` attributes assigned to a button:
+
+```haml
+%button.btn{ data: { track: { action: "click_button", label: "template_preview", property: "my-template" } } }
+```
+
+```html
+<button class="btn"
+ data-track-action="click_button"
+ data-track-label="template_preview"
+ data-track-property="my-template"
+ data-track-extra='{ "template_variant": "primary" }'
+/>
+```
+
+#### `data-track` attributes
+
+| Attribute | Required | Description |
+|:----------------------|:---------|:------------|
+| `data-track-action` | true | Action the user is taking. Clicks must be prepended with `click` and activations must be prepended with `activate`. For example, focusing a form field is `activate_form_input` and clicking a button is `click_button`. Replaces `data-track-event`, which was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/290962) in GitLab 13.11. |
+| `data-track-label` | false | The specific element or object to act on. This can be: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar; or the name or title attribute of a record being created. |
+| `data-track-property` | false | Any additional property of the element, or object being acted on. |
+| `data-track-value` | false | Describes a numeric value or something directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. If omitted, this is the element's `value` property or `undefined`. For checkboxes, the default value is the element's checked attribute or `0` when unchecked. |
+| `data-track-extra` | false | A key-value pair object passed as a valid JSON string. This attribute is added to the `extra` property in our [`gitlab_standard`](schemas.md#gitlab_standard) schema. |
+| `data-track-context` | false | The `context` as described in our [Structured event taxonomy](index.md#structured-event-taxonomy). |
+
+#### Event listeners
+
+Event listeners bind at the document level to handle click events in elements with data attributes.
+This allows them to be handled when the DOM re-renders or changes. Document-level binding reduces
+the likelihood that click events stop propagating up the DOM tree.
+
+If click events stop propagating, you must implement listeners and [Vue component tracking](#implement-vue-component-tracking) or [raw JavaScript tracking](#implement-raw-javascript-tracking).
+
+#### Helper methods
+
+Use the following Ruby helper:
+
+```ruby
+tracking_attrs(label, action, property) # { data: { track_label... } }
+
+%button{ **tracking_attrs('main_navigation', 'click_button', 'navigation') }
+```
+
+If you use the GitLab helper method [`nav_link`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/helpers/tab_helper.rb#L76), you must wrap the `data` attributes in the `html_options` keyword argument. If you
+use the `ActionView` helper method [`link_to`](https://api.rubyonrails.org/v5.2.3/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to), you don't need to wrap them.
+
+```ruby
+# Bad
+= nav_link(controller: ['dashboard/groups', 'explore/groups'], data: { track_label: "explore_groups",
+track_action: "click_button" })
+
+# Good
+= nav_link(controller: ['dashboard/groups', 'explore/groups'], html_options: { data: { track_label:
+"explore_groups", track_action: "click_button" } })
+
+# Good (other helpers)
+= link_to explore_groups_path, title: _("Explore"), data: { track_label: "explore_groups", track_action:
+"click_button" }
+```
+
+### Implement Vue component tracking
+
+For custom event tracking, use a Vue `mixin` in components. The mixin exposes the `Tracking.event`
+static method and a `track` method that you can call from components or templates. You can specify tracking
+options in `data` or `computed`. These options override any defaults and allow the values to be dynamic
+from props or based on state.
+
+Default options are passed when an event is tracked from the component. If you don't specify a `category`,
+the default `document.body.dataset.page` is used. The default options are:
+
+- `category`
+- `label`
+- `property`
+- `value`
+
+To implement Vue component tracking:
+
+1. Import the `Tracking` library and request a `mixin`:
+
+ ```javascript
+ import Tracking from '~/tracking';
+ const trackingMixin = Tracking.mixin;
+ ```
+
+1. Provide categories to track the event from the component. For example, to track all events in a
+component with a label, use the `label` category:
+
+ ```javascript
+ import Tracking from '~/tracking';
+ const trackingMixin = Tracking.mixin({ label: 'right_sidebar' });
+ ```
+
+1. In the component, declare the Vue `mixin`:
+
+ ```javascript
+ export default {
+ mixins: [trackingMixin],
+ // ...[component implementation]...
+ data() {
+ return {
+ expanded: false,
+ tracking: {
+ label: 'left_sidebar',
+ },
+ };
+ },
+ };
+ ```
+
+1. To receive event data as a tracking object or computed property:
+ - Declare it in the `data` function. Use a `tracking` object when default event properties are dynamic or provided at runtime:
+
+ ```javascript
+ export default {
+ name: 'RightSidebar',
+ mixins: [Tracking.mixin()],
+ data() {
+ return {
+ tracking: {
+ label: 'right_sidebar',
+ // category: '',
+ // property: '',
+ // value: '',
+ // experiment: '',
+ // extra: {},
+ },
+ };
+ },
+ };
+ ```
+
+ - Declare it in the event data in the `track` function. This object merges with any previously provided options:
+
+ ```javascript
+ this.track('click_button', {
+ label: 'right_sidebar',
+ });
+ ```
+
+1. Optional. Use the `track` method in a template:
+
+ ```html
+ <template>
+ <div>
+ <button data-testid="toggle" @click="toggle">Toggle</button>
+
+ <div v-if="expanded">
+ <p>Hello world!</p>
+ <button @click="track('click_action')">Track another event</button>
+ </div>
+ </div>
+ </template>
+ ```
+
+The following example shows an implementation of Vue component tracking:
+
+```javascript
+export default {
+ name: 'RightSidebar',
+ mixins: [Tracking.mixin({ label: 'right_sidebar' })],
+ data() {
+ return {
+ expanded: false,
+ };
+ },
+ methods: {
+ toggle() {
+ this.expanded = !this.expanded;
+ // Additional data will be merged, like `value` below
+ this.track('click_toggle', { value: Number(this.expanded) });
+ }
+ }
+};
+```
+
+#### Testing example
+
+```javascript
+import { mockTracking } from 'helpers/tracking_helper';
+// mockTracking(category, documentOverride, spyMethod)
+
+describe('RightSidebar.vue', () => {
+ let trackingSpy;
+ let wrapper;
+
+ beforeEach(() => {
+ trackingSpy = mockTracking(undefined, wrapper.element, jest.spyOn);
+ });
+
+ const findToggle = () => wrapper.find('[data-testid="toggle"]');
+
+ it('tracks turning off toggle', () => {
+ findToggle().trigger('click');
+
+ expect(trackingSpy).toHaveBeenCalledWith(undefined, 'click_toggle', {
+ label: 'right_sidebar',
+ value: 0,
+ });
+ });
+});
+```
+
+### Implement raw JavaScript tracking
+
+To call custom event tracking and instrumentation directly from the JavaScript file, call the `Tracking.event` static function.
+
+The following example demonstrates tracking a click on a button by manually calling `Tracking.event`.
+
+```javascript
+import Tracking from '~/tracking';
+
+const button = document.getElementById('create_from_template_button');
+
+button.addEventListener('click', () => {
+ Tracking.event('dashboard:projects:index', 'click_button', {
+ label: 'create_from_template',
+ property: 'template_preview',
+ extra: {
+ templateVariant: 'primary',
+ valid: 1,
+ },
+ });
+});
+```
+
+#### Testing example
+
+```javascript
+import Tracking from '~/tracking';
+
+describe('MyTracking', () => {
+ let wrapper;
+
+ beforeEach(() => {
+ jest.spyOn(Tracking, 'event');
+ });
+
+ const findButton = () => wrapper.find('[data-testid="create_from_template"]');
+
+ it('tracks event', () => {
+ findButton().trigger('click');
+
+ expect(Tracking.event).toHaveBeenCalledWith(undefined, 'click_button', {
+ label: 'create_from_template',
+ property: 'template_preview',
+ extra: {
+ templateVariant: 'primary',
+ valid: true,
+ },
+ });
+ });
+});
+```
+
+### Form tracking
+
+To enable Snowplow automatic [form tracking](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracking-specific-events/#form-tracking):
+
+1. Call `Tracking.enableFormTracking` when the DOM is ready.
+1. Provide a `config` object that includes at least one of the following elements:
+   - `forms` determines which forms to track. Forms are identified by their CSS class name.
+   - `fields` determines which fields inside the tracked forms to track. Fields are identified by their `name` attribute.
+1. Optional. Provide a list of contexts as the second argument. The [`gitlab_standard`](schemas.md#gitlab_standard) schema is excluded from these events.
+
+```javascript
+Tracking.enableFormTracking({
+ forms: { allow: ['sign-in-form', 'password-recovery-form'] },
+ fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
+});
+```
+
+#### Testing example
+
+```javascript
+import Tracking from '~/tracking';
+
+describe('MyFormTracking', () => {
+ let formTrackingSpy;
+
+ beforeEach(() => {
+ formTrackingSpy = jest
+ .spyOn(Tracking, 'enableFormTracking')
+ .mockImplementation(() => null);
+ });
+
+ it('initialized with the correct configuration', () => {
+ expect(formTrackingSpy).toHaveBeenCalledWith({
+ forms: { allow: ['sign-in-form', 'password-recovery-form'] },
+ fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
+ });
+ });
+});
+```
+
+## Implement Ruby backend tracking
+
+`Gitlab::Tracking` is an interface that wraps the [Snowplow Ruby Tracker](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/ruby-tracker/) for tracking custom events.
+Backend tracking provides:
+
+- User behavior tracking.
+- Instrumentation to monitor and visualize performance over time in a section or aspect of code.
+
+To add custom event tracking and instrumentation, call the `Gitlab::Tracking.event` class method.
+For example:
+
+```ruby
+class Projects::CreateService < BaseService
+ def execute
+ project = Project.create(params)
+
+ Gitlab::Tracking.event('Projects::CreateService', 'create_project', label: project.errors.full_messages.to_sentence,
+ property: project.valid?.to_s, project: project, user: current_user, namespace: namespace)
+ end
+end
+```
+
+Use the following arguments:
+
+| Argument | Type | Default value | Description |
+|------------|---------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------|
+| `category` | String | | Area or aspect of the application. For example, `HealthCheckController` or `Lfs::FileTransformer`. |
+| `action` | String | | The action being taken. For example, a controller action such as `create`, or an Active Record callback. |
+| `label` | String | nil | The specific element or object to act on. This can be one of the following: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar; or the name or title attribute of a record being created. |
+| `property` | String | nil | Any additional property of the element, or object being acted on. |
+| `value` | Numeric | nil | Describes a numeric value or something directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. |
+| `context` | Array\[SelfDescribingJSON\] | nil | An array of custom contexts to send with this event. Most events should not have any custom contexts. |
+| `project` | Project | nil | The project associated with the event. |
+| `user` | User | nil | The user associated with the event. |
+| `namespace` | Namespace | nil | The namespace associated with the event. |
+| `extra` | Hash | `{}` | Additional keyword arguments are collected into a hash and sent with the event. |
+
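+A minimal sketch of how the `extra` hash is populated. You do not pass `extra` directly; any keyword
+argument that is not listed in the table above is collected into it. The `merge_request_id` key below is
+purely illustrative:
+
+```ruby
+Gitlab::Tracking.event(
+  'Projects::CreateService',
+  'create_project',
+  label: 'template_projects',
+  user: current_user,
+  project: project,
+  merge_request_id: 42 # not a listed argument, so it is sent as extra: { merge_request_id: 42 }
+)
+```
+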
+### Unit testing
+
+To test backend Snowplow events, use the `expect_snowplow_event` helper. For more information, see
+[testing best practices](../testing_guide/best_practices.md#test-snowplow-events).
+
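+A minimal sketch of such a test. `Projects::SampleService` and `subject` are placeholders for the code
+under test, and the keyword arguments must mirror what your code passes to `Gitlab::Tracking.event`:
+
+```ruby
+# The `:snowplow` RSpec tag enables the tracking spy that `expect_snowplow_event` relies on.
+it 'tracks the event', :snowplow do
+  # Placeholder: assumed to call
+  # Gitlab::Tracking.event('Projects::SampleService', 'execute', user: user, project: project)
+  subject.execute
+
+  expect_snowplow_event(
+    category: 'Projects::SampleService',
+    action: 'execute',
+    user: user,
+    project: project
+  )
+end
+```
+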
+### Performance
+
+We use the [AsyncEmitter](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/ruby-tracker/emitters/#the-asyncemitter-class) when tracking events, which allows for instrumentation calls to be run in a background thread. This is still an active area of development.
+
+## Develop and test Snowplow
+
+To develop and test Snowplow events, use one of the following tools:
+
+| Testing Tool                                 | Frontend Tracking  | Backend Tracking    | Local Development Environment | Production Environment | Staging Environment |
+|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|---------------------|
+| Snowplow Analytics Debugger Chrome Extension | Yes                | No                  | Yes                           | Yes                    | Yes                 |
+| Snowplow Inspector Chrome Extension          | Yes                | No                  | Yes                           | Yes                    | Yes                 |
+| Snowplow Micro                               | Yes                | Yes                 | Yes                           | No                     | No                  |
+
+### Test frontend events
+
+Before you test frontend events in development, you must:
+
+1. [Enable Snowplow tracking in the Admin Area](index.md#enable-snowplow-tracking).
+1. Turn off ad blockers that could prevent Snowplow JavaScript from loading in your environment.
+1. Turn off "Do Not Track" (DNT) in your browser.
+
+All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#Setting_a_custom_page_URL_and_referrer_URL) personally identifiable
+information (PII). PII includes usernames, group, and project names.
+
+#### Snowplow Analytics Debugger Chrome Extension
+
+[Snowplow Analytics Debugger](https://www.iglooanalytics.com/blog/snowplow-analytics-debugger-chrome-extension.html) is a browser extension for testing frontend events. It works in production, staging, and local development environments.
+
+1. Install the [Snowplow Analytics Debugger](https://chrome.google.com/webstore/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) Chrome browser extension.
+1. Open Chrome DevTools to the Snowplow Analytics Debugger tab.
+
+#### Snowplow Inspector Chrome Extension
+
+Snowplow Inspector Chrome Extension is a browser extension for testing frontend events. This works in production, staging, and local development environments.
+
+1. Install [Snowplow Inspector](https://chrome.google.com/webstore/detail/snowplow-inspector/maplkdomeamdlngconidoefjpogkmljm?hl=en).
+1. To open the extension, select the Snowplow Inspector icon beside the address bar.
+1. Click around on a webpage with Snowplow to see JavaScript events firing in the inspector window.
+
+### Test backend events
+
+#### Snowplow Micro
+
+[Snowplow Micro](https://snowplowanalytics.com/blog/2019/07/17/introducing-snowplow-micro/) is a
+Docker-based solution for testing backend and frontend events in a local development environment. Snowplow Micro
+records the same events as the full Snowplow pipeline. To query events, use the Snowplow Micro API.
+
+To install and run Snowplow Micro, complete these steps to modify the
+[GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit):
+
+1. Ensure Docker is installed and running.
+
+1. To install Snowplow Micro, clone the settings in
+   [this project](https://gitlab.com/gitlab-org/snowplow-micro-configuration).
+
+1. Navigate to the directory with the cloned project,
+ and start the appropriate Docker container:
+
+ ```shell
+ ./snowplow-micro.sh
+ ```
+
+1. Use GDK to start the PostgreSQL terminal and connect
+ to the `gitlabhq_development` database:
+
+ ```shell
+ gdk psql -d gitlabhq_development
+ ```
+
+1. Update your instance's settings to enable Snowplow events and
+ point to the Snowplow Micro collector:
+
+   ```sql
+ update application_settings set snowplow_collector_hostname='localhost:9090', snowplow_enabled=true, snowplow_cookie_domain='.gitlab.com';
+ ```
+
+1. Update `DEFAULT_SNOWPLOW_OPTIONS` in `app/assets/javascripts/tracking/constants.js` to remove `forceSecureTracker: true`:
+
+ ```diff
+ diff --git a/app/assets/javascripts/tracking/constants.js b/app/assets/javascripts/tracking/constants.js
+ index 598111e4086..eff38074d4c 100644
+ --- a/app/assets/javascripts/tracking/constants.js
+ +++ b/app/assets/javascripts/tracking/constants.js
+ @@ -7,7 +7,6 @@ export const DEFAULT_SNOWPLOW_OPTIONS = {
+ appId: '',
+ userFingerprint: false,
+ respectDoNotTrack: true,
+ - forceSecureTracker: true,
+ eventMethod: 'post',
+ contexts: { webPage: true, performanceTiming: true },
+ formTracking: false,
+ ```
+
+1. Update `options` in `lib/gitlab/tracking.rb` to add `protocol` and `port`:
+
+ ```diff
+ diff --git a/lib/gitlab/tracking.rb b/lib/gitlab/tracking.rb
+ index 618e359211b..e9084623c43 100644
+ --- a/lib/gitlab/tracking.rb
+ +++ b/lib/gitlab/tracking.rb
+ @@ -41,7 +41,9 @@ def options(group)
+ cookie_domain: Gitlab::CurrentSettings.snowplow_cookie_domain,
+ app_id: Gitlab::CurrentSettings.snowplow_app_id,
+ form_tracking: additional_features,
+ - link_click_tracking: additional_features
+ + link_click_tracking: additional_features,
+ + protocol: 'http',
+ + port: 9090
+ }.transform_keys! { |key| key.to_s.camelize(:lower).to_sym }
+ end
+ ```
+
+1. Update `emitter` in `lib/gitlab/tracking/destinations/snowplow.rb` to change `protocol`:
+
+ ```diff
+ diff --git a/lib/gitlab/tracking/destinations/snowplow.rb b/lib/gitlab/tracking/destinations/snowplow.rb
+ index 4fa844de325..5dd9d0eacfb 100644
+ --- a/lib/gitlab/tracking/destinations/snowplow.rb
+ +++ b/lib/gitlab/tracking/destinations/snowplow.rb
+ @@ -40,7 +40,7 @@ def tracker
+ def emitter
+ SnowplowTracker::AsyncEmitter.new(
+ Gitlab::CurrentSettings.snowplow_collector_hostname,
+ - protocol: 'https'
+ + protocol: 'http'
+ )
+ end
+ end
+
+ ```
+
+1. Restart GDK:
+
+ ```shell
+ gdk restart
+ ```
+
+1. Send a test Snowplow event from the Rails console:
+
+ ```ruby
+ Gitlab::Tracking.event('category', 'action')
+ ```
+
+1. Navigate to `localhost:9090/micro/good` to see the event.
+
+#### Useful links
+
+- [Snowplow Micro repository](https://github.com/snowplow-incubator/snowplow-micro)
+- [Installation guide recording](https://www.youtube.com/watch?v=OX46fo_A0Ag)
+
+### Troubleshoot
+
+To control content security policy warnings when using an external host, modify `config/gitlab.yml`
+to allow or disallow them. To allow them, add the relevant host for `connect_src`. For example, for
+`https://snowplow.trx.gitlab.net`:
+
+```yaml
+development:
+ <<: *base
+ gitlab:
+ content_security_policy:
+ enabled: true
+ directives:
+ connect_src: "'self' http://localhost:* http://127.0.0.1:* ws://localhost:* wss://localhost:* ws://127.0.0.1:* https://snowplow.trx.gitlab.net/"
+```
diff --git a/doc/development/snowplow/index.md b/doc/development/snowplow/index.md
index e8b7d871b77..b8b35857adf 100644
--- a/doc/development/snowplow/index.md
+++ b/doc/development/snowplow/index.md
@@ -4,49 +4,37 @@ group: Product Intelligence
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Snowplow Guide
+# Snowplow
-This guide provides an overview of how Snowplow works, and implementation details.
-
-For more information about Product Intelligence, see:
-
-- [Product Intelligence Guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
-- [Service Ping Guide](../service_ping/index.md)
-
-More useful links:
-
-- [Product Intelligence Direction](https://about.gitlab.com/direction/product-intelligence/)
-- [Data Analysis Process](https://about.gitlab.com/handbook/business-technology/data-team/#data-analysis-process/)
-- [Data for Product Managers](https://about.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/)
-- [Data Infrastructure](https://about.gitlab.com/handbook/business-technology/data-team/platform/infrastructure/)
+This page provides an overview of how Snowplow works and how to enable it.
## What is Snowplow
-Snowplow is an enterprise-grade marketing and Product Intelligence platform which helps track the way users engage with our website and application.
+Snowplow is an enterprise-grade marketing and Product Intelligence platform that tracks how users engage with our website and application.
-[Snowplow](https://snowplowanalytics.com) consists of the following loosely-coupled sub-systems:
+[Snowplow](https://snowplowanalytics.com) consists of several loosely-coupled sub-systems:
-- **Trackers** fire Snowplow events. Snowplow has 12 trackers, covering web, mobile, desktop, server, and IoT.
-- **Collectors** receive Snowplow events from trackers. We have three different event collectors, synchronizing events either to Amazon S3, Apache Kafka, or Amazon Kinesis.
-- **Enrich** cleans up the raw Snowplow events, enriches them and puts them into storage. We have an Hadoop-based enrichment process, and a Kinesis-based or Kafka-based process.
-- **Storage** is where the Snowplow events live. We store the Snowplow events in a flat file structure on S3, and in the Redshift and PostgreSQL databases.
-- **Data modeling** is where event-level data is joined with other data sets and aggregated into smaller data sets, and business logic is applied. This produces a clean set of tables which make it easier to perform analysis on the data. We have data models for Redshift and Looker.
-- **Analytics** are performed on the Snowplow events or on the aggregate tables.
+- **Trackers** fire Snowplow events. Snowplow has twelve trackers that cover web, mobile, desktop, server, and IoT.
+- **Collectors** receive Snowplow events from trackers. We use different event collectors that synchronize events to Amazon S3, Apache Kafka, or Amazon Kinesis.
+- **Enrich** cleans raw Snowplow events, enriches them, and puts them into storage. There is a Hadoop-based enrichment process, and a Kinesis-based or Kafka-based process.
+- **Storage** stores Snowplow events. We store the Snowplow events in a flat file structure on S3, and in the Redshift and PostgreSQL databases.
+- **Data modeling** joins event-level data with other data sets, aggregates them into smaller data sets, and applies business logic. This produces a clean set of tables for data analysis. We use data models for Redshift and Looker.
+- **Analytics** are performed on Snowplow events or on aggregate tables.
![snowplow_flow](../img/snowplow_flow.png)
### Useful links
-- [Understanding the structure of Snowplow data](https://docs.snowplowanalytics.com/docs/understanding-your-pipeline/canonical-event/)
+- [Snowplow data structure](https://docs.snowplowanalytics.com/docs/understanding-your-pipeline/canonical-event/)
- [Our Iglu schema registry](https://gitlab.com/gitlab-org/iglu)
-- [List of events used in our codebase (Event Dictionary)](dictionary.md)
+- [List of events used in our codebase (Event Dictionary)](https://metrics.gitlab.com/snowplow.html)
## Enable Snowplow tracking
Tracking can be enabled at:
- The instance level, which enables tracking on both the frontend and backend layers.
-- The user level, though user tracking can be disabled on a per-user basis.
+- The user level. User tracking can be disabled on a per-user basis.
GitLab respects the [Do Not Track](https://www.eff.org/issues/do-not-track) standard, so any user who has enabled the Do Not Track option in their browser is not tracked at a user level.
Snowplow tracking is enabled on GitLab.com, and we use it for most of our tracking strategy.
@@ -101,21 +89,22 @@ sequenceDiagram
## Structured event taxonomy
-When adding new click events, we should add them in a way that's internally consistent. If we don't, it is difficult to perform analysis across features because each feature captures events differently.
+Click events must be consistent. If each feature captures events differently, it can be difficult
+to perform analysis.
-The current method provides several attributes that are sent on each click event. Please try to follow these guidelines when specifying events to capture:
+Each click event provides attributes that describe the event.
-| attribute | type | required | description |
+| Attribute | Type | Required | Description |
| --------- | ------- | -------- | ----------- |
-| category | text | true | The page or backend area of the application. Unless infeasible, please use the Rails page attribute by default in the frontend, and namespace + class name on the backend. |
-| action | text | true | The action the user is taking, or aspect that's being instrumented. The first word should always describe the action or aspect: clicks should be `click`, activations should be `activate`, creations should be `create`, etc. Use underscores to describe what was acted on; for example, activating a form field would be `activate_form_input`. An interface action like clicking on a dropdown would be `click_dropdown`, while a behavior like creating a project record from the backend would be `create_project` |
-| label | text | false | The specific element or object to act on. This can be one of the following: the label of the element (for example, a tab labeled 'Create from template' for `create_from_template`), a unique identifier if no text is available (for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar), or the name or title attribute of a record being created. |
+| category | text | true | The page or backend section of the application. Unless infeasible, use the Rails page attribute by default in the frontend, and namespace + class name on the backend. |
+| action | text | true | The action the user takes, or aspect that's being instrumented. The first word must describe the action or aspect. For example, clicks must be `click`, activations must be `activate`, creations must be `create`. Use underscores to describe what was acted on. For example, activating a form field is `activate_form_input`, an interface action like clicking on a dropdown is `click_dropdown`, a behavior like creating a project record from the backend is `create_project`. |
+| label | text | false | The specific element or object to act on. This can be one of the following: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar; or the name or title attribute of a record being created. |
| property | text | false | Any additional property of the element, or object being acted on. |
| value | decimal | false | Describes a numeric value or something directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. |
### Examples
-| category* | label | action | property** | value |
+| Category* | Label | Action | Property** | Value |
|-------------|------------------|-----------------------|----------|:-----:|
| `[root:index]` | `main_navigation` | `click_navigation_link` | `[link_label]` | - |
| `[groups:boards:show]` | `toggle_swimlanes` | `click_toggle_button` | - | `[is_active]` |
@@ -125,8 +114,8 @@ The current method provides several attributes that are sent on each click event
| `[projects:clusters:new]` | `chart_options` | `generate_link` | `[chart_link]` | - |
| `[projects:clusters:new]` | `chart_options` | `click_add_label_button` | `[label_id]` | - |
-_* It's ok to omit the category, and use the default._<br>
-_** Property is usually the best place for variable strings._
+_* If you choose to omit the category, you can use the default._<br>
+_** Use property for variable strings._
### Reference SQL
@@ -152,651 +141,33 @@ ORDER BY collector_tstamp DESC
LIMIT 20
```
-### Web-specific parameters
-
-Snowplow JS adds many [web-specific parameters](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/snowplow-tracker-protocol/#Web-specific_parameters) to all web events by default.
-
-## Implement Snowplow JS (Frontend) tracking
-
-GitLab provides `Tracking`, an interface that wraps the [Snowplow JavaScript Tracker](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/) for tracking custom events. The simplest way to use it is to add `data-` attributes to clickable elements and dropdowns. There is also a Vue mixin (exposing a `track` method), and the static method `Tracking.event`. Each of these requires at minimum a `category` and an `action`. You can provide additional [Structured event taxonomy](#structured-event-taxonomy) properties along with an `extra` object that accepts key-value pairs.
-
-| field | type | default value | description |
-|:-----------|:-------|:---------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `category` | string | `document.body.dataset.page` | Page or subsection of a page that events are being captured within. |
-| `action` | string | generic | Action the user is taking. Clicks should be `click` and activations should be `activate`, so for example, focusing a form field would be `activate_form_input`, and clicking a button would be `click_button`. |
-| `data` | object | `{}` | Additional data such as `label`, `property`, `value`, `context` (as described in our [Structured event taxonomy](#structured-event-taxonomy)), and `extra` (key-value pairs object). |
-
-### Usage recommendations
-
-- Use [data attributes](#tracking-with-data-attributes) on HTML elements that emits either the `click`, `show.bs.dropdown`, or `hide.bs.dropdown` events.
-- Use the [Vue mixin](#tracking-within-vue-components) when tracking custom events, or if the supported events for data attributes are not propagating.
-- Use the [Tracking class directly](#tracking-in-raw-javascript) when tracking on raw JS files.
-
-### Tracking with data attributes
-
-When working in HAML (or Vue templates) we can add `data-track-*` attributes to elements of interest. All elements that have a `data-track-action` attribute automatically have event tracking bound on clicks. You can provide extra data as a valid JSON string using `data-track-extra`.
-
-Below is an example of `data-track-*` attributes assigned to a button:
-
-```haml
-%button.btn{ data: { track: { action: "click_button", label: "template_preview", property: "my-template" } } }
-```
-
-```html
-<button class="btn"
- data-track-action="click_button"
- data-track-label="template_preview"
- data-track-property="my-template"
- data-track-extra='{ "template_variant": "primary" }'
-/>
-```
-
-Event listeners are bound at the document level to handle click events on or within elements with these data attributes. This allows them to be properly handled on re-rendering and changes to the DOM. Note that because of the way these events are bound, click events should not be stopped from propagating up the DOM tree. If click events are being stopped from propagating, you must implement your own listeners and follow the instructions in [Tracking within Vue components](#tracking-within-vue-components) or [Tracking in raw JavaScript](#tracking-in-raw-javascript).
-
-Below is a list of supported `data-track-*` attributes:
-
-| attribute | required | description |
-|:----------------------|:---------|:------------|
-| `data-track-action` | true | Action the user is taking. Clicks must be prepended with `click` and activations must be prepended with `activate`. For example, focusing a form field would be `activate_form_input` and clicking a button would be `click_button`. Replaces `data-track-event`, which was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/290962) in GitLab 13.11. |
-| `data-track-label` | false | The `label` as described in our [Structured event taxonomy](#structured-event-taxonomy). |
-| `data-track-property` | false | The `property` as described in our [Structured event taxonomy](#structured-event-taxonomy). |
-| `data-track-value` | false | The `value` as described in our [Structured event taxonomy](#structured-event-taxonomy). If omitted, this is the element's `value` property or `undefined`. For checkboxes, the default value is the element's checked attribute or `0` when unchecked. |
-| `data-track-extra` | false | A key-value pairs object passed as a valid JSON string. This is added to the `extra` property in our [`gitlab_standard`](#gitlab_standard) schema. |
-| `data-track-context` | false | The `context` as described in our [Structured event taxonomy](#structured-event-taxonomy). |
-
-#### Available helpers
-
-```ruby
-tracking_attrs(label, action, property) # { data: { track_label... } }
-
-%button{ **tracking_attrs('main_navigation', 'click_button', 'navigation') }
-```
-
-#### Caveats
-
-When using the GitLab helper method [`nav_link`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/helpers/tab_helper.rb#L76) be sure to wrap `html_options` under the `html_options` keyword argument.
-Be careful, as this behavior can be confused with the `ActionView` helper method [`link_to`](https://api.rubyonrails.org/v5.2.3/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to) that does not require additional wrapping of `html_options`
-
-```ruby
-# Bad
-= nav_link(controller: ['dashboard/groups', 'explore/groups'], data: { track_label: "explore_groups", track_action: "click_button" })
-
-# Good
-= nav_link(controller: ['dashboard/groups', 'explore/groups'], html_options: { data: { track_label: "explore_groups", track_action: "click_button" } })
-
-# Good (other helpers)
-= link_to explore_groups_path, title: _("Explore"), data: { track_label: "explore_groups", track_action: "click_button" }
-```
-
-### Tracking within Vue components
-
-There's a tracking Vue mixin that can be used in components if more complex tracking is required. To use it, first import the `Tracking` library and request a mixin.
-
-```javascript
-import Tracking from '~/tracking';
-const trackingMixin = Tracking.mixin({ label: 'right_sidebar' });
-```
-
-You can provide default options that are passed along whenever an event is tracked from within your component. For example, if all events in a component should be tracked with a given `label`, you can provide one at this time. Available defaults are `category`, `label`, `property`, and `value`. If no category is specified, `document.body.dataset.page` is used as the default.
-
-You can then use the mixin normally in your component with the `mixin` Vue declaration. The mixin also provides the ability to specify tracking options in `data` or `computed`. These override any defaults and allow the values to be dynamic from props, or based on state.
-
-```javascript
-export default {
- mixins: [trackingMixin],
- // ...[component implementation]...
- data() {
- return {
- expanded: false,
- tracking: {
- label: 'left_sidebar',
- },
- };
- },
-};
-```
-
-The mixin provides a `track` method that can be called from within the template,
-or from component methods. An example of the whole implementation might look like this:
-
-```javascript
-export default {
- name: 'RightSidebar',
- mixins: [Tracking.mixin({ label: 'right_sidebar' })],
- data() {
- return {
- expanded: false,
- };
- },
- methods: {
- toggle() {
- this.expanded = !this.expanded;
- // Additional data will be merged, like `value` below
- this.track('click_toggle', { value: Number(this.expanded) });
- }
- }
-};
-```
-
-The event data can be provided with a `tracking` object, declared in the `data` function,
-or as a `computed property`. A `tracking` object is convenient when the default
-event properties are dynamic or provided at runtime.
-
-```javascript
-export default {
- name: 'RightSidebar',
- mixins: [Tracking.mixin()],
- data() {
- return {
- tracking: {
- label: 'right_sidebar',
- // category: '',
- // property: '',
- // value: '',
- // experiment: '',
- // extra: {},
- },
- };
- },
-};
-```
-
-The event data can be provided directly in the `track` function as well.
-This object merges with any previously provided options.
-
-```javascript
-this.track('click_button', {
- label: 'right_sidebar',
-});
-```
-
-Lastly, if needed within the template, you can use the `track` method directly as well.
-
-```html
-<template>
- <div>
- <button data-testid="toggle" @click="toggle">Toggle</button>
-
- <div v-if="expanded">
- <p>Hello world!</p>
- <button @click="track('click_action')">Track another event</button>
- </div>
- </div>
-</template>
-```
-
-#### Testing example
-
-```javascript
-import { mockTracking } from 'helpers/tracking_helper';
-// mockTracking(category, documentOverride, spyMethod)
-
-describe('RightSidebar.vue', () => {
- let trackingSpy;
- let wrapper;
-
- beforeEach(() => {
- trackingSpy = mockTracking(undefined, wrapper.element, jest.spyOn);
- });
-
- const findToggle = () => wrapper.find('[data-testid="toggle"]');
-
- it('tracks turning off toggle', () => {
- findToggle().trigger('click');
-
- expect(trackingSpy).toHaveBeenCalledWith(undefined, 'click_toggle', {
- label: 'right_sidebar',
- value: 0,
- });
- });
-});
-```
-
-### Tracking in raw JavaScript
-
-Custom event tracking and instrumentation can be added by directly calling the `Tracking.event` static function. The following example demonstrates tracking a click on a button by calling `Tracking.event` manually.
-
-```javascript
-import Tracking from '~/tracking';
-
-const button = document.getElementById('create_from_template_button');
-
-button.addEventListener('click', () => {
- Tracking.event('dashboard:projects:index', 'click_button', {
- label: 'create_from_template',
- property: 'template_preview',
- extra: {
- templateVariant: 'primary',
- valid: 1,
- },
- });
-});
-```
-
-#### Testing example
-
-```javascript
-import Tracking from '~/tracking';
-
-describe('MyTracking', () => {
- let wrapper;
-
- beforeEach(() => {
- jest.spyOn(Tracking, 'event');
- });
+#### Last 100 page view events
- const findButton = () => wrapper.find('[data-testid="create_from_template"]');
-
- it('tracks event', () => {
- findButton().trigger('click');
-
- expect(Tracking.event).toHaveBeenCalledWith(undefined, 'click_button', {
- label: 'create_from_template',
- property: 'template_preview',
- extra: {
- templateVariant: 'primary',
- valid: true,
- },
- });
- });
-});
-```
-
-### Form tracking
-
-Enable Snowplow automatic [form tracking](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracking-specific-events/#form-tracking) by calling `Tracking.enableFormTracking` (after the DOM is ready) and providing a `config` object that includes at least one of the following elements:
-
-- `forms`: determines which forms are tracked, and are identified by the CSS class name.
-- `fields`: determines which fields inside the tracked forms are tracked, and are identified by the field `name`.
-
-An optional list of contexts can be provided as the second argument.
-Note that our [`gitlab_standard`](#gitlab_standard) schema is excluded from these events.
-
-```javascript
-Tracking.enableFormTracking({
- forms: { allow: ['sign-in-form', 'password-recovery-form'] },
- fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
-});
-```
-
-#### Testing example
-
-```javascript
-import Tracking from '~/tracking';
-
-describe('MyFormTracking', () => {
- let formTrackingSpy;
-
- beforeEach(() => {
- formTrackingSpy = jest
- .spyOn(Tracking, 'enableFormTracking')
- .mockImplementation(() => null);
- });
-
- it('initialized with the correct configuration', () => {
- expect(formTrackingSpy).toHaveBeenCalledWith({
- forms: { allow: ['sign-in-form', 'password-recovery-form'] },
- fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
- });
- });
-});
-```
-
-## Implement Snowplow Ruby (Backend) tracking
-
-GitLab provides `Gitlab::Tracking`, an interface that wraps the [Snowplow Ruby Tracker](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/ruby-tracker/) for tracking custom events.
-
-Custom event tracking and instrumentation can be added by directly calling the `GitLab::Tracking.event` class method, which accepts the following arguments:
-
-| argument | type | default value | description |
-|------------|---------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------|
-| `category` | String | | Area or aspect of the application. This could be `HealthCheckController` or `Lfs::FileTransformer` for instance. |
-| `action` | String | | The action being taken, which can be anything from a controller action like `create` to something like an Active Record callback. |
-| `label` | String | nil | As described in [Structured event taxonomy](#structured-event-taxonomy). |
-| `property` | String | nil | As described in [Structured event taxonomy](#structured-event-taxonomy). |
-| `value` | Numeric | nil | As described in [Structured event taxonomy](#structured-event-taxonomy). |
-| `context` | Array\[SelfDescribingJSON\] | nil | An array of custom contexts to send with this event. Most events should not have any custom contexts. |
-| `project` | Project | nil | The project associated with the event. |
-| `user` | User | nil | The user associated with the event. |
-| `namespace` | Namespace | nil | The namespace associated with the event. |
-| `extra` | Hash | `{}` | Additional keyword arguments are collected into a hash and sent with the event. |
-
-Tracking can be viewed as either tracking user behavior, or can be used for instrumentation to monitor and visualize performance over time in an area or aspect of code.
-
-For example:
-
-```ruby
-class Projects::CreateService < BaseService
- def execute
- project = Project.create(params)
-
- Gitlab::Tracking.event('Projects::CreateService', 'create_project', label: project.errors.full_messages.to_sentence,
- property: project.valid?.to_s, project: project, user: current_user, namespace: namespace)
- end
-end
+```sql
+SELECT
+ -- page_url,
+ -- page_title,
+ -- referer_url,
+ -- marketing_medium,
+ -- marketing_source,
+ -- marketing_campaign,
+ -- browser_window_width,
+ -- device_is_mobile
+ *
+FROM legacy.snowplow_page_views_30
+ORDER BY page_view_start DESC
+LIMIT 100
```
-### Unit testing
-
-Use the `expect_snowplow_event` helper when testing backend Snowplow events. See [testing best practices](
-https://docs.gitlab.com/ee/development/testing_guide/best_practices.html#test-snowplow-events) for details.
-
-### Performance
-
-We use the [AsyncEmitter](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/ruby-tracker/emitters/#the-asyncemitter-class) when tracking events, which allows for instrumentation calls to be run in a background thread. This is still an active area of development.
-
-## Develop and test Snowplow
-
-There are several tools for developing and testing a Snowplow event.
-
-| Testing Tool | Frontend Tracking | Backend Tracking | Local Development Environment | Production Environment | Production Environment |
-|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|------------------------|
-| Snowplow Analytics Debugger Chrome Extension | **{check-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** |
-| Snowplow Inspector Chrome Extension | **{check-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** |
-| Snowplow Micro | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-| Snowplow Mini | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{status_preparing}** | **{status_preparing}** |
-
-**Legend**
-
-**{check-circle}** Available, **{status_preparing}** In progress, **{dotted-circle}** Not Planned
-
-### Test frontend events
-
-To test frontend events in development:
-
-- [Enable Snowplow tracking in the Admin Area](#enable-snowplow-tracking).
-- Turn off any ad blockers that would prevent Snowplow JS from loading in your environment.
-- Turn off "Do Not Track" (DNT) in your browser.
-
-#### Snowplow Analytics Debugger Chrome Extension
-
-Snowplow Analytics Debugger is a browser extension for testing frontend events. This works on production, staging, and local development environments.
-
-1. Install the [Snowplow Analytics Debugger](https://chrome.google.com/webstore/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) Chrome browser extension.
-1. Open Chrome DevTools to the Snowplow Analytics Debugger tab.
-1. Learn more at [Igloo Analytics](https://www.iglooanalytics.com/blog/snowplow-analytics-debugger-chrome-extension.html).
-
-#### Snowplow Inspector Chrome Extension
-
-Snowplow Inspector Chrome Extension is a browser extension for testing frontend events. This works on production, staging and local development environments.
-
-1. Install [Snowplow Inspector](https://chrome.google.com/webstore/detail/snowplow-inspector/maplkdomeamdlngconidoefjpogkmljm?hl=en).
-1. Open the Chrome extension by pressing the Snowplow Inspector icon beside the address bar.
-1. Click around on a webpage with Snowplow and you should see JavaScript events firing in the inspector window.
-
-### Snowplow Micro
-
-Snowplow Micro is a very small version of a full Snowplow data collection pipeline: small enough that it can be launched by a test suite. Events can be recorded into Snowplow Micro just as they can a full Snowplow pipeline. Micro then exposes an API that can be queried.
-
-Snowplow Micro is a Docker-based solution for testing frontend and backend events in a local development environment. You must modify GDK using the instructions below to set this up.
-
-- Read [Introducing Snowplow Micro](https://snowplowanalytics.com/blog/2019/07/17/introducing-snowplow-micro/)
-- Look at the [Snowplow Micro repository](https://github.com/snowplow-incubator/snowplow-micro)
-- Watch our <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [installation guide recording](https://www.youtube.com/watch?v=OX46fo_A0Ag)
-
-1. Ensure Docker is installed and running.
-
-1. Install [Snowplow Micro](https://github.com/snowplow-incubator/snowplow-micro) by cloning the settings in [this project](https://gitlab.com/gitlab-org/snowplow-micro-configuration):
-1. Navigate to the directory with the cloned project, and start the appropriate Docker
- container with the following script:
-
- ```shell
- ./snowplow-micro.sh
- ```
-
-1. Use GDK to start the PostgreSQL terminal and connect to the `gitlabhq_development` database:
-
- ```shell
- gdk psql -d gitlabhq_development
- ```
-
-1. Update your instance's settings to enable Snowplow events and point to the Snowplow Micro collector:
-
- ```shell
- update application_settings set snowplow_collector_hostname='localhost:9090', snowplow_enabled=true, snowplow_cookie_domain='.gitlab.com';
- ```
-
-1. Update `DEFAULT_SNOWPLOW_OPTIONS` in `app/assets/javascripts/tracking/constants.js` to remove `forceSecureTracker: true`:
-
- ```diff
- diff --git a/app/assets/javascripts/tracking/constants.js b/app/assets/javascripts/tracking/constants.js
- index 598111e4086..eff38074d4c 100644
- --- a/app/assets/javascripts/tracking/constants.js
- +++ b/app/assets/javascripts/tracking/constants.js
- @@ -7,7 +7,6 @@ export const DEFAULT_SNOWPLOW_OPTIONS = {
- appId: '',
- userFingerprint: false,
- respectDoNotTrack: true,
- - forceSecureTracker: true,
- eventMethod: 'post',
- contexts: { webPage: true, performanceTiming: true },
- formTracking: false,
- ```
-
-1. Update `options` in `lib/gitlab/tracking.rb` to add `protocol` and `port`:
-
- ```diff
- diff --git a/lib/gitlab/tracking.rb b/lib/gitlab/tracking.rb
- index 618e359211b..e9084623c43 100644
- --- a/lib/gitlab/tracking.rb
- +++ b/lib/gitlab/tracking.rb
- @@ -41,7 +41,9 @@ def options(group)
- cookie_domain: Gitlab::CurrentSettings.snowplow_cookie_domain,
- app_id: Gitlab::CurrentSettings.snowplow_app_id,
- form_tracking: additional_features,
- - link_click_tracking: additional_features
- + link_click_tracking: additional_features,
- + protocol: 'http',
- + port: 9090
- }.transform_keys! { |key| key.to_s.camelize(:lower).to_sym }
- end
- ```
-
-1. Update `emitter` in `lib/gitlab/tracking/destinations/snowplow.rb` to change `protocol`:
-
- ```diff
- diff --git a/lib/gitlab/tracking/destinations/snowplow.rb b/lib/gitlab/tracking/destinations/snowplow.rb
- index 4fa844de325..5dd9d0eacfb 100644
- --- a/lib/gitlab/tracking/destinations/snowplow.rb
- +++ b/lib/gitlab/tracking/destinations/snowplow.rb
- @@ -40,7 +40,7 @@ def tracker
- def emitter
- SnowplowTracker::AsyncEmitter.new(
- Gitlab::CurrentSettings.snowplow_collector_hostname,
- - protocol: 'https'
- + protocol: 'http'
- )
- end
- end
-
- ```
-
-1. Restart GDK:
-
- ```shell
- gdk restart
- ```
-
-1. Send a test Snowplow event from the Rails console:
+### Web-specific parameters
- ```ruby
- Gitlab::Tracking.event('category', 'action')
- ```
+Snowplow JavaScript adds [web-specific parameters](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/snowplow-tracker-protocol/#Web-specific_parameters) to all web events by default.
-1. Navigate to `localhost:9090/micro/good` to see the event.
-
-### Snowplow Mini
-
-[Snowplow Mini](https://github.com/snowplow/snowplow-mini) is an easily-deployable, single-instance version of Snowplow.
-
-Snowplow Mini can be used for testing frontend and backend events on a production, staging and local development environment.
-
-For GitLab.com, we're setting up a [QA and Testing environment](https://gitlab.com/gitlab-org/telemetry/-/issues/266) using Snowplow Mini.
-
-### Troubleshooting
-
-To control content security policy warnings when using an external host, you can allow or disallow them by modifying `config/gitlab.yml`. To allow them, add the relevant host for `connect_src`. For example, for `https://snowplow.trx.gitlab.net`:
-
-```yaml
-development:
- <<: *base
- gitlab:
- content_security_policy:
- enabled: true
- directives:
- connect_src: "'self' http://localhost:* http://127.0.0.1:* ws://localhost:* wss://localhost:* ws://127.0.0.1:* https://snowplow.trx.gitlab.net/"
-```
+## Related topics
-## Snowplow Schemas
-
-### `gitlab_standard`
-
-We are including the [`gitlab_standard` schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_standard/jsonschema/) with every event. See [Standardize Snowplow Schema](https://gitlab.com/groups/gitlab-org/-/epics/5218) for details.
-
-The [`StandardContext`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/standard_context.rb) class represents this schema in the application.
-
-| Field Name | Required | Type | Description |
-|----------------|---------------------|-----------------------|---------------------------------------------------------------------------------------------|
-| `project_id` | **{dotted-circle}** | integer | |
-| `namespace_id` | **{dotted-circle}** | integer | |
-| `environment` | **{check-circle}** | string (max 32 chars) | Name of the source environment, such as `production` or `staging` |
-| `source` | **{check-circle}** | string (max 32 chars) | Name of the source application, such as `gitlab-rails` or `gitlab-javascript` |
-| `plan` | **{dotted-circle}** | string (max 32 chars) | Name of the plan for the namespace, such as `free`, `premium`, or `ultimate`. Automatically picked from the `namespace`. |
-| `extra` | **{dotted-circle}** | JSON | Any additional data associated with the event, in the form of key-value pairs |
-
-### Default Schema
-
-| Field Name | Required | Type | Description |
-|--------------------------|---------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------|
-| `app_id` | **{check-circle}** | string | Unique identifier for website / application |
-| `base_currency` | **{dotted-circle}** | string | Reporting currency |
-| `br_colordepth` | **{dotted-circle}** | integer | Browser color depth |
-| `br_cookies` | **{dotted-circle}** | boolean | Does the browser permit cookies? |
-| `br_family` | **{dotted-circle}** | string | Browser family |
-| `br_features_director` | **{dotted-circle}** | boolean | Director plugin installed? |
-| `br_features_flash` | **{dotted-circle}** | boolean | Flash plugin installed? |
-| `br_features_gears` | **{dotted-circle}** | boolean | Google gears installed? |
-| `br_features_java` | **{dotted-circle}** | boolean | Java plugin installed? |
-| `br_features_pdf` | **{dotted-circle}** | boolean | Adobe PDF plugin installed? |
-| `br_features_quicktime` | **{dotted-circle}** | boolean | Quicktime plugin installed? |
-| `br_features_realplayer` | **{dotted-circle}** | boolean | RealPlayer plugin installed? |
-| `br_features_silverlight` | **{dotted-circle}** | boolean | Silverlight plugin installed? |
-| `br_features_windowsmedia` | **{dotted-circle}** | boolean | Windows media plugin installed? |
-| `br_lang` | **{dotted-circle}** | string | Language the browser is set to |
-| `br_name` | **{dotted-circle}** | string | Browser name |
-| `br_renderengine` | **{dotted-circle}** | string | Browser rendering engine |
-| `br_type` | **{dotted-circle}** | string | Browser type |
-| `br_version` | **{dotted-circle}** | string | Browser version |
-| `br_viewheight` | **{dotted-circle}** | string | Browser viewport height |
-| `br_viewwidth` | **{dotted-circle}** | string | Browser viewport width |
-| `collector_tstamp` | **{dotted-circle}** | timestamp | Time stamp for the event recorded by the collector |
-| `contexts` | **{dotted-circle}** | | |
-| `derived_contexts` | **{dotted-circle}** | | Contexts derived in the Enrich process |
-| `derived_tstamp` | **{dotted-circle}** | timestamp | Timestamp making allowance for inaccurate device clock |
-| `doc_charset` | **{dotted-circle}** | string | Web page's character encoding |
-| `doc_height` | **{dotted-circle}** | string | Web page height |
-| `doc_width` | **{dotted-circle}** | string | Web page width |
-| `domain_sessionid` | **{dotted-circle}** | string | Unique identifier (UUID) for this visit of this user_id to this domain |
-| `domain_sessionidx` | **{dotted-circle}** | integer | Index of number of visits that this user_id has made to this domain (The first visit is `1`) |
-| `domain_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a first party cookie (so domain specific) |
-| `dvce_created_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event occurred, as recorded by client device |
-| `dvce_ismobile` | **{dotted-circle}** | boolean | Indicates whether device is mobile |
-| `dvce_screenheight` | **{dotted-circle}** | string | Screen / monitor resolution |
-| `dvce_screenwidth` | **{dotted-circle}** | string | Screen / monitor resolution |
-| `dvce_sent_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event was sent by client device to collector |
-| `dvce_type` | **{dotted-circle}** | string | Type of device |
-| `etl_tags` | **{dotted-circle}** | string | JSON of tags for this ETL run |
-| `etl_tstamp` | **{dotted-circle}** | timestamp | Timestamp event began ETL |
-| `event` | **{dotted-circle}** | string | Event type |
-| `event_fingerprint` | **{dotted-circle}** | string | Hash client-set event fields |
-| `event_format` | **{dotted-circle}** | string | Format for event |
-| `event_id` | **{dotted-circle}** | string | Event UUID |
-| `event_name` | **{dotted-circle}** | string | Event name |
-| `event_vendor` | **{dotted-circle}** | string | The company who developed the event model |
-| `event_version` | **{dotted-circle}** | string | Version of event schema |
-| `geo_city` | **{dotted-circle}** | string | City of IP origin |
-| `geo_country` | **{dotted-circle}** | string | Country of IP origin |
-| `geo_latitude` | **{dotted-circle}** | string | An approximate latitude |
-| `geo_longitude` | **{dotted-circle}** | string | An approximate longitude |
-| `geo_region` | **{dotted-circle}** | string | Region of IP origin |
-| `geo_region_name` | **{dotted-circle}** | string | Region of IP origin |
-| `geo_timezone` | **{dotted-circle}** | string | Timezone of IP origin |
-| `geo_zipcode` | **{dotted-circle}** | string | Zip (postal) code of IP origin |
-| `ip_domain` | **{dotted-circle}** | string | Second level domain name associated with the visitor's IP address |
-| `ip_isp` | **{dotted-circle}** | string | Visitor's ISP |
-| `ip_netspeed` | **{dotted-circle}** | string | Visitor's connection type |
-| `ip_organization` | **{dotted-circle}** | string | Organization associated with the visitor's IP address – defaults to ISP name if none is found |
-| `mkt_campaign` | **{dotted-circle}** | string | The campaign ID |
-| `mkt_clickid` | **{dotted-circle}** | string | The click ID |
-| `mkt_content` | **{dotted-circle}** | string | The content or ID of the ad. |
-| `mkt_medium` | **{dotted-circle}** | string | Type of traffic source |
-| `mkt_network` | **{dotted-circle}** | string | The ad network to which the click ID belongs |
-| `mkt_source` | **{dotted-circle}** | string | The company / website where the traffic came from |
-| `mkt_term` | **{dotted-circle}** | string | Keywords associated with the referrer |
-| `name_tracker` | **{dotted-circle}** | string | The tracker namespace |
-| `network_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a cookie from the collector (so set at a network level and shouldn't be set by a tracker) |
-| `os_family` | **{dotted-circle}** | string | Operating system family |
-| `os_manufacturer` | **{dotted-circle}** | string | Manufacturers of operating system |
-| `os_name` | **{dotted-circle}** | string | Name of operating system |
-| `os_timezone` | **{dotted-circle}** | string | Client operating system timezone |
-| `page_referrer` | **{dotted-circle}** | string | Referrer URL |
-| `page_title` | **{dotted-circle}** | string | Page title |
-| `page_url` | **{dotted-circle}** | string | Page URL |
-| `page_urlfragment` | **{dotted-circle}** | string | Fragment aka anchor |
-| `page_urlhost` | **{dotted-circle}** | string | Host aka domain |
-| `page_urlpath` | **{dotted-circle}** | string | Path to page |
-| `page_urlport` | **{dotted-circle}** | integer | Port if specified, 80 if not |
-| `page_urlquery` | **{dotted-circle}** | string | Query string |
-| `page_urlscheme` | **{dotted-circle}** | string | Scheme (protocol name) |
-| `platform` | **{dotted-circle}** | string | The platform the app runs on |
-| `pp_xoffset_max` | **{dotted-circle}** | integer | Maximum page x offset seen in the last ping period |
-| `pp_xoffset_min` | **{dotted-circle}** | integer | Minimum page x offset seen in the last ping period |
-| `pp_yoffset_max` | **{dotted-circle}** | integer | Maximum page y offset seen in the last ping period |
-| `pp_yoffset_min` | **{dotted-circle}** | integer | Minimum page y offset seen in the last ping period |
-| `refr_domain_userid` | **{dotted-circle}** | string | The Snowplow `domain_userid` of the referring website |
-| `refr_dvce_tstamp` | **{dotted-circle}** | timestamp | The time of attaching the `domain_userid` to the inbound link |
-| `refr_medium` | **{dotted-circle}** | string | Type of referer |
-| `refr_source` | **{dotted-circle}** | string | Name of referer if recognised |
-| `refr_term` | **{dotted-circle}** | string | Keywords if source is a search engine |
-| `refr_urlfragment` | **{dotted-circle}** | string | Referer URL fragment |
-| `refr_urlhost` | **{dotted-circle}** | string | Referer host |
-| `refr_urlpath` | **{dotted-circle}** | string | Referer page path |
-| `refr_urlport` | **{dotted-circle}** | integer | Referer port |
-| `refr_urlquery` | **{dotted-circle}** | string | Referer URL query string |
-| `refr_urlscheme` | **{dotted-circle}** | string | Referer scheme |
-| `se_action` | **{dotted-circle}** | string | The action / event itself |
-| `se_category` | **{dotted-circle}** | string | The category of event |
-| `se_label` | **{dotted-circle}** | string | A label often used to refer to the 'object' the action is performed on |
-| `se_property` | **{dotted-circle}** | string | A property associated with either the action or the object |
-| `se_value` | **{dotted-circle}** | decimal | A value associated with the user action |
-| `ti_category` | **{dotted-circle}** | string | Item category |
-| `ti_currency` | **{dotted-circle}** | string | Currency |
-| `ti_name` | **{dotted-circle}** | string | Item name |
-| `ti_orderid` | **{dotted-circle}** | string | Order ID |
-| `ti_price` | **{dotted-circle}** | decimal | Item price |
-| `ti_price_base` | **{dotted-circle}** | decimal | Item price in base currency |
-| `ti_quantity` | **{dotted-circle}** | integer | Item quantity |
-| `ti_sku` | **{dotted-circle}** | string | Item SKU |
-| `tr_affiliation` | **{dotted-circle}** | string | Transaction affiliation (such as channel) |
-| `tr_city` | **{dotted-circle}** | string | Delivery address: city |
-| `tr_country` | **{dotted-circle}** | string | Delivery address: country |
-| `tr_currency` | **{dotted-circle}** | string | Transaction Currency |
-| `tr_orderid` | **{dotted-circle}** | string | Order ID |
-| `tr_shipping` | **{dotted-circle}** | decimal | Delivery cost charged |
-| `tr_shipping_base` | **{dotted-circle}** | decimal | Shipping cost in base currency |
-| `tr_state` | **{dotted-circle}** | string | Delivery address: state |
-| `tr_tax` | **{dotted-circle}** | decimal | Transaction tax value (such as amount of VAT included) |
-| `tr_tax_base` | **{dotted-circle}** | decimal | Tax applied in base currency |
-| `tr_total` | **{dotted-circle}** | decimal | Transaction total value |
-| `tr_total_base` | **{dotted-circle}** | decimal | Total amount of transaction in base currency |
-| `true_tstamp` | **{dotted-circle}** | timestamp | User-set exact timestamp |
-| `txn_id` | **{dotted-circle}** | string | Transaction ID |
-| `unstruct_event` | **{dotted-circle}** | JSON | The properties of the event |
-| `uploaded_at` | **{dotted-circle}** | | |
-| `user_fingerprint` | **{dotted-circle}** | integer | User identifier based on (hopefully unique) browser features |
-| `user_id` | **{dotted-circle}** | string | Unique identifier for user, set by the business using setUserId |
-| `user_ipaddress` | **{dotted-circle}** | string | IP address |
-| `useragent` | **{dotted-circle}** | string | User agent (expressed as a browser string) |
-| `v_collector` | **{dotted-circle}** | string | Collector version |
-| `v_etl` | **{dotted-circle}** | string | ETL version |
-| `v_tracker` | **{dotted-circle}** | string | Identifier for Snowplow tracker |
+- [Product Intelligence Guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
+- [Service Ping Guide](../service_ping/index.md)
+- [Product Intelligence Direction](https://about.gitlab.com/direction/product-intelligence/)
+- [Data Analysis Process](https://about.gitlab.com/handbook/business-technology/data-team/#data-analysis-process/)
+- [Data for Product Managers](https://about.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/)
+- [Data Infrastructure](https://about.gitlab.com/handbook/business-technology/data-team/platform/infrastructure/)
diff --git a/doc/development/snowplow/review_guidelines.md b/doc/development/snowplow/review_guidelines.md
index 8edcbf06a0e..69fad1794e2 100644
--- a/doc/development/snowplow/review_guidelines.md
+++ b/doc/development/snowplow/review_guidelines.md
@@ -14,7 +14,7 @@ general best practices for code reviews, refer to our [code review guide](../cod
## Resources for reviewers
- [Snowplow Guide](index.md)
-- [Event Dictionary](dictionary.md)
+- [Event Dictionary](https://metrics.gitlab.com/snowplow.html)
## Review process
@@ -26,18 +26,18 @@ events or touches Snowplow related files.
#### The merge request **author** should
- For frontend events, when relevant, add a screenshot of the event in
- the [testing tool](../snowplow/index.md#develop-and-test-snowplow) used.
+ the [testing tool](implementation.md#develop-and-test-snowplow) used.
- For backend events, when relevant, add the output of the
- [Snowplow Micro](index.md#snowplow-mini) good events
+ [Snowplow Micro](implementation.md#snowplow-micro) good events
`GET http://localhost:9090/micro/good` (it might be a good idea
to reset with `GET http://localhost:9090/micro/reset` first).
- Update the [Event Dictionary](event_dictionary_guide.md).
#### The Product Intelligence **reviewer** should
-- Check that the [event taxonomy](../snowplow/index.md#structured-event-taxonomy) is correct.
-- Check the [usage recommendations](../snowplow/index.md#usage-recommendations).
+- Check that the [event taxonomy](index.md#structured-event-taxonomy) is correct.
+- Check the [usage recommendations](implementation.md#usage-recommendations).
- Check that the [Event Dictionary](event_dictionary_guide.md) is up-to-date.
- If needed, check that the events are firing locally using one of the
-[testing tools](../snowplow/index.md#develop-and-test-snowplow) available.
+[testing tools](implementation.md#develop-and-test-snowplow) available.
- Approve the MR, and relabel the MR with `~"product intelligence::approved"`.
diff --git a/doc/development/snowplow/schemas.md b/doc/development/snowplow/schemas.md
new file mode 100644
index 00000000000..5b9e4f5256e
--- /dev/null
+++ b/doc/development/snowplow/schemas.md
@@ -0,0 +1,166 @@
+---
+stage: Growth
+group: Product Intelligence
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Snowplow schemas
+
+This page provides Snowplow schema reference for GitLab events.
+
+## `gitlab_standard`
+
+We are including the [`gitlab_standard` schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_standard/jsonschema/) with every event. See [Standardize Snowplow Schema](https://gitlab.com/groups/gitlab-org/-/epics/5218) for details.
+
+The [`StandardContext`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/standard_context.rb) class represents this schema in the application.
+
+| Field Name | Required | Type | Description |
+|----------------|---------------------|-----------------------|---------------------------------------------------------------------------------------------|
+| `project_id`   | **{dotted-circle}** | integer               | ID of the project associated with the event, if any                                          |
+| `namespace_id` | **{dotted-circle}** | integer               | ID of the namespace associated with the event, if any                                        |
+| `environment` | **{check-circle}** | string (max 32 chars) | Name of the source environment, such as `production` or `staging` |
+| `source` | **{check-circle}** | string (max 32 chars) | Name of the source application, such as `gitlab-rails` or `gitlab-javascript` |
+| `plan` | **{dotted-circle}** | string (max 32 chars) | Name of the plan for the namespace, such as `free`, `premium`, or `ultimate`. Automatically picked from the `namespace`. |
+| `google_analytics_id` | **{dotted-circle}** | string (max 32 chars) | Google Analytics ID, present when set from our marketing sites. |
+| `extra` | **{dotted-circle}** | JSON | Any additional data associated with the event, in the form of key-value pairs |
+
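+A minimal, illustrative sketch of how these fields can be populated from the
+Rails application, assuming the `Gitlab::Tracking.event` API (the category,
+action, and extra key below are hypothetical):
+
+```ruby
+# project, namespace, and user feed project_id, namespace_id, and plan in the
+# gitlab_standard context; additional key-value pairs are intended for `extra`.
+Gitlab::Tracking.event(
+  'epics_action',               # category (hypothetical)
+  'promote',                    # action (hypothetical)
+  project: project,
+  namespace: project.namespace,
+  user: current_user,
+  weight: 3                     # example key-value pair for `extra`
+)
+```
+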
+## Default Schema
+
+Frontend events include a [web-specific schema](https://docs.snowplowanalytics.com/docs/understanding-your-pipeline/canonical-event/#Web-specific_fields) provided by Snowplow.
+All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#Setting_a_custom_page_URL_and_referrer_URL) personally identifiable
+information (PII). PII includes usernames, group names, and project names.
+
+| Field Name | Required | Type | Description |
+|--------------------------|---------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------|
+| `app_id` | **{check-circle}** | string | Unique identifier for website / application |
+| `base_currency` | **{dotted-circle}** | string | Reporting currency |
+| `br_colordepth` | **{dotted-circle}** | integer | Browser color depth |
+| `br_cookies` | **{dotted-circle}** | boolean | Does the browser permit cookies? |
+| `br_family` | **{dotted-circle}** | string | Browser family |
+| `br_features_director` | **{dotted-circle}** | boolean | Director plugin installed? |
+| `br_features_flash` | **{dotted-circle}** | boolean | Flash plugin installed? |
+| `br_features_gears` | **{dotted-circle}** | boolean | Google gears installed? |
+| `br_features_java` | **{dotted-circle}** | boolean | Java plugin installed? |
+| `br_features_pdf` | **{dotted-circle}** | boolean | Adobe PDF plugin installed? |
+| `br_features_quicktime` | **{dotted-circle}** | boolean | Quicktime plugin installed? |
+| `br_features_realplayer` | **{dotted-circle}** | boolean | RealPlayer plugin installed? |
+| `br_features_silverlight` | **{dotted-circle}** | boolean | Silverlight plugin installed? |
+| `br_features_windowsmedia` | **{dotted-circle}** | boolean | Windows media plugin installed? |
+| `br_lang` | **{dotted-circle}** | string | Language the browser is set to |
+| `br_name` | **{dotted-circle}** | string | Browser name |
+| `br_renderengine` | **{dotted-circle}** | string | Browser rendering engine |
+| `br_type` | **{dotted-circle}** | string | Browser type |
+| `br_version` | **{dotted-circle}** | string | Browser version |
+| `br_viewheight` | **{dotted-circle}** | string | Browser viewport height |
+| `br_viewwidth` | **{dotted-circle}** | string | Browser viewport width |
+| `collector_tstamp` | **{dotted-circle}** | timestamp | Time stamp for the event recorded by the collector |
+| `contexts` | **{dotted-circle}** | | |
+| `derived_contexts` | **{dotted-circle}** | | Contexts derived in the Enrich process |
+| `derived_tstamp` | **{dotted-circle}** | timestamp | Timestamp making allowance for inaccurate device clock |
+| `doc_charset` | **{dotted-circle}** | string | Web page's character encoding |
+| `doc_height` | **{dotted-circle}** | string | Web page height |
+| `doc_width` | **{dotted-circle}** | string | Web page width |
+| `domain_sessionid` | **{dotted-circle}** | string | Unique identifier (UUID) for this visit of this user_id to this domain |
+| `domain_sessionidx` | **{dotted-circle}** | integer | Index of number of visits that this user_id has made to this domain (The first visit is `1`) |
+| `domain_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a first party cookie (so domain specific) |
+| `dvce_created_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event occurred, as recorded by client device |
+| `dvce_ismobile` | **{dotted-circle}** | boolean | Indicates whether device is mobile |
+| `dvce_screenheight` | **{dotted-circle}** | string | Screen / monitor resolution |
+| `dvce_screenwidth` | **{dotted-circle}** | string | Screen / monitor resolution |
+| `dvce_sent_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event was sent by client device to collector |
+| `dvce_type` | **{dotted-circle}** | string | Type of device |
+| `etl_tags` | **{dotted-circle}** | string | JSON of tags for this ETL run |
+| `etl_tstamp` | **{dotted-circle}** | timestamp | Timestamp event began ETL |
+| `event` | **{dotted-circle}** | string | Event type |
+| `event_fingerprint`      | **{dotted-circle}** | string    | Hash of client-set event fields |
+| `event_format` | **{dotted-circle}** | string | Format for event |
+| `event_id` | **{dotted-circle}** | string | Event UUID |
+| `event_name` | **{dotted-circle}** | string | Event name |
+| `event_vendor` | **{dotted-circle}** | string | The company who developed the event model |
+| `event_version` | **{dotted-circle}** | string | Version of event schema |
+| `geo_city` | **{dotted-circle}** | string | City of IP origin |
+| `geo_country` | **{dotted-circle}** | string | Country of IP origin |
+| `geo_latitude` | **{dotted-circle}** | string | An approximate latitude |
+| `geo_longitude` | **{dotted-circle}** | string | An approximate longitude |
+| `geo_region` | **{dotted-circle}** | string | Region of IP origin |
+| `geo_region_name` | **{dotted-circle}** | string | Region of IP origin |
+| `geo_timezone` | **{dotted-circle}** | string | Time zone of IP origin |
+| `geo_zipcode` | **{dotted-circle}** | string | Zip (postal) code of IP origin |
+| `ip_domain` | **{dotted-circle}** | string | Second level domain name associated with the visitor's IP address |
+| `ip_isp` | **{dotted-circle}** | string | Visitor's ISP |
+| `ip_netspeed` | **{dotted-circle}** | string | Visitor's connection type |
+| `ip_organization` | **{dotted-circle}** | string | Organization associated with the visitor's IP address – defaults to ISP name if none is found |
+| `mkt_campaign` | **{dotted-circle}** | string | The campaign ID |
+| `mkt_clickid` | **{dotted-circle}** | string | The click ID |
+| `mkt_content` | **{dotted-circle}** | string | The content or ID of the ad. |
+| `mkt_medium` | **{dotted-circle}** | string | Type of traffic source |
+| `mkt_network` | **{dotted-circle}** | string | The ad network to which the click ID belongs |
+| `mkt_source` | **{dotted-circle}** | string | The company / website where the traffic came from |
+| `mkt_term` | **{dotted-circle}** | string | Keywords associated with the referrer |
+| `name_tracker` | **{dotted-circle}** | string | The tracker namespace |
+| `network_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a cookie from the collector (so set at a network level and shouldn't be set by a tracker) |
+| `os_family` | **{dotted-circle}** | string | Operating system family |
+| `os_manufacturer` | **{dotted-circle}** | string | Manufacturers of operating system |
+| `os_name` | **{dotted-circle}** | string | Name of operating system |
+| `os_timezone` | **{dotted-circle}** | string | Client operating system time zone |
+| `page_referrer` | **{dotted-circle}** | string | Referrer URL |
+| `page_title` | **{dotted-circle}** | string | Page title |
+| `page_url` | **{dotted-circle}** | string | Page URL |
+| `page_urlfragment` | **{dotted-circle}** | string | Fragment aka anchor |
+| `page_urlhost` | **{dotted-circle}** | string | Host aka domain |
+| `page_urlpath` | **{dotted-circle}** | string | Path to page |
+| `page_urlport` | **{dotted-circle}** | integer | Port if specified, 80 if not |
+| `page_urlquery` | **{dotted-circle}** | string | Query string |
+| `page_urlscheme` | **{dotted-circle}** | string | Scheme (protocol name) |
+| `platform` | **{dotted-circle}** | string | The platform the app runs on |
+| `pp_xoffset_max` | **{dotted-circle}** | integer | Maximum page x offset seen in the last ping period |
+| `pp_xoffset_min` | **{dotted-circle}** | integer | Minimum page x offset seen in the last ping period |
+| `pp_yoffset_max` | **{dotted-circle}** | integer | Maximum page y offset seen in the last ping period |
+| `pp_yoffset_min` | **{dotted-circle}** | integer | Minimum page y offset seen in the last ping period |
+| `refr_domain_userid` | **{dotted-circle}** | string | The Snowplow `domain_userid` of the referring website |
+| `refr_dvce_tstamp` | **{dotted-circle}** | timestamp | The time of attaching the `domain_userid` to the inbound link |
+| `refr_medium` | **{dotted-circle}** | string | Type of referer |
+| `refr_source` | **{dotted-circle}** | string | Name of referer if recognised |
+| `refr_term` | **{dotted-circle}** | string | Keywords if source is a search engine |
+| `refr_urlfragment` | **{dotted-circle}** | string | Referer URL fragment |
+| `refr_urlhost` | **{dotted-circle}** | string | Referer host |
+| `refr_urlpath` | **{dotted-circle}** | string | Referer page path |
+| `refr_urlport` | **{dotted-circle}** | integer | Referer port |
+| `refr_urlquery` | **{dotted-circle}** | string | Referer URL query string |
+| `refr_urlscheme` | **{dotted-circle}** | string | Referer scheme |
+| `se_action` | **{dotted-circle}** | string | The action / event itself |
+| `se_category` | **{dotted-circle}** | string | The category of event |
+| `se_label` | **{dotted-circle}** | string | A label often used to refer to the 'object' the action is performed on |
+| `se_property` | **{dotted-circle}** | string | A property associated with either the action or the object |
+| `se_value` | **{dotted-circle}** | decimal | A value associated with the user action |
+| `ti_category` | **{dotted-circle}** | string | Item category |
+| `ti_currency` | **{dotted-circle}** | string | Currency |
+| `ti_name` | **{dotted-circle}** | string | Item name |
+| `ti_orderid` | **{dotted-circle}** | string | Order ID |
+| `ti_price` | **{dotted-circle}** | decimal | Item price |
+| `ti_price_base` | **{dotted-circle}** | decimal | Item price in base currency |
+| `ti_quantity` | **{dotted-circle}** | integer | Item quantity |
+| `ti_sku` | **{dotted-circle}** | string | Item SKU |
+| `tr_affiliation` | **{dotted-circle}** | string | Transaction affiliation (such as channel) |
+| `tr_city` | **{dotted-circle}** | string | Delivery address: city |
+| `tr_country` | **{dotted-circle}** | string | Delivery address: country |
+| `tr_currency` | **{dotted-circle}** | string | Transaction Currency |
+| `tr_orderid` | **{dotted-circle}** | string | Order ID |
+| `tr_shipping` | **{dotted-circle}** | decimal | Delivery cost charged |
+| `tr_shipping_base` | **{dotted-circle}** | decimal | Shipping cost in base currency |
+| `tr_state` | **{dotted-circle}** | string | Delivery address: state |
+| `tr_tax` | **{dotted-circle}** | decimal | Transaction tax value (such as amount of VAT included) |
+| `tr_tax_base` | **{dotted-circle}** | decimal | Tax applied in base currency |
+| `tr_total` | **{dotted-circle}** | decimal | Transaction total value |
+| `tr_total_base` | **{dotted-circle}** | decimal | Total amount of transaction in base currency |
+| `true_tstamp` | **{dotted-circle}** | timestamp | User-set exact timestamp |
+| `txn_id` | **{dotted-circle}** | string | Transaction ID |
+| `unstruct_event` | **{dotted-circle}** | JSON | The properties of the event |
+| `uploaded_at` | **{dotted-circle}** | | |
+| `user_fingerprint` | **{dotted-circle}** | integer | User identifier based on (hopefully unique) browser features |
+| `user_id` | **{dotted-circle}** | string | Unique identifier for user, set by the business using setUserId |
+| `user_ipaddress` | **{dotted-circle}** | string | IP address |
+| `useragent` | **{dotted-circle}** | string | User agent (expressed as a browser string) |
+| `v_collector` | **{dotted-circle}** | string | Collector version |
+| `v_etl` | **{dotted-circle}** | string | ETL version |
+| `v_tracker` | **{dotted-circle}** | string | Identifier for Snowplow tracker |
diff --git a/doc/development/sql.md b/doc/development/sql.md
index 3483305c113..129280598fe 100644
--- a/doc/development/sql.md
+++ b/doc/development/sql.md
@@ -241,7 +241,7 @@ MergeRequest.where(source_project_id: Project.all.select(:id))
```
The _only_ time you should use `pluck` is when you actually need to operate on
-the values in Ruby itself (e.g. write them to a file). In almost all other cases
+the values in Ruby itself (for example, writing them to a file). In almost all other cases
you should ask yourself "Can I not just use a sub-query?".
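+
+As a minimal sketch of that acceptable case (the scope and file name are
+illustrative only):
+
+```ruby
+# The IDs are needed in Ruby itself (written to a file), not in another query,
+# so `pluck` is appropriate here.
+ids = Project.where(archived: false).pluck(:id)
+File.write('project_ids.txt', ids.join("\n"))
+```
+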
In line with our `CodeReuse/ActiveRecord` cop, you should only use forms like
diff --git a/doc/development/stage_group_dashboards.md b/doc/development/stage_group_dashboards.md
index 7c518e9b6ca..a887558e473 100644
--- a/doc/development/stage_group_dashboards.md
+++ b/doc/development/stage_group_dashboards.md
@@ -1,6 +1,6 @@
---
-stage: Enablement
-group: Infrastructure
+stage: Platforms
+group: Scalability
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
@@ -58,6 +58,12 @@ component can have 2 indicators:
[Web](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/web.jsonnet#L154)
services, that threshold is **5 seconds**.
+ We're working on making this target configurable per endpoint in
+ [this epic](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/525). Learn
+ how to [customize the request Apdex](application_slis/rails_request_apdex.md);
+ this new Apdex measurement is not yet part of the error budget.
+
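+ A minimal sketch of such a per-endpoint customization, assuming the `urgency`
+ helper described in that page (the controller and action here are hypothetical):
+
+ ```ruby
+ # Hypothetical controller: give the slow `index` action a less strict target
+ # so it is measured against a longer duration for the request Apdex.
+ class Projects::SlowThingsController < ApplicationController
+   urgency :low, [:index]
+ end
+ ```
+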
For Sidekiq job execution, the threshold depends on the [job
urgency](sidekiq_style_guide.md#job-urgency). It is
[currently](https://gitlab.com/gitlab-com/runbooks/-/blob/f22f40b2c2eab37d85e23ccac45e658b2c914445/metrics-catalog/services/lib/sidekiq-helpers.libsonnet#L25-38)
@@ -120,7 +126,7 @@ Inside a stage group dashboard, there are some notable components. Let's take th
![Default time filter](img/stage_group_dashboards_time_filter.png)
-- By default, all the times are in UTC timezone. [We use UTC when communicating in Engineering](https://about.gitlab.com/handbook/communication/#writing-style-guidelines).
+- By default, all times are in the UTC time zone. [We use UTC when communicating in Engineering](https://about.gitlab.com/handbook/communication/#writing-style-guidelines).
- All metrics recorded in the GitLab production system have [1-year retention](https://gitlab.com/gitlab-cookbooks/gitlab-prometheus/-/blob/31526b03fef823e2f9b3cda7c75dcd28a12418a3/attributes/prometheus.rb#L40).
- Alternatively, you can zoom in or filter the time range directly on a graph. See the [Grafana Time Range Controls](https://grafana.com/docs/grafana/latest/dashboards/time-range-controls/) documentation for more information.
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 79664490368..52e89a10556 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -68,7 +68,7 @@ SILENCE_DEPRECATIONS=1 bin/rspec spec/models/project_spec.rb
### Test speed
-GitLab has a massive test suite that, without [parallelization](ci.md#test-suite-parallelization-on-the-ci), can take hours
+GitLab has a massive test suite that, without [parallelization](../pipelines.md#test-suite-parallelization), can take hours
to run. It's important that we make an effort to write tests that are accurate
and effective _as well as_ fast.
diff --git a/doc/development/testing_guide/ci.md b/doc/development/testing_guide/ci.md
index e3fccdcee34..de024084c9c 100644
--- a/doc/development/testing_guide/ci.md
+++ b/doc/development/testing_guide/ci.md
@@ -1,45 +1,9 @@
---
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+redirect_to: '../pipelines.md'
+remove_date: '2022-01-12'
---
-# GitLab tests in the Continuous Integration (CI) context
+This file was moved to [another location](../pipelines.md).
-## Test suite parallelization on the CI
-
-Our current CI parallelization setup is as follows:
-
-1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
- `knapsack/report-master.json` file:
- - The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
- (for now it's the 2-hourly scheduled master pipeline), if it's not here we initialize the file with `{}`.
-1. Each `[rspec|rspec-ee] [unit|integration|system|geo] n m` job are run with
- `knapsack rspec` and should have an evenly distributed share of tests:
- - It works because the jobs have access to the `knapsack/report-master.json`
- since the "artifacts from all previous stages are passed by default".
- - the jobs set their own report path to
- `"knapsack/${TEST_TOOL}_${TEST_LEVEL}_${DATABASE}_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json"`.
- - if knapsack is doing its job, test files that are run should be listed under
- `Report specs`, not under `Leftover specs`.
-1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
- [the canonical project](https://gitlab.com/gitlab-org/gitlab) takes all the
- `knapsack/rspec*_pg_*.json` files and merge them all together into a single
- `knapsack/report-master.json` file that is saved as artifact.
-
-After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
-
-## Monitoring
-
-The GitLab test suite is [monitored](../performance.md#rspec-profiling) for the `main` branch, and any branch
-that includes `rspec-profile` in their name.
-
-## CI setup
-
-- Rails logging to `log/test.log` is disabled by default in CI [for
- performance reasons](https://jtway.co/speed-up-your-rails-test-suite-by-6-in-1-line-13fedb869ec4). To override this setting, provide the
- `RAILS_ENABLE_TEST_LOG` environment variable.
-
----
-
-[Return to Testing documentation](index.md)
+<!-- This redirect file can be deleted after <2022-01-12>. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index a3caa8bf2b3..9491c89c2a0 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -245,7 +245,11 @@ end
## Prefer `aggregate_failures` when there are back-to-back expectations
-In cases where there must be multiple (back-to-back) expectations within a test case, it is preferable to use `aggregate_failures`.
+See [Prefer `aggregate_failures` when there are multiple expectations](#prefer-aggregate_failures-when-there-are-multiple-expectations).
+
+## Prefer `aggregate_failures` when there are multiple expectations
+
+In cases where there must be multiple expectations within a test case, it is preferable to use `aggregate_failures`.
This allows you to group a set of expectations and see all the failures altogether, rather than having the test aborted on the first failure.
@@ -270,6 +274,32 @@ Page::Search::Results.perform do |search|
end
```
+Attach the `:aggregate_failures` metadata to the example if multiple expectations are separated by other statements.
+
+```ruby
+#=> Good
+it 'searches', :aggregate_failures do
+ Page::Search::Results.perform do |search|
+ expect(search).to have_file_in_project(template[:file_name], project.name)
+
+ search.switch_to_code
+
+ expect(search).to have_file_with_content(template[:file_name], content[0..33])
+ end
+end
+
+#=> Bad
+it 'searches' do
+ Page::Search::Results.perform do |search|
+ expect(search).to have_file_in_project(template[:file_name], project.name)
+
+ search.switch_to_code
+
+ expect(search).to have_file_with_content(template[:file_name], content[0..33])
+ end
+end
+```
+
## Prefer to split tests across multiple files
Our framework includes a couple of parallelization mechanisms that work by executing spec files in parallel.
@@ -333,11 +363,11 @@ after(:all) do
end
```
-## Tag tests that require Administrator access
+## Tag tests that require the Administrator role
-We don't run tests that require Administrator access against our Production environments.
+We don't run tests that require the Administrator role against our Production environments.
-When you add a new test that requires Administrator access, apply the RSpec metadata `:requires_admin` so that the test will not be included in the test suites executed against Production and other environments on which we don't want to run those tests.
+When you add a new test that requires the Administrator role, apply the RSpec metadata `:requires_admin` so that the test will not be included in the test suites executed against Production and other environments on which we don't want to run those tests.
When running tests locally or configuring a pipeline, the environment variable `QA_CAN_TEST_ADMIN_FEATURES` can be set to `false` to skip tests that have the `:requires_admin` tag.
diff --git a/doc/development/testing_guide/end_to_end/dynamic_element_validation.md b/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
index 6c504e6fa28..8770a5d33cd 100644
--- a/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
+++ b/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
@@ -52,7 +52,7 @@ Simply put, a required element is a visible HTML element that appears on a UI co
"Visible" can be defined as
-- Not having any CSS preventing its display. E.g.: `display: none` or `width: 0px; height: 0px;`
+- Not having any CSS preventing its display, for example, `display: none` or `width: 0px; height: 0px;`
- Being able to be interacted with by the user
"UI component" can be defined as
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index c9acb2e9371..994ee3f253c 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -15,7 +15,7 @@ token via `GITLAB_QA_ADMIN_ACCESS_TOKEN` (recommended), or provide `GITLAB_ADMIN
and `GITLAB_ADMIN_PASSWORD`.
Please be sure to include the tag `:requires_admin` so that the test can be skipped in environments
-where admin access is not available.
+where administrator access is not available.
WARNING:
You are strongly advised to [enable feature flags only for a group, project, user](../../feature_flags/index.md#feature-actors),
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 36c0f5adf00..b097d6b0729 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -184,7 +184,7 @@ in the pipeline:
- Fetches these source files from all test jobs.
- Generates and uploads the report to the `GCS` bucket `gitlab-qa-allure-report` under the project `gitlab-qa-resources`.
-A common CI template for report uploading is stored in
+A common CI template for report uploading is stored in
[`allure-report.yml`](https://gitlab.com/gitlab-org/quality/pipeline-common/-/blob/master/ci/allure-report.yml).
#### Merge requests
diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md
index 9ffa7ea4f77..85ab4d479f9 100644
--- a/doc/development/testing_guide/end_to_end/page_objects.md
+++ b/doc/development/testing_guide/end_to_end/page_objects.md
@@ -105,7 +105,7 @@ code but **this is deprecated** in favor of the above method for two reasons:
- Consistency: there is only one way to define an element
- Separation of concerns: QA uses dedicated `data-qa-*` attributes instead of reusing code
- or classes used by other components (e.g. `js-*` classes etc.)
+ or classes used by other components (for example, `js-*` classes etc.)
```ruby
view 'app/views/my/view.html.haml' do
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index b6e92367f89..3749511fef5 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -26,6 +26,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:ldap_no_tls` | The test requires a GitLab instance to be configured to use an external LDAP server with TLS not enabled. |
| `:ldap_tls` | The test requires a GitLab instance to be configured to use an external LDAP server with TLS enabled. |
| `:mattermost` | The test requires a GitLab Mattermost service on the GitLab instance. |
+| `:mixed_env` | The test should only be executed in environments that have a paired canary version available through traffic routing based on the existence of the `gitlab_canary=true` cookie. Tests in this category switch the cookie mid-test to validate mixed deployment environments. |
| `:object_storage` | The test requires a GitLab instance to be configured to use multiple [object storage types](../../../administration/object_storage.md). Uses MinIO as the object storage server. |
| `:only` | The test is only to be run in specific execution contexts. See [test execution context selection](execution_context_selection.md) for more information. |
| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate Docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify the GitLab configuration (for example, Staging). |
@@ -34,7 +35,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:relative_url` | The test requires a GitLab instance to be installed under a [relative URL](../../../install/relative_url.md). |
| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/guidelines/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
| `:repository_storage` | The test requires a GitLab instance to be configured to use multiple [repository storage paths](../../../administration/repository_storage_paths.md). Paired with the `:orchestrated` tag. |
-| `:requires_admin` | The test requires an admin account. Tests with the tag are excluded when run against Canary and Production environments. |
+| `:requires_admin` | The test requires an administrator account. Tests with the tag are excluded when run against Canary and Production environments. |
| `:requires_git_protocol_v2` | The test requires that Git protocol version 2 is enabled on the server. It's assumed to be enabled by default but if not the test can be skipped by setting `QA_CAN_TEST_GIT_PROTOCOL_V2` to `false`. |
| `:requires_praefect` | The test requires that the GitLab instance uses [Gitaly Cluster](../../../administration/gitaly/praefect.md) (also known as Praefect) as the repository storage. It's assumed to be used by default but if not the test can be skipped by setting `QA_CAN_TEST_PRAEFECT` to `false`. |
| `:runner` | The test depends on and sets up a GitLab Runner instance, typically to run a pipeline. |
diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
index 46a3053c267..eadd0ef49a0 100644
--- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
+++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
@@ -45,7 +45,7 @@ docker run \
Jenkins is available on `http://localhost:8080`.
-Admin username is `admin` and password is `password`.
+Administrator username is `admin` and password is `password`.
It is worth noting that this is not an orchestrated test. It is [tagged with the `:orchestrated` meta](https://gitlab.com/gitlab-org/gitlab/-/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb#L5)
only to prevent it from running in the pipelines for live environments such as Staging.
@@ -167,9 +167,9 @@ The following includes more information on the command:
-`QA_DEBUG` - Set to `true` to verbosely log page object actions.
-`WEBDRIVER_HEADLESS` - When running locally, set to `false` to allow browser tests to be visible - watch your tests being run.
--`GITLAB_ADMIN_USERNAME` - Admin username to use when adding a license.
--`GITLAB_ADMIN_PASSWORD` - Admin password to use when adding a license.
--`GITLAB_QA_ACCESS_TOKEN` and `GITLAB_QA_ADMIN_ACCESS_TOKEN` - A valid personal access token with the `api` scope. This is used for API access during tests, and is used in the version that staging is currently running. The `ADMIN_ACCESS_TOKEN` is from a user with admin access. Used for API access as an admin during tests.
+-`GITLAB_ADMIN_USERNAME` - Administrator username to use when adding a license.
+-`GITLAB_ADMIN_PASSWORD` - Administrator password to use when adding a license.
+-`GITLAB_QA_ACCESS_TOKEN` and `GITLAB_QA_ADMIN_ACCESS_TOKEN` - A valid personal access token with the `api` scope. This is used for API access during tests, and is used in the version that staging is currently running. The `ADMIN_ACCESS_TOKEN` is from a user with administrator access. Used for API access as an administrator during tests.
-`CLUSTER_API_URL` - Use the address `https://kubernetes.docker.internal:6443` . This address is used to enable the cluster to be network accessible while deploying using Auto DevOps.
-`https://[YOUR-PORT].qa-tunnel.gitlab.info/` - The address of your local GDK
-`qa/specs/features/browser_ui/8_monitor/all_monitor_core_features_spec.rb` - The path to the monitor core specs
@@ -410,9 +410,9 @@ Tests that are tagged with `:ldap_tls` and `:ldap_no_tls` meta are orchestrated
These tests spin up a Docker container [(`osixia/openldap`)](https://hub.docker.com/r/osixia/openldap) running an instance of [OpenLDAP](https://www.openldap.org/).
The container uses fixtures [checked into the GitLab-QA repository](https://gitlab.com/gitlab-org/gitlab-qa/-/tree/9ffb9ad3be847a9054967d792d6772a74220fb42/fixtures/ldap) to create
-base data such as users and groups including the admin group. The password for [all users](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/9ffb9ad3be847a9054967d792d6772a74220fb42/fixtures/ldap/2_add_users.ldif) including [the `tanuki` user](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/9ffb9ad3be847a9054967d792d6772a74220fb42/fixtures/ldap/tanuki.ldif) is `password`.
+base data such as users and groups including the administrator group. The password for [all users](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/9ffb9ad3be847a9054967d792d6772a74220fb42/fixtures/ldap/2_add_users.ldif) including [the `tanuki` user](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/9ffb9ad3be847a9054967d792d6772a74220fb42/fixtures/ldap/tanuki.ldif) is `password`.
-A GitLab instance is also created in a Docker container based on our [General LDAP setup](../../../administration/auth/ldap/index.md#general-ldap-setup) documentation.
+A GitLab instance is also created in a Docker container based on our [LDAP setup](../../../administration/auth/ldap/index.md) documentation.
Tests that are tagged `:ldap_tls` enable TLS on GitLab using the certificate [checked into the GitLab-QA repository](https://gitlab.com/gitlab-org/gitlab-qa/-/tree/9ffb9ad3be847a9054967d792d6772a74220fb42/tls_certificates/gitlab).
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index bfcd68dbaf3..9489020de5d 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -37,9 +37,9 @@ bin/rspec --tag quarantine
Once a test is in quarantine, there are 3 choices:
-- Should the test be fixed (i.e. get rid of its flakiness)?
+- Should the test be fixed (that is, get rid of its flakiness)?
- Should the test be moved to a lower level of testing?
-- Should the test be removed entirely (e.g. because there's already a
+- Should the test be removed entirely (for example, because there's already a
lower-level test, or it's duplicating another same-level test, or it's testing
too much etc.)?
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 76687db3a3f..0e721ba2760 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -146,7 +146,7 @@ it('does not display a dropdown if no metricTypes exist', () => {
});
```
-Keep an eye out for these kinds of tests, as they just make updating logic more fragile and tedious than it needs to be. This is also true for other libraries. A rule of thumb here is: if you are checking a `wrapper.vm` property, you should probably stop and rethink the test to check the rendered template instead.
+Keep an eye out for these kinds of tests, as they just make updating logic more fragile and tedious than it needs to be. This is also true for other libraries. A suggestion here is: if you are checking a `wrapper.vm` property, you should probably stop and rethink the test to check the rendered template instead.
Some more examples can be found in the [Frontend unit tests section](testing_levels.md#frontend-unit-tests)
@@ -783,20 +783,25 @@ often using fixtures to validate correct integration with the backend code.
### Use fixtures
-Jest uses `spec/frontend/__helpers__/fixtures.js` to import fixtures in tests.
-
-The following are examples of tests that work for Jest:
+To import a JSON fixture, `import` it using the `test_fixtures` alias.
```javascript
+import responseBody from 'test_fixtures/some/fixture.json'; // loads spec/frontend/fixtures/some/fixture.json
+
it('makes a request', () => {
- const responseBody = getJSONFixture('some/fixture.json'); // loads spec/frontend/fixtures/some/fixture.json
axiosMock.onGet(endpoint).reply(200, responseBody);
myButton.click();
// ...
});
+```
+
+For other fixtures, Jest uses `spec/frontend/__helpers__/fixtures.js` to import them in tests.
+The following are examples of tests that work for Jest:
+
+```javascript
it('uses some HTML element', () => {
loadFixtures('some/page.html'); // loads spec/frontend/fixtures/some/page.html and adds it to the DOM
@@ -843,10 +848,6 @@ describe GraphQL::Query, type: :request do
all_releases_query_path = 'releases/graphql/queries/all_releases.query.graphql'
- before(:all) do
- clean_frontend_fixtures('graphql/releases/')
- end
-
it "graphql/#{all_releases_query_path}.json" do
query = get_graphql_query_as_string(all_releases_query_path)
@@ -860,7 +861,7 @@ end
This will create a new fixture located at
`tmp/tests/frontend/fixtures-ee/graphql/releases/graphql/queries/all_releases.query.graphql.json`.
-You can import the JSON fixture in a Jest test using the `getJSONFixture` method
+You can import the JSON fixture in a Jest test using the `test_fixtures` alias
[as described below](#use-fixtures).
## Data-driven tests
@@ -998,7 +999,7 @@ it like so:
import Subject from '~/feature/the_subject.vue';
// Force Jest to transpile and cache
-// eslint-disable-next-line import/order, no-unused-vars
+// eslint-disable-next-line no-unused-vars
import _Thing from '~/feature/path/to/thing.vue';
```
diff --git a/doc/development/testing_guide/img/review-app-parent-pipeline.png b/doc/development/testing_guide/img/review-app-parent-pipeline.png
new file mode 100644
index 00000000000..5686d5f6ebe
--- /dev/null
+++ b/doc/development/testing_guide/img/review-app-parent-pipeline.png
Binary files differ
diff --git a/doc/development/testing_guide/index.md b/doc/development/testing_guide/index.md
index 015d8a92a4d..2e00a00c454 100644
--- a/doc/development/testing_guide/index.md
+++ b/doc/development/testing_guide/index.md
@@ -48,7 +48,7 @@ testing promises, stubbing etc.
What are flaky tests, the different kind of flaky tests we encountered, and what
we do about them.
-## [GitLab tests in the Continuous Integration (CI) context](ci.md)
+## [GitLab pipelines](../pipelines.md)
How the GitLab test suite is run in the CI context: setup, caches, artifacts,
parallelization, monitoring.
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 72d63fd8194..4091f213a8f 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -6,8 +6,98 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Review Apps
-Review Apps are automatically deployed by [the
-pipeline](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6665).
+Review Apps are deployed using the `start-review-app-pipeline` job. This job triggers a child pipeline containing a series of jobs to perform the various tasks needed to deploy a Review App.
+
+![start-review-app-pipeline job](img/review-app-parent-pipeline.png)
+
+The `start-review-app-pipeline` job is started automatically in any of the following scenarios:
+
+- for merge requests with CI config changes
+- for merge requests with frontend changes
+- for merge requests with QA changes
+- for scheduled pipelines
+
+## QA runs on Review Apps
+
+On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage (which comes after the
+`review` stage), the `review-qa-smoke` job is automatically started and it runs
+the QA smoke suite.
+
+You can also manually start the `review-qa-all` job: it runs the full QA suite.
+
+After the end-to-end test runs have finished, [Allure reports](https://github.com/allure-framework/allure2) are generated and published by
+the `allure-report-qa-smoke` and `allure-report-qa-all` jobs. A comment with links to the reports is added to the merge request.
+
+## Performance Metrics
+
+On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage, the
+`review-performance` job is automatically started: this job does basic
+browser performance testing using a
+[Sitespeed.io Container](../../user/project/merge_requests/browser_performance_testing.md).
+
+## How to
+
+### Get access to the GCP Review Apps cluster
+
+You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new)
+for the `gcp-review-apps-dev` GCP group and role.
+
+This grants you the following permissions for:
+
+- [Retrieving pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
+- [Running a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.pods.exec`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
+
+### Log into my Review App
+
+For GitLab Team Members only. If you want to sign in to the review app, review
+the GitLab handbook information for the [shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
+
+- The default username is `root`.
+- The password can be found in the 1Password login item named `GitLab EE Review App`.
+
+### Enable a feature flag for my Review App
+
+1. Open your Review App and log in as documented above.
+1. Create a personal access token.
+1. Enable the feature flag using the [Feature flag API](../../api/features.md), as in the example below.
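+
+For example, a hypothetical flag named `my_feature_flag` could be enabled with
+a request like the following (the host and token are placeholders):
+
+```shell
+curl --request POST --data "value=true" \
+  --header "PRIVATE-TOKEN: <your_personal_access_token>" \
+  "https://<your-review-app-host>/api/v4/features/my_feature_flag"
+```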
+
+### Find my Review App slug
+
+1. Open the `review-deploy` job.
+1. Look for `** Deploying review-*`.
+1. For example, for `** Deploying review-1234-abc-defg... **`,
+   your Review App slug is `review-1234-abc-defg`.
+
+### Run a Rails console
+
+1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.exec` permission first.
+1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps),
+ e.g. `review-qa-raise-e-12chm0`.
+1. Find and open the `task-runner` Deployment, e.g. `review-qa-raise-e-12chm0-task-runner`.
+1. Click on the Pod in the "Managed pods" section, e.g. `review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz`.
+1. Click on the `KUBECTL` dropdown, then `Exec` -> `task-runner`.
+1. Replace `-c task-runner -- ls` with `-it -- gitlab-rails console` in the
+   default command, or:
+ - Run `kubectl exec --namespace review-qa-raise-e-12chm0 review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz -it -- gitlab-rails console` and
+ - Replace `review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz`
+ with your Pod's name.
+
+### Dig into a Pod's logs
+
+1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.getLogs` permission first.
+1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps),
+ e.g. `review-qa-raise-e-12chm0`.
+1. Find and open the `migrations` Deployment, e.g.
+ `review-qa-raise-e-12chm0-migrations.1`.
+1. Click on the Pod in the "Managed pods" section, e.g.
+ `review-qa-raise-e-12chm0-migrations.1-nqwtx`.
+1. Click on the `Container logs` link.
+
+Alternatively, you can use the [Logs Explorer](https://console.cloud.google.com/logs/query;query=?project=gitlab-review-apps), which provides more flexible searching of logs. An example query for a pod name is as follows:
+
+```shell
+resource.labels.pod_name:"review-qa-raise-e-12chm0-migrations"
+```
## How does it work?
@@ -89,7 +179,7 @@ subgraph "CNG-mirror pipeline"
**Additional notes:**
-- If the `review-deploy` job keep failing (note that we already retry it twice),
+- If the `review-deploy` job keeps failing (and a manual retry didn't help),
please post a message in the `#g_qe_engineering_productivity` channel and/or create a `~"Engineering Productivity"` `~"ep::review apps"` `~bug`
issue with a link to your merge request. Note that the deployment failure can
reveal an actual problem introduced in your merge request (i.e. this isn't
@@ -105,7 +195,7 @@ subgraph "CNG-mirror pipeline"
stop a Review App manually, and is also started by GitLab once a merge
request's branch is deleted after being merged.
- The Kubernetes cluster is connected to the `gitlab` projects using the
- [GitLab Kubernetes integration](../../user/project/clusters/index.md). This basically
+ [GitLab Kubernetes integration](../../user/infrastructure/clusters/index.md). This basically
allows you to link to the Review App directly from the merge request widget.
### Auto-stopping of Review Apps
@@ -126,24 +216,6 @@ The `review-gcp-cleanup` job that automatically runs in scheduled pipelines
(and is manual in merge request) removes any dangling GCP network resources
that were not removed along with the Kubernetes resources.
-## QA runs
-
-On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage (which comes after the
-`review` stage), the `review-qa-smoke` job is automatically started and it runs
-the QA smoke suite.
-
-You can also manually start the `review-qa-all`: it runs the full QA suite.
-
-After the end-to-end test runs have finished, [Allure reports](https://github.com/allure-framework/allure2) are generated and published by
-the `allure-report-qa-smoke` and `allure-report-qa-all` jobs. A comment with links to the reports are added to the merge request.
-
-## Performance Metrics
-
-On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage, the
-`review-performance` job is automatically started: this job does basic
-browser performance testing using a
-[Sitespeed.io Container](../../user/project/merge_requests/browser_performance_testing.md).
-
## Cluster configuration
The cluster is configured via Terraform in the [`engineering-productivity-infrastructure`](https://gitlab.com/gitlab-org/quality/engineering-productivity-infrastructure) project.
@@ -157,64 +229,6 @@ The Helm version used is defined in the
[`registry.gitlab.com/gitlab-org/gitlab-build-images:gitlab-helm3-kubectl1.14` image](https://gitlab.com/gitlab-org/gitlab-build-images/-/blob/master/Dockerfile.gitlab-helm3-kubectl1.14#L7)
used by the `review-deploy` and `review-stop` jobs.
-## How to
-
-### Get access to the GCP Review Apps cluster
-
-You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new)
-for the `gcp-review-apps-dev` GCP group and role.
-
-This grants you the following permissions for:
-
-- [Retrieving pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
-- [Running a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.pods.exec`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
-
-### Log into my Review App
-
-For GitLab Team Members only. If you want to sign in to the review app, review
-the GitLab handbook information for the [shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
-
-- The default username is `root`.
-- The password can be found in the 1Password login item named `GitLab EE Review App`.
-
-### Enable a feature flag for my Review App
-
-1. Open your Review App and log in as documented above.
-1. Create a personal access token.
-1. Enable the feature flag using the [Feature flag API](../../api/features.md).
-
-### Find my Review App slug
-
-1. Open the `review-deploy` job.
-1. Look for `** Deploying review-*`.
-1. For instance for `** Deploying review-1234-abc-defg... **`,
- your Review App slug would be `review-1234-abc-defg` in this case.
-
-### Run a Rails console
-
-1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.exec` permission first.
-1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps),
- e.g. `review-qa-raise-e-12chm0`.
-1. Find and open the `task-runner` Deployment, e.g. `review-qa-raise-e-12chm0-task-runner`.
-1. Click on the Pod in the "Managed pods" section, e.g. `review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz`.
-1. Click on the `KUBECTL` dropdown, then `Exec` -> `task-runner`.
-1. Replace `-c task-runner -- ls` with `-it -- gitlab-rails console` from the
- default command or
- - Run `kubectl exec --namespace review-qa-raise-e-12chm0 review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz -it -- gitlab-rails console` and
- - Replace `review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz`
- with your Pod's name.
-
-### Dig into a Pod's logs
-
-1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.getLogs` permission first.
-1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps),
- e.g. `review-qa-raise-e-12chm0`.
-1. Find and open the `migrations` Deployment, e.g.
- `review-qa-raise-e-12chm0-migrations.1`.
-1. Click on the Pod in the "Managed pods" section, e.g.
- `review-qa-raise-e-12chm0-migrations.1-nqwtx`.
-1. Click on the `Container logs` link.
-
## Diagnosing unhealthy Review App releases
If [Review App Stability](https://app.periscopedata.com/app/gitlab/496118/Engineering-Productivity-Sandbox?widget=6690556&udv=785399)
diff --git a/doc/development/usage_ping/dictionary.md b/doc/development/usage_ping/dictionary.md
index e7e8464ff7a..810c789bc03 100644
--- a/doc/development/usage_ping/dictionary.md
+++ b/doc/development/usage_ping/dictionary.md
@@ -1,4 +1,4 @@
---
-redirect_to: 'https://gitlab-org.gitlab.io/growth/product-intelligence/metric-dictionary'
+redirect_to: 'https://metrics.gitlab.com/index.html'
remove_date: '2021-11-10'
---
diff --git a/doc/development/usage_ping/index.md b/doc/development/usage_ping/index.md
deleted file mode 100644
index aa06cb36f0c..00000000000
--- a/doc/development/usage_ping/index.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../service_ping/index.md'
-remove_date: '2021-10-09'
----
-
-This file was moved to [another location](../service_ping/index.md).
-
-<!-- This redirect file can be deleted after <2021-10-09>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/usage_ping/metrics_dictionary.md b/doc/development/usage_ping/metrics_dictionary.md
deleted file mode 100644
index 3743c2e0414..00000000000
--- a/doc/development/usage_ping/metrics_dictionary.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../service_ping/metrics_dictionary.md'
-remove_date: '2021-10-09'
----
-
-This file was moved to [another location](../service_ping/metrics_dictionary.md).
-
-<!-- This redirect file can be deleted after <2021-10-09>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/usage_ping/metrics_instrumentation.md b/doc/development/usage_ping/metrics_instrumentation.md
deleted file mode 100644
index f2d731803b8..00000000000
--- a/doc/development/usage_ping/metrics_instrumentation.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../service_ping/metrics_instrumentation.md'
-remove_date: '2021-10-09'
----
-
-This file was moved to [another location](../service_ping/metrics_instrumentation.md).
-
-<!-- This redirect file can be deleted after <2021-10-09>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/usage_ping/product_intelligence_review.md b/doc/development/usage_ping/product_intelligence_review.md
deleted file mode 100644
index dc51e3e300a..00000000000
--- a/doc/development/usage_ping/product_intelligence_review.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../service_ping/review_guidelines.md'
-remove_date: '2021-10-09'
----
-
-This file was moved to [another location](../service_ping/review_guidelines.md).
-
-<!-- This redirect file can be deleted after <2021-10-09>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/usage_ping/review_guidelines.md b/doc/development/usage_ping/review_guidelines.md
deleted file mode 100644
index dc51e3e300a..00000000000
--- a/doc/development/usage_ping/review_guidelines.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../service_ping/review_guidelines.md'
-remove_date: '2021-10-09'
----
-
-This file was moved to [another location](../service_ping/review_guidelines.md).
-
-<!-- This redirect file can be deleted after <2021-10-09>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/windows.md b/doc/development/windows.md
index 07f8a80e95f..fb095b68939 100644
--- a/doc/development/windows.md
+++ b/doc/development/windows.md
@@ -33,7 +33,7 @@ tags:
- windows-1809
```
-A list of software preinstalled on the Windows images is available at: [Preinstalled software](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/blob/master/cookbooks/preinstalled-software/README.md).
+A list of software preinstalled on the Windows images is available at: [Preinstalled software](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/blob/main/cookbooks/preinstalled-software/README.md).
## GCP Windows image for development
@@ -57,7 +57,7 @@ Build a Google Cloud image with the above shared runners repository by doing the
1. Clone the repository <https://github.com/rgl/packer-provisioner-windows-update> and `cd` into the cloned directory.
1. Run the command `go build -o packer-provisioner-windows-update` (requires `go` to be installed).
1. Verify `packer-provisioner-windows-update` is in the `PATH` environment variable.
-1. Add all [required environment variables](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/-/blob/master/packer.json#L2-10)
+1. Add all [required environment variables](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/-/blob/main/packer.json#L2-10)
in the `packer.json` file to your environment (perhaps use [`direnv`](https://direnv.net/)).
1. Build the image by running the command: `packer build packer.json`.
@@ -136,7 +136,7 @@ PowerShell has aliases for all of the following commands so you don't have to le
- `/` ---> `\` (path separator)
- `cat` ---> `type`
- `mv` ---> `move`
-- Redirection works the same (i.e. `>` and `2>&1`)
+- Redirection works the same (for example, `>` and `2>&1`)
- `.\some.exe` to call a local executable
- curl is available
- `..` and `.` are available