Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/adding_database_indexes.md | 9
-rw-r--r--  doc/development/api_graphql_styleguide.md | 45
-rw-r--r--  doc/development/application_slis/index.md | 45
-rw-r--r--  doc/development/application_slis/rails_request_apdex.md | 15
-rw-r--r--  doc/development/architecture.md | 2
-rw-r--r--  doc/development/audit_event_guide/index.md | 2
-rw-r--r--  doc/development/backend/create_source_code_be/gitaly_touch_points.md | 27
-rw-r--r--  doc/development/backend/create_source_code_be/index.md | 112
-rw-r--r--  doc/development/backend/create_source_code_be/rest_endpoints.md | 112
-rw-r--r--  doc/development/backend/ruby_style_guide.md | 2
-rw-r--r--  doc/development/batched_background_migrations.md | 322
-rw-r--r--  doc/development/changelog.md | 2
-rw-r--r--  doc/development/chatops_on_gitlabcom.md | 6
-rw-r--r--  doc/development/cicd/cicd_reference_documentation_guide.md | 2
-rw-r--r--  doc/development/cicd/templates.md | 26
-rw-r--r--  doc/development/code_intelligence/index.md | 2
-rw-r--r--  doc/development/code_review.md | 47
-rw-r--r--  doc/development/contributing/design.md | 4
-rw-r--r--  doc/development/contributing/index.md | 5
-rw-r--r--  doc/development/contributing/issue_workflow.md | 2
-rw-r--r--  doc/development/contributing/merge_request_workflow.md | 5
-rw-r--r--  doc/development/contributing/verify/index.md | 4
-rw-r--r--  doc/development/dangerbot.md | 19
-rw-r--r--  doc/development/database/avoiding_downtime_in_migrations.md | 66
-rw-r--r--  doc/development/database/background_migrations.md | 14
-rw-r--r--  doc/development/database/batched_background_migrations.md | 371
-rw-r--r--  doc/development/database/loose_foreign_keys.md | 10
-rw-r--r--  doc/development/database/migrations_for_multiple_databases.md | 35
-rw-r--r--  doc/development/database/multiple_databases.md | 63
-rw-r--r--  doc/development/database/pagination_guidelines.md | 2
-rw-r--r--  doc/development/database/strings_and_the_text_data_type.md | 2
-rw-r--r--  doc/development/database/table_partitioning.md | 2
-rw-r--r--  doc/development/deprecation_guidelines/index.md | 51
-rw-r--r--  doc/development/distributed_tracing.md | 4
-rw-r--r--  doc/development/documentation/feature_flags.md | 13
-rw-r--r--  doc/development/documentation/restful_api_styleguide.md | 62
-rw-r--r--  doc/development/documentation/site_architecture/index.md | 75
-rw-r--r--  doc/development/documentation/structure.md | 4
-rw-r--r--  doc/development/documentation/styleguide/index.md | 349
-rw-r--r--  doc/development/documentation/styleguide/word_list.md | 74
-rw-r--r--  doc/development/documentation/testing.md | 37
-rw-r--r--  doc/development/documentation/versions.md | 232
-rw-r--r--  doc/development/documentation/workflow.md | 2
-rw-r--r--  doc/development/ee_features.md | 24
-rw-r--r--  doc/development/emails.md | 2
-rw-r--r--  doc/development/event_store.md | 8
-rw-r--r--  doc/development/experiment_guide/experiment_code_reviews.md | 25
-rw-r--r--  doc/development/experiment_guide/experiment_rollout.md | 77
-rw-r--r--  doc/development/experiment_guide/experimentation.md | 11
-rw-r--r--  doc/development/experiment_guide/gitlab_experiment.md | 589
-rw-r--r--  doc/development/experiment_guide/implementing_experiments.md | 369
-rw-r--r--  doc/development/experiment_guide/index.md | 83
-rw-r--r--  doc/development/experiment_guide/testing_experiments.md | 150
-rw-r--r--  doc/development/fe_guide/content_editor.md | 2
-rw-r--r--  doc/development/fe_guide/design_anti_patterns.md | 2
-rw-r--r--  doc/development/fe_guide/development_process.md | 2
-rw-r--r--  doc/development/fe_guide/graphql.md | 4
-rw-r--r--  doc/development/fe_guide/registry_architecture.md | 2
-rw-r--r--  doc/development/fe_guide/style/html.md | 6
-rw-r--r--  doc/development/fe_guide/tooling.md | 7
-rw-r--r--  doc/development/fe_guide/vue3_migration.md | 2
-rw-r--r--  doc/development/feature_flags/controls.md | 2
-rw-r--r--  doc/development/feature_flags/index.md | 84
-rw-r--r--  doc/development/features_inside_dot_gitlab.md | 1
-rw-r--r--  doc/development/fips_compliance.md | 411
-rw-r--r--  doc/development/foreign_keys.md | 4
-rw-r--r--  doc/development/geo.md | 17
-rw-r--r--  doc/development/geo/framework.md | 6
-rw-r--r--  doc/development/gitaly.md | 20
-rw-r--r--  doc/development/gitlab_flavored_markdown/index.md | 7
-rw-r--r--  doc/development/gitlab_flavored_markdown/specification_guide/index.md | 301
-rw-r--r--  doc/development/go_guide/dependencies.md | 2
-rw-r--r--  doc/development/go_guide/go_upgrade.md | 2
-rw-r--r--  doc/development/index.md | 4
-rw-r--r--  doc/development/integrations/index.md | 4
-rw-r--r--  doc/development/integrations/secure.md | 5
-rw-r--r--  doc/development/internal_api/index.md | 22
-rw-r--r--  doc/development/kubernetes.md | 2
-rw-r--r--  doc/development/maintenance_mode.md | 4
-rw-r--r--  doc/development/merge_request_application_and_rate_limit_guidelines.md | 2
-rw-r--r--  doc/development/merge_request_performance_guidelines.md | 8
-rw-r--r--  doc/development/migration_style_guide.md | 27
-rw-r--r--  doc/development/new_fe_guide/development/performance.md | 14
-rw-r--r--  doc/development/new_fe_guide/modules/widget_extensions.md | 9
-rw-r--r--  doc/development/packages.md | 4
-rw-r--r--  doc/development/performance.md | 3
-rw-r--r--  doc/development/permissions.md | 4
-rw-r--r--  doc/development/pipelines.md | 29
-rw-r--r--  doc/development/product_qualified_lead_guide/index.md | 4
-rw-r--r--  doc/development/python_guide/index.md | 12
-rw-r--r--  doc/development/rails_initializers.md | 18
-rw-r--r--  doc/development/rails_update.md | 2
-rw-r--r--  doc/development/rake_tasks.md | 4
-rw-r--r--  doc/development/redis.md | 11
-rw-r--r--  doc/development/redis/new_redis_instance.md | 4
-rw-r--r--  doc/development/routing.md | 10
-rw-r--r--  doc/development/ruby_upgrade.md | 4
-rw-r--r--  doc/development/secure_coding_guidelines.md | 83
-rw-r--r--  doc/development/service_ping/implement.md | 12
-rw-r--r--  doc/development/service_ping/index.md | 318
-rw-r--r--  doc/development/service_ping/metrics_dictionary.md | 5
-rw-r--r--  doc/development/service_ping/metrics_instrumentation.md | 68
-rw-r--r--  doc/development/service_ping/metrics_lifecycle.md | 1
-rw-r--r--  doc/development/service_ping/troubleshooting.md | 85
-rw-r--r--  doc/development/sidekiq/idempotent_jobs.md | 2
-rw-r--r--  doc/development/sidekiq_style_guide.md | 11
-rw-r--r--  doc/development/snowplow/implementation.md | 12
-rw-r--r--  doc/development/snowplow/troubleshooting.md | 26
-rw-r--r--  doc/development/testing_guide/end_to_end/beginners_guide.md | 2
-rw-r--r--  doc/development/testing_guide/end_to_end/feature_flags.md | 22
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md | 6
-rw-r--r--  doc/development/testing_guide/end_to_end/rspec_metadata_tests.md | 4
-rw-r--r--  doc/development/testing_guide/frontend_testing.md | 35
-rw-r--r--  doc/development/testing_guide/img/k9s.png | bin 117900 -> 0 bytes
-rw-r--r--  doc/development/testing_guide/review_apps.md | 3
-rw-r--r--  doc/development/testing_guide/testing_migrations_guide.md | 12
-rw-r--r--  doc/development/uploads/background.md | 157
-rw-r--r--  doc/development/uploads/implementation.md | 193
-rw-r--r--  doc/development/uploads/index.md | 160
-rw-r--r--  doc/development/uploads/working_with_uploads.md | 82
-rw-r--r--  doc/development/workhorse/configuration.md | 19
-rw-r--r--  doc/development/workhorse/index.md | 2
122 files changed, 3446 insertions, 2591 deletions
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index d263d9b5eb5..35dbd80e4d1 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -158,7 +158,7 @@ and should not be used. Some other points to consider:
### Why explicit names are required
As Rails is database agnostic, it generates an index name only
-from the required options of all indexes: table name and column name(s).
+from the required options of all indexes: table name and column names.
For example, imagine the following two indexes are created in a migration:
```ruby
@@ -173,7 +173,7 @@ Creation of the second index would fail, because Rails would generate
the same name for both indexes.
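Editor's note: a minimal sketch of the kind of collision described here, with a hypothetical `where:` condition; both calls derive the same generated name, so the second index cannot be created unless explicit names are passed:
```ruby
# Both indexes derive the generated name
# `index_merge_requests_on_target_project_id`, so creating the second fails.
add_index :merge_requests, :target_project_id
add_index :merge_requests, :target_project_id, where: 'state_id = 1'

# Passing explicit names keeps the two indexes distinct.
add_index :merge_requests, :target_project_id,
  name: 'index_merge_requests_on_target_project_id'
add_index :merge_requests, :target_project_id, where: 'state_id = 1',
  name: 'index_merge_requests_on_target_project_id_opened'
```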
This is further complicated by the behavior of the `index_exists?` method.
-It considers only the table name, column name(s) and uniqueness specification
+It considers only the table name, column names, and uniqueness specification
of the index when making a comparison. Consider:
```ruby
@@ -284,8 +284,9 @@ production clone.
### Add a migration to create the index synchronously
After the index is verified to exist on the production database, create a second
-merge request that adds the index synchronously. The synchronous
-migration results in a no-op on GitLab.com, but you should still add the
+merge request that adds the index synchronously. The schema changes must be
+updated and committed to `structure.sql` in this second merge request.
+The synchronous migration results in a no-op on GitLab.com, but you should still add the
migration as expected for other installations. The below block
demonstrates how to create the second migration for the previous
asynchronous example.
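Editor's note: a rough sketch of what that second, synchronous migration can look like. The class, table, column, and index names below are hypothetical; the helpers follow the concurrent-index pattern used in GitLab migrations.
```ruby
# Hypothetical follow-up migration that adds the index synchronously.
# On GitLab.com the index already exists, so this is effectively a no-op there.
class AddIndexToIssuesOnConfidential < Gitlab::Database::Migration[1.0]
  INDEX_NAME = 'index_issues_on_confidential'

  disable_ddl_transaction!

  def up
    add_concurrent_index :issues, :confidential, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :issues, INDEX_NAME
  end
end
```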
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 4f27e811b11..f807ed0f85e 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -97,7 +97,7 @@ discussed in [Nullable fields](#nullable-fields).
Fields that use the [`feature_flag` property](#feature_flag-property) with a flag that is disabled by default are exempt
from the deprecation process, and can be removed at any time without notice.
-See the [deprecating fields, arguments, and enum values](#deprecating-fields-arguments-and-enum-values) section for how to deprecate items.
+See the [deprecating schema items](#deprecating-schema-items) section for how to deprecate items.
## Global IDs
@@ -540,19 +540,39 @@ def foo
end
```
-## Deprecating fields, arguments, and enum values
+## Deprecating schema items
The GitLab GraphQL API is versionless, which means we maintain backwards
compatibility with older versions of the API with every change.
-Rather than removing fields, arguments, or [enum values](#enums), they
-must be _deprecated_ instead.
+Rather than removing fields, arguments, [enum values](#enums), or [mutations](#mutations),
+they must be _deprecated_ instead.
The deprecated parts of the schema can then be removed in a future release
in accordance with the [GitLab deprecation process](../api/graphql/index.md#deprecation-and-removal-process).
-Fields, arguments, and enum values are deprecated using the `deprecated` property.
-The value of the property is a `Hash` of:
+To deprecate a schema item in GraphQL:
+
+1. [Create a deprecation issue](#create-a-deprecation-issue) for the item.
+1. [Mark the item as deprecated](#mark-the-item-as-deprecated) in the schema.
+
+See also:
+
+- [Aliasing and deprecating mutations](#aliasing-and-deprecating-mutations).
+- [How to filter Kibana for queries that used deprecated fields](graphql_guide/monitoring.md#queries-that-used-a-deprecated-field).
+
+### Create a deprecation issue
+
+Every GraphQL deprecation should have a deprecation issue created [using the `Deprecations` issue template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Deprecations) to track its deprecation and removal.
+
+Apply these two labels to the deprecation issue:
+
+- `~GraphQL`
+- `~deprecation`
+
+### Mark the item as deprecated
+
+Fields, arguments, enum values, and mutations are deprecated using the `deprecated` property, as shown in the sketch after this list. The value of the property is a `Hash` of:
- `reason` - Reason for the deprecation.
- `milestone` - Milestone that the field was deprecated.
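Editor's note: a minimal sketch of the `deprecated` property on a field; the field name, reason, and milestone here are hypothetical.
```ruby
field :token, GraphQL::Types::String, null: true,
      description: 'Token for the resource.',
      deprecated: { reason: 'Use `secure_token`', milestone: '14.10' }
```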
@@ -569,7 +589,7 @@ The original `description` of the things being deprecated should be maintained,
and should _not_ be updated to mention the deprecation. Instead, the `reason`
is appended to the `description`.
-### Deprecation reason style guide
+#### Deprecation reason style guide
Where the reason for deprecation is due to the field, argument, or enum value being
replaced, the `reason` must indicate the replacement. For example, the
@@ -601,7 +621,7 @@ end
If the field, argument, or enum value being deprecated is not being replaced,
a descriptive deprecation `reason` should be given.
-### Deprecate Global IDs
+#### Deprecate Global IDs
We use the [`rails/globalid`](https://github.com/rails/globalid) gem to generate and parse
Global IDs, so as such they are coupled to model names. When we rename a
@@ -698,11 +718,6 @@ aware of the support.
The documentation will mention that the old Global ID style is now deprecated.
-See also:
-
-- [Aliasing and deprecating mutations](#aliasing-and-deprecating-mutations).
-- [How to filter Kibana for queries that used deprecated fields](graphql_guide/monitoring.md#queries-that-used-a-deprecated-field).
-
## Enums
GitLab GraphQL enums are defined in `app/graphql/types`. When defining new enums, the
@@ -748,7 +763,7 @@ end
```
Enum values can be deprecated using the
-[`deprecated` keyword](#deprecating-fields-arguments-and-enum-values).
+[`deprecated` keyword](#deprecating-schema-items).
### Defining GraphQL enums dynamically from Rails enums
@@ -1713,7 +1728,7 @@ mount_aliased_mutation 'BarMutation', Mutations::FooMutation
```
This allows us to rename a mutation and continue to support the old name,
-when coupled with the [`deprecated`](#deprecating-fields-arguments-and-enum-values)
+when coupled with the [`deprecated`](#deprecating-schema-items)
argument.
Example:
diff --git a/doc/development/application_slis/index.md b/doc/development/application_slis/index.md
index a202bc419e1..2834723fc01 100644
--- a/doc/development/application_slis/index.md
+++ b/doc/development/application_slis/index.md
@@ -30,18 +30,33 @@ to be emitted from the rails application:
## Defining a new SLI
-An SLI can be defined using the `Gitlab::Metrics::Sli` class.
+An SLI can be defined using the `Gitlab::Metrics::Sli::Apdex` or
+`Gitlab::Metrics::Sli::ErrorRate` class. These work in broadly the same way, but
+for clarity, they define different metric names:
+
+1. `Gitlab::Metrics::Sli::Apdex.new('foo')` defines:
+ 1. `gitlab_sli:foo_apdex:total` for the total number of measurements.
+ 1. `gitlab_sli:foo_apdex:success_total` for the number of successful
+ measurements.
+1. `Gitlab::Metrics::Sli::ErrorRate.new('foo')` defines:
+ 1. `gitlab_sli:foo_error_rate:total` for the total number of measurements.
+ 1. `gitlab_sli:foo_error_rate:error_total` for the number of error
+ measurements - as this is an error rate, it's more natural to talk about
+ errors divided by the total.
+
+As shown above, they can share a base name (`foo` in this example). We
+recommend this when they refer to the same operation.
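Editor's note: for illustration, a sketch of both classes sharing the `foo` base name from the list above, with the resulting metric name prefixes noted in comments.
```ruby
Gitlab::Metrics::Sli::Apdex.new('foo')      # gitlab_sli:foo_apdex:total and :success_total
Gitlab::Metrics::Sli::ErrorRate.new('foo')  # gitlab_sli:foo_error_rate:total and :error_total
```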
Before the first scrape, it is important to have [initialized the SLI
with all possible
label-combinations](https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics). This
avoids confusing results when using these counters in calculations.
-To initialize an SLI, use the `.inilialize_sli` class method, for
+To initialize an SLI, use the `.initialize_sli` class method, for
example:
```ruby
-Gitlab::Metrics::Sli.initialize_sli(:received_email, [
+Gitlab::Metrics::Sli::Apdex.initialize_sli(:received_email, [
{
feature_category: :team_planning,
email_type: :create_issue
@@ -67,7 +82,7 @@ this adds is understood and acceptable.
Tracking an operation in the newly defined SLI can be done like this:
```ruby
-Gitlab::Metrics::Sli[:received_email].increment(
+Gitlab::Metrics::Sli::Apdex[:received_email].increment(
labels: {
feature_category: :service_desk,
email_type: :service_desk
@@ -79,20 +94,26 @@ Gitlab::Metrics::Sli[:received_email].increment(
Calling `#increment` on this SLI will increment the total Prometheus counter
```prometheus
-gitlab_sli:received_email:total{ feature_category='service_desk', email_type='service_desk' }
+gitlab_sli:received_email_apdex:total{ feature_category='service_desk', email_type='service_desk' }
```
-If the `success:` argument passed is truthy, then the success counter
-will also be incremented:
+If the `success:` argument passed is truthy, then the success counter will also
+be incremented:
```prometheus
-gitlab_sli:received_email:success_total{ feature_category='service_desk', email_type='service_desk' }
+gitlab_sli:received_email_apdex:success_total{ feature_category='service_desk', email_type='service_desk' }
```
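Editor's note: putting the apdex pieces together, a hedged version of the full `#increment` call with the `success:` argument; the predicate is illustrative.
```ruby
Gitlab::Metrics::Sli::Apdex[:received_email].increment(
  labels: {
    feature_category: :service_desk,
    email_type: :service_desk
  },
  success: email_handled_successfully? # hypothetical predicate
)
```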
-So far, only tracking `apdex` using a success rate is supported. If you
-need to track errors this way, please upvote
-[this issue](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1395)
-and leave a comment so we can prioritize this.
+For error rate SLIs, the equivalent argument is called `error:`:
+
+```ruby
+Gitlab::Metrics::Sli::ErrorRate[:merge].increment(
+ labels: {
+ merge_type: :fast_forward
+ },
+ error: !merge_success?
+)
+```
## Using the SLI in service monitoring and alerts
diff --git a/doc/development/application_slis/rails_request_apdex.md b/doc/development/application_slis/rails_request_apdex.md
index 373589aaefc..f9613a14dd1 100644
--- a/doc/development/application_slis/rails_request_apdex.md
+++ b/doc/development/application_slis/rails_request_apdex.md
@@ -136,8 +136,19 @@ information in the logs to check:
1. The table loads information for the busiest endpoints by
default. To speed the response, add both:
- - A filter for `json.caller_id.keyword`.
- - The identifier you're interested in, such as `Projects::RawController#show`.
+
+ - A filter for `json.meta.caller_id.keyword`.
+ - The identifier you're interested in, for example:
+
+ ```ruby
+ Projects::RawController#show
+ ```
+
+ or:
+
+ ```plaintext
+ GET /api/:version/projects/:id/snippets/:snippet_id/raw
+ ```
1. Check the [appropriate percentile duration](#request-apdex-slo) for
the service handling the endpoint. The overall duration should
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index dd432dd5e37..486ef6d27fc 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -347,7 +347,7 @@ Component statuses are linked to configuration documentation for each component.
| [Elasticsearch](#elasticsearch) | Improved search within GitLab | ⤓ | ⚙ | ⤓ | ⤓ | ✅ | ⤓ | ⚙ | EE Only |
| [Gitaly](#gitaly) | Git RPC service for handling all Git calls made by GitLab | ✅ | ✅ | ✅ | ✅ | ✅ | ⚙ | ✅ | CE & EE |
| [GitLab Exporter](#gitlab-exporter) | Generates a variety of GitLab metrics | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | CE & EE |
-| [GitLab Geo Node](#gitlab-geo) | Geographically distributed GitLab nodes | ⚙ | ⚙ | ❌ | ❌ | ✅ | ❌ | ⚙ | EE Only |
+| [GitLab Geo](#gitlab-geo) | Geographically distributed GitLab site | ⚙ | ⚙ | ❌ | ❌ | ✅ | ❌ | ⚙ | EE Only |
| [GitLab Pages](#gitlab-pages) | Hosts static websites | ⚙ | ⚙ | ❌ | ❌ | ✅ | ⚙ | ⚙ | CE & EE |
| [GitLab agent](#gitlab-agent) | Integrate Kubernetes clusters in a cloud-native way | ⚙ | ⚙ | ⚙ | ❌ | ❌ | ⤓ | ⚙ | EE Only |
| [GitLab self-monitoring: Alertmanager](#alertmanager) | Deduplicates, groups, and routes alerts from Prometheus | ⚙ | ⚙ | ✅ | ⚙ | ✅ | ❌ | ❌ | CE & EE |
diff --git a/doc/development/audit_event_guide/index.md b/doc/development/audit_event_guide/index.md
index 34f78174e5b..0d62bcdc3b2 100644
--- a/doc/development/audit_event_guide/index.md
+++ b/doc/development/audit_event_guide/index.md
@@ -25,7 +25,7 @@ To instrument an audit event, the following attributes should be provided:
| `scope` | User, Project, Group | true | Scope which the audit event belongs to |
| `target` | Object | true | Target object being audited |
| `message` | String | true | Message describing the action |
-| `created_at` | DateTime | false | The time when the action occured. Defaults to `DateTime.current` |
+| `created_at` | DateTime | false | The time when the action occurred. Defaults to `DateTime.current` |
## How to instrument new Audit Events
diff --git a/doc/development/backend/create_source_code_be/gitaly_touch_points.md b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
new file mode 100644
index 00000000000..5ac362e709f
--- /dev/null
+++ b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
@@ -0,0 +1,27 @@
+---
+stage: Create
+group: Source Code
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Source Code - Gitaly Touch Points
+
+## RPCs
+
+Gitaly is a wrapper around the `git` binary, running in a [Gitaly Cluster](../../../administration/gitaly/index.md). It provides managed access to the file system housing the `git` repositories, via Golang Remote Procedure Calls (RPCs). Other functions are access optimization, caching, and a form of pagination against the file system.
+
+The comprehensive [Beginner's guide to Gitaly contributions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/beginners_guide.md) is focused on making updates to Gitaly, and offers many insights into how to understand the Gitaly code.
+
+All access to Gitaly from other parts of GitLab is through Create: Source Code endpoints.
+
+## The `Commit` model
+
+After a call is made to Gitaly, Git `commit` information is stored in memory. This information is wrapped by the [Ruby `Commit` Model](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/commit.rb), which is a wrapper around [`Gitlab::Git::Commit`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/git/commit.rb).
+
+The `Commit` model acts like an ActiveRecord object, but it does not have a PostgreSQL backend. Instead, it maps back to Gitaly RPCs.
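Editor's note: an illustrative sketch, not taken from the document, of how the `Commit` model is usually reached; the project path is hypothetical, and the lookup is served by a Gitaly commit RPC rather than PostgreSQL.
```ruby
project = Project.find_by_full_path('group/project') # hypothetical path
commit  = project.commit('main')                     # resolved through a Gitaly commit RPC
commit.sha
commit.message
```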
+
+## Rugged Patches
+
+Historically in GitLab, access to the server-based `git` repositories was provided through the [rugged](https://github.com/libgit2/rugged) RubyGem, which provides Ruby bindings to `libgit2`. This was further extended by what is termed "Rugged Patches", [a set of extensions to the Rugged library](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317). Rugged implementations of some of the most commonly-used RPCs can be [enabled via feature flags](../../gitaly.md#legacy-rugged-code).
+
+Rugged access requires the use of an NFS file system, a direction GitLab is moving away from in favor of Gitaly Cluster. Rugged has been proposed for [deprecation and removal](https://gitlab.com/gitlab-org/gitaly/-/issues/1690). Several large customers are still using NFS, and a specific removal date is not planned at this point.
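Editor's note: for illustration, a Rugged code path is toggled from the Rails console with a feature flag; the `rugged_find_commit` name below is an assumption based on the flags listed in the linked Gitaly documentation.
```ruby
# Assumed flag name; check the linked Gitaly docs for the current list.
Feature.enable(:rugged_find_commit)
Feature.disable(:rugged_find_commit)
```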
diff --git a/doc/development/backend/create_source_code_be/index.md b/doc/development/backend/create_source_code_be/index.md
index 8661d8b4d74..ad4e25dc815 100644
--- a/doc/development/backend/create_source_code_be/index.md
+++ b/doc/development/backend/create_source_code_be/index.md
@@ -35,107 +35,13 @@ for GitLab Shell.
## GitLab Rails
-### Source code API endpoints
+### Gitaly touch points
-| Endpoint | Threshold | Source |
-| -----------------------------------------------------------------------------------|---------------------------------------|--------------------------------------------------------------------------------------|
-| `DELETE /api/:version/projects/:id/protected_branches/:name` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
-| `GET /api/:version/internal/authorized_keys` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) | | |
-| `GET /api/:version/internal/lfs` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/lfs.rb)|
-| `GET /api/:version/projects/:id/approval_rules` | `:low` | |
-| `GET /api/:version/projects/:id/approval_settings` | default | |
-| `GET /api/:version/projects/:id/approvals` | default | |
-| `GET /api/:version/projects/:id/forks` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
-| `GET /api/:version/projects/:id/groups` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
-| `GET /api/:version/projects/:id/languages` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
-| `GET /api/:version/projects/:id/merge_request_approval_setting` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/merge_request_approval_settings.rb) |
-| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_rules` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/merge_request_approval_rules.rb) |
-| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_settings` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/project_approval_settings.rb) |
-| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_state` | `:low` | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
-| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approvals` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
-| `GET /api/:version/projects/:id/protected_branches` | default |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
-| `GET /api/:version/projects/:id/protected_branches/:name` | default |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
-| `GET /api/:version/projects/:id/protected_tags` | default | |
-| `GET /api/:version/projects/:id/protected_tags/:name` | default | |
-| `GET /api/:version/projects/:id/push_rule` | default | |
-| `GET /api/:version/projects/:id/remote_mirrors` | default | |
-| `GET /api/:version/projects/:id/repository/archive` | default | |
-| `GET /api/:version/projects/:id/repository/blobs/:sha` | default | |
-| `GET /api/:version/projects/:id/repository/blobs/:sha/raw` | default | |
-| `GET /api/:version/projects/:id/repository/branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/branches.rb) |
-| `GET /api/:version/projects/:id/repository/branches/:branch` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/branches.rb) |
-| `GET /api/:version/projects/:id/repository/commits` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
-| `GET /api/:version/projects/:id/repository/commits/:sha` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
-| `GET /api/:version/projects/:id/repository/commits/:sha/comments` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
-| `GET /api/:version/projects/:id/repository/commits/:sha/diff` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
-| `GET /api/:version/projects/:id/repository/commits/:sha/merge_requests` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
-| `GET /api/:version/projects/:id/repository/commits/:sha/refs` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
-| `GET /api/:version/projects/:id/repository/compare` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/repositories.rb) |
-| `GET /api/:version/projects/:id/repository/contributors` | default | |
-| `GET /api/:version/projects/:id/repository/files/:file_path` | default | |
-| `GET /api/:version/projects/:id/repository/files/:file_path/raw` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
-| `GET /api/:version/projects/:id/repository/tags` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/tags.rb) |
-| `GET /api/:version/projects/:id/repository/tree` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/repositories.rb) |
-| `GET /api/:version/projects/:id/statistics` | default | |
-| `GraphqlController#execute` | default | |
-| `HEAD /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
-| `HEAD /api/:version/projects/:id/repository/files/:file_path/raw` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
-| `POST /api/:version/internal/allowed` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
-| `POST /api/:version/internal/lfs_authenticate` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
-| `POST /api/:version/internal/post_receive` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
-| `POST /api/:version/internal/pre_receive` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
-| `POST /api/:version/projects/:id/approvals` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/project_approvals.rb) |
-| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/approvals` | `:low` | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
-| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/approve` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
-| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/unapprove` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb)|
-| `POST /api/:version/projects/:id/protected_branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb)|
-| `POST /api/:version/projects/:id/repository/commits` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
-| `POST /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
-| `PUT /api/:version/projects/:id/push_rule` | default | |
-| `PUT /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
-| `Projects::BlameController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blame_controller.rb) |
-| `Projects::BlobController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
-| `Projects::BlobController#diff` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
-| `Projects::BlobController#edit` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
-| `Projects::BlobController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
-| `Projects::BlobController#update` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
-| `Projects::BranchesController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
-| `Projects::BranchesController#destroy` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
-| `Projects::BranchesController#diverging_commit_counts` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
-| `Projects::BranchesController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
-| `Projects::BranchesController#new` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
-| `Projects::CommitController#branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
-| `Projects::CommitController#merge_requests` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
-| `Projects::CommitController#pipelines` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
-| `Projects::CommitController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
-| `Projects::CommitsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb)|
-| `Projects::CommitsController#signatures` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb) |
-| `Projects::CompareController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb) |
-| `Projects::CompareController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
-| `Projects::CompareController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
-| `Projects::CompareController#signatures` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
-| `Projects::FindFileController#list` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/find_file_controller.rb) |
-| `Projects::FindFileController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/find_file_controller.rb) |
-| `Projects::ForksController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/forks_controller.rb) |
-| `Projects::GraphsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/graphs_controller.rb) |
-| `Projects::NetworkController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/network_controller.rb) |
-| `Projects::PathLocksController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/controllers/projects/path_locks_controller.rb) |
-| `Projects::RawController#show` | default | |
-| `Projects::RefsController#logs_tree` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/refs_controller.rb) |
-| `Projects::RefsController#switch` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/refs_controller.rb) |
-| `Projects::RepositoriesController#archive` | default | |
-| `Projects::Settings::RepositoryController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/settings/repository_controller.rb) |
-| `Projects::TagsController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
-| `Projects::TagsController#new` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
-| `Projects::TagsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
-| `Projects::TemplatesController#names` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/templates_controller.rb) |
-| `Projects::TreeController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tree_controller.rb) |
-| `ProjectsController#refs` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects_controller.rb) |
-| `Repositories::GitHttpController#git_receive_pack` | default | |
-| `Repositories::GitHttpController#git_upload_pack` | default | |
-| `Repositories::GitHttpController#info_refs` | default | |
-| `Repositories::LfsApiController#batch` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_api_controller.rb) |
-| `Repositories::LfsLocksApiController#verify` | default | |
-| `Repositories::LfsStorageController#download` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
-| `Repositories::LfsStorageController#upload_authorize` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
-| `Repositories::LfsStorageController#upload_finalize` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
+Gitaly is a Golang RPC service which handles all the `git` calls made by GitLab.
+GitLab is not exposed directly, and all traffic comes through Create: Source Code.
+For more information, read [Gitaly touch points](gitaly_touch_points.md).
+
+### Source Code REST API Endpoints
+
+Create: Source Code has over 100 REST endpoints, a mixture of Grape API endpoints and Rails controller endpoints.
+For a detailed list, refer to [Source Code REST Endpoints](rest_endpoints.md).
diff --git a/doc/development/backend/create_source_code_be/rest_endpoints.md b/doc/development/backend/create_source_code_be/rest_endpoints.md
new file mode 100644
index 00000000000..dd43bb914c9
--- /dev/null
+++ b/doc/development/backend/create_source_code_be/rest_endpoints.md
@@ -0,0 +1,112 @@
+---
+stage: Create
+group: Source Code
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Source Code REST endpoints
+
+The Create :: Source Code team maintains these endpoints:
+
+| Endpoint | Threshold | Source |
+| -----------------------------------------------------------------------------------|---------------------------------------|--------------------------------------------------------------------------------------|
+| `DELETE /api/:version/projects/:id/protected_branches/:name` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
+| `GET /api/:version/internal/authorized_keys` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
+| `GET /api/:version/internal/lfs` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/lfs.rb)|
+| `GET /api/:version/projects/:id/approval_rules` | `:low` | |
+| `GET /api/:version/projects/:id/approval_settings` | default | |
+| `GET /api/:version/projects/:id/approvals` | default | |
+| `GET /api/:version/projects/:id/forks` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
+| `GET /api/:version/projects/:id/groups` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
+| `GET /api/:version/projects/:id/languages` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/projects.rb) |
+| `GET /api/:version/projects/:id/merge_request_approval_setting` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/merge_request_approval_settings.rb) |
+| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_rules` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/merge_request_approval_rules.rb) |
+| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_settings` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/project_approval_settings.rb) |
+| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approval_state` | `:low` | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
+| `GET /api/:version/projects/:id/merge_requests/:merge_request_iid/approvals` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
+| `GET /api/:version/projects/:id/protected_branches` | default |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
+| `GET /api/:version/projects/:id/protected_branches/:name` | default |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb) |
+| `GET /api/:version/projects/:id/protected_tags` | default | |
+| `GET /api/:version/projects/:id/protected_tags/:name` | default | |
+| `GET /api/:version/projects/:id/push_rule` | default | |
+| `GET /api/:version/projects/:id/remote_mirrors` | default | |
+| `GET /api/:version/projects/:id/repository/archive` | default | |
+| `GET /api/:version/projects/:id/repository/blobs/:sha` | default | |
+| `GET /api/:version/projects/:id/repository/blobs/:sha/raw` | default | |
+| `GET /api/:version/projects/:id/repository/branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/branches.rb) |
+| `GET /api/:version/projects/:id/repository/branches/:branch` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/branches.rb) |
+| `GET /api/:version/projects/:id/repository/commits` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
+| `GET /api/:version/projects/:id/repository/commits/:sha` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
+| `GET /api/:version/projects/:id/repository/commits/:sha/comments` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
+| `GET /api/:version/projects/:id/repository/commits/:sha/diff` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
+| `GET /api/:version/projects/:id/repository/commits/:sha/merge_requests` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
+| `GET /api/:version/projects/:id/repository/commits/:sha/refs` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb) |
+| `GET /api/:version/projects/:id/repository/compare` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/repositories.rb) |
+| `GET /api/:version/projects/:id/repository/contributors` | default | |
+| `GET /api/:version/projects/:id/repository/files/:file_path` | default | |
+| `GET /api/:version/projects/:id/repository/files/:file_path/raw` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
+| `GET /api/:version/projects/:id/repository/tags` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/tags.rb) |
+| `GET /api/:version/projects/:id/repository/tree` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/repositories.rb) |
+| `GET /api/:version/projects/:id/statistics` | default | |
+| `GraphqlController#execute` | default | |
+| `HEAD /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
+| `HEAD /api/:version/projects/:id/repository/files/:file_path/raw` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
+| `POST /api/:version/internal/allowed` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
+| `POST /api/:version/internal/lfs_authenticate` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
+| `POST /api/:version/internal/post_receive` | default | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
+| `POST /api/:version/internal/pre_receive` | `:high` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/internal/base.rb) |
+| `POST /api/:version/projects/:id/approvals` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/project_approvals.rb) |
+| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/approvals` | `:low` | [source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
+| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/approve` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb) |
+| `POST /api/:version/projects/:id/merge_requests/:merge_request_iid/unapprove` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/merge_request_approvals.rb)|
+| `POST /api/:version/projects/:id/protected_branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/protected_branches.rb)|
+| `POST /api/:version/projects/:id/repository/commits` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/commits.rb)|
+| `POST /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
+| `PUT /api/:version/projects/:id/push_rule` | default | |
+| `PUT /api/:version/projects/:id/repository/files/:file_path` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/files.rb) |
+| `Projects::BlameController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blame_controller.rb) |
+| `Projects::BlobController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
+| `Projects::BlobController#diff` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
+| `Projects::BlobController#edit` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
+| `Projects::BlobController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
+| `Projects::BlobController#update` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/blob_controller.rb) |
+| `Projects::BranchesController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
+| `Projects::BranchesController#destroy` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
+| `Projects::BranchesController#diverging_commit_counts` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
+| `Projects::BranchesController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
+| `Projects::BranchesController#new` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/branches_controller.rb) |
+| `Projects::CommitController#branches` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
+| `Projects::CommitController#merge_requests` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
+| `Projects::CommitController#pipelines` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
+| `Projects::CommitController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commit_controller.rb) |
+| `Projects::CommitsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb)|
+| `Projects::CommitsController#signatures` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb) |
+| `Projects::CompareController#create` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/commits_controller.rb) |
+| `Projects::CompareController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
+| `Projects::CompareController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
+| `Projects::CompareController#signatures` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/compare_controller.rb) |
+| `Projects::FindFileController#list` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/find_file_controller.rb) |
+| `Projects::FindFileController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/find_file_controller.rb) |
+| `Projects::ForksController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/forks_controller.rb) |
+| `Projects::GraphsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/graphs_controller.rb) |
+| `Projects::NetworkController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/network_controller.rb) |
+| `Projects::PathLocksController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/controllers/projects/path_locks_controller.rb) |
+| `Projects::RawController#show` | default | |
+| `Projects::RefsController#logs_tree` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/refs_controller.rb) |
+| `Projects::RefsController#switch` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/refs_controller.rb) |
+| `Projects::RepositoriesController#archive` | default | |
+| `Projects::Settings::RepositoryController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/settings/repository_controller.rb) |
+| `Projects::TagsController#index` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
+| `Projects::TagsController#new` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
+| `Projects::TagsController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tags_controller.rb) |
+| `Projects::TemplatesController#names` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/templates_controller.rb) |
+| `Projects::TreeController#show` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects/tree_controller.rb) |
+| `ProjectsController#refs` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/projects_controller.rb) |
+| `Repositories::GitHttpController#git_receive_pack` | default | |
+| `Repositories::GitHttpController#git_upload_pack` | default | |
+| `Repositories::GitHttpController#info_refs` | default | |
+| `Repositories::LfsApiController#batch` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_api_controller.rb) |
+| `Repositories::LfsLocksApiController#verify` | default | |
+| `Repositories::LfsStorageController#download` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
+| `Repositories::LfsStorageController#upload_authorize` | `:medium` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
+| `Repositories::LfsStorageController#upload_finalize` | `:low` |[source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/repositories/lfs_storage_controller.rb) |
diff --git a/doc/development/backend/ruby_style_guide.md b/doc/development/backend/ruby_style_guide.md
index 419db628b0d..6c8125a6157 100644
--- a/doc/development/backend/ruby_style_guide.md
+++ b/doc/development/backend/ruby_style_guide.md
@@ -13,7 +13,7 @@ Generally, if a style is not covered by [existing Rubocop rules or style guides]
Before adding a new cop to enforce a given style, make sure to discuss it with your team.
When the style is approved by a backend EM or by a BE staff eng, add a new section to this page to
document the new rule. For every new guideline, add it in a new section and link the discussion from the section's
-[version history note](../documentation/styleguide/index.md#version-text-in-the-version-history)
+[version history note](../documentation/versions.md#add-a-version-history-item)
to provide context and serve as a reference.
Just because something is listed here does not mean it cannot be reopened for discussion.
diff --git a/doc/development/batched_background_migrations.md b/doc/development/batched_background_migrations.md
index e7703b5dd2b..f5f3655944b 100644
--- a/doc/development/batched_background_migrations.md
+++ b/doc/development/batched_background_migrations.md
@@ -1,319 +1,11 @@
---
-type: reference, dev
-stage: Enablement
-group: Database
-info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
+redirect_to: 'database/batched_background_migrations.md'
+remove_date: '2022-07-26'
---
-# Batched background migrations
+This document was moved to [another location](database/batched_background_migrations.md).
-Batched Background Migrations should be used to perform data migrations whenever a
-migration exceeds [the time limits](migration_style_guide.md#how-long-a-migration-should-take)
-in our guidelines. For example, you can use batched background
-migrations to migrate data that's stored in a single JSON column
-to a separate table instead.
-
-## When to use batched background migrations
-
-Use a batched background migration when you migrate _data_ in tables containing
-so many rows that the process would exceed
-[the time limits in our guidelines](migration_style_guide.md#how-long-a-migration-should-take)
-if performed using a regular Rails migration.
-
-- Batched background migrations should be used when migrating data in
- [high-traffic tables](migration_style_guide.md#high-traffic-tables).
-- Batched background migrations may also be used when executing numerous single-row queries
- for every item on a large dataset. Typically, for single-record patterns, runtime is
- largely dependent on the size of the dataset. Split the dataset accordingly,
- and put it into background migrations.
-- Don't use batched background migrations to perform schema migrations.
-
-Background migrations can help when:
-
-- Migrating events from one table to multiple separate tables.
-- Populating one column based on JSON stored in another column.
-- Migrating data that depends on the output of external services. (For example, an API.)
-
-NOTE:
-If the batched background migration is part of an important upgrade, it must be announced
-in the release post. Discuss with your Project Manager if you're unsure if the migration falls
-into this category.
-
-## Isolation
-
-Batched background migrations must be isolated and can not use application code. (For example,
-models defined in `app/models`.). Because these migrations can take a long time to
-run, it's possible for new versions to deploy while the migrations are still running.
-
-## Idempotence
-
-Batched background migrations are executed in a context of a Sidekiq process.
-The usual Sidekiq rules apply, especially the rule that jobs should be small
-and idempotent. Make sure that in case that your migration job is retried, data
-integrity is guaranteed.
-
-See [Sidekiq best practices guidelines](https://github.com/mperham/sidekiq/wiki/Best-Practices)
-for more details.
-
-## Batched background migrations for EE-only features
-
-All the background migration classes for EE-only features should be present in GitLab CE.
-For this purpose, create an empty class for GitLab CE, and extend it for GitLab EE
-as explained in the guidelines for
-[implementing Enterprise Edition features](ee_features.md#code-in-libgitlabbackground_migration).
-
-Batched Background migrations are simple classes that define a `perform` method. A
-Sidekiq worker then executes such a class, passing any arguments to it. All
-migration classes must be defined in the namespace
-`Gitlab::BackgroundMigration`. Place the files in the directory
-`lib/gitlab/background_migration/`.
-
-## Queueing
-
-Queueing a batched background migration should be done in a post-deployment
-migration. Use this `queue_batched_background_migration` example, queueing the
-migration to be executed in batches. Replace the class name and arguments with the values
-from your migration:
-
-```ruby
-queue_batched_background_migration(
- JOB_CLASS_NAME,
- TABLE_NAME,
- JOB_ARGUMENTS,
- JOB_INTERVAL
- )
-```
-
-Make sure the newly-created data is either migrated, or
-saved in both the old and new version upon creation. Removals in
-turn can be handled by defining foreign keys with cascading deletes.
-
-### Requeuing batched background migrations
-
-If one of the batched background migrations contains a bug that is fixed in a patch
-release, you must requeue the batched background migration so the migration
-repeats on systems that already performed the initial migration.
-
-When you requeue the batched background migration, turn the original
-queuing into a no-op by clearing up the `#up` and `#down` methods of the
-migration performing the requeuing. Otherwise, the batched background migration is
-queued multiple times on systems that are upgrading multiple patch releases at
-once.
-
-When you start the second post-deployment migration, delete the
-previously batched migration with the provided code:
-
-```ruby
-Gitlab::Database::BackgroundMigration::BatchedMigration
- .for_configuration(MIGRATION_NAME, TABLE_NAME, COLUMN, JOB_ARGUMENTS)
- .delete_all
-```
-
-## Cleaning up
-
-NOTE:
-Cleaning up any remaining background migrations must be done in either a major
-or minor release. You must not do this in a patch release.
-
-Because background migrations can take a long time, you can't immediately clean
-things up after queueing them. For example, you can't drop a column used in the
-migration process, as jobs would fail. You must add a separate _post-deployment_
-migration in a future release that finishes any remaining
-jobs before cleaning things up. (For example, removing a column.)
-
-To migrate the data from column `foo` (containing a big JSON blob) to column `bar`
-(containing a string), you would:
-
-1. Release A:
- 1. Create a migration class that performs the migration for a row with a given ID.
- 1. Update new rows using one of these techniques:
- - Create a new trigger for simple copy operations that don't need application logic.
- - Handle this operation in the model/service as the records are created or updated.
- - Create a new custom background job that updates the records.
- 1. Queue the batched background migration for all existing rows in a post-deployment migration.
-1. Release B:
- 1. Add a post-deployment migration that checks if the batched background migration is completed.
- 1. Deploy code so that the application starts using the new column and stops to update new records.
- 1. Remove the old column.
-
-Bump to the [import/export version](../user/project/settings/import_export.md) may
-be required, if importing a project from a prior version of GitLab requires the
-data to be in the new format.
-
-## Example
-
-The table `integrations` has a field called `properties`, stored in JSON. For all rows,
-extract the `url` key from this JSON object and store it in the `integrations.url`
-column. Millions of integrations exist, and parsing JSON is slow, so you can't
-do this work in a regular migration.
-
-1. Start by defining our migration class:
-
- ```ruby
- class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
- class Integration < ActiveRecord::Base
- self.table_name = 'integrations'
- end
-
- def perform(start_id, end_id)
- Integration.where(id: start_id..end_id).each do |integration|
- json = JSON.load(integration.properties)
-
- integration.update(url: json['url']) if json['url']
- rescue JSON::ParserError
- # If the JSON is invalid we don't want to keep the job around forever,
- # instead we'll just leave the "url" field to whatever the default value
- # is.
- next
- end
- end
- end
- ```
-
- NOTE:
- To get a `connection` in the batched background migration,use an inheritance
- relation using the following base class `Gitlab::BackgroundMigration::BaseJob`.
- For example: `class Gitlab::BackgroundMigration::ExtractIntegrationsUrl < Gitlab::BackgroundMigration::BaseJob`
-
-1. Add a new trigger to the database to update newly created and updated integrations,
- similar to this example:
-
- ```ruby
- execute(<<~SQL)
- CREATE OR REPLACE FUNCTION example() RETURNS trigger
- LANGUAGE plpgsql
- AS $$
- BEGIN
- NEW."url" := NEW.properties -> "url"
- RETURN NEW;
- END;
- $$;
- SQL
- ```
-
-1. Create a post-deployment migration that queues the migration for existing data:
-
- ```ruby
- class QueueExtractIntegrationsUrl < Gitlab::Database::Migration[1.0]
- disable_ddl_transaction!
-
- MIGRATION = 'ExtractIntegrationsUrl'
- DELAY_INTERVAL = 2.minutes
-
- def up
- queue_batched_background_migration(
- MIGRATION,
- :migrations,
- :id,
- job_interval: DELAY_INTERVAL
- )
- end
-
- def down
- Gitlab::Database::BackgroundMigration::BatchedMigration
- .for_configuration(MIGRATION, :migrations, :id, []).delete_all
- end
- end
- ```
-
- After deployment, our application:
- - Continues using the data as before.
- - Ensures that both existing and new data are migrated.
-
-1. In the next release, remove the trigger. We must also add a new post-deployment migration
- that checks that the batched background migration is completed. For example:
-
- ```ruby
- class FinalizeExtractIntegrationsUrlJobs < Gitlab::Database::Migration[1.0]
- MIGRATION = 'ExtractIntegrationsUrl'
- disable_ddl_transaction!
-
- def up
- ensure_batched_background_migration_is_finished(
- job_class_name: MIGRATION,
- table_name: :integrations,
- column_name: :id,
- job_arguments: []
- )
- end
-
- def down
- # no-op
- end
- end
- ```
-
- If the application does not depend on the data being 100% migrated (for
- instance, the data is advisory, and not mission-critical), then you can skip this
- final step. This step confirms that the migration is completed, and all of the rows were migrated.
-
-After the batched migration is completed, you can safely remove the `integrations.properties` column.
-
-## Testing
-
-Writing tests is required for:
-
-- The batched background migrations' queueing migration.
-- The batched background migration itself.
-- A cleanup migration.
-
-The `:migration` and `schema: :latest` RSpec tags are automatically set for
-background migration specs. Refer to the
-[Testing Rails migrations](testing_guide/testing_migrations_guide.md#testing-a-non-activerecordmigration-class)
-style guide.
-
-Remember that `before` and `after` RSpec hooks
-migrate your database down and up. These hooks can result in other batched background
-migrations being called. Using `spy` test doubles with
-`have_received` is encouraged, instead of using regular test doubles, because
-your expectations defined in a `it` block can conflict with what is
-called in RSpec hooks. Refer to [issue #35351](https://gitlab.com/gitlab-org/gitlab/-/issues/18839)
-for more details.
-
-## Best practices
-
-1. Know how much data you're dealing with.
-1. Make sure the batched background migration jobs are idempotent.
-1. Confirm the tests you write are not false positives.
-1. If the data being migrated is critical and cannot be lost, the
- clean-up migration must also check the final state of the data before completing.
-1. Discuss the numbers with a database specialist. The migration may add
- more pressure on DB than you expect. Measure on staging,
- or ask someone to measure on production.
-1. Know how much time is required to run the batched background migration.
-
-## Additional tips and strategies
-
-### Viewing failure error logs
-
-You can view failures in two ways:
-
-- Via GitLab logs:
- 1. After running a batched background migration, if any jobs fail,
- view the logs in [Kibana](https://log.gprd.gitlab.net/goto/5f06a57f768c6025e1c65aefb4075694).
- View the production Sidekiq log and filter for:
-
- - `json.new_state: failed`
- - `json.job_class_name: <Batched Background Migration job class name>`
- - `json.job_arguments: <Batched Background Migration job class arguments>`
-
- 1. Review the `json.exception_class` and `json.exception_message` values to help
- understand why the jobs failed.
-
- 1. Remember the retry mechanism. Having a failure does not mean the job failed.
- Always check the last status of the job.
-
-- Via database:
-
- 1. Get the batched background migration `CLASS_NAME`.
- 1. Execute the following query in the PostgreSQL console:
-
- ```sql
- SELECT migration.id, migration.job_class_name, transition_logs.exception_class, transition_logs.exception_message
- FROM batched_background_migrations as migration
- INNER JOIN batched_background_migration_jobs as jobs
- ON jobs.batched_background_migration_id = migration.id
- INNER JOIN batched_background_migration_job_transition_logs as transition_logs
- ON transition_logs.batched_background_migration_job_id = jobs.id
- WHERE transition_logs.next_status = '2' AND migration.job_class_name = "CLASS_NAME";
- ```
+<!-- This redirect file can be deleted after <2022-07-26>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index b98ed6cb109..c19c5b40382 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -100,7 +100,7 @@ EE: true
database records created during Cycle Analytics model spec."
- _Any_ contribution from a community member, no matter how small, **may** have
a changelog entry regardless of these guidelines if the contributor wants one.
-- Any [GLEX experiment](experiment_guide/gitlab_experiment.md) changes **should not** have a changelog entry.
+- Any [experiment](experiment_guide/index.md) changes **should not** have a changelog entry.
- An MR that includes only documentation changes **should not** have a changelog entry.
For more information, see
diff --git a/doc/development/chatops_on_gitlabcom.md b/doc/development/chatops_on_gitlabcom.md
index e18fcb0061b..2065021c61b 100644
--- a/doc/development/chatops_on_gitlabcom.md
+++ b/doc/development/chatops_on_gitlabcom.md
@@ -20,12 +20,10 @@ tasks such as:
To request access to ChatOps on GitLab.com:
1. Sign in to [Internal GitLab for Operations](https://ops.gitlab.net/users/sign_in)
- with one of the following methods:
+ with one of the following methods (Okta is not supported):
- - The same username you use on GitLab.com. You may have to choose a different
- username later.
+ - The same username you use on GitLab.com. You may have to choose a different username later.
- Clicking the **Sign in with Google** button to sign in with your GitLab.com email address.
- - Clicking the **Sign in with Okta** button to sign in with Okta.
1. Confirm that your username in [Internal GitLab for Operations](https://ops.gitlab.net/)
is the same as your username in [GitLab.com](https://gitlab.com/). If the usernames
diff --git a/doc/development/cicd/cicd_reference_documentation_guide.md b/doc/development/cicd/cicd_reference_documentation_guide.md
index e937220d208..0da1717f53c 100644
--- a/doc/development/cicd/cicd_reference_documentation_guide.md
+++ b/doc/development/cicd/cicd_reference_documentation_guide.md
@@ -4,7 +4,7 @@ group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# CI/CD YAML reference style guide **(FREE)**
+# Documenting the `.gitlab-ci.yml` keywords **(FREE)**
The [CI/CD YAML reference](../../ci/yaml/index.md) uses a standard style to make it easier to use and update.
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index c6f59a7e452..8d88e7155a2 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -342,32 +342,6 @@ include:
- remote: https://gitlab.com/gitlab-org/gitlab/-/raw/v13.0.1-ee/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
```
-### Use a feature flag to roll out a `latest` template
-
-With a major version release like 13.0 or 14.0, [stable templates](#stable-version) must be
-updated with their corresponding [latest template versions](#latest-version).
-It may be hard to gauge the impact of this change, so use the `redirect_to_latest_template_<name>`
-feature flag to test the impact on a subset of users. Using a feature flag can help
-reduce the risk of reverts or rollbacks on production.
-
-For example, to redirect the stable `Jobs/Deploy` template to its latest template in 25% of
-projects on `gitlab.com`:
-
-```shell
-/chatops run feature set redirect_to_latest_template_jobs_deploy 25 --actors
-```
-
-After you're confident the latest template can be moved to stable:
-
-1. Update the stable template with the content of the latest version.
-1. Remove the migration template from `Gitlab::Template::GitlabCiYmlTemplate::TEMPLATES_WITH_LATEST_VERSION` const.
-1. Remove the corresponding feature flag.
-
-NOTE:
-Feature flags are enabled by default in RSpec, so all tests are performed
-against the latest templates. You should also test the stable templates
-with `stub_feature_flags(redirect_to_latest_template_<name>: false)`.
-
### Further reading
There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/17716) about
diff --git a/doc/development/code_intelligence/index.md b/doc/development/code_intelligence/index.md
index e1e2105298c..3a8845084c3 100644
--- a/doc/development/code_intelligence/index.md
+++ b/doc/development/code_intelligence/index.md
@@ -38,7 +38,7 @@ sequenceDiagram
1. The CI/CD job generates a document in an LSIF format (usually `dump.lsif`) using [an
indexer](https://lsif.dev) for the language of a project. The format
[describes](https://github.com/sourcegraph/sourcegraph/blob/main/doc/code_intelligence/explanations/writing_an_indexer.md)
- interactions between a method or function and its definition(s) or references. The
+ interactions between a method or function and its definitions or references. The
document is marked to be stored as an LSIF report artifact.
1. After receiving a request for storing the artifact, Workhorse asks
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 48bbe4c60ba..252bd1daf55 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -26,7 +26,7 @@ This is only a recommendation and the reviewer may be from a different team.
However, it is recommended to pick someone who is a [domain expert](#domain-experts).
If your merge request touches more than one domain (for example, Dynamic Analysis and GraphQL), ask for reviews from an expert from each domain.
-You can read more about the importance of involving reviewer(s) in the section on the responsibility of the author below.
+You can read more about the importance of involving reviewers in the section on the responsibility of the author below.
If you need some guidance (for example, it's your first merge request), feel free to ask
one of the [Merge request coaches](https://about.gitlab.com/company/team/).
@@ -107,7 +107,7 @@ For more information, review [the roulette README](https://gitlab.com/gitlab-org
### Approval guidelines
As described in the section on the responsibility of the maintainer below, you
-are recommended to get your merge request approved and merged by maintainer(s)
+are recommended to get your merge request approved and merged by maintainers
with [domain expertise](#domain-experts).
1. If your merge request includes backend changes (*1*), it must be
@@ -118,8 +118,7 @@ with [domain expertise](#domain-experts).
1. If your merge request includes frontend changes (*1*), it must be
**approved by a [frontend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_maintainers_frontend)**.
1. If your merge request includes user-facing changes (*3*), it must be
- **approved by a [Product Designer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_reviewers_UX)**,
- based on assignments in the appropriate [DevOps stage group](https://about.gitlab.com/handbook/product/categories/#devops-stages).
+ **approved by a [Product Designer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_reviewers_UX)**.
See the [design and user interface guidelines](contributing/design.md) for details.
1. If your merge request includes adding a new JavaScript library (*1*)...
- If the library significantly increases the
@@ -155,7 +154,7 @@ with [domain expertise](#domain-experts).
#### Acceptance checklist
-This checklist encourages the authors, reviewers, and maintainers of merge requests (MRs) to confirm changes were analyzed for high-impact risks to quality, performance, reliability, security, and maintainability.
+This checklist encourages the authors, reviewers, and maintainers of merge requests (MRs) to confirm changes were analyzed for high-impact risks to quality, performance, reliability, security, observability, and maintainability.
Using checklists improves quality in software engineering. This checklist is a straightforward tool to support and bolster the skills of contributors to the GitLab codebase.
@@ -183,6 +182,10 @@ See the [test engineering process](https://about.gitlab.com/handbook/engineering
1. I have considered the scalability risk based on future predicted growth.
1. I have considered the performance, reliability, and availability impacts of this change on large customers who may have significantly more data than the average customer.
+##### Observability instrumentation
+
+1. I have included enough instrumentation to facilitate debugging and proactive performance improvements through observability.
+
##### Documentation
1. I have included changelog trailers, or I have decided that they are not needed.
@@ -220,7 +223,9 @@ should be confident that:
The best way to do this, and to avoid unnecessary back-and-forth with reviewers,
is to perform a self-review of your own merge request, following the
-[Code Review](#reviewing-a-merge-request) guidelines.
+[Code Review](#reviewing-a-merge-request) guidelines. During this self-review,
+try to include comments in the MR on lines
+where decisions or trade-offs were made, or where a contextual explanation might help the reviewer understand the code more easily.
To reach the required level of confidence in their solution, an author is expected
to involve other people in the investigation and implementation processes as
@@ -258,7 +263,7 @@ Avoid:
[_explain why, not what_](https://blog.codinghorror.com/code-tells-you-how-comments-tell-you-why/).
- Requesting maintainer reviews of merge requests with failed tests. If the tests are failing and you have to request a review, ensure you leave a comment with an explanation.
- Excessively mentioning maintainers through email or Slack (if the maintainer is reachable
-through Slack). If you can't add a reviewer for a merge request, it's acceptable to `@` mention a maintainer in a comment. In all other cases, it's sufficient to add a reviewer or [request their attention](../user/project/merge_requests/index.md#request-attention-to-a-merge-request) if they're already a reviewer.
+through Slack). If you can't add a reviewer for a merge request, `@` mentioning a maintainer in a comment is acceptable; in all other cases, adding a reviewer is sufficient.
This saves reviewers time and helps authors catch mistakes earlier.
@@ -268,8 +273,10 @@ This saves reviewers time and helps authors catch mistakes earlier.
that it meets all requirements, you should:
- Click the Approve button.
-- Request a review from a maintainer or [request their attention](../user/project/merge_requests/index.md#request-attention-to-a-merge-request) if they're already a reviewer. Default to requests for a maintainer with [domain expertise](#domain-experts),
+- `@` mention the author to generate a to-do notification, and advise them that their merge request has been reviewed and approved.
+- Request a review from a maintainer. Default to requests for a maintainer with [domain expertise](#domain-experts),
however, if one isn't available or you think the merge request doesn't need a review by a [domain expert](#domain-experts), feel free to follow the [Reviewer roulette](#reviewer-roulette) suggestion.
+- Remove yourself as a reviewer.
### The responsibility of the maintainer
@@ -297,7 +304,7 @@ If a developer who happens to also be a maintainer was involved in a merge reque
as a reviewer, it is recommended that they are not also picked as the maintainer to ultimately approve and merge it.
Maintainers should check before merging if the merge request is approved by the
-required approvers. If still awaiting further approvals from others, explain that in a comment and [request attention](../user/project/merge_requests/index.md#request-attention-to-a-merge-request) from other reviewers as appropriate. Do not remove yourself as a reviewer.
+required approvers. If still awaiting further approvals from others, remove yourself as a reviewer, then `@` mention the author and explain why in a comment. Stay as a reviewer if you're merging the code.
Maintainers must check before merging if the merge request is introducing new
vulnerabilities, by inspecting the list in the merge request
@@ -319,20 +326,14 @@ After merging, a maintainer should stay as the reviewer listed on the merge requ
### Dogfooding the Reviewers feature
-Replaced with [dogfooding the attention request feature](#dogfooding-the-attention-request-feature).
-
-### Dogfooding the attention request feature
-
-In March of 2022, an updated process was put in place aimed at efficiently and consistently dogfooding the
-[attention requests feature](../user/project/merge_requests/index.md#request-attention-to-a-merge-request) under `Merge requests` -> `Need your attention`. This replaces previous guidance on [dogfooding the reviewers feature](#dogfooding-the-reviewers-feature).
+On March 18th 2021, an updated process was put in place aimed at efficiently and consistently dogfooding the Reviewers feature.
Here is a summary of the changes, also reflected in this section above.
-- Merge request authors and DRIs stay as assignees
-- Assignees request a review from reviewer(s) when they are expected to review
-- Reviewers stay assigned for the entire duration of the merge request
-- Reviewers request attention from the assignee or other reviewer(s) after they've done reviewing, depending on who needs to take action
-- Assignees request attention from the reviewer(s) when changes are made
+- Merge request authors and DRIs stay as Assignees
+- Authors request a review from Reviewers when they are expected to review
+- Reviewers remove themselves after they're done reviewing/approving
+- The last approver stays as Reviewer upon merging
## Best practices
@@ -443,7 +444,7 @@ experience, refactors the existing code). Then:
- For non-mandatory suggestions, decorate with (non-blocking) so the author knows they can
optionally resolve within the merge request or follow-up at a later stage.
- There's a [Chrome/Firefox add-on](https://gitlab.com/conventionalcomments/conventional-comments-button) which you can use to apply [Conventional Comment](https://conventionalcomments.org/) prefixes.
-- Ensure there are no open dependencies. Check [linked issues](../user/project/issues/related_issues.md) for blockers. Clarify with the author(s)
+- Ensure there are no open dependencies. Check [linked issues](../user/project/issues/related_issues.md) for blockers. Clarify with the authors
if necessary. If blocked by one or more open MRs, set an [MR dependency](../user/project/merge_requests/merge_request_dependencies.md).
- After a round of line notes, it can be helpful to post a summary note such as
"Looks good to me", or "Just a couple things to address."
@@ -696,10 +697,10 @@ Properties of customer critical merge requests:
- The [VP of Development](https://about.gitlab.com/job-families/engineering/development/management/vp/) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the DRI for deciding if a merge request qualifies as customer critical.
- The DRI applies the `customer-critical-merge-request` label to the merge request.
-- It is required that the reviewer(s) and maintainer(s) involved with a customer critical merge request are engaged as soon as this decision is made.
+- It is required that the reviewers and maintainers involved with a customer critical merge request are engaged as soon as this decision is made.
- It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
- It is required to adhere to GitLab [values](https://about.gitlab.com/handbook/values/) and processes when working on customer critical merge requests, taking particular note of family and friends first/work second, definition of done, iteration, and release when it's ready.
-- Customer critical merge requests are required to not reduce security, introduce data-loss risk, reduce availability, nor break existing functionality per the process for [prioritizing technical decisions](https://about.gitlab.com/handbook/engineering/principles/#prioritizing-technical-decisions).
+- Customer critical merge requests are required to not reduce security, introduce data-loss risk, reduce availability, nor break existing functionality per the process for [prioritizing technical decisions](https://about.gitlab.com/handbook/engineering/development/principles/#prioritizing-technical-decisions).
- On customer critical requests, it is _recommended_ that those involved _consider_ coordinating synchronously (Zoom, Slack) in addition to asynchronously (merge requests comments) if they believe this may reduce the elapsed time to merge even though this _may_ sacrifice [efficiency](https://about.gitlab.com/company/culture/all-remote/asynchronous/#evaluating-efficiency.md).
- After a customer critical merge request is merged, a retrospective must be completed with the intention of reducing the frequency of future customer critical merge requests.
diff --git a/doc/development/contributing/design.md b/doc/development/contributing/design.md
index def39a960d8..7f5c800216a 100644
--- a/doc/development/contributing/design.md
+++ b/doc/development/contributing/design.md
@@ -117,7 +117,7 @@ At any moment, but usually _during_ or _after_ the design's implementation:
for additions or enhancements to the design system.
- Create issues with the [`~UX debt`](issue_workflow.md#technical-and-ux-debt)
label for intentional deviations from the agreed-upon UX requirements due to
- time or feasibility challenges, linking back to the corresponding issue(s) or
- MR(s).
+ time or feasibility challenges, linking back to the corresponding issues or
+ merge requests.
- Create issues for [feature additions or enhancements](issue_workflow.md#feature-proposals)
outside the agreed-upon UX requirements to avoid scope creep.
diff --git a/doc/development/contributing/index.md b/doc/development/contributing/index.md
index ea54f36a7e5..8a4b06840a4 100644
--- a/doc/development/contributing/index.md
+++ b/doc/development/contributing/index.md
@@ -33,9 +33,8 @@ GitLab Inc engineers should refer to the [engineering workflow document](https:/
## Security vulnerability disclosure
-Report suspected security vulnerabilities in private to
-`support@gitlab.com`, also see the
-[disclosure section on the GitLab.com website](https://about.gitlab.com/security/disclosure/).
+Report suspected security vulnerabilities by following the
+[disclosure process on the GitLab.com website](https://about.gitlab.com/security/disclosure/).
WARNING:
Do **NOT** create publicly viewable issues for suspected security vulnerabilities.
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index fe1549e7f34..97c8c179e09 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -108,7 +108,7 @@ Group labels specify which [groups](https://about.gitlab.com/company/team/struct
It's highly recommended to add a group label, as it's used by our triage
automation to
-[infer the correct stage label](https://about.gitlab.com/handbook/engineering/quality/triage-operations/#auto-labelling-of-issues).
+[infer the correct stage label](https://about.gitlab.com/handbook/engineering/quality/triage-operations/#auto-labelling-of-issues-and-merge-requests).
#### Naming and color convention
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index 5ed0885eed9..ee1ed744cd4 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -53,7 +53,7 @@ request is as follows:
1. If you have multiple commits, combine them into a few logically organized
commits by [squashing them](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#_squashing),
but do not change the commit history if you're working on shared branches though.
-1. Push the commit(s) to your working branch in your fork.
+1. Push the commits to your working branch in your fork.
1. Submit a merge request (MR) to the `main` branch in the main GitLab project.
1. Your merge request needs at least 1 approval, but depending on your changes
you might need additional approvals. Refer to the [Approval guidelines](../code_review.md#approval-guidelines).
@@ -65,7 +65,7 @@ request is as follows:
template already provided in the "Description" field.
1. If you are contributing documentation, choose `Documentation` from the
"Choose a template" menu and fill in the description according to the template.
- 1. Use the syntax `Solves #XXX`, `Closes #XXX`, or `Refs #XXX` to mention the issue(s) your merge
+ 1. Use the syntax `Solves #XXX`, `Closes #XXX`, or `Refs #XXX` to mention the issues your merge
request addresses. Referenced issues do not [close automatically](../../user/project/issues/managing_issues.md#closing-issues-automatically).
You must close them manually once the merge request is merged.
1. The MR must include *Before* and *After* screenshots if UI changes are made.
@@ -81,6 +81,7 @@ request is as follows:
1. If your MR touches code that executes shell commands, reads or opens files, or
handles paths to files on disk, make sure it adheres to the
[shell command guidelines](../shell_commands.md)
+1. [Code changes should include observability instrumentation](../code_review.md#observability-instrumentation).
1. If your code needs to handle file storage, see the [uploads documentation](../uploads/index.md).
1. If your merge request adds one or more migrations, make sure to execute all
migrations on a fresh database before the MR is reviewed. If the review leads
diff --git a/doc/development/contributing/verify/index.md b/doc/development/contributing/verify/index.md
index 828eb0a9598..01aacffd00f 100644
--- a/doc/development/contributing/verify/index.md
+++ b/doc/development/contributing/verify/index.md
@@ -231,5 +231,5 @@ building medical, aviation, and automotive software. Continuous Integration is
a mission critical part of software engineering.
When you are working on a subsystem for pipeline processing and transitioning
-CI/CD statuses, request an additional review from a domain expert and hold
-others accountable for doing the same.
+CI/CD statuses, request an additional opinion on the design from a domain expert
+as early as possible and hold others accountable for doing the same.
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index f941e0720c6..003df4fe078 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -66,7 +66,7 @@ continue to apply. However, there are a few things that deserve special emphasis
Danger is a powerful tool and flexible tool, but not always the most appropriate
way to solve a given problem or workflow.
-First, be aware of the GitLab [commitment to dogfooding](https://about.gitlab.com/handbook/engineering/principles/#dogfooding).
+First, be aware of the GitLab [commitment to dogfooding](https://about.gitlab.com/handbook/engineering/development/principles/#dogfooding).
The code we write for Danger is GitLab-specific, and it **may not** be most
appropriate place to implement functionality that addresses a need we encounter.
Our users, customers, and even our own satellite projects, such as [Gitaly](https://gitlab.com/gitlab-org/gitaly),
@@ -155,7 +155,7 @@ To enable the Dangerfile on another existing GitLab project, complete the follow
file:
- '/ci/danger-review.yml'
rules:
- - if: '$CI_SERVER_HOST == "gitlab.com"'
+ - if: $CI_SERVER_HOST == "gitlab.com"
```
1. If your project is in the `gitlab-org` group, you don't need to set up any token as the `DANGER_GITLAB_API_TOKEN`
@@ -196,10 +196,11 @@ is not shared to forks.
Contributors can configure Danger for their forks with the following steps:
-1. Add an [environment variable](../ci/variables/index.md) called `DANGER_GITLAB_API_TOKEN` with a
-[personal API token](https://gitlab.com/-/profile/personal_access_tokens?name=GitLab+Dangerbot&scopes=api)
-to your fork that has the `api` scope set.
-1. Making the variable [masked](../ci/variables/index.md#mask-a-cicd-variable) makes sure it
-doesn't show up in the job logs. The variable cannot be
-[protected](../ci/variables/index.md#protect-a-cicd-variable), as it needs
-to be present for all feature branches.
+1. Create a [personal API token](https://gitlab.com/-/profile/personal_access_tokens?name=GitLab+Dangerbot&scopes=api)
+ that has the `api` scope set (don't forget to copy it to the clipboard).
+1. In your fork, add a [project CI/CD variable](../ci/variables/index.md#add-a-cicd-variable-to-a-project)
+ called `DANGER_GITLAB_API_TOKEN` with the token copied in the previous step.
+1. Make the variable [masked](../ci/variables/index.md#mask-a-cicd-variable) so it
+ doesn't show up in the job logs. The variable cannot be
+ [protected](../ci/variables/index.md#protected-cicd-variables), because it needs
+ to be present for all branches.
diff --git a/doc/development/database/avoiding_downtime_in_migrations.md b/doc/development/database/avoiding_downtime_in_migrations.md
index ad2768397e6..3cf9ab1ab5c 100644
--- a/doc/development/database/avoiding_downtime_in_migrations.md
+++ b/doc/development/database/avoiding_downtime_in_migrations.md
@@ -68,10 +68,72 @@ In this example, the change to ignore the column went into release 12.5.
Continuing our example, dropping the column goes into a _post-deployment_ migration in release 12.6:
+Start by creating the **post-deployment migration**:
+
+```shell
+bundle exec rails g post_deployment_migration remove_users_updated_at_column
+```
+
+There are two scenarios that you need to consider
+to write a migration that removes a column:
+
+#### A. The removed column has no indexes or constraints that belong to it
+
+In this case, a **transactional migration** can be used. Something as simple as:
+
+```ruby
+class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.0]
+ def up
+ remove_column :users, :updated_at
+ end
+
+ def down
+ add_column :users, :updated_at, :datetime
+ end
+end
+```
+
+You can consider [enabling lock retries](
+https://docs.gitlab.com/ee/development/migration_style_guide.html#usage-with-transactional-migrations
+) when you run a migration on large tables, because it might take some time to
+acquire a lock on the table.
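+
+For example, building on the migration above, lock retries can be enabled with the
+`enable_lock_retries!` helper. This is a sketch only; check the linked Migration Style Guide
+section for the authoritative usage:
+
+```ruby
+class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[2.0]
+  # Retry acquiring the lock with increasing wait times instead of
+  # blocking other queries on a busy table.
+  enable_lock_retries!
+
+  def up
+    remove_column :users, :updated_at
+  end
+
+  def down
+    add_column :users, :updated_at, :datetime
+  end
+end
+```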
+
+#### B. The removed column has an index or constraint that belongs to it
+
+If the `down` method requires adding back any dropped indexes or constraints, which cannot
+be done within a transactional migration, then the migration would look like this:
+
```ruby
- remove_column :user, :updated_at
+class RemoveUsersUpdatedAtColumn < Gitlab::Database::Migration[1.0]
+ disable_ddl_transaction!
+
+ def up
+ remove_column :users, :updated_at
+ end
+
+ def down
+ unless column_exists?(:users, :updated_at)
+ add_column :users, :updated_at, :datetime
+ end
+
+    # Make sure to add back any indexes or constraints
+ # that were dropped in the `up` method. For example:
+ add_concurrent_index(:users, :updated_at)
+ end
+end
```
+
+In the `down` method, we check to see if the column already exists before adding it again.
+We do this because the migration is non-transactional and might have failed while it was running.
+
+The [`disable_ddl_transaction!`](
+https://docs.gitlab.com/ee/development/migration_style_guide.html#usage-with-non-transactional-migrations-disable_ddl_transaction
+) helper disables the transaction that wraps the whole migration.
+
+You can refer to the page [Migration Style Guide](
+https://docs.gitlab.com/ee/development/migration_style_guide.html
+) for more information about database migrations.
+
### Step 3: Removing the ignore rule (release M+2)
With the next release, in this example 12.7, we set up another merge request to remove the ignore rule.
@@ -272,7 +334,7 @@ Renaming a table is possible without downtime by following our multi-release
Adding foreign keys usually works in 3 steps:
1. Start a transaction
-1. Run `ALTER TABLE` to add the constraint(s)
+1. Run `ALTER TABLE` to add the constraints
1. Check all existing data
Because `ALTER TABLE` typically acquires an exclusive lock until the end of a
diff --git a/doc/development/database/background_migrations.md b/doc/development/database/background_migrations.md
index 1f7e0d76c89..80ba0336bda 100644
--- a/doc/development/database/background_migrations.md
+++ b/doc/development/database/background_migrations.md
@@ -7,7 +7,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Background migrations
WARNING:
-Background migrations are strongly discouraged in favor of the new [batched background migrations framework](../batched_background_migrations.md).
+Background migrations are strongly discouraged in favor of the new [batched background migrations framework](batched_background_migrations.md).
Please check that documentation and determine if that framework suits your needs and fall back
to these only if required.
@@ -45,13 +45,17 @@ into this category.
## Isolation
Background migrations must be isolated and can not use application code (for example,
-models defined in `app/models`). Since these migrations can take a long time to
-run it's possible for new versions to be deployed while they are still running.
+models defined in `app/models` except the `ApplicationRecord` classes). Since these migrations
+can take a long time to run it's possible for new versions to be deployed while they are still running.
It's also possible for different migrations to be executed at the same time.
This means that different background migrations should not migrate data in a
way that would cause conflicts.
+## Accessing data for multiple databases
+
+See [Accessing data for multiple databases of Batched Background Migrations](batched_background_migrations.md#accessing-data-for-multiple-databases) for more details.
+
## Idempotence
Background migrations are executed in a context of a Sidekiq process.
@@ -190,7 +194,7 @@ class:
```ruby
class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
- class Integration < ActiveRecord::Base
+ class Integration < ::ApplicationRecord
self.table_name = 'integrations'
end
@@ -214,7 +218,7 @@ created and updated integrations. We can do this using something along the lines
the following:
```ruby
-class Integration < ActiveRecord::Base
+class Integration < ::ApplicationRecord
after_commit :schedule_integration_migration, on: :update
after_commit :schedule_integration_migration, on: :create
diff --git a/doc/development/database/batched_background_migrations.md b/doc/development/database/batched_background_migrations.md
new file mode 100644
index 00000000000..3a0fa77eff9
--- /dev/null
+++ b/doc/development/database/batched_background_migrations.md
@@ -0,0 +1,371 @@
+---
+type: reference, dev
+stage: Enablement
+group: Database
+info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
+---
+
+# Batched background migrations
+
+Batched Background Migrations should be used to perform data migrations whenever a
+migration exceeds [the time limits](../migration_style_guide.md#how-long-a-migration-should-take)
+in our guidelines. For example, you can use batched background
+migrations to migrate data that's stored in a single JSON column
+to a separate table instead.
+
+## When to use batched background migrations
+
+Use a batched background migration when you migrate _data_ in tables containing
+so many rows that the process would exceed
+[the time limits in our guidelines](../migration_style_guide.md#how-long-a-migration-should-take)
+if performed using a regular Rails migration.
+
+- Batched background migrations should be used when migrating data in
+ [high-traffic tables](../migration_style_guide.md#high-traffic-tables).
+- Batched background migrations may also be used when executing numerous single-row queries
+ for every item on a large dataset. Typically, for single-record patterns, runtime is
+ largely dependent on the size of the dataset. Split the dataset accordingly,
+ and put it into background migrations.
+- Don't use batched background migrations to perform schema migrations.
+
+Background migrations can help when:
+
+- Migrating events from one table to multiple separate tables.
+- Populating one column based on JSON stored in another column.
+- Migrating data that depends on the output of external services. (For example, an API.)
+
+NOTE:
+If the batched background migration is part of an important upgrade, it must be announced
+in the release post. Discuss with your Project Manager if you're unsure if the migration falls
+into this category.
+
+## Isolation
+
+Batched background migrations must be isolated and cannot use application code (for example,
+models defined in `app/models` except the `ApplicationRecord` classes).
+Because these migrations can take a long time to run, it's possible
+for new versions to deploy while the migrations are still running.
+
+## Accessing data for multiple databases
+
+Unlike regular migrations, batched background migrations have access to multiple databases
+and can be used to efficiently access and update data across them. To indicate which
+database to use, define an ActiveRecord model inline in the migration code.
+The model should inherit from the correct [`ApplicationRecord`](multiple_databases.md#gitlab-schema)
+for the database where the table is located. Using `ActiveRecord::Base` is disallowed,
+because it does not explicitly describe which database is used to access the given table.
+
+```ruby
+# good
+class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
+ class Project < ::ApplicationRecord
+ self.table_name = 'projects'
+ end
+
+ class Build < ::Ci::ApplicationRecord
+ self.table_name = 'ci_builds'
+ end
+end
+
+# bad
+class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
+ class Project < ActiveRecord::Base
+ self.table_name = 'projects'
+ end
+
+ class Build < ActiveRecord::Base
+ self.table_name = 'ci_builds'
+ end
+end
+```
+
+Similarly, the use of `ActiveRecord::Base.connection` is disallowed.
+Prefer the connection of the relevant model instead.
+
+```ruby
+# good
+Project.connection.execute("SELECT * FROM projects")
+
+# acceptable
+ApplicationRecord.connection.execute("SELECT * FROM projects")
+
+# bad
+ActiveRecord::Base.connection.execute("SELECT * FROM projects")
+```
+
+## Idempotence
+
+Batched background migrations are executed in the context of a Sidekiq process.
+The usual Sidekiq rules apply, especially the rule that jobs should be small
+and idempotent. Make sure that data integrity is guaranteed even when your
+migration job is retried.
+
+See [Sidekiq best practices guidelines](https://github.com/mperham/sidekiq/wiki/Best-Practices)
+for more details.
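+
+As a rough illustration (the job class and column names are hypothetical, and the
+`BatchedMigrationJob` helpers are shown in the example later on this page), prefer absolute
+assignments over relative updates so that a retried job produces the same final state:
+
+```ruby
+class Gitlab::BackgroundMigration::BackfillNormalizedNames < BatchedMigrationJob
+  def perform
+    each_sub_batch(operation_name: :update_all) do |sub_batch|
+      # Idempotent: assigning an absolute value yields the same result,
+      # no matter how many times the job is retried.
+      sub_batch.update_all("normalized_name = LOWER(name)")
+
+      # Not idempotent (avoid): a retried job would apply the increment again.
+      # sub_batch.update_all("update_count = update_count + 1")
+    end
+  end
+end
+```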
+
+## Batched background migrations for EE-only features
+
+All the background migration classes for EE-only features should be present in GitLab CE.
+For this purpose, create an empty class for GitLab CE, and extend it for GitLab EE
+as explained in the guidelines for
+[implementing Enterprise Edition features](../ee_features.md#code-in-libgitlabbackground_migration).
+
+Batched Background migrations are simple classes that define a `perform` method. A
+Sidekiq worker then executes such a class, passing any arguments to it. All
+migration classes must be defined in the namespace
+`Gitlab::BackgroundMigration`. Place the files in the directory
+`lib/gitlab/background_migration/`.
+
+## Queueing
+
+Queueing a batched background migration should be done in a post-deployment
+migration. Use this `queue_batched_background_migration` example, queueing the
+migration to be executed in batches. Replace the class name and arguments with the values
+from your migration:
+
+```ruby
+queue_batched_background_migration(
+ JOB_CLASS_NAME,
+ TABLE_NAME,
+  JOB_ARGUMENTS,
+  job_interval: JOB_INTERVAL
+)
+```
+
+Make sure the newly created data is either migrated, or
+saved in both the old and new version upon creation. Removals in
+turn can be handled by defining foreign keys with cascading deletes.
+
+### Requeuing batched background migrations
+
+If one of the batched background migrations contains a bug that is fixed in a patch
+release, you must requeue the batched background migration so the migration
+repeats on systems that already performed the initial migration.
+
+When you requeue the batched background migration, turn the original
+queuing into a no-op by clearing the `#up` and `#down` methods of the
+migration that performed the original queuing. Otherwise, the batched background
+migration is queued multiple times on systems that are upgrading multiple patch
+releases at once.
+
+When you start the second post-deployment migration, delete the
+previously batched migration with the provided code:
+
+```ruby
+Gitlab::Database::BackgroundMigration::BatchedMigration
+ .for_configuration(MIGRATION_NAME, TABLE_NAME, COLUMN, JOB_ARGUMENTS)
+ .delete_all
+```
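+
+Putting the pieces of this section together, a requeuing post-deployment migration might look
+like the following sketch. The migration class, table, and interval are illustrative only:
+
+```ruby
+class RequeueExtractWidgetsUrl < Gitlab::Database::Migration[1.0]
+  disable_ddl_transaction!
+
+  MIGRATION = 'ExtractWidgetsUrl'
+  DELAY_INTERVAL = 2.minutes
+
+  def up
+    # Delete the rows queued by the original (now no-op) migration.
+    Gitlab::Database::BackgroundMigration::BatchedMigration
+      .for_configuration(MIGRATION, :widgets, :id, [])
+      .delete_all
+
+    # Queue the fixed batched background migration again.
+    queue_batched_background_migration(
+      MIGRATION,
+      :widgets,
+      :id,
+      job_interval: DELAY_INTERVAL
+    )
+  end
+
+  def down
+    Gitlab::Database::BackgroundMigration::BatchedMigration
+      .for_configuration(MIGRATION, :widgets, :id, [])
+      .delete_all
+  end
+end
+```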
+
+## Cleaning up
+
+NOTE:
+Cleaning up any remaining background migrations must be done in either a major
+or minor release. You must not do this in a patch release.
+
+Because background migrations can take a long time, you can't immediately clean
+things up after queueing them. For example, you can't drop a column used in the
+migration process, as jobs would fail. You must add a separate _post-deployment_
+migration in a future release that finishes any remaining
+jobs before cleaning things up. (For example, removing a column.)
+
+To migrate the data from column `foo` (containing a big JSON blob) to column `bar`
+(containing a string), you would:
+
+1. Release A:
+ 1. Create a migration class that performs the migration for a row with a given ID.
+ 1. Update new rows using one of these techniques:
+ - Create a new trigger for simple copy operations that don't need application logic.
+      - Handle this operation in the model/service as the records are created or updated (see the sketch after this list).
+ - Create a new custom background job that updates the records.
+ 1. Queue the batched background migration for all existing rows in a post-deployment migration.
+1. Release B:
+ 1. Add a post-deployment migration that checks if the batched background migration is completed.
+   1. Deploy code so that the application starts using the new column and stops updating the old column.
+ 1. Remove the old column.
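+
+For the model/service option mentioned in release A, a minimal sketch could keep both columns
+in sync while the batched background migration backfills existing rows. The model name and the
+extracted JSON key used here are hypothetical:
+
+```ruby
+class Thing < ApplicationRecord
+  # Keep `bar` populated for new and updated rows while the batched
+  # background migration backfills `bar` from `foo` for existing rows.
+  before_save :copy_foo_to_bar
+
+  private
+
+  def copy_foo_to_bar
+    self.bar = Gitlab::Json.parse(foo)['title'] if foo.present?
+  end
+end
+```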
+
+A bump to the [import/export version](../../user/project/settings/import_export.md) may
+be required if importing a project from a prior version of GitLab requires the
+data to be in the new format.
+
+## Example
+
+The `routes` table has a `source_type` field that's used for a polymorphic relationship.
+As part of a database redesign, we're removing the polymorphic relationship. One step of
+the work will be migrating data from the `source_id` column into a new singular foreign key.
+Because we intend to delete old rows later, there's no need to update them as part of the
+background migration.
+
+1. Start by defining our migration class, which should inherit
+ from `Gitlab::BackgroundMigration::BatchedMigrationJob`:
+
+ ```ruby
+ class Gitlab::BackgroundMigration::BackfillRouteNamespaceId < BatchedMigrationJob
+ # For illustration purposes, if we were to use a local model we could
+ # define it like below, using an `ApplicationRecord` as the base class
+ # class Route < ::ApplicationRecord
+ # self.table_name = 'routes'
+ # end
+
+ def perform
+ each_sub_batch(
+ operation_name: :update_all,
+ batching_scope: -> (relation) { relation.where("source_type <> 'UnusedType'") }
+ ) do |sub_batch|
+ sub_batch.update_all('namespace_id = source_id')
+ end
+ end
+ end
+ ```
+
+ NOTE:
+ Job classes must be subclasses of `BatchedMigrationJob` to be
+ correctly handled by the batched migration framework. Any subclass of
+ `BatchedMigrationJob` will be initialized with necessary arguments to
+ execute the batch, as well as a connection to the tracking database.
+ Additional `job_arguments` set on the migration will be passed to the
+ job's `perform` method.
+
+1. Add a new trigger to the database to update newly created and updated routes,
+ similar to this example:
+
+ ```ruby
+ execute(<<~SQL)
+ CREATE OR REPLACE FUNCTION example() RETURNS trigger
+ LANGUAGE plpgsql
+ AS $$
+ BEGIN
+        NEW."namespace_id" := NEW."source_id";
+ RETURN NEW;
+ END;
+ $$;
+ SQL
+ ```
+
+1. Create a post-deployment migration that queues the migration for existing data:
+
+ ```ruby
+ class QueueBackfillRoutesNamespaceId < Gitlab::Database::Migration[1.0]
+ disable_ddl_transaction!
+
+ MIGRATION = 'BackfillRouteNamespaceId'
+ DELAY_INTERVAL = 2.minutes
+
+ def up
+ queue_batched_background_migration(
+ MIGRATION,
+ :routes,
+ :id,
+ job_interval: DELAY_INTERVAL
+ )
+ end
+
+ def down
+ Gitlab::Database::BackgroundMigration::BatchedMigration
+ .for_configuration(MIGRATION, :routes, :id, []).delete_all
+ end
+ end
+ ```
+
+ After deployment, our application:
+ - Continues using the data as before.
+ - Ensures that both existing and new data are migrated.
+
+1. In the next release, remove the trigger. We must also add a new post-deployment migration
+ that checks that the batched background migration is completed. For example:
+
+ ```ruby
+ class FinalizeBackfillRouteNamespaceId < Gitlab::Database::Migration[1.0]
+ MIGRATION = 'BackfillRouteNamespaceId'
+ disable_ddl_transaction!
+
+ def up
+ ensure_batched_background_migration_is_finished(
+ job_class_name: MIGRATION,
+ table_name: :routes,
+ column_name: :id,
+ job_arguments: []
+ )
+ end
+
+ def down
+ # no-op
+ end
+ end
+ ```
+
+ If the application does not depend on the data being 100% migrated (for
+ instance, the data is advisory, and not mission-critical), then you can skip this
+ final step. This step confirms that the migration is completed, and all of the rows were migrated.
+
+After the batched migration is completed, you can safely depend on the
+data in `routes.namespace_id` being populated.
+
+## Testing
+
+Writing tests is required for:
+
+- The batched background migrations' queueing migration.
+- The batched background migration itself.
+- A cleanup migration.
+
+The `:migration` and `schema: :latest` RSpec tags are automatically set for
+background migration specs. Refer to the
+[Testing Rails migrations](../testing_guide/testing_migrations_guide.md#testing-a-non-activerecordmigration-class)
+style guide.
+
+Remember that `before` and `after` RSpec hooks
+migrate your database down and up. These hooks can result in other batched background
+migrations being called. Using `spy` test doubles with
+`have_received` is encouraged, instead of using regular test doubles, because
+your expectations defined in an `it` block can conflict with what is
+called in RSpec hooks. Refer to [issue #35351](https://gitlab.com/gitlab-org/gitlab/-/issues/18839)
+for more details.
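+
+As a rough sketch, a spec for the example job above might look like the following. The fixture
+attributes and the constructor keywords are illustrative and should be checked against the
+current `BatchedMigrationJob` interface:
+
+```ruby
+# The :migration and schema tags are applied automatically for specs placed
+# under spec/lib/gitlab/background_migration/.
+RSpec.describe Gitlab::BackgroundMigration::BackfillRouteNamespaceId do
+  let(:routes) { table(:routes) }
+
+  let!(:route) do
+    routes.create!(path: 'foo', name: 'foo', source_id: 42, source_type: 'Namespace')
+  end
+
+  subject(:perform_migration) do
+    described_class.new(
+      start_id: routes.minimum(:id),
+      end_id: routes.maximum(:id),
+      batch_table: :routes,
+      batch_column: :id,
+      sub_batch_size: 100,
+      pause_ms: 0,
+      connection: ApplicationRecord.connection
+    ).perform
+  end
+
+  it 'backfills namespace_id from source_id' do
+    perform_migration
+
+    expect(route.reload.namespace_id).to eq(route.source_id)
+  end
+end
+```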
+
+## Best practices
+
+1. Know how much data you're dealing with.
+1. Make sure the batched background migration jobs are idempotent.
+1. Confirm the tests you write are not false positives.
+1. If the data being migrated is critical and cannot be lost, the
+ clean-up migration must also check the final state of the data before completing.
+1. Discuss the numbers with a database specialist. The migration may add
+   more pressure on the database than you expect. Measure on staging,
+ or ask someone to measure on production.
+1. Know how much time is required to run the batched background migration.
+
+## Additional tips and strategies
+
+### Viewing failure error logs
+
+You can view failures in two ways:
+
+- Via GitLab logs:
+ 1. After running a batched background migration, if any jobs fail,
+ view the logs in [Kibana](https://log.gprd.gitlab.net/goto/5f06a57f768c6025e1c65aefb4075694).
+ View the production Sidekiq log and filter for:
+
+ - `json.new_state: failed`
+ - `json.job_class_name: <Batched Background Migration job class name>`
+ - `json.job_arguments: <Batched Background Migration job class arguments>`
+
+ 1. Review the `json.exception_class` and `json.exception_message` values to help
+ understand why the jobs failed.
+
+ 1. Remember the retry mechanism. Having a failure does not mean the job failed.
+ Always check the last status of the job.
+
+- Via database:
+
+ 1. Get the batched background migration `CLASS_NAME`.
+ 1. Execute the following query in the PostgreSQL console:
+
+ ```sql
+ SELECT migration.id, migration.job_class_name, transition_logs.exception_class, transition_logs.exception_message
+ FROM batched_background_migrations as migration
+ INNER JOIN batched_background_migration_jobs as jobs
+ ON jobs.batched_background_migration_id = migration.id
+ INNER JOIN batched_background_migration_job_transition_logs as transition_logs
+ ON transition_logs.batched_background_migration_job_id = jobs.id
+     WHERE transition_logs.next_status = '2' AND migration.job_class_name = 'CLASS_NAME';
+ ```
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index 2bcdc91202a..3db24793f1b 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -117,8 +117,8 @@ Showing cross-schema foreign keys (20):
18 | N | ci_job_token_project_scope_links | projects | target_project_id | cascade
19 | N | ci_project_monthly_usages | projects | project_id | cascade
-To match FK write one or many filters to match against FROM/TO/COLUMN:
-- scripts/decomposition/generate-loose-foreign-key <filter(s)...>
+To match foreign key (FK), write one or many filters to match against FROM/TO/COLUMN:
+- scripts/decomposition/generate-loose-foreign-key (filters...)
- scripts/decomposition/generate-loose-foreign-key ci_job_artifacts project_id
- scripts/decomposition/generate-loose-foreign-key dast_site_profiles_pipelines
```
@@ -593,7 +593,7 @@ Partitions: gitlab_partitions_dynamic.loose_foreign_keys_deleted_records_84 FOR
The `partition` column controls the insert direction, the `partition` value determines which
partition will get the deleted rows inserted via the trigger. Notice that the default value of
the `partition` table matches with the value of the list partition (84). In `INSERT` query
-within the trigger thevalue of the `partition` is omitted, the trigger always relies on the
+within the trigger the value of the `partition` is omitted, the trigger always relies on the
default value of the column.
Example `INSERT` query for the trigger:
@@ -605,7 +605,7 @@ SELECT TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME, old_table.id FROM old_table;
```
The partition "sliding" process is controlled by two, regularly executed callbacks. These
-callbackes are defined within the `LooseForeignKeys::DeletedRecord` model.
+callbacks are defined within the `LooseForeignKeys::DeletedRecord` model.
The `next_partition_if` callback controls when to create a new partition. A new partition will
be created when the current partition has at least one record older than 24 hours. A new partition
@@ -805,7 +805,7 @@ Possible solutions:
- Long-term: invoke the worker more frequently. Parallelize the worker
For a one-time fix, we can run the cleanup worker several times from the rails console. The worker
-can run parallelly however, this can introduce lock contention and it could increase the worker
+can run in parallel. However, this can introduce lock contention and could increase the worker
runtime.
```ruby
diff --git a/doc/development/database/migrations_for_multiple_databases.md b/doc/development/database/migrations_for_multiple_databases.md
index 0ec4612e985..ce326a6ce4a 100644
--- a/doc/development/database/migrations_for_multiple_databases.md
+++ b/doc/development/database/migrations_for_multiple_databases.md
@@ -33,7 +33,7 @@ Depending on the used constructs, we can classify migrations to be either:
Migrations cannot mix **DDL** and **DML** changes as the application requires the structure
(as described by `db/structure.sql`) to be exactly the same across all decomposed databases.
-### Data Definition Language (DDL)
+### Data Definition Language (DDL)
The DDL migrations are all migrations that:
@@ -43,7 +43,7 @@ The DDL migrations are all migrations that:
1. Add or remove a column with or without a default value (for example, `add_column`).
1. Create or drop trigger functions (for example, `create_trigger_function`).
1. Attach or detach triggers from tables (for example, `track_record_deletions`, `untrack_record_deletions`).
-1. Prepare or not async indexes (for example, `prepare_async_index`, `unprepare_async_index_by_name`).
+1. Prepare or unprepare asynchronous indexes (for example, `prepare_async_index`, `unprepare_async_index_by_name`).
As such DDL migrations **CANNOT**:
@@ -159,7 +159,7 @@ end
### The special purpose of `gitlab_shared`
-As described in [gitlab_schema](multiple_databases.md#the-special-purpose-of-gitlab_shared),
+As described in [`gitlab_schema`](multiple_databases.md#the-special-purpose-of-gitlab_shared),
the `gitlab_shared` tables are allowed to contain data across all databases. This implies
that such migrations should run across all databases to modify structure (DDL) or modify data (DML).
@@ -388,3 +388,32 @@ A Potential extension is to limit running DML migration only to specific environ
```ruby
restrict_gitlab_migration gitlab_schema: :gitlab_main, gitlab_env: :gitlab_com
```
+
+## Background migrations
+
+When you use:
+
+- Background migrations with `track_jobs` set to `true` or
+- Batched background migrations
+
+The migration has to write to a jobs table. All of the
+jobs tables used by background migrations are marked as `gitlab_shared`.
+You can use these migrations when migrating tables in any database.
+
+However, when queuing the batches, you must set `restrict_gitlab_migration` based on the
+table you are iterating over. If you are updating all `projects`, for example, then you would set
+`restrict_gitlab_migration gitlab_schema: :gitlab_main`. If, however, you are
+updating all `ci_pipelines`, you would set
+`restrict_gitlab_migration gitlab_schema: :gitlab_ci`.
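+
+For illustration, here is a minimal sketch of a queueing migration that iterates
+over `projects`. The job class `BackfillExampleColumn`, the batching column, and
+the interval are hypothetical placeholders:
+
+```ruby
+# Hypothetical post-deployment migration that queues a batched background
+# migration over the `projects` table, which belongs to the `gitlab_main` schema.
+class QueueBackfillExampleColumn < Gitlab::Database::Migration[1.0]
+  MIGRATION = 'BackfillExampleColumn' # hypothetical job class name
+
+  restrict_gitlab_migration gitlab_schema: :gitlab_main
+
+  def up
+    queue_batched_background_migration(
+      MIGRATION,
+      :projects,
+      :id,
+      job_interval: 2.minutes
+    )
+  end
+
+  def down
+    delete_batched_background_migration(MIGRATION, :projects, :id, [])
+  end
+end
+```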
+
+As with all DML migrations, you cannot query another database outside of
+`restrict_gitlab_migration` or `gitlab_shared`. If you need to query another database,
+you likely need to split the work into two separate migrations.
+
+Because the actual migration logic (not the queueing step) for background
+migrations runs in a Sidekiq worker, the logic can perform DML queries on
+tables in any database, just like any ordinary Sidekiq worker can.
+
+## How to determine `gitlab_schema` for a given table
+
+See [GitLab Schema](multiple_databases.md#gitlab-schema).
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 3b1b06b557c..c622d4f50ff 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -74,7 +74,14 @@ in GitLab 14.1. This feature is still under development, and is not ready for pr
### Configure single database
-By default, GDK is configured to run with multiple databases. To configure GDK to use a single database:
+By default, GDK is configured to run with multiple databases.
+
+WARNING:
+Switching back and forth between single and multiple databases in
+the same development instance is discouraged. Any data in the `ci`
+database is not accessible in single-database mode. To use a single
+database, use a separate development instance.
+
+To configure GDK to use a single database:
1. On the GDK root directory, run:
@@ -519,7 +526,7 @@ ci_build.update!(updated_at: Time.current) # CI DB
ci_build.project.update!(updated_at: Time.current) # Main DB
```
-##### Async processing
+##### Asynchronous processing
If we need more guarantee that an operation finishes the work consistently we can execute it
within a background job. A background job is scheduled asynchronously and retried several times
@@ -579,58 +586,6 @@ ensures that we forbid destroying the parent object if something is not cleaned
If all you need to do is clean up the child records themselves from PostgreSQL,
consider using [loose foreign keys](loose_foreign_keys.md).
-## `config/database.yml`
-
-GitLab is adding support to run multiple databases, for example to
-[separate tables for the continuous integration features](https://gitlab.com/groups/gitlab-org/-/epics/6167)
-from the main database. In order to prepare for this change, we
-[validate the structure of the configuration](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/67877)
-in `database.yml` to ensure that only known databases are used.
-
-Previously, the `config/database.yml` looked like this:
-
-```yaml
-production:
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_production
- ...
-```
-
-With the support for many databases this
-syntax is [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/338182)
-and will be removed in [15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/338182).
-
-The new `config/database.yml` needs to include a database name
-to define a database configuration. Only `main:` and `ci:` database
-names are supported. The `main:` database must always be a first
-entry in a hash. This change applies to decomposed and non-decomposed
-change. If an invalid or deprecated syntax is used the error
-or warning is printed during application start.
-
-```yaml
-# Non-decomposed database
-production:
- main:
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_production
- ...
-
-# Decomposed database
-production:
- main:
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_production
- ...
- ci:
- adapter: postgresql
- encoding: unicode
- database: gitlabhq_production_ci
- ...
-```
-
## Foreign keys that cross databases
There are many places where we use foreign keys that reference across the two
diff --git a/doc/development/database/pagination_guidelines.md b/doc/development/database/pagination_guidelines.md
index 3a772b10a6d..08840124535 100644
--- a/doc/development/database/pagination_guidelines.md
+++ b/doc/development/database/pagination_guidelines.md
@@ -172,7 +172,7 @@ From the user point of view, this might not be always noticeable. As the user pa
When requesting a large page number, the database needs to read `PAGE * PAGE_SIZE` rows. This makes offset pagination **unsuitable for large database tables**.
-Example: listing users on the Admin page
+Example: listing users on the Admin Area
Listing users with a very simple SQL query:
diff --git a/doc/development/database/strings_and_the_text_data_type.md b/doc/development/database/strings_and_the_text_data_type.md
index 4ed7cf1b4de..7aa529e1518 100644
--- a/doc/development/database/strings_and_the_text_data_type.md
+++ b/doc/development/database/strings_and_the_text_data_type.md
@@ -206,7 +206,7 @@ class ScheduleCapTitleLengthOnIssues < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
- class Issue < ActiveRecord::Base
+ class Issue < ::ApplicationRecord
include EachBatch
self.table_name = 'issues'
diff --git a/doc/development/database/table_partitioning.md b/doc/development/database/table_partitioning.md
index ec768136404..34cb73978bc 100644
--- a/doc/development/database/table_partitioning.md
+++ b/doc/development/database/table_partitioning.md
@@ -43,7 +43,7 @@ problem.
First, a table is partitioned on a partition key, which is a column or
set of columns which determine how the data will be split across the
partitions. The partition key is used by the database when reading or
-writing data, to decide which partition(s) need to be accessed. The
+writing data, to decide which partitions need to be accessed. The
partition key should be a column that would be included in a `WHERE`
clause on almost all queries accessing that table.
diff --git a/doc/development/deprecation_guidelines/index.md b/doc/development/deprecation_guidelines/index.md
index 08e29e373f6..cafc40ccc68 100644
--- a/doc/development/deprecation_guidelines/index.md
+++ b/doc/development/deprecation_guidelines/index.md
@@ -21,8 +21,6 @@ deprecated.
## When can a feature be deprecated?
-A feature can be deprecated at any time, provided there is a viable alternative.
-
Deprecations should be announced on the [Deprecated feature removal schedule](../../update/deprecations.md).
For steps to create a deprecation entry, see [Deprecations](https://about.gitlab.com/handbook/marketing/blog/release-posts/#deprecations).
@@ -37,3 +35,52 @@ For API removals, see the [GraphQL](../../api/graphql/index.md#deprecation-and-r
For configuration removals, see the [Omnibus deprecation policy](../../administration/package_information/deprecation_policy.md).
For versioning and upgrade details, see our [Release and Maintenance policy](../../policy/maintenance.md).
+
+## Update the deprecations and removals documentation
+
+The [deprecations](../../update/deprecations.md) and [removals](../../update/removals.md)
+documentation is generated from the YAML files located in
+[`gitlab/data/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data).
+
+To update the deprecations and removals pages when an entry is added,
+edited, or removed:
+
+1. From the command line, navigate to your local clone of the [`gitlab-org/gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
+1. Create, edit, or remove the YAML file under [deprecations](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/deprecations)
+ or [removals](https://gitlab.com/gitlab-org/gitlab/-/tree/master/data/removals).
+1. Compile the deprecation or removals documentation with the appropriate command:
+
+ - For deprecations:
+
+ ```shell
+ bin/rake gitlab:docs:compile_deprecations
+ ```
+
+ - For removals:
+
+ ```shell
+ bin/rake gitlab:docs:compile_removals
+ ```
+
+1. If needed, you can verify the docs are up to date with:
+
+ - For deprecations:
+
+ ```shell
+ bin/rake gitlab:docs:check_deprecations
+ ```
+
+ - For removals:
+
+ ```shell
+ bin/rake gitlab:docs:check_removals
+ ```
+
+1. Commit the updated documentation and push the changes.
+1. Create a merge request using the [Deprecations](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/Deprecations.md)
+ or [Removals](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/Removals.md) templates.
+
+Related Handbook pages:
+
+- <https://about.gitlab.com/handbook/marketing/blog/release-posts/#deprecations-removals-and-breaking-changes>
+- <https://about.gitlab.com/handbook/marketing/blog/release-posts/#update-the-deprecations-and-removals-docs>
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index 680ac71f857..b4f347449cc 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -71,9 +71,7 @@ GITLAB_TRACING=opentracing://<driver>?<param_name>=<param_value>&<param_name_2>=
In this example, we have the following hypothetical values:
-- `driver`: the driver. [GitLab supports
- `jaeger`](../operations/tracing.md). In future, other
- tracing implementations may also be supported.
+- `driver`: the driver, such as `jaeger`.
- `param_name`, `param_value`: these are driver specific configuration values. Configuration
parameters for Jaeger are documented [further on in this
document](#2-configure-the-gitlab_tracing-environment-variable) they should be URL encoded.
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index fb58851e93f..c5ea1985fc7 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -45,7 +45,7 @@ You can combine entries if they happened in the same release:
## Use a note to describe the state of the feature flag
-Information about feature flags should be in a **Note** at the start of the topic (just below the version history).
+Information about feature flags should be in a `FLAG` note at the start of the topic (just below the version history).
The note has three parts, and follows this structure:
@@ -62,6 +62,7 @@ FLAG:
|--------------------------|---------------|
| Available | `On self-managed GitLab, by default this feature is available. To hide the feature, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
| Unavailable | `On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
+| Available to some users | `On self-managed GitLab, by default this feature is available to a subset of users. To show or hide the feature for all, ask an administrator to [change the status of the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
| Available, per-group | `On self-managed GitLab, by default this feature is available. To hide the feature per group, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
| Unavailable, per-group | `On self-managed GitLab, by default this feature is not available. To make it available per group, ask an administrator to [enable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
| Available, per-project | `On self-managed GitLab, by default this feature is available. To hide the feature per project or for your entire instance, ask an administrator to [disable the feature flag](<path to>/administration/feature_flags.md) named <flag name>.` |
@@ -71,11 +72,11 @@ FLAG:
### GitLab.com availability information
-| If the feature is... | Use this text |
-|-------------------------------------|---------------|
-| Available | `On GitLab.com, this feature is available.` |
-| Available to GitLab.com admins only | `On GitLab.com, this feature is available but can be configured by GitLab.com administrators only.`
-| Unavailable | `On GitLab.com, this feature is not available.`|
+| If the feature is... | Use this text |
+|---------------------------------------------|---------------|
+| Available | `On GitLab.com, this feature is available.` |
+| Available to GitLab.com administrators only | `On GitLab.com, this feature is available but can be configured by GitLab.com administrators only.`
+| Unavailable | `On GitLab.com, this feature is not available.`|
### Optional information
diff --git a/doc/development/documentation/restful_api_styleguide.md b/doc/development/documentation/restful_api_styleguide.md
index 8a505ed84a8..0a24f9b67be 100644
--- a/doc/development/documentation/restful_api_styleguide.md
+++ b/doc/development/documentation/restful_api_styleguide.md
@@ -26,11 +26,12 @@ In the Markdown doc for a resource (AKA endpoint):
GET /projects/:id/repository/branches
```
-- Every method must have a detailed [description of the parameters](#method-description).
+- Every method must have a detailed [description of the attributes](#method-description).
- Every method must have a cURL example.
-- Every method must have a response body (in JSON format).
+- Every method must have a detailed [description of the response body](#response-body-description).
+- Every method must have a response body example (in JSON format).
- If an attribute is available only to higher level tiers than the other
- parameters, add the appropriate inline [tier badge](styleguide/index.md#product-tier-badges).
+ attributes, add the appropriate inline [tier badge](styleguide/index.md#product-tier-badges).
Put the badge in the **Attribute** column, like the
`**(<tier>)**` code in the following template.
@@ -59,6 +60,13 @@ Supported attributes:
| `attribute` | datatype | **{dotted-circle}** No | Detailed description. |
| `attribute` | datatype | **{dotted-circle}** No | Detailed description. |
+Response body attributes:
+
+| Attribute | Type | Description |
+|:-------------------------|:---------|:----------------------|
+| `attribute` | datatype | Detailed description. |
+| `attribute` **(<tier>)** | datatype | Detailed description. |
+
Example request:
```shell
@@ -75,7 +83,7 @@ Example response:
```
````
-Adjust the [version history note accordingly](styleguide/index.md#version-text-in-the-version-history)
+Adjust the [version history note accordingly](versions.md#add-a-version-history-item)
to describe the GitLab release that introduced the API call.
## Method description
@@ -86,23 +94,51 @@ always be in code blocks using backticks (`` ` ``).
Sort the table by required attributes first, then alphabetically.
```markdown
-| Attribute | Type | Required | Description |
-|:-----------------------------|:--------------|:-----------------------|:-----------------------------------------------------|
+| Attribute | Type | Required | Description |
+|:-----------------------------|:--------------|:-----------------------|:----------------------------------------------------|
| `title` | string | **{check-circle}** Yes | Title of the issue. |
-| `assignee_ids` **(PREMIUM)** | integer array | **{dotted-circle}** No | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM)** | integer array | **{dotted-circle}** No | IDs of the users to assign the issue to. |
| `confidential` | boolean | **{dotted-circle}** No | Sets the issue to confidential. Default is `false`. |
```
Rendered example:
-| Attribute | Type | Required | Description |
-|:-----------------------------|:--------------|:-----------------------|:-----------------------------------------------------|
+| Attribute | Type | Required | Description |
+|:-----------------------------|:--------------|:-----------------------|:----------------------------------------------------|
| `title` | string | **{check-circle}** Yes | Title of the issue. |
-| `assignee_ids` **(PREMIUM)** | integer array | **{dotted-circle}** No | IDs of the users to assign the issue to. |
+| `assignee_ids` **(PREMIUM)** | integer array | **{dotted-circle}** No | IDs of the users to assign the issue to. |
| `confidential` | boolean | **{dotted-circle}** No | Sets the issue to confidential. Default is `false`. |
For information about writing attribute descriptions, see the [GraphQL API description style guide](../api_graphql_styleguide.md#description-style-guide).
+## Response body description
+
+Use the following table headers to describe the response bodies. Attributes should
+always be in code blocks using backticks (`` ` ``).
+
+If the attribute is a complex type, like another object, represent sub-attributes
+with dots (`.`), like `project.name`, or `projects[].name` in the case of an array.
+
+Sort the table alphabetically.
+
+```markdown
+| Attribute | Type | Description |
+|:-----------------------------|:--------------|:------------------------------------------|
+| `assignee_ids` **(PREMIUM)** | integer array | IDs of the users to assign the issue to. |
+| `confidential` | boolean | Whether the issue is confidential or not. |
+| `title` | string | Title of the issue. |
+```
+
+Rendered example:
+
+| Attribute | Type | Description |
+|:-----------------------------|:--------------|:------------------------------------------|
+| `assignee_ids` **(PREMIUM)** | integer array | IDs of the users to assign the issue to. |
+| `confidential` | boolean | Whether the issue is confidential or not. |
+| `title` | string | Title of the issue. |
+
+For information about writing attribute descriptions, see the [GraphQL API description style guide](../api_graphql_styleguide.md#description-style-guide).
+
## cURL commands
- Use `https://gitlab.example.com/api/v4/` as an endpoint.
@@ -116,9 +152,9 @@ For information about writing attribute descriptions, see the [GraphQL API descr
| Methods | Description |
|:------------------------------------------------|:-------------------------------------------------------|
| `--header "PRIVATE-TOKEN: <your_access_token>"` | Use this method as is, whenever authentication needed. |
-| `--request POST` | Use this method when creating new objects |
-| `--request PUT` | Use this method when updating existing objects |
-| `--request DELETE` | Use this method when removing existing objects |
+| `--request POST` | Use this method when creating new objects. |
+| `--request PUT` | Use this method when updating existing objects. |
+| `--request DELETE` | Use this method when removing existing objects. |
## cURL Examples
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index bdda15e2064..3566ab82379 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -22,25 +22,29 @@ from where content is sourced, the `gitlab-docs` project, and the published outp
```mermaid
graph LR
- A[gitlab/doc]
- B[gitlab-runner/docs]
- C[omnibus-gitlab/doc]
- D[charts/doc]
- E[gitlab-docs]
- A --> E
- B --> E
- C --> E
- D --> E
- E -- Build pipeline --> F
- F[docs.gitlab.com]
- H[/ee/]
- I[/runner/]
- J[/omnibus/]
- K[/charts/]
- F --> H
- F --> I
- F --> J
- F --> K
+ A[gitlab-org/gitlab/doc]
+ B[gitlab-org/gitlab-runner/docs]
+ C[gitlab-org/omnibus-gitlab/doc]
+ D[gitlab-org/charts/gitlab/doc]
+ E[gitlab-org/cloud-native/gitlab-operator/doc]
+ Y[gitlab-org/gitlab-docs]
+ A --> Y
+ B --> Y
+ C --> Y
+ D --> Y
+ E --> Y
+ Y -- Build pipeline --> Z
+ Z[docs.gitlab.com]
+ M[//ee/]
+ N[//runner/]
+ O[//omnibus/]
+ P[//charts/]
+ Q[//operator/]
+ Z --> M
+ Z --> N
+ Z --> O
+ Z --> P
+ Z --> Q
```
GitLab docs content isn't kept in the `gitlab-docs` repository.
@@ -48,9 +52,10 @@ All documentation files are hosted in the respective repository of each
product, and all together are pulled to generate the docs website:
- [GitLab](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc)
-- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc)
+- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc)
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs)
-- [GitLab Chart](https://gitlab.com/charts/gitlab/tree/master/doc)
+- [GitLab Chart](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc)
+- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc)
Learn more about [the docs folder structure](folder_structure.md).
@@ -231,31 +236,9 @@ If you don't specify `editor:`, the simple one is used by default.
## Algolia search engine
The docs site uses [Algolia DocSearch](https://community.algolia.com/docsearch/)
-for its search function. This is how it works:
-
-1. GitLab is a member of the [DocSearch program](https://community.algolia.com/docsearch/#join-docsearch-program),
- which is the free tier of [Algolia](https://www.algolia.com/).
-1. Algolia hosts a [DocSearch configuration](https://github.com/algolia/docsearch-configs/blob/master/configs/gitlab.json)
- for the GitLab docs site, and we've worked together to refine it.
-1. That [configuration](https://community.algolia.com/docsearch/config-file.html) is
- parsed by their [crawler](https://community.algolia.com/docsearch/crawler-overview.html)
- every 24h and [stores](https://community.algolia.com/docsearch/inside-the-engine.html)
- the [DocSearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html)
- on [Algolia's servers](https://community.algolia.com/docsearch/faq.html#where-is-my-data-hosted%3F).
-1. On the docs side, we use a [DocSearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/docsearch.html) which
- is present on pretty much every page except <https://docs.gitlab.com/search/>,
- which uses its [own layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/instantsearch.html). In those layouts,
- there's a JavaScript snippet which initiates DocSearch by using an API key
- and an index name (`gitlab`) that are needed for Algolia to show the results.
-
-### Algolia notes for GitLab team members
-
-If you're a GitLab team member, find credentials for the Algolia dashboard
-in the shared [GitLab 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
-To receive weekly reports of the search usage, search the Google doc with
-title `Email, Slack, and GitLab Groups and Aliases`, search for `docsearch`,
-and add a comment with your email to be added to the alias that gets the weekly
-reports.
+for its search function.
+
+Learn more in <https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/docsearch.md>.
## Monthly release process (versions)
diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md
index 21368098f39..329fd279b99 100644
--- a/doc/development/documentation/structure.md
+++ b/doc/development/documentation/structure.md
@@ -237,7 +237,7 @@ consider using subsections for each distinct task.
### Related topics
If inline links are not sufficient, you can create a topic called **Related topics**
-and include a bulleted list of related topics. This topic should be above the Troubleshooting section.
+and include an unordered list of related topics. This topic should be above the Troubleshooting section.
```markdown
# Related topics
@@ -336,7 +336,7 @@ Consider the following guidelines when offering examples:
the reader to go directly to the good part. Consider offering an explanation
(for example, a comment, or a link to a resource) on why something is bad
practice.
-- Better and best cases can be considered part of the good case(s) code block.
+- Better and best cases can be considered part of the good cases' code block.
In the same code block, precede each with comments: `# Better` and `# Best`.
Although the bad-then-good approach is acceptable for the GitLab development
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 7bfc0320d02..c11d1422167 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -276,7 +276,6 @@ You can use these fake tokens as examples:
| Trigger token | `be20d8dcc028677c931e04f3871a9b` |
| Webhook secret token | `6XhDroRcYPM5by_h-HLY` |
| Health check token | `Tu7BgjR9qeZTEyRzGG2P` |
-| Request profile token | `7VgpS4Ax5utVD2esNstz` |
### Contractions
@@ -401,6 +400,39 @@ Backticks are more precise than quotes. For example, in this string:
It's not clear whether the user should include the period in the string.
+### Inline code
+
+Inline code style is applied inline with regular text. Use inline code style:
+
+- For filenames or fragments of configuration files. For example, `.gitlab-ci.yml`, `CODEOWNERS`, and `only: [main]`.
+- For HTTP methods (`HTTP POST`) and HTTP status codes, both full (`404 File Not Found`) and abbreviated (`404`).
+ For example: Send a `DELETE` request to delete the runner. Send a `POST` request to create one.
+
+To apply inline code style, wrap the text in a single backtick (`` ` ``). For example, `this is inline code style`.
+
+### Code blocks
+
+Code block style separates code text from regular text. Use code block style for commands run in the command-line
+interface. Code blocks make commands easier to copy and paste into a user's terminal window.
+
+To apply code block style, wrap the text in triple backticks (three `` ` ``) and add a syntax highlighting hint. For
+example:
+
+````plaintext
+```plaintext
+This is codeblock style
+```
+````
+
+When using code block style:
+
+- Use quadruple backticks (four `` ` ``) to apply code block style when the code block you are styling has triple
+ backticks in it. For example, when illustrating code block style.
+- Add a blank line above and below code blocks.
+- Syntax highlighting hints are required for code blocks. See the
+ [list of supported languages and lexers](https://github.com/rouge-ruby/rouge/wiki/List-of-supported-languages-and-lexers)
+ for available syntax highlighters. Use `plaintext` if no better hint is available.
+
## Lists
- Always start list items with a capital letter, unless they're parameters or
@@ -622,7 +654,10 @@ In the Markdown document:
For the heading text, **do**:
- Be clear and direct. Make every word count.
-- Use active verbs for tasks. For example, `Configure GDK` instead of `Configuring GDK`.
+- Use active, imperative verbs for [tasks](../structure.md#task). For example, `Create an issue`.
+- Use `ing` (gerund) verbs only when you need a topic that introduces tasks. For example, `Configuring GDK`.
+- Use nouns for [concepts](../structure.md#concept). For example, `GDK dependency management`. If a noun is
+ ambiguous, you can add a gerund. For example, `Documenting versions` instead of `Versions`.
- Talk about what the product does, realistically but from a positive perspective. Instead of
`Limitations`, move the content near other similar information. If you must, you can
use the title `Known issues`.
@@ -695,7 +730,6 @@ We include guidance for links in these categories:
for authoritative sources.
- When to use [links requiring permissions](#links-requiring-permissions).
- How to set up a [link to a video](#link-to-video).
-- How to [include links with version text](#where-to-put-version-text).
- How to [link to specific lines of code](#link-to-specific-lines-of-code)
### Basic link criteria
@@ -949,7 +983,7 @@ If you are documenting multiple fields and only one field needs explanation, do
1. Expand **Push rules**.
1. Complete the fields. **Branch name** must be a regular expression.
-To describe multiple fields, use bullets:
+To describe multiple fields, use unordered list items:
1. Expand **General pipelines**.
1. Complete the fields.
@@ -1166,80 +1200,6 @@ different mobile devices.
`/help`, because the GitLab Markdown processor doesn't support iframes. It's
hidden on the documentation site, but is displayed by `/help`.
-## Code blocks
-
-- Always wrap code added to a sentence in inline code blocks (`` ` ``).
- For example, `.gitlab-ci.yml`, `git add .`, `CODEOWNERS`, or `only: [main]`.
- File names, commands, entries, and anything that refers to code should be
- added to code blocks. To make things easier for the user, always add a full
- code block for things that can be useful to copy and paste, as they can do it
- with the button on code blocks.
-- HTTP methods (`HTTP POST`) and HTTP status codes, both full (`404 File Not Found`)
- and abbreviated (`404`), should be wrapped in inline code blocks when used in sentences.
- For example: Send a `DELETE` request to delete the runner. Send a `POST` request to create one.
-- Add a blank line above and below code blocks.
-- When providing a shell command and its output, prefix the shell command with `$`
- and leave a blank line between the command and the output.
-- When providing a command without output, don't prefix the shell command with `$`.
-- If you need to include triple backticks inside a code block, use four backticks
- for the code block fences instead of three.
-- For regular fenced code blocks, always use a highlighting class corresponding to
- the language for better readability. Examples:
-
- ````markdown
- ```ruby
- Ruby code
- ```
-
- ```javascript
- JavaScript code
- ```
-
- ```markdown
- [Markdown code example](example.md)
- ```
-
- ```plaintext
- Code or text for which no specific highlighting class is available.
- ```
- ````
-
-Syntax highlighting is required for fenced code blocks added to the GitLab
-documentation. Refer to this table for the most common language classes,
-or check the [complete list](https://github.com/rouge-ruby/rouge/wiki/List-of-supported-languages-and-lexers)
-of available language classes:
-
-| Preferred language tags | Language aliases and notes |
-|-------------------------|------------------------------------------------------------------------------|
-| `asciidoc` | |
-| `dockerfile` | Alias: `docker`. |
-| `elixir` | |
-| `erb` | |
-| `golang` | Alias: `go`. |
-| `graphql` | |
-| `haml` | |
-| `html` | |
-| `ini` | For some simple configuration files that are not in TOML format. |
-| `javascript` | Alias `js`. |
-| `json` | |
-| `markdown` | Alias: `md`. |
-| `mermaid` | |
-| `nginx` | |
-| `perl` | |
-| `php` | |
-| `plaintext` | Examples with no defined language, such as output from shell commands or API calls. If a code block has no language, it defaults to `plaintext`. Alias: `text`.|
-| `prometheus` | Prometheus configuration examples. |
-| `python` | |
-| `ruby` | Alias: `rb`. |
-| `shell` | Aliases: `bash` or `sh`. |
-| `sql` | |
-| `toml` | Runner configuration examples, and other TOML-formatted configuration files. |
-| `typescript` | Alias: `ts`. |
-| `xml` | |
-| `yaml` | Alias: `yml`. |
-
-For a complete reference on code blocks, see the [Kramdown guide](https://about.gitlab.com/handbook/markdown-guide/#code-blocks).
-
## GitLab SVG icons
> [Introduced](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/384) in GitLab 12.7.
@@ -1379,7 +1339,7 @@ you don't need to supply your username and password each time.
### Disclaimer
Use to describe future functionality only.
-For more information, see [Legal disclaimer for future features](#legal-disclaimer-for-future-features).
+For more information, see [Legal disclaimer for future features](../versions.md#legal-disclaimer-for-future-features).
## Blockquotes
@@ -1429,222 +1389,6 @@ application:
- For elements with a tooltip or hover label, use that label in bold with
matching case. For example, `Select **Add status emoji**`.
-## GitLab versions
-
-GitLab product documentation pages (not including [Contributor and Development](../../index.md)
-pages in the `/development` directory) can include version information to help
-users be aware of recent improvements or additions.
-
-The GitLab Technical Writing team determines which versions of
-documentation to display on this site based on the GitLab
-[Statement of Support](https://about.gitlab.com/support/statement-of-support.html#version-support).
-
-### View older GitLab documentation versions
-
-Older versions of GitLab may no longer have documentation available from `docs.gitlab.com`.
-If documentation for your version is no longer available from `docs.gitlab.com`, you can still view a
-tagged and released set of documentation for your installed version:
-
-- In the [documentation archives](https://docs.gitlab.com/archives/).
-- At the `/help` URL of your GitLab instance.
-- In the documentation repository based on the respective branch (for example,
- the [13.2 branch](https://gitlab.com/gitlab-org/gitlab/-/tree/13-2-stable-ee/doc)).
-
-### Where to put version text
-
-When a feature is added or updated, you can include its version information
-either as a **Version history** item or as an inline text reference.
-
-#### Version text in the **Version History**
-
-If all content in a section is related, add version text after the header for
-the section. The version information must:
-
-- Be surrounded by blank lines.
-- Start with `>`. If there are multiple bullets, each line must start with `> -`.
-- The string must include these words in this order (capitalization doesn't matter):
- - `introduced`, `enabled`, `deprecated`, `changed`, `moved`, `recommended` (as in the
- [feature flag documentation](../feature_flags.md)), `removed`, or `renamed`
- - `in` or `to`
- - `GitLab`
-- Whenever possible, include a link to the completed issue, merge request, or epic
- that introduced the feature. An issue is preferred over a merge request, and
- a merge request is preferred over an epic.
-- Do not include information about the tier, unless documenting a tier change
- (for example, `Feature X [moved](issue-link) to Premium in GitLab 19.2`).
-- Do not link to the pricing page.
- The tier is provided by the [product badge](#product-tier-badges) on the heading.
-
-```markdown
-## Feature name
-
-> [Introduced](<link-to-issue>) in GitLab 11.3.
-
-This feature does something.
-
-## Feature name 2
-
-> - [Introduced](<link-to-issue>) in GitLab 11.3.
-> - [Enabled by default](<link-to-issue>) in GitLab 11.4.
-
-This feature does something else.
-```
-
-If you're documenting elements of a feature, start with the feature name or a gerund:
-
-```markdown
-> - Notifications for expiring tokens [introduced](<link-to-issue>) in GitLab 11.3.
-> - Creating an issue from an issue board [introduced](<link-to-issue>) in GitLab 13.1.
-```
-
-If a feature is moved to another tier:
-
-```markdown
-> - [Moved](<link-to-issue>) from GitLab Ultimate to GitLab Premium in 11.8.
-> - [Moved](<link-to-issue>) from GitLab Premium to GitLab Free in 12.0.
-```
-
-#### Inline version text
-
-If you're adding content to an existing topic, you can add version information
-inline with the existing text.
-
-In this case, add `([introduced/deprecated](<link-to-issue>) in GitLab X.X)`.
-
-Including the issue link is encouraged, but isn't a requirement. For example:
-
-```markdown
-The voting strategy in GitLab 13.4 and later requires the primary and secondary
-voters to agree.
-```
-
-#### Deprecated features
-
-When a feature is deprecated, add `(DEPRECATED)` to the page title or to
-the heading of the section documenting the feature, immediately before
-the tier badge:
-
-```markdown
-<!-- Page title example: -->
-# Feature A (DEPRECATED) **(ALL TIERS)**
-
-<!-- Doc section example: -->
-## Feature B (DEPRECATED) **(PREMIUM SELF)**
-```
-
-Add the deprecation to the version history note (you can include a link
-to a replacement when available):
-
-```markdown
-> - [Deprecated](<link-to-issue>) in GitLab 11.3. Replaced by [meaningful text](<link-to-appropriate-documentation>).
-```
-
-You can also describe the replacement in surrounding text, if available. If the
-deprecation isn't obvious in existing text, you may want to include a warning:
-
-```markdown
-WARNING:
-This feature was [deprecated](link-to-issue) in GitLab 12.3 and replaced by
-[Feature name](link-to-feature-documentation).
-```
-
-If you add `(DEPRECATED)` to the page's title and the document is linked from the docs
-navigation, either remove the page from the nav or update the nav item to include the
-same text before the feature name:
-
-```yaml
- - doc_title: (DEPRECATED) Feature A
-```
-
-In the first major GitLab version after the feature was deprecated, be sure to
-remove information about that deprecated feature.
-
-#### End-of-life for features or products
-
-When a feature or product enters its end-of-life, indicate its status by
-creating a [warning alert](#alert-boxes) directly after its relevant header.
-If possible, link to its deprecation and removal issues.
-
-For example:
-
-```markdown
-WARNING:
-This feature is in its end-of-life process. It is [deprecated](link-to-issue)
-in GitLab X.X, and is planned for [removal](link-to-issue) in GitLab X.X.
-```
-
-After the feature or product is officially deprecated and removed, remove
-its information from the GitLab documentation.
-
-### Versions in the past or future
-
-When describing functionality available in past or future versions, use:
-
-- Earlier, and not older or before.
-- Later, and not newer or after.
-
-For example:
-
-- Available in GitLab 13.1 and earlier.
-- Available in GitLab 12.4 and later.
-- In GitLab 12.2 and earlier, ...
-- In GitLab 11.6 and later, ...
-
-### Promising features in future versions
-
-Do not promise to deliver features in a future release. For example, avoid phrases like,
-"Support for this feature is planned."
-
-We cannot guarantee future feature work, and promises
-like these can raise legal issues. Instead, say that an issue exists.
-For example:
-
-- Support for improvements is tracked `[in this issue](LINK)`.
-- You cannot do this thing, but `[an issue exists](LINK)` to change this behavior.
-
-You can say that we plan to remove a feature.
-
-#### Legal disclaimer for future features
-
-If you **must** write about features we have not yet delivered, put this exact disclaimer near the content it applies to.
-
-```markdown
-DISCLAIMER:
-This page contains information related to upcoming products, features, and functionality.
-It is important to note that the information presented is for informational purposes only.
-Please do not rely on this information for purchasing or planning purposes.
-As with all projects, the items mentioned on this page are subject to change or delay.
-The development, release, and timing of any products, features, or functionality remain at the
-sole discretion of GitLab Inc.
-```
-
-It renders on the GitLab documentation site as:
-
-DISCLAIMER:
-This page contains information related to upcoming products, features, and functionality.
-It is important to note that the information presented is for informational purposes only.
-Please do not rely on this information for purchasing or planning purposes.
-As with all projects, the items mentioned on this page are subject to change or delay.
-The development, release, and timing of any products, features, or functionality remain at the
-sole discretion of GitLab Inc.
-
-If all of the content on the page is not available, use the disclaimer once at the top of the page.
-
-If the content in a topic is not ready, use the disclaimer in the topic.
-
-### Removing versions after each major release
-
-When a major GitLab release occurs, we remove all references
-to now-unsupported versions. This removal includes version-specific instructions. For example,
-if GitLab version 12.1 and later are supported,
-instructions for users of GitLab 11 should be removed.
-
-[View the list of supported versions](https://about.gitlab.com/support/statement-of-support.html#version-support).
-
-To view historical information about a feature, review GitLab
-[release posts](https://about.gitlab.com/releases/), or search for the issue or
-merge request where the work was done.
-
## Products and features
Refer to the information in this section when describing products and features
@@ -1664,7 +1408,7 @@ pricing page. For example:
You must assign a tier badge:
-- To all H1 topic headings.
+- To all H1 topic headings, except the pages under `doc/development/*`.
- To topic headings that don't apply to the same tier as the H1.
To add a tier badge to a heading, add the relevant tier badge
@@ -1692,10 +1436,15 @@ functionality is described.
| Only GitLab Premium SaaS and higher tiers (no self-managed instances) | `**(PREMIUM SAAS)**` |
| Only GitLab Ultimate SaaS (no self-managed instances) | `**(ULTIMATE SAAS)**` |
-Topics that mention the `gitlab.rb` file are referring to
-self-managed instances of GitLab. To prevent confusion, include the relevant `TIER SELF`
-tier badge on the highest applicable heading level on
-the page.
+Topics that are only for instance administrators should be badged `<TIER> SELF`. Instance
+administrator documentation often includes sections that mention:
+
+- Changing the `gitlab.rb` or `gitlab.yml` files.
+- Accessing the Rails console or running Rake tasks.
+- Performing tasks in the Admin Area.
+
+These pages should also state whether the tasks can be accomplished only by an
+instance administrator.
## Specific sections
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index 65f6a0a328b..e7d927de2cf 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -164,7 +164,15 @@ Use lowercase for **boards**, **issue boards**, and **epic boards**.
Use **text box** to refer to the UI field. Do not use **field** or **box**. For example:
-- In the **Variable name** text box, enter `my text`.
+- In the **Variable name** text box, enter a value.
+
+## bullet
+
+Don't refer to individual items in an ordered or unordered list as **bullets**. Use **list item** instead. If you need to be less ambiguous, you can use:
+
+- **Ordered list item** for items in an ordered list.
+- **Unordered list item** for items in an unordered list.
## button
@@ -318,7 +326,22 @@ Use **active** or **on** instead. ([Vale](../testing.md#vale) rule: [`InclusionA
## enter
-Use **enter** instead of **type** when talking about putting values into text boxes.
+In most cases, use **enter** rather than **type**.
+
+- **Enter** encompasses multiple ways to enter information, including speech and keyboard.
+- **Enter** assumes that the user puts a value in a field and then moves the cursor outside the field (or presses <kbd>Enter</kbd>).
+ **Enter** includes both the entering of the content and the action to validate the content.
+
+For example:
+
+- In the **Variable name** text box, enter a value.
+- In the **Variable name** text box, enter `my text`.
+
+When you use **Enter** to refer to the key on a keyboard, use the HTML `<kbd>` tag:
+
+- To view the list of results, press <kbd>Enter</kbd>.
+
+See also [**type**](#type).
## epic
@@ -356,7 +379,7 @@ Use **box** instead of **field** or **text box**.
Use:
-- In the **Variable name** box, enter `my text`.
+- In the **Variable name** text box, enter `my text`.
Instead of:
@@ -392,6 +415,13 @@ Do not make **GitLab** possessive (GitLab's). This guidance follows [GitLab Trad
**GitLab.com** refers to the GitLab instance managed by GitLab itself.
+## GitLab Flavored Markdown
+
+When possible, spell out [**GitLab Flavored Markdown**](../../../user/markdown.md).
+([Vale](../testing.md#vale) rule: [`GLFM.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/SubstitutionSuggestions.yml))
+
+If you must abbreviate, do not use **GFM**. Use **GLFM** instead.
+
## GitLab SaaS
**GitLab SaaS** refers to the product license that provides access to GitLab.com. It does not refer to the
@@ -518,6 +548,25 @@ Instead of:
Do not use **list** when referring to a [**dropdown list**](#dropdown-list).
Use the full phrase **dropdown list** instead.
+## license
+
+When writing about licenses:
+
+- Do not use variations such as **cloud license**, **offline license**, or **legacy license**.
+- Do not use interchangeably with **subscription**:
+ - A license grants users access to the subscription they purchased, and contains information such as the number of seats they purchased and subscription dates.
+ - A subscription is the subscription tier that the user purchases.
+
+Use:
+
+ - Add a license to your instance.
+ - Purchase a subscription.
+
+Instead of:
+
+ - Buy a license.
+ - Purchase a license.
+
## log in, log on
Do not use **log in** or **log on**. Use [sign in](#sign-in) instead. If the user interface has **Log in**, you can use it.
@@ -576,6 +625,11 @@ Use lowercase for **merge requests**. If you use **MR** as the acronym, spell it
Use lowercase for **milestones**.
+## n/a, N/A, not applicable
+
+When possible, use **not applicable**. Spelling out the phrase helps non-English-speaking users and avoids
+capitalization inconsistencies.
+
## navigate
Do not use **navigate**. Use **go** instead. For example:
@@ -950,7 +1004,17 @@ Use [**2FA** and **two-factor authentication**](#2fa-two-factor-authentication)
## type
-Do not use **type** if you can avoid it. Use **enter** instead.
+Use **type** when the cursor remains in the field you're typing in. For example,
+in a search dialog, you begin typing and the field populates results. You do not
+click out of the field.
+
+For example:
+
+- To view all users named Alex, type `Al`.
+- To view all labels for the documentation team, type `doc`.
+- For a list of quick actions, type `/`.
+
+See also [**enter**](#enter).
## update
@@ -1031,7 +1095,7 @@ Sometimes you might need to use **yet** when writing a task. If you use
**yet**, ensure the surrounding phrases are written
in present tense, active voice.
-[View guidance about how to write about future features](index.md#promising-features-in-future-versions).
+[View guidance about how to write about future features](../versions.md#promising-features-in-future-versions).
([Vale](../testing.md#vale) rule: [`CurrentStatus.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/CurrentStatus.yml))
## you, your, yours
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index 9facb22669b..81e1eca8724 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -23,6 +23,10 @@ in the relevant projects:
- <https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/docs.gitlab-ci.yml>
- <https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/gitlab-com.yml>
- <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml>
+- <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml>
+
+We also run some documentation tests in the GitLab Development Kit project:
+<https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab/ci/test.gitlab-ci.yml>.
## Run tests locally
@@ -144,6 +148,36 @@ synchronized to the other projects. In `omnibus-gitlab`, `gitlab-runner`, and `c
is hard coded for specific projects.
1. Create a merge request and submit it to a technical writer for review and merge.
+## Update linting images
+
+Lint tests run in CI/CD pipelines using images from the `gitlab-docs` [container registry](https://gitlab.com/gitlab-org/gitlab-docs/container_registry).
+
+If a new version of a dependency is released (like a new version of Ruby), we
+should update the images to use the newer version. Then, we can update the configuration
+files in each of our documentation projects to point to the new image.
+
+To update the linting images:
+
+1. In `gitlab-docs`, open a merge request to update `.gitlab-ci.yml` to use the new tooling
+ version. ([Example MR](https://gitlab.com/gitlab-org/gitlab-docs/-/merge_requests/2571))
+1. When merged, start a `Build docs.gitlab.com every 4 hours` [scheduled pipeline](https://gitlab.com/gitlab-org/gitlab-docs/-/pipeline_schedules).
+1. Go to the pipeline you started, and manually run the relevant build-images job,
+ for example, `image:docs-lint-markdown`.
+1. In the job output, get the name of the new image.
+ ([Example job output](https://gitlab.com/gitlab-org/gitlab-docs/-/jobs/2335033884#L334))
+1. Verify that the new image was added to the container registry.
+1. Open merge requests to update each of these configuration files to point to the new image.
+ In each merge request, include a small doc update to trigger the job that uses the image.
+ - <https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml> ([Example MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/85177))
+ - <https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/test.gitlab-ci.yml> ([Example MR](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/3408))
+ - <https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/gitlab-com.yml> ([Example MR](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/6037))
+ - <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml> ([Example MR](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2511))
+ - <https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/blob/master/.gitlab-ci.yml> ([Example MR](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/merge_requests/462))
+ - <https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/.gitlab/ci/test.gitlab-ci.yml> ([Example MR](https://gitlab.com/gitlab-org/gitlab-development-kit/-/merge_requests/2417))
+1. In each merge request, check the relevant job output to confirm the updated image was
+ used for the test. ([Example job output](https://gitlab.com/gitlab-org/charts/gitlab/-/jobs/2335470260#L24))
+1. Assign the merge requests to any technical writer to review and merge.
+
## Local linters
To help adhere to the [documentation style guidelines](styleguide/index.md), and improve the content
@@ -173,6 +207,7 @@ markdownlint configuration is found in the following projects:
- [`omnibus-gitlab`](https://gitlab.com/gitlab-org/omnibus-gitlab)
- [`charts`](https://gitlab.com/gitlab-org/charts/gitlab)
- [`gitlab-development-kit`](https://gitlab.com/gitlab-org/gitlab-development-kit)
+- [`gitlab-operator`](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator)
This configuration is also used in build pipelines.
@@ -311,7 +346,7 @@ To configure Vale in your editor, install one of the following as appropriate:
- Visual Studio Code [`errata-ai.vale-server` extension](https://marketplace.visualstudio.com/items?itemName=errata-ai.vale-server).
You can configure the plugin to [display only a subset of alerts](#show-subset-of-vale-alerts).
- Vim [ALE plugin](https://github.com/dense-analysis/ale).
-- Jetbrains IDEs - No plugin exists, but
+- JetBrains IDEs - No plugin exists, but
[this issue comment](https://github.com/errata-ai/vale-server/issues/39#issuecomment-751714451)
contains tips for configuring an external tool.
- Emacs [Flycheck extension](https://github.com/flycheck/flycheck).
diff --git a/doc/development/documentation/versions.md b/doc/development/documentation/versions.md
new file mode 100644
index 00000000000..0f2bdca4c73
--- /dev/null
+++ b/doc/development/documentation/versions.md
@@ -0,0 +1,232 @@
+---
+info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-other-projects-and-subjects.
+stage: none
+group: unassigned
+description: 'Writing styles, markup, formatting, and other standards for GitLab Documentation.'
+---
+
+# Documenting product versions
+
+The GitLab product documentation includes version-specific information,
+including when features were introduced and when they were updated or removed.
+
+## View older documentation versions
+
+Previous versions of the documentation are available on `docs.gitlab.com`.
+To view a previous version, select the **Versions** button in the top right.
+
+To view versions that are not available on `docs.gitlab.com`:
+
+- View the [documentation archives](https://docs.gitlab.com/archives/).
+- Go to the GitLab repository and select the version-specific branch. For example,
+ the [13.2 branch](https://gitlab.com/gitlab-org/gitlab/-/tree/13-2-stable-ee/doc) has the
+ documentation for GitLab 13.2.
+
+## Documenting version-specific features
+
+When a feature is added or updated, you can include its version information
+either as a **Version history** item or as an inline text reference.
+
+You do not need to add version information on the pages in the `/development` directory.
+
+### Add a **Version history** item
+
+If all content in a topic is related, add a version history item after the topic heading.
+For example:
+
+```markdown
+## Feature name
+
+> [Introduced](<link-to-issue>) in GitLab 11.3.
+
+This feature does something.
+```
+
+The item text must include these words, in this order (capitalization doesn't matter):
+
+- `introduced`, `enabled`, `deprecated`, `changed`, `moved`, `recommended`, `removed`, or `renamed`
+- `in` or `to`
+- `GitLab`
+
+If possible, include a link to the related issue, merge request, or epic.
+Do not link to the pricing page. Do not include the subscription tier.
+
+#### Introducing a new feature
+
+If you use `introduced`, start the sentence with the feature name or a gerund:
+
+```markdown
+> - Notifications for expiring tokens [introduced](<link-to-issue>) in GitLab 11.3.
+> - Creating an issue from an issue board [introduced](<link-to-issue>) in GitLab 13.1.
+```
+
+#### Moving subscription tiers
+
+If a feature is moved to another subscription tier, use `moved`:
+
+```markdown
+> - [Moved](<link-to-issue>) from GitLab Ultimate to GitLab Premium in 11.8.
+> - [Moved](<link-to-issue>) from GitLab Premium to GitLab Free in 12.0.
+```
+
+### Inline version text
+
+If you're adding content to an existing topic, you can add version information
+inline with the existing text. If possible, include a link to the related issue,
+merge request, or epic. For example:
+
+```markdown
+The voting strategy [in GitLab 13.4 and later](<link-to-issue>) requires the primary and secondary
+voters to agree.
+```
+
+## Deprecations and removals
+
+When features are deprecated and removed, update the related documentation.
+
+API documentation follows these guidelines, but the GraphQL docs use
+a [separate process](../api_graphql_styleguide.md#deprecating-schema-items).
+
+### Deprecate a page or topic
+
+To deprecate a page or topic:
+
+1. Add `(deprecated)` after the title. Use a warning to explain when it was deprecated,
+ when it will be removed, and the replacement feature.
+
+ ```markdown
+ ## Title (deprecated) **(ULTIMATE SELF)**
+
+ WARNING:
+ This feature was [deprecated](<link-to-issue>) in GitLab 14.8
+ and is planned for removal in 15.4. Use [feature X](<link-to-issue>) instead.
+ ```
+
+ If you're not sure when the feature will be removed or no
+ replacement feature exists, you don't need to add this information.
+
+1. If the deprecation is a breaking change, add this text:
+
+ ```markdown
+ This change is a breaking change.
+ ```
+
+ You can add any additional context-specific details that might help users.
+
+1. Open a merge request to add the word `(deprecated)` to the left nav, after the page title.
+
+### Remove a page
+
+Mark content as removed in the release in which the feature was removed.
+The title and a removed indicator remain until three months after the removal.
+
+To remove a page:
+
+1. Leave the page title. Remove all other content, including the version history items and the word `WARNING:`.
+1. After the title, change `(deprecated)` to `(removed)`.
+1. Update the YAML metadata:
+ - For `remove_date`, set the value to a date three months after
+ the release when the feature was removed.
+ - For the `redirect_to`, set a path to a file that makes sense. If no obvious
+ page exists, use the docs home page.
+
+ ```markdown
+ ---
+ stage: Enablement
+ group: Global Search
+ info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+ remove_date: '2022-08-02'
+ redirect_to: '../newpath/to/file/index.md'
+ ---
+
+ # Title (removed) **(ULTIMATE SELF)**
+
+ This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/351963) in GitLab 14.8
+ and [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/351963) in 15.0.
+ Use [feature X](<link-to-issue>) instead.
+ ```
+
+1. Remove the page's entry from the global navigation by editing [`navigation.yaml`](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/content/_data/navigation.yaml) in `gitlab-docs`.
+
+This content is removed from the documentation as part of the Technical Writing team's
+[regularly scheduled tasks](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#regularly-scheduled-tasks).
+
+### Remove a topic
+
+To remove a topic:
+
+1. Leave the title and the details of the deprecation and removal. Remove all other content,
+ including the version history items and the word `WARNING:`.
+1. Add `(removed)` after the title.
+1. Add the following HTML comments above and below the topic.
+ For the `remove_date`, set a date three months after the release where it was removed.
+
+ ```markdown
+ <!--- start_remove The following content will be removed on remove_date: '2023-08-22' -->
+
+ ## Title (removed) **(ULTIMATE SELF)**
+
+ This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/351963) in GitLab 14.8
+ and [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/351963) in 15.0.
+ Use [feature X](<link-to-issue>) instead.
+
+ <!--- end_remove -->
+ ```
+
+This content is removed from the documentation as part of the Technical Writing team's
+[regularly scheduled tasks](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#regularly-scheduled-tasks).
+
+## Which versions are removed
+
+GitLab supports the current major version and two previous major versions.
+For example, if 14.0 is the current major version, all major and minor releases of
+GitLab 14.0, 13.0, and 12.0 are supported.
+
+[View the list of supported versions](https://about.gitlab.com/support/statement-of-support.html#version-support).
+
+If you see version history items or inline text that refers to unsupported versions, you can remove it.
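+
+For example, if GitLab 11.3 is no longer supported, a version history item like this one
+(the link is a placeholder) can be deleted:
+
+```markdown
+> [Introduced](<link-to-issue>) in GitLab 11.3.
+```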
+
+Historical feature information is available in [release posts](https://about.gitlab.com/releases/)
+or by searching for the issue or merge request where the work was done.
+
+## Promising features in future versions
+
+Do not promise to deliver features in a future release. For example, avoid phrases like,
+"Support for this feature is planned."
+
+We cannot guarantee future feature work, and promises
+like these can raise legal issues. Instead, say that an issue exists.
+For example:
+
+- Support for improvements is tracked `[in this issue](LINK)`.
+- You cannot do this thing, but `[an issue exists](LINK)` to change this behavior.
+
+You can say that we plan to remove a feature.
+
+### Legal disclaimer for future features
+
+If you **must** write about features we have not yet delivered, put this exact disclaimer near the content it applies to.
+
+```markdown
+DISCLAIMER:
+This page contains information related to upcoming products, features, and functionality.
+It is important to note that the information presented is for informational purposes only.
+Please do not rely on this information for purchasing or planning purposes.
+As with all projects, the items mentioned on this page are subject to change or delay.
+The development, release, and timing of any products, features, or functionality remain at the
+sole discretion of GitLab Inc.
+```
+
+It renders on the GitLab documentation site as:
+
+DISCLAIMER:
+This page contains information related to upcoming products, features, and functionality.
+It is important to note that the information presented is for informational purposes only.
+Please do not rely on this information for purchasing or planning purposes.
+As with all projects, the items mentioned on this page are subject to change or delay.
+The development, release, and timing of any products, features, or functionality remain at the
+sole discretion of GitLab Inc.
+
+If none of the content on the page is available yet, use the disclaimer once at the top of the page.
+
+If the content in a topic is not ready, use the disclaimer in the topic.
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index a12af51e436..fb43a2e995a 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -151,7 +151,7 @@ Remember:
Ensure the following if skipping an initial Technical Writer review:
- [Product badges](styleguide/index.md#product-tier-badges) are applied.
-- The GitLab [version](styleguide/index.md#gitlab-versions) that
+- The GitLab [version](versions.md) that
introduced the feature is included.
- Changes to headings don't affect in-app hyperlinks.
- Specific [user permissions](../../user/permissions.md) are documented.
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 019dbb13599..28cf6d4e1e3 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -74,6 +74,30 @@ setting the [`FOSS_ONLY` environment variable](https://gitlab.com/gitlab-org/git
to something that evaluates as `true`. The same works for running tests
(for example `FOSS_ONLY=1 yarn jest`).
+### Running feature specs as CE
+
+When running [feature specs](testing_guide/best_practices.md#system--feature-tests)
+as CE, ensure that the editions of the backend and frontend match.
+To do so:
+
+1. Set the `FOSS_ONLY=1` environment variable:
+
+ ```shell
+ export FOSS_ONLY=1
+ ```
+
+1. Start GDK:
+
+ ```shell
+ gdk start
+ ```
+
+1. Run feature specs:
+
+ ```shell
+ bin/rspec spec/features/<path_to_your_spec>
+ ```
+
## CI pipelines in a FOSS context
By default, merge request pipelines for development run in an EE-context only. If you are
diff --git a/doc/development/emails.md b/doc/development/emails.md
index b8e390988bd..a5c2789a3ea 100644
--- a/doc/development/emails.md
+++ b/doc/development/emails.md
@@ -86,7 +86,7 @@ See the [Rails guides](https://guides.rubyonrails.org/action_mailer_basics.html#
# The IDLE command timeout.
idle_timeout: 60
- # Whether to expunge (permanently remove) messages from the mailbox when they are deleted after delivery
+ # Whether to expunge (permanently remove) messages from the mailbox when they are marked as deleted after delivery
expunge_deleted: false
```
diff --git a/doc/development/event_store.md b/doc/development/event_store.md
index 967272dcf2e..afd5640271e 100644
--- a/doc/development/event_store.md
+++ b/doc/development/event_store.md
@@ -313,17 +313,17 @@ we have added helpers and shared examples to standardize the way we test subscri
```ruby
RSpec.describe MergeRequests::UpdateHeadPipelineWorker do
- let(:event) { Ci::PipelineCreatedEvent.new(data: ({ pipeline_id: pipeline.id })) }
+ let(:pipeline_created_event) { Ci::PipelineCreatedEvent.new(data: ({ pipeline_id: pipeline.id })) }
# This shared example ensures that an event is published and correctly processed by
# the current subscriber (`described_class`).
- it_behaves_like 'consumes the published event' do
- let(:event) { event }
+ it_behaves_like 'subscribes to event' do
+ let(:event) { pipeline_created_event }
end
it 'does something' do
# This helper directly executes `perform` ensuring that `handle_event` is called correctly.
- consume_event(subscriber: described_class, event: event)
+ consume_event(subscriber: described_class, event: pipeline_created_event)
# run expectations
end
diff --git a/doc/development/experiment_guide/experiment_code_reviews.md b/doc/development/experiment_guide/experiment_code_reviews.md
new file mode 100644
index 00000000000..fdde89caa34
--- /dev/null
+++ b/doc/development/experiment_guide/experiment_code_reviews.md
@@ -0,0 +1,25 @@
+---
+stage: Growth
+group: Adoption
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Experiment code reviews
+
+Experiment code can fall short of our usual quality standards for several reasons:
+for example, it isn't expected to stay in the codebase for long, or it was iterated
+on quickly to gather data. However, running (or not running) the experiment
+shouldn't impact GitLab availability. To avoid or identify issues,
+experiments are initially deployed to a small number of users. Regardless,
+experiments still need tests.
+
+Experiments must have corresponding [frontend or feature tests](../testing_guide/index.md) to ensure they
+exist in the application. These tests should help prevent the experiment code from
+being removed before the [experiment cleanup process](https://about.gitlab.com/handbook/engineering/development/growth/experimentation/#experiment-cleanup-issue) starts.
+
+If, as a reviewer or maintainer, you find code that would usually fail review
+but is acceptable for now, mention your concerns with a note that there's no
+need to change the code. The author can then add a comment to this piece of code
+and link to the issue that resolves the experiment. The author or reviewer can add a link to this concern in the
+experiment rollout issue under the `Experiment Successful Cleanup Concerns` section of the description.
+If the experiment is successful and becomes part of the product, any items that appear under this section will be addressed.
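+
+For example, such a comment might look like the following sketch. The experiment name,
+helper methods, and issue URL are illustrative placeholders, not real code:
+
+```ruby
+# Experiment cleanup concern: this block duplicates logic from the standard banner rendering.
+# Remove or refactor when the experiment is resolved. Rollout issue:
+# https://gitlab.com/gitlab-org/gitlab/-/issues/<rollout-issue-id>
+experiment(:new_billing_banner, actor: current_user) do |e|
+  e.control { render_standard_banner }
+  e.candidate { render_new_banner }
+end
+```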
diff --git a/doc/development/experiment_guide/experiment_rollout.md b/doc/development/experiment_guide/experiment_rollout.md
new file mode 100644
index 00000000000..afa32d75221
--- /dev/null
+++ b/doc/development/experiment_guide/experiment_rollout.md
@@ -0,0 +1,77 @@
+---
+stage: Growth
+group: Adoption
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Experiment rollouts and feature flags
+
+## Experiment rollout issue
+
+Each experiment should have an [experiment rollout](https://gitlab.com/groups/gitlab-org/-/boards/1352542) issue to track the experiment from rollout through to cleanup and removal.
+The rollout issue is similar to a feature flag rollout issue, and is also used to track the status of an experiment.
+
+When an experiment is deployed, the due date of the issue should be set (this depends on the experiment but can be up to a few weeks in the future).
+After the deadline, the issue needs to be resolved and either:
+
+- It was successful and the experiment becomes the new default.
+- It was not successful and all code related to the experiment is removed.
+
+In either case, an outcome of the experiment should be posted to the issue with the reasoning for the decision.
+
+## Turn off all experiments
+
+If a situation on GitLab.com (SaaS) requires turning off all experiments, we have a control to do so.
+
+You can toggle experiments on SaaS on and off using the `gitlab_experiment` [feature flag](../feature_flags).
+
+This can be done via ChatOps:
+
+- [disable](../feature_flags/controls.md#disabling-feature-flags): `/chatops run feature set gitlab_experiment false`
+- [enable](../feature_flags/controls.md#process): `/chatops run feature delete gitlab_experiment`.
+  This allows the `default_enabled` [value of true in the yml](https://gitlab.com/gitlab-org/gitlab/-/blob/016430f6751b0c34abb24f74608c80a1a8268f20/config/feature_flags/ops/gitlab_experiment.yml#L8) to be honored.
+
+## Notes on feature flags
+
+NOTE:
+We use the terms "enabled" and "disabled" here, even though it's against our
+[documentation style guide recommendations](../documentation/styleguide/word_list.md#enable)
+because these are the terms that the feature flag documentation uses.
+
+You may already be familiar with the concept of feature flags in GitLab, but using
+feature flags in experiments is a bit different. While a feature flag is generally
+viewed as being either `on` or `off`, this isn't accurate for experiments.
+
+Generally, `off` means that when we ask if a feature flag is enabled, it will always
+return `false`, and `on` means that it will always return `true`. An interim state,
+considered `conditional`, also exists. We take advantage of this trinary state of
+feature flags. To understand this `conditional` aspect: consider that either of these
+settings puts a feature flag into this state:
+
+- Setting a `percentage_of_actors` of any percent greater than 0%.
+- Enabling it for a single user or group.
+
+Conditional means that it returns `true` in some situations, but not all situations.
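+
+For example, this ChatOps command (borrowing the hypothetical `pill_color` experiment flag
+used in the implementation guide) rolls the flag out to 25% of actors, which puts it into
+the `conditional` state:
+
+```slack
+/chatops run feature set pill_color 25 --actors
+```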
+
+When a feature flag is disabled (meaning the state is `off`), the experiment is
+considered _inactive_. You can visualize this in the [decision tree diagram](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment#how-it-works)
+as reaching the first `Running?` node, and traversing the negative path.
+
+When a feature flag is rolled out to a `percentage_of_actors` or similar (meaning the
+state is `conditional`), the experiment is considered to be _running_:
+sometimes the control is assigned, and sometimes the candidate is assigned.
+We don't refer to this as being enabled, because that's a confusing and overloaded
+term here. In experiment terms, our experiment is _running_, and the feature flag is
+`conditional`.
+
+When a feature flag is enabled (meaning the state is `on`), the candidate will always be
+assigned.
+
+We should try to be consistent with our terms, and so for experiments, we have an
+_inactive_ experiment until we set the feature flag to `conditional`. After which,
+our experiment is then considered _running_. If you choose to "enable" your feature flag,
+you should consider the experiment to be _resolved_, because everyone is assigned
+the candidate unless they've opted out of experimentation.
+
+As of GitLab 13.10, work is being done to improve this process and how we communicate
+about it.
diff --git a/doc/development/experiment_guide/experimentation.md b/doc/development/experiment_guide/experimentation.md
deleted file mode 100644
index 28100564555..00000000000
--- a/doc/development/experiment_guide/experimentation.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: 'gitlab_experiment.md'
-remove_date: '2022-04-13'
----
-
-This document was moved to [another location](gitlab_experiment.md).
-
-<!-- This redirect file can be deleted after <2022-04-13>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/experiment_guide/gitlab_experiment.md b/doc/development/experiment_guide/gitlab_experiment.md
index 78e1f84d701..5ddbe9b3de9 100644
--- a/doc/development/experiment_guide/gitlab_experiment.md
+++ b/doc/development/experiment_guide/gitlab_experiment.md
@@ -1,586 +1,11 @@
---
-stage: Growth
-group: Adoption
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+redirect_to: 'index.md'
+remove_date: '2022-08-05'
---
-# Implementing an A/B/n experiment
+This document was moved to [another location](index.md).
-## Introduction
-
-Experiments in GitLab are tightly coupled with the concepts provided by
-[Feature flags in development of GitLab](../feature_flags/index.md). You're strongly encouraged
-to read and understand the [Feature flags in development of GitLab](../feature_flags/index.md)
-portion of the documentation before considering running experiments. Experiments add additional
-concepts which may seem confusing or advanced without understanding the underpinnings of how GitLab
-uses feature flags in development. One concept: experiments can be run with multiple variants,
-which are sometimes referred to as A/B/n tests.
-
-We use the [`gitlab-experiment` gem](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment),
-sometimes referred to as GLEX, to run our experiments. The gem exists in a separate repository
-so it can be shared across any GitLab property that uses Ruby. You should feel comfortable reading
-the documentation on that project if you want to dig into more advanced topics or open issues. Be
-aware that the documentation there reflects what's in the main branch and may not be the same as
-the version being used within GitLab.
-
-## Glossary of terms
-
-To ensure a shared language, you should understand these fundamental terms we use
-when communicating about experiments:
-
-- `experiment`: Any deviation of code paths we want to run at some times, but not others.
-- `context`: A consistent experience we provide in an experiment.
-- `control`: The default, or "original" code path.
-- `candidate`: Defines an experiment with only one code path.
-- `variant(s)`: Defines an experiment with multiple code paths.
-- `behaviors`: Used to reference all possible code paths of an experiment, including the control.
-
-## Implementing an experiment
-
-[Examples](https://gitlab.com/gitlab-org/growth/growth/-/wikis/GLEX-Framework-code-examples)
-
-Start by generating a feature flag using the `bin/feature-flag` command as you
-normally would for a development feature flag, making sure to use `experiment` for
-the type. For the sake of documentation let's name our feature flag (and experiment)
-"pill_color".
-
-```shell
-bin/feature-flag pill_color -t experiment
-```
-
-After you generate the desired feature flag, you can immediately implement an
-experiment in code. An experiment implementation can be as simple as:
-
-```ruby
-experiment(:pill_color, actor: current_user) do |e|
- e.control { 'control' }
- e.variant(:red) { 'red' }
- e.variant(:blue) { 'blue' }
-end
-```
-
-When this code executes, the experiment is run, a variant is assigned, and (if within a
-controller or view) a `window.gl.experiments.pill_color` object will be available in the
-client layer, with details like:
-
-- The assigned variant.
-- The context key for client tracking events.
-
-In addition, when an experiment runs, an event is tracked for
-the experiment `:assignment`. We cover more about events, tracking, and
-the client layer later.
-
-In local development, you can make the experiment active by using the feature flag
-interface. You can also target specific cases by providing the relevant experiment
-to the call to enable the feature flag:
-
-```ruby
-# Enable for everyone
-Feature.enable(:pill_color)
-
-# Get the `experiment` method -- already available in controllers, views, and mailers.
-include Gitlab::Experiment::Dsl
-# Enable for only the first user
-Feature.enable(:pill_color, experiment(:pill_color, actor: User.first))
-```
-
-To roll out your experiment feature flag on an environment, run
-the following command using ChatOps (which is covered in more depth in the
-[Feature flags in development of GitLab](../feature_flags/index.md) documentation).
-This command creates a scenario where half of everyone who encounters
-the experiment would be assigned the _control_, 25% would be assigned the _red_
-variant, and 25% would be assigned the _blue_ variant:
-
-```slack
-/chatops run feature set pill_color 50 --actors
-```
-
-For an even distribution in this example, change the command to set it to 66% instead
-of 50.
-
-NOTE:
-To immediately stop running an experiment, use the
-`/chatops run feature set pill_color false` command.
-
-WARNING:
-We strongly recommend using the `--actors` flag when using the ChatOps commands,
-as anything else may give odd behaviors due to how the caching of variant assignment is
-handled.
-
-We can also implement this experiment in a HAML file with HTML wrappings:
-
-```haml
-#cta-interface
- - experiment(:pill_color, actor: current_user) do |e|
- - e.control do
- .pill-button control
- - e.variant(:red) do
- .pill-button.red red
- - e.variant(:blue) do
- .pill-button.blue blue
-```
-
-### The importance of context
-
-In our previous example experiment, our context (this is an important term) is a hash
-that's set to `{ actor: current_user }`. Context must be unique based on how you
-want to run your experiment, and should be understood at a lower level.
-
-It's expected, and recommended, that you use some of these
-contexts to simplify reporting:
-
-- `{ actor: current_user }`: Assigns a variant and is "sticky" to each user
- (or "client" if `current_user` is nil) who enters the experiment.
-- `{ project: project }`: Assigns a variant and is "sticky" to the project currently
- being viewed. If running your experiment is more useful when viewing a project,
- rather than when a specific user is viewing any project, consider this approach.
-- `{ group: group }`: Similar to the project example, but applies to a wider
- scope of projects and users.
-- `{ actor: current_user, project: project }`: Assigns a variant and is "sticky"
- to the user who is viewing the given project. This creates a different variant
- assignment possibility for every project that `current_user` views. Understand this
- can create a large cache size if an experiment like this in a highly trafficked part
- of the application.
-- `{ wday: Time.current.wday }`: Assigns a variant based on the current day of the
- week. In this example, it would consistently assign one variant on Friday, and a
- potentially different variant on Saturday.
-
-Context is critical to how you define and report on your experiment. It's usually
-the most important aspect of how you choose to implement your experiment, so consider
-it carefully, and discuss it with the wider team if needed. Also, take into account
-that the context you choose affects our cache size.
-
-After the above examples, we can state the general case: *given a specific
-and consistent context, we can provide a consistent experience and track events for
-that experience.* To dive a bit deeper into the implementation details: a context key
-is generated from the context that's provided. Use this context key to:
-
-- Determine the assigned variant.
-- Identify events tracked against that context key.
-
-We can think about this as the experience that we've rendered, which is both dictated
-and tracked by the context key. The context key is used to track the interaction and
-results of the experience we've rendered to that context key. These concepts are
-somewhat abstract and hard to understand initially, but this approach enables us to
-communicate about experiments as something that's wider than just user behavior.
-
-NOTE:
-Using `actor:` utilizes cookies if the `current_user` is nil. If you don't need
-cookies though - meaning that the exposed functionality would only be visible to
-signed in users - `{ user: current_user }` would be just as effective.
-
-WARNING:
-The caching of variant assignment is done by using this context, and so consider
-your impact on the cache size when defining your experiment. If you use
-`{ time: Time.current }` you would be inflating the cache size every time the
-experiment is run. Not only that, your experiment would not be "sticky" and events
-wouldn't be resolvable.
-
-### Advanced experimentation
-
-There are two ways to implement an experiment:
-
-1. The simple experiment style described previously.
-1. A more advanced style where an experiment class is provided.
-
-The advanced style is handled by naming convention, and works similar to what you
-would expect in Rails.
-
-To generate a custom experiment class that can override the defaults in
-`ApplicationExperiment` use the Rails generator:
-
-```shell
-rails generate gitlab:experiment pill_color control red blue
-```
-
-This generates an experiment class in `app/experiments/pill_color_experiment.rb`
-with the _behaviors_ we've provided to the generator. Here's an example
-of how that class would look after migrating our previous example into it:
-
-```ruby
-class PillColorExperiment < ApplicationExperiment
- control { 'control' }
- variant(:red) { 'red' }
- variant(:blue) { 'blue' }
-end
-```
-
-We can now simplify where we run our experiment to the following call, instead of
-providing the block we were initially providing, by explicitly calling `run`:
-
-```ruby
-experiment(:pill_color, actor: current_user).run
-```
-
-The _behaviors_ we defined in our experiment class represent the default
-implementation. You can still use the block syntax to override these _behaviors_
-however, so the following would also be valid:
-
-```ruby
-experiment(:pill_color, actor: current_user) do |e|
- e.control { '<strong>control</strong>' }
-end
-```
-
-NOTE:
-When passing a block to the `experiment` method, it is implicitly invoked as
-if `run` has been called.
-
-#### Segmentation rules
-
-You can use runtime segmentation rules to, for instance, segment contexts into a specific
-variant. The `segment` method is a callback (like `before_action`) and so allows providing
-a block or method name.
-
-In this example, any user named `'Richard'` would always be assigned the _red_
-variant, and any account older than 2 weeks old would be assigned the _blue_ variant:
-
-```ruby
-class PillColorExperiment < ApplicationExperiment
- # ...registered behaviors
-
- segment(variant: :red) { context.actor.first_name == 'Richard' }
- segment :old_account?, variant: :blue
-
- private
-
- def old_account?
- context.actor.created_at < 2.weeks.ago
- end
-end
-```
-
-When an experiment runs, the segmentation rules are executed in the order they're
-defined. The first segmentation rule to produce a truthy result assigns the variant.
-
-In our example, any user named `'Richard'`, regardless of account age, will always
-be assigned the _red_ variant. If you want the opposite logic, flip the order.
-
-NOTE:
-Keep in mind when defining segmentation rules: after a truthy result, the remaining
-segmentation rules are skipped to achieve optimal performance.
-
-#### Exclusion rules
-
-Exclusion rules are similar to segmentation rules, but are intended to determine
-if a context should even be considered as something we should include in the experiment
-and track events toward. Exclusion means we don't care about the events in relation
-to the given context.
-
-These examples exclude all users named `'Richard'`, *and* any account
-older than 2 weeks old. Not only are they given the control behavior - which could
-be nothing - but no events are tracked in these cases as well.
-
-```ruby
-class PillColorExperiment < ApplicationExperiment
- # ...registered behaviors
-
- exclude :old_account?, ->{ context.actor.first_name == 'Richard' }
-
- private
-
- def old_account?
- context.actor.created_at < 2.weeks.ago
- end
-end
-```
-
-You may also need to check exclusion in custom tracking logic by calling `should_track?`:
-
-```ruby
-class PillColorExperiment < ApplicationExperiment
- # ...registered behaviors
-
- def expensive_tracking_logic
- return unless should_track?
-
- track(:my_event, value: expensive_method_call)
- end
-end
-```
-
-### Tracking events
-
-One of the most important aspects of experiments is gathering data and reporting on
-it. You can use the `track` method to track events across an experimental implementation.
-You can track events consistently to an experiment if you provide the same context between
-calls to your experiment. If you do not yet understand context, you should read
-about contexts now.
-
-We can assume we run the experiment in one or a few places, but
-track events potentially in many places. The tracking call remains the same, with
-the arguments you would normally use when
-[tracking events using snowplow](../snowplow/index.md). The easiest example
-of tracking an event in Ruby would be:
-
-```ruby
-experiment(:pill_color, actor: current_user).track(:clicked)
-```
-
-When you run an experiment with any of the examples so far, an `:assignment` event
-is tracked automatically by default. All events that are tracked from an
-experiment have a special
-[experiment context](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
-added to the event. This can be used - typically by the data team - to create a connection
-between the events on a given experiment.
-
-If our current user hasn't encountered the experiment yet (meaning where the experiment
-is run), and we track an event for them, they are assigned a variant and see
-that variant if they ever encountered the experiment later, when an `:assignment`
-event would be tracked at that time for them.
-
-NOTE:
-GitLab tries to be sensitive and respectful of our customers regarding tracking,
-so our experimentation library allows us to implement an experiment without ever tracking identifying
-IDs. It's not always possible, though, based on experiment reporting requirements.
-You may be asked from time to time to track a specific record ID in experiments.
-The approach is largely up to the PM and engineer creating the implementation.
-No recommendations are provided here at this time.
-
-## Testing with RSpec
-
-In the course of working with experiments, you'll probably want to utilize the RSpec
-tooling that's built in. This happens automatically for files in `spec/experiments`, but
-for other files and specs you want to include it in, you can specify the `:experiment` type:
-
-```ruby
-it "tests experiments nicely", :experiment do
-end
-```
-
-### Stub helpers
-
-You can stub experiments using `stub_experiments`. Pass it a hash using experiment
-names as the keys, and the variants you want each to resolve to, as the values:
-
-```ruby
-# Ensures the experiments named `:example` & `:example2` are both "enabled" and
-# that each will resolve to the given variant (`:my_variant` and `:control`
-# respectively).
-stub_experiments(example: :my_variant, example2: :control)
-
-experiment(:example) do |e|
- e.enabled? # => true
- e.assigned.name # => 'my_variant'
-end
-
-experiment(:example2) do |e|
- e.enabled? # => true
- e.assigned.name # => 'control'
-end
-```
-
-### Exclusion, segmentation, and behavior matchers
-
-You can also test things like the registered behaviors, the exclusions, and
-segmentations using the matchers.
-
-```ruby
-class ExampleExperiment < ApplicationExperiment
- control { }
- candidate { '_candidate_' }
-
- exclude { context.actor.first_name == 'Richard' }
- segment(variant: :candidate) { context.actor.username == 'jejacks0n' }
-end
-
-excluded = double(username: 'rdiggitty', first_name: 'Richard')
-segmented = double(username: 'jejacks0n', first_name: 'Jeremy')
-
-# register_behavior matcher
-expect(experiment(:example)).to register_behavior(:control)
-expect(experiment(:example)).to register_behavior(:candidate).with('_candidate_')
-
-# exclude matcher
-expect(experiment(:example)).to exclude(actor: excluded)
-expect(experiment(:example)).not_to exclude(actor: segmented)
-
-# segment matcher
-expect(experiment(:example)).to segment(actor: segmented).into(:candidate)
-expect(experiment(:example)).not_to segment(actor: excluded)
-```
-
-### Tracking matcher
-
-Tracking events is a major aspect of experimentation. We try
-to provide a flexible way to ensure your tracking calls are covered.
-
-You can do this on the instance level or at an "any instance" level:
-
-```ruby
-subject = experiment(:example)
-
-expect(subject).to track(:my_event)
-
-subject.track(:my_event)
-```
-
-You can use the `on_next_instance` chain method to specify that it will happen
-on the next instance of the experiment. This helps you if you're calling
-`experiment(:example).track` downstream:
-
-```ruby
-expect(experiment(:example)).to track(:my_event).on_next_instance
-
-experiment(:example).track(:my_event)
-```
-
-A full example of the methods you can chain onto the `track` matcher:
-
-```ruby
-expect(experiment(:example)).to track(:my_event, value: 1, property: '_property_')
- .on_next_instance
- .with_context(foo: :bar)
- .for(:variant_name)
-
-experiment(:example, :variant_name, foo: :bar).track(:my_event, value: 1, property: '_property_')
-```
-
-## Experiments in the client layer
-
-Any experiment that's been run in the request lifecycle surfaces in `window.gl.experiments`,
-and matches [this schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
-so it can be used when resolving experimentation in the client layer.
-
-Given that we've defined a class for our experiment, and have defined the variants for it, we can publish that experiment in a couple ways.
-
-The first way is simply by running the experiment. Assuming the experiment has been run, it will surface in the client layer without having to do anything special.
-
-The second way doesn't run the experiment and is intended to be used if the experiment only needs to surface in the client layer. To accomplish this we can simply `.publish` the experiment. This won't run any logic, but does surface the experiment details in the client layer so they can be utilized there.
-
-An example might be to publish an experiment in a `before_action` in a controller. Assuming we've defined the `PillColorExperiment` class, like we have above, we can surface it to the client by publishing it instead of running it:
-
-```ruby
-before_action -> { experiment(:pill_color).publish }, only: [:show]
-```
-
-You can then see this surface in the JavaScript console:
-
-```javascript
-window.gl.experiments // => { pill_color: { excluded: false, experiment: "pill_color", key: "ca63ac02", variant: "candidate" } }
-```
-
-### Using experiments in Vue
-
-With the `gitlab-experiment` component, you can define slots that match the name of the
-variants pushed to `window.gl.experiments`.
-
-We can make use of the named slots in the Vue component, that match the behaviors defined in :
-
-```vue
-<script>
-import GitlabExperiment from '~/experimentation/components/gitlab_experiment.vue';
-
-export default {
- components: { GitlabExperiment }
-}
-</script>
-
-<template>
- <gitlab-experiment name="pill_color">
- <template #control>
- <button class="bg-default">Click default button</button>
- </template>
-
- <template #red>
- <button class="bg-red">Click red button</button>
- </template>
-
- <template #blue>
- <button class="bg-blue">Click blue button</button>
- </template>
- </gitlab-experiment>
-</template>
-```
-
-NOTE:
-When there is no experiment data in the `window.gl.experiments` object for the given experiment name, the `control` slot will be used, if it exists.
-
-## Test with Jest
-
-### Stub Helpers
-
-You can stub experiments using the `stubExperiments` helper defined in `spec/frontend/__helpers__/experimentation_helper.js`.
-
-```javascript
-import { stubExperiments } from 'helpers/experimentation_helper';
-import { getExperimentData } from '~/experimentation/utils';
-
-describe('when my_experiment is enabled', () => {
- beforeEach(() => {
- stubExperiments({ my_experiment: 'candidate' });
- });
-
- it('sets the correct data', () => {
- expect(getExperimentData('my_experiment')).toEqual({ experiment: 'my_experiment', variant: 'candidate' });
- });
-});
-```
-
-NOTE:
-This method of stubbing in Jest specs will not automatically un-stub itself at the end of the test. We merge our stubbed experiment in with all the other global data in `window.gl`. If you need to remove the stubbed experiment(s) after your test or ensure a clean global object before your test, you'll need to manage the global object directly yourself:
-
-```javascript
-describe('tests that care about global state', () => {
- const originalObjects = [];
-
- beforeEach(() => {
- // For backwards compatibility for now, we're using both window.gon & window.gl
- originalObjects.push(window.gon, window.gl);
- });
-
- afterEach(() => {
- [window.gon, window.gl] = originalObjects;
- });
-
- it('stubs experiment in fresh global state', () => {
- stubExperiment({ my_experiment: 'candidate' });
- // ...
- });
-})
-```
-
-## Notes on feature flags
-
-NOTE:
-We use the terms "enabled" and "disabled" here, even though it's against our
-[documentation style guide recommendations](../documentation/styleguide/word_list.md#enable)
-because these are the terms that the feature flag documentation uses.
-
-You may already be familiar with the concept of feature flags in GitLab, but using
-feature flags in experiments is a bit different. While in general terms, a feature flag
-is viewed as being either `on` or `off`, this isn't accurate for experiments.
-
-Generally, `off` means that when we ask if a feature flag is enabled, it will always
-return `false`, and `on` means that it will always return `true`. An interim state,
-considered `conditional`, also exists. We take advantage of this trinary state of
-feature flags. To understand this `conditional` aspect: consider that either of these
-settings puts a feature flag into this state:
-
-- Setting a `percentage_of_actors` of any percent greater than 0%.
-- Enabling it for a single user or group.
-
-Conditional means that it returns `true` in some situations, but not all situations.
-
-When a feature flag is disabled (meaning the state is `off`), the experiment is
-considered _inactive_. You can visualize this in the [decision tree diagram](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment#how-it-works)
-as reaching the first `Running?` node, and traversing the negative path.
-
-When a feature flag is rolled out to a `percentage_of_actors` or similar (meaning the
-state is `conditional`) the experiment is considered to be _running_
-where sometimes the control is assigned, and sometimes the candidate is assigned.
-We don't refer to this as being enabled, because that's a confusing and overloaded
-term here. In the experiment terms, our experiment is _running_, and the feature flag is
-`conditional`.
-
-When a feature flag is enabled (meaning the state is `on`), the candidate will always be
-assigned.
-
-We should try to be consistent with our terms, and so for experiments, we have an
-_inactive_ experiment until we set the feature flag to `conditional`. After which,
-our experiment is then considered _running_. If you choose to "enable" your feature flag,
-you should consider the experiment to be _resolved_, because everyone is assigned
-the candidate unless they've opted out of experimentation.
-
-As of GitLab 13.10, work is being done to improve this process and how we communicate
-about it.
+<!-- This redirect file can be deleted after 2022-08-05. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/experiment_guide/implementing_experiments.md b/doc/development/experiment_guide/implementing_experiments.md
new file mode 100644
index 00000000000..3c33d015108
--- /dev/null
+++ b/doc/development/experiment_guide/implementing_experiments.md
@@ -0,0 +1,369 @@
+---
+stage: Growth
+group: Adoption
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Implementing an A/B/n experiment
+
+## Implementing an experiment
+
+[Examples](https://gitlab.com/gitlab-org/growth/growth/-/wikis/GLEX-Framework-code-examples)
+
+Start by generating a feature flag using the `bin/feature-flag` command as you
+normally would for a development feature flag, making sure to use `experiment` for
+the type. For the sake of documentation let's name our feature flag (and experiment)
+"pill_color".
+
+```shell
+bin/feature-flag pill_color -t experiment
+```
+
+After you generate the desired feature flag, you can immediately implement an
+experiment in code. An experiment implementation can be as simple as:
+
+```ruby
+experiment(:pill_color, actor: current_user) do |e|
+ e.control { 'control' }
+ e.variant(:red) { 'red' }
+ e.variant(:blue) { 'blue' }
+end
+```
+
+When this code executes, the experiment is run, a variant is assigned, and (if within a
+controller or view) a `window.gl.experiments.pill_color` object will be available in the
+client layer, with details like:
+
+- The assigned variant.
+- The context key for client tracking events.
+
+In addition, when an experiment runs, an event is tracked for
+the experiment `:assignment`. We cover more about events, tracking, and
+the client layer later.
+
+In local development, you can make the experiment active by using the feature flag
+interface. You can also target specific cases by providing the relevant experiment
+to the call to enable the feature flag:
+
+```ruby
+# Enable for everyone
+Feature.enable(:pill_color)
+
+# Get the `experiment` method -- already available in controllers, views, and mailers.
+include Gitlab::Experiment::Dsl
+# Enable for only the first user
+Feature.enable(:pill_color, experiment(:pill_color, actor: User.first))
+```
+
+To roll out your experiment feature flag on an environment, run
+the following command using ChatOps (which is covered in more depth in the
+[Feature flags in development of GitLab](../feature_flags/index.md) documentation).
+This command creates a scenario where half of everyone who encounters
+the experiment would be assigned the _control_, 25% would be assigned the _red_
+variant, and 25% would be assigned the _blue_ variant:
+
+```slack
+/chatops run feature set pill_color 50 --actors
+```
+
+For an even distribution in this example, change the command to set it to 66% instead
+of 50.
+
+NOTE:
+To immediately stop running an experiment, use the
+`/chatops run feature set pill_color false` command.
+
+WARNING:
+We strongly recommend using the `--actors` flag when using the ChatOps commands,
+as anything else may give odd behaviors due to how the caching of variant assignment is
+handled.
+
+We can also implement this experiment in a HAML file with HTML wrappings:
+
+```haml
+#cta-interface
+ - experiment(:pill_color, actor: current_user) do |e|
+ - e.control do
+ .pill-button control
+ - e.variant(:red) do
+ .pill-button.red red
+ - e.variant(:blue) do
+ .pill-button.blue blue
+```
+
+### The importance of context
+
+In our previous example experiment, our context (this is an important term) is a hash
+that's set to `{ actor: current_user }`. Context must be unique based on how you
+want to run your experiment, and should be understood at a lower level.
+
+It's expected, and recommended, that you use some of these
+contexts to simplify reporting:
+
+- `{ actor: current_user }`: Assigns a variant and is "sticky" to each user
+ (or "client" if `current_user` is nil) who enters the experiment.
+- `{ project: project }`: Assigns a variant and is "sticky" to the project currently
+ being viewed. If running your experiment is more useful when viewing a project,
+ rather than when a specific user is viewing any project, consider this approach.
+- `{ group: group }`: Similar to the project example, but applies to a wider
+ scope of projects and users.
+- `{ actor: current_user, project: project }`: Assigns a variant and is "sticky"
+ to the user who is viewing the given project. This creates a different variant
+ assignment possibility for every project that `current_user` views. Understand that this
+ can create a large cache size if an experiment like this runs in a highly trafficked part
+ of the application.
+- `{ wday: Time.current.wday }`: Assigns a variant based on the current day of the
+ week. In this example, it would consistently assign one variant on Friday, and a
+ potentially different variant on Saturday.
+
+Context is critical to how you define and report on your experiment. It's usually
+the most important aspect of how you choose to implement your experiment, so consider
+it carefully, and discuss it with the wider team if needed. Also, take into account
+that the context you choose affects our cache size.
+
+After the above examples, we can state the general case: *given a specific
+and consistent context, we can provide a consistent experience and track events for
+that experience.* To dive a bit deeper into the implementation details: a context key
+is generated from the context that's provided. Use this context key to:
+
+- Determine the assigned variant.
+- Identify events tracked against that context key.
+
+We can think about this as the experience that we've rendered, which is both dictated
+and tracked by the context key. The context key is used to track the interaction and
+results of the experience we've rendered to that context key. These concepts are
+somewhat abstract and hard to understand initially, but this approach enables us to
+communicate about experiments as something that's wider than just user behavior.
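+
+A minimal sketch of this stickiness, reusing the `pill_color` example (the return values
+and `another_user` are illustrative):
+
+```ruby
+# Same context, same context key: the variant assignment is cached and "sticky".
+experiment(:pill_color, actor: current_user).assigned.name # => "red"
+experiment(:pill_color, actor: current_user).assigned.name # => "red" (same variant again)
+
+# A different context can resolve to a different variant.
+experiment(:pill_color, actor: another_user).assigned.name # => "control"
+```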
+
+NOTE:
+Using `actor:` utilizes cookies if the `current_user` is nil. If you don't need
+cookies though - meaning that the exposed functionality would only be visible to
+signed in users - `{ user: current_user }` would be just as effective.
+
+WARNING:
+The caching of variant assignment is done by using this context, and so consider
+your impact on the cache size when defining your experiment. If you use
+`{ time: Time.current }` you would be inflating the cache size every time the
+experiment is run. Not only that, your experiment would not be "sticky" and events
+wouldn't be resolvable.
+
+### Advanced experimentation
+
+There are two ways to implement an experiment:
+
+1. The simple experiment style described previously.
+1. A more advanced style where an experiment class is provided.
+
+The advanced style is handled by naming convention, and works similarly to what you
+would expect in Rails.
+
+To generate a custom experiment class that can override the defaults in
+`ApplicationExperiment` use the Rails generator:
+
+```shell
+rails generate gitlab:experiment pill_color control red blue
+```
+
+This generates an experiment class in `app/experiments/pill_color_experiment.rb`
+with the _behaviors_ we've provided to the generator. Here's an example
+of how that class would look after migrating our previous example into it:
+
+```ruby
+class PillColorExperiment < ApplicationExperiment
+ control { 'control' }
+ variant(:red) { 'red' }
+ variant(:blue) { 'blue' }
+end
+```
+
+We can now simplify where we run our experiment to the following call, instead of
+providing the block we were initially providing, by explicitly calling `run`:
+
+```ruby
+experiment(:pill_color, actor: current_user).run
+```
+
+The _behaviors_ we defined in our experiment class represent the default
+implementation. You can still use the block syntax to override these _behaviors_
+however, so the following would also be valid:
+
+```ruby
+experiment(:pill_color, actor: current_user) do |e|
+ e.control { '<strong>control</strong>' }
+end
+```
+
+NOTE:
+When passing a block to the `experiment` method, it is implicitly invoked as
+if `run` has been called.
+
+#### Segmentation rules
+
+You can use runtime segmentation rules to, for instance, segment contexts into a specific
+variant. The `segment` method is a callback (like `before_action`) and so allows providing
+a block or method name.
+
+In this example, any user named `'Richard'` would always be assigned the _red_
+variant, and any account older than 2 weeks old would be assigned the _blue_ variant:
+
+```ruby
+class PillColorExperiment < ApplicationExperiment
+ # ...registered behaviors
+
+ segment(variant: :red) { context.actor.first_name == 'Richard' }
+ segment :old_account?, variant: :blue
+
+ private
+
+ def old_account?
+ context.actor.created_at < 2.weeks.ago
+ end
+end
+```
+
+When an experiment runs, the segmentation rules are executed in the order they're
+defined. The first segmentation rule to produce a truthy result assigns the variant.
+
+In our example, any user named `'Richard'`, regardless of account age, will always
+be assigned the _red_ variant. If you want the opposite logic, flip the order.
+
+NOTE:
+Keep in mind when defining segmentation rules: after a truthy result, the remaining
+segmentation rules are skipped to achieve optimal performance.
+
+#### Exclusion rules
+
+Exclusion rules are similar to segmentation rules, but are intended to determine
+if a context should even be considered as something we should include in the experiment
+and track events toward. Exclusion means we don't care about the events in relation
+to the given context.
+
+These examples exclude all users named `'Richard'`, *and* any account
+older than 2 weeks. Not only are they given the control behavior - which could
+be nothing - but no events are tracked in these cases either.
+
+```ruby
+class PillColorExperiment < ApplicationExperiment
+ # ...registered behaviors
+
+ exclude :old_account?, ->{ context.actor.first_name == 'Richard' }
+
+ private
+
+ def old_account?
+ context.actor.created_at < 2.weeks.ago
+ end
+end
+```
+
+You may also need to check exclusion in custom tracking logic by calling `should_track?`:
+
+```ruby
+class PillColorExperiment < ApplicationExperiment
+ # ...registered behaviors
+
+ def expensive_tracking_logic
+ return unless should_track?
+
+ track(:my_event, value: expensive_method_call)
+ end
+end
+```
+
+### Tracking events
+
+One of the most important aspects of experiments is gathering data and reporting on
+it. You can use the `track` method to track events across an experimental implementation.
+You can track events consistently to an experiment if you provide the same context between
+calls to your experiment. If you do not yet understand context, you should read
+about contexts now.
+
+We can assume we run the experiment in one or a few places, but
+track events potentially in many places. The tracking call remains the same, with
+the arguments you would normally use when
+[tracking events using snowplow](../snowplow/index.md). The easiest example
+of tracking an event in Ruby would be:
+
+```ruby
+experiment(:pill_color, actor: current_user).track(:clicked)
+```
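+
+The tracking call can also take the usual Snowplow-style arguments. For example (the
+label, property, and value here are illustrative):
+
+```ruby
+experiment(:pill_color, actor: current_user)
+  .track(:clicked, label: 'pill_color_cta', property: 'red', value: 1)
+```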
+
+When you run an experiment with any of the examples so far, an `:assignment` event
+is tracked automatically by default. All events that are tracked from an
+experiment have a special
+[experiment context](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
+added to the event. This can be used - typically by the data team - to create a connection
+between the events on a given experiment.
+
+If the current user hasn't encountered the experiment yet (that is, hasn't reached the
+place where the experiment is run), and we track an event for them, they are assigned a
+variant and see that variant if they encounter the experiment later. At that point, an
+`:assignment` event is tracked for them.
+
+NOTE:
+GitLab tries to be sensitive and respectful of our customers regarding tracking,
+so our experimentation library allows us to implement an experiment without ever tracking identifying
+IDs. It's not always possible, though, based on experiment reporting requirements.
+You may be asked from time to time to track a specific record ID in experiments.
+The approach is largely up to the PM and engineer creating the implementation.
+No recommendations are provided here at this time.
+
+## Experiments in the client layer
+
+Any experiment that's been run in the request lifecycle surfaces in `window.gl.experiments`,
+and matches [this schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3)
+so it can be used when resolving experimentation in the client layer.
+
+Given that we've defined a class for our experiment, and have defined the variants for it, we can publish that experiment in a couple ways.
+
+The first way is simply by running the experiment. Assuming the experiment has been run, it will surface in the client layer without having to do anything special.
+
+The second way doesn't run the experiment and is intended to be used if the experiment only needs to surface in the client layer. To accomplish this we can simply `.publish` the experiment. This won't run any logic, but does surface the experiment details in the client layer so they can be utilized there.
+
+An example might be to publish an experiment in a `before_action` in a controller. Assuming we've defined the `PillColorExperiment` class, like we have above, we can surface it to the client by publishing it instead of running it:
+
+```ruby
+before_action -> { experiment(:pill_color).publish }, only: [:show]
+```
+
+You can then see this surface in the JavaScript console:
+
+```javascript
+window.gl.experiments // => { pill_color: { excluded: false, experiment: "pill_color", key: "ca63ac02", variant: "candidate" } }
+```
+
+### Using experiments in Vue
+
+With the `gitlab-experiment` component, you can define slots that match the name of the
+variants pushed to `window.gl.experiments`.
+
+We can make use of the named slots in the Vue component that match the behaviors defined in the experiment class:
+
+```vue
+<script>
+import GitlabExperiment from '~/experimentation/components/gitlab_experiment.vue';
+
+export default {
+ components: { GitlabExperiment }
+}
+</script>
+
+<template>
+ <gitlab-experiment name="pill_color">
+ <template #control>
+ <button class="bg-default">Click default button</button>
+ </template>
+
+ <template #red>
+ <button class="bg-red">Click red button</button>
+ </template>
+
+ <template #blue>
+ <button class="bg-blue">Click blue button</button>
+ </template>
+ </gitlab-experiment>
+</template>
+```
+
+NOTE:
+When there is no experiment data in the `window.gl.experiments` object for the given experiment name, the `control` slot will be used, if it exists.
diff --git a/doc/development/experiment_guide/index.md b/doc/development/experiment_guide/index.md
index f7af1113b6e..b140cce34fc 100644
--- a/doc/development/experiment_guide/index.md
+++ b/doc/development/experiment_guide/index.md
@@ -6,47 +6,46 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Experiment Guide
-Experiments can be conducted by any GitLab team, most often the teams from the [Growth Sub-department](https://about.gitlab.com/handbook/engineering/development/growth/). Experiments are not tied to releases because they primarily target GitLab.com.
-
-Experiments are run as an A/B/n test, and are behind an [experiment feature flag](../feature_flags/#experiment-type) to turn the test on or off. Based on the data the experiment generates, the team decides if the experiment had a positive impact and should be made the new default, or rolled back.
-
-## Experiment rollout issue
-
-Each experiment should have an [experiment rollout](https://gitlab.com/groups/gitlab-org/-/boards/1352542) issue to track the experiment from rollout through to cleanup and removal.
-The rollout issue is similar to a feature flag rollout issue, and is also used to track the status of an experiment.
-When an experiment is deployed, the due date of the issue should be set (this depends on the experiment but can be up to a few weeks in the future).
-After the deadline, the issue needs to be resolved and either:
-
-- It was successful and the experiment becomes the new default.
-- It was not successful and all code related to the experiment is removed.
-
-In either case, an outcome of the experiment should be posted to the issue with the reasoning for the decision.
-
-## Code reviews
-
-Experiments' code quality can fail our standards for several reasons. These
-reasons can include not being added to the codebase for a long time, or because
-of fast iteration to retrieve data. However, having the experiment run (or not
-run) shouldn't impact GitLab availability. To avoid or identify issues,
-experiments are initially deployed to a small number of users. Regardless,
-experiments still need tests.
-
-Experiments must have corresponding [frontend or feature tests](../testing_guide/index.md) to ensure they
-exist in the application. These tests should help prevent the experiment code from
-being removed before the [experiment cleanup process](https://about.gitlab.com/handbook/engineering/development/growth/experimentation/#experiment-cleanup-issue) starts.
-
-If, as a reviewer or maintainer, you find code that would usually fail review
-but is acceptable for now, mention your concerns with a note that there's no
-need to change the code. The author can then add a comment to this piece of code
-and link to the issue that resolves the experiment. The author or reviewer can add a link to this concern in the
-experiment rollout issue under the `Experiment Successful Cleanup Concerns` section of the description.
-If the experiment is successful and becomes part of the product, any items that appear under this section will be addressed.
+Experiments can be conducted by any GitLab team, most often the teams from the
+[Growth Sub-department](https://about.gitlab.com/handbook/engineering/development/growth/).
+Experiments are not tied to releases because they primarily target GitLab.com.
+
+Experiments are run as an A/B/n test, and are behind an [experiment feature flag](../feature_flags/#experiment-type)
+to turn the test on or off. Based on the data the experiment generates, the team decides
+if the experiment had a positive impact and should be made the new default, or rolled back.
+
+Experiments in GitLab are tightly coupled with the concepts provided by
+[Feature flags in development of GitLab](../feature_flags/index.md). You're strongly encouraged
+to read and understand the [Feature flags in development of GitLab](../feature_flags/index.md)
+portion of the documentation before considering running experiments. Experiments introduce additional
+concepts which may seem confusing or advanced without understanding the underpinnings of how GitLab
+uses feature flags in development. For example, experiments can be run with multiple variants,
+which are sometimes referred to as A/B/n tests.
+
+We use the [`gitlab-experiment` gem](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment),
+sometimes referred to as GLEX, to run our experiments. The gem exists in a separate repository
+so it can be shared across any GitLab property that uses Ruby. You should feel comfortable reading
+the documentation on that project if you want to dig into more advanced topics or open issues. Be
+aware that the documentation there reflects what's in the main branch and may not be the same as
+the version being used within GitLab.
+
+## Glossary of terms
+
+To ensure a shared language, you should understand these fundamental terms we use
+when communicating about experiments (a short sketch after the list shows how they map to code):
+
+- `experiment`: Any deviation of code paths we want to run at some times, but not others.
+- `context`: A consistent experience we provide in an experiment.
+- `control`: The default, or "original" code path.
+- `candidate`: Defines an experiment with only one code path.
+- `variant(s)`: Defines an experiment with multiple code paths.
+- `behaviors`: Used to reference all possible code paths of an experiment, including the control.
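+
+For illustration, the following condensed sketch (not production code) shows how these terms
+map to code, reusing the hypothetical `PillColorExperiment` from the examples in this guide:
+
+```ruby
+class PillColorExperiment < ApplicationExperiment
+  # Everything registered below makes up the experiment's behaviors.
+  control { 'default-color pill' }  # the control: the default, "original" code path
+  variant(:red) { 'red pill' }      # a variant: one of multiple alternative code paths
+  variant(:blue) { 'blue pill' }    # another variant
+  # An experiment with only one alternative code path registers `candidate { ... }` instead.
+end
+
+# The keyword arguments passed here (the actor) are the context: they keep the
+# experience consistent for the same user across requests.
+experiment(:pill_color, actor: current_user).run
+```
+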
## Implementing an experiment
[`GLEX`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment) - or `Gitlab::Experiment`, the `gitlab-experiment` gem - is the preferred option for implementing an experiment in GitLab.
-For more information, see [Implementing an A/B/n experiment using GLEX](gitlab_experiment.md).
+For more information, see [Implementing an A/B/n experiment using GLEX](implementing_experiments.md).
This uses [experiment](../feature_flags/index.md#experiment-type) feature flags.
@@ -64,15 +63,3 @@ We recommend the following workflow:
1. **If the experiment is a success**, designers add the new icon or illustration to the Pajamas UI kit as part of the cleanup process.
Engineers can then add it to the [SVG library](https://gitlab-org.gitlab.io/gitlab-svgs/) and modify the implementation based on the
[Frontend Development Guidelines](../fe_guide/icons.md#usage-in-hamlrails-2).
-
-## Turn off all experiments
-
-When there is a case on GitLab.com (SaaS) that necessitates turning off all experiments, we have this control.
-
-You can toggle experiments on SaaS on and off using the `gitlab_experiment` [feature flag](../feature_flags).
-
-This can be done via chatops:
-
-- [disable](../feature_flags/controls.md#disabling-feature-flags): `/chatops run feature set gitlab_experiment false`
-- [enable](../feature_flags/controls.md#process): `/chatops run feature delete gitlab_experiment`
- - This allows the `default_enabled` [value of true in the yml](https://gitlab.com/gitlab-org/gitlab/-/blob/016430f6751b0c34abb24f74608c80a1a8268f20/config/feature_flags/ops/gitlab_experiment.yml#L8) to be honored.
diff --git a/doc/development/experiment_guide/testing_experiments.md b/doc/development/experiment_guide/testing_experiments.md
new file mode 100644
index 00000000000..08ff91a3deb
--- /dev/null
+++ b/doc/development/experiment_guide/testing_experiments.md
@@ -0,0 +1,150 @@
+---
+stage: Growth
+group: Activation
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Testing experiments
+
+## Testing experiments with RSpec
+
+When working with experiments, you'll probably want to use the built-in RSpec
+tooling. It is included automatically for files in `spec/experiments`, but
+for other files and specs you want to include it in, you can specify the `:experiment` type:
+
+```ruby
+it "tests experiments nicely", :experiment do
+end
+```
+
+### Stub helpers
+
+You can stub experiments using `stub_experiments`. Pass it a hash using experiment
+names as the keys, and the variants you want each to resolve to, as the values:
+
+```ruby
+# Ensures the experiments named `:example` & `:example2` are both "enabled" and
+# that each will resolve to the given variant (`:my_variant` and `:control`
+# respectively).
+stub_experiments(example: :my_variant, example2: :control)
+
+experiment(:example) do |e|
+ e.enabled? # => true
+ e.assigned.name # => 'my_variant'
+end
+
+experiment(:example2) do |e|
+ e.enabled? # => true
+ e.assigned.name # => 'control'
+end
+```
+
+### Exclusion, segmentation, and behavior matchers
+
+You can also test things like the registered behaviors, the exclusions, and
+segmentations using the matchers.
+
+```ruby
+class ExampleExperiment < ApplicationExperiment
+ control { }
+ candidate { '_candidate_' }
+
+ exclude { context.actor.first_name == 'Richard' }
+ segment(variant: :candidate) { context.actor.username == 'jejacks0n' }
+end
+
+excluded = double(username: 'rdiggitty', first_name: 'Richard')
+segmented = double(username: 'jejacks0n', first_name: 'Jeremy')
+
+# register_behavior matcher
+expect(experiment(:example)).to register_behavior(:control)
+expect(experiment(:example)).to register_behavior(:candidate).with('_candidate_')
+
+# exclude matcher
+expect(experiment(:example)).to exclude(actor: excluded)
+expect(experiment(:example)).not_to exclude(actor: segmented)
+
+# segment matcher
+expect(experiment(:example)).to segment(actor: segmented).into(:candidate)
+expect(experiment(:example)).not_to segment(actor: excluded)
+```
+
+### Tracking matcher
+
+Tracking events is a major aspect of experimentation. We try
+to provide a flexible way to ensure your tracking calls are covered.
+
+You can do this on the instance level or at an "any instance" level:
+
+```ruby
+subject = experiment(:example)
+
+expect(subject).to track(:my_event)
+
+subject.track(:my_event)
+```
+
+You can use the `on_next_instance` chain method to specify that the tracking call is expected
+on the next instance of the experiment. This helps if you're calling
+`experiment(:example).track` downstream:
+
+```ruby
+expect(experiment(:example)).to track(:my_event).on_next_instance
+
+experiment(:example).track(:my_event)
+```
+
+A full example of the methods you can chain onto the `track` matcher:
+
+```ruby
+expect(experiment(:example)).to track(:my_event, value: 1, property: '_property_')
+ .on_next_instance
+ .with_context(foo: :bar)
+ .for(:variant_name)
+
+experiment(:example, :variant_name, foo: :bar).track(:my_event, value: 1, property: '_property_')
+```
+
+## Test with Jest
+
+### Stub Helpers
+
+You can stub experiments using the `stubExperiments` helper defined in `spec/frontend/__helpers__/experimentation_helper.js`.
+
+```javascript
+import { stubExperiments } from 'helpers/experimentation_helper';
+import { getExperimentData } from '~/experimentation/utils';
+
+describe('when my_experiment is enabled', () => {
+ beforeEach(() => {
+ stubExperiments({ my_experiment: 'candidate' });
+ });
+
+ it('sets the correct data', () => {
+ expect(getExperimentData('my_experiment')).toEqual({ experiment: 'my_experiment', variant: 'candidate' });
+ });
+});
+```
+
+NOTE:
+This method of stubbing in Jest specs will not automatically un-stub itself at the end of the test. We merge our stubbed experiment in with all the other global data in `window.gl`. If you need to remove the stubbed experiments after your test or ensure a clean global object before your test, you'll need to manage the global object directly yourself:
+
+```javascript
+describe('tests that care about global state', () => {
+ const originalObjects = [];
+
+ beforeEach(() => {
+ // For backwards compatibility for now, we're using both window.gon & window.gl
+ originalObjects.push(window.gon, window.gl);
+ });
+
+ afterEach(() => {
+ [window.gon, window.gl] = originalObjects;
+ });
+
+ it('stubs experiment in fresh global state', () => {
+ stubExperiments({ my_experiment: 'candidate' });
+ // ...
+ });
+});
+```
diff --git a/doc/development/fe_guide/content_editor.md b/doc/development/fe_guide/content_editor.md
index 2e64f52651e..d4c29cb8a24 100644
--- a/doc/development/fe_guide/content_editor.md
+++ b/doc/development/fe_guide/content_editor.md
@@ -47,7 +47,7 @@ The Content Editor requires two properties:
- `renderMarkdown` is an asynchronous function that returns the response (String) of invoking the
[Markdown API](../../api/markdown.md).
-- `uploadsPath` is a URL that points to a [GitLab upload service](../uploads/implementation.md#upload-encodings)
+- `uploadsPath` is a URL that points to a [GitLab upload service](../uploads/index.md)
with `multipart/form-data` support.
See the [`WikiForm.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/pages/shared/wikis/components/wiki_form.vue#L207)
diff --git a/doc/development/fe_guide/design_anti_patterns.md b/doc/development/fe_guide/design_anti_patterns.md
index 76825d6ff18..b7238bb2813 100644
--- a/doc/development/fe_guide/design_anti_patterns.md
+++ b/doc/development/fe_guide/design_anti_patterns.md
@@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
Anti-patterns may seem like good approaches at first, but it has been shown that they bring more ills than benefits. These should
generally be avoided.
-Throughout the GitLab codebase, there may be historic uses of these anti-patterns. Please [use discretion](https://about.gitlab.com/handbook/engineering/principles/#balance-refactoring-and-velocity)
+Throughout the GitLab codebase, there may be historic uses of these anti-patterns. Please [use discretion](https://about.gitlab.com/handbook/engineering/development/principles/#balance-refactoring-and-velocity)
when figuring out whether or not to refactor, when touching code that uses one of these legacy patterns.
NOTE:
diff --git a/doc/development/fe_guide/development_process.md b/doc/development/fe_guide/development_process.md
index 9921b851344..b4893fd4ef9 100644
--- a/doc/development/fe_guide/development_process.md
+++ b/doc/development/fe_guide/development_process.md
@@ -103,7 +103,7 @@ With the purpose of being [respectful of others' time](https://about.gitlab.com/
- Before assigning to a maintainer, assign to a reviewer.
- If you assigned a merge request or pinged someone directly, be patient because we work in different timezones and asynchronously. Unless the merge request is urgent (like fixing a broken default branch), please don't DM or reassign the merge request before waiting for a 24-hour window.
- If you have a question regarding your merge request/issue, make it on the merge request/issue. When we DM each other, we no longer have a SSOT and [no one else is able to contribute](https://about.gitlab.com/handbook/values/#public-by-default).
-- When you have a big **Draft** merge request with many changes, you're advised to get the review started before adding/removing significant code. Make sure it is assigned well before the release cut-off, as the reviewer(s)/maintainer(s) would always prioritize reviewing finished MRs before the **Draft** ones.
+- When you have a big **Draft** merge request with many changes, you're advised to get the review started before adding/removing significant code. Make sure it is assigned well before the release cut-off, as the reviewers/maintainers would always prioritize reviewing finished MRs before the **Draft** ones.
- Make sure to remove the `Draft:` title before the last round of review.
### Share your work early
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index ddd99f3614d..5cfdaff0448 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -400,7 +400,7 @@ We are still learning the best practices for both **type policies** and **reacti
Take a moment to improve this guide or [leave a comment](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/100)
if you use it!
-In the example below we define a `@client` query and its `typedefs`:
+In the example below we define a `@client` query and its `typedefs`:
```javascript
// ./graphql/typedefs.graphql
@@ -1987,7 +1987,7 @@ To improve performance, sometimes we want to make initial GraphQL queries early.
}
```
-- Add startup call(s) with correct variables to the HAML file that serves as a view
+- Add startup calls with correct variables to the HAML file that serves as a view
for your application. To add GraphQL startup calls, we use
`add_page_startup_graphql_call` helper where the first parameter is a path to the
query, the second one is an object containing query variables. Path to the query is
diff --git a/doc/development/fe_guide/registry_architecture.md b/doc/development/fe_guide/registry_architecture.md
index 47a6dc40e19..56d67e094b7 100644
--- a/doc/development/fe_guide/registry_architecture.md
+++ b/doc/development/fe_guide/registry_architecture.md
@@ -56,7 +56,7 @@ in the container components when needed. This makes it easier to:
[`delete_package.vue`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/packages_and_registries/package_registry/components/functional/delete_package.vue)).
- Leverage [startup for GraphQL calls](graphql.md#making-initial-queries-early-with-graphql-startup-calls).
-## Shared compoenents library
+## Shared components library
Inside `vue_shared/components/registry` and `packages_and_registries/shared`, there's a set of
shared components that you can use to implement registry functionality. These components build the
diff --git a/doc/development/fe_guide/style/html.md b/doc/development/fe_guide/style/html.md
index 72492d56ee4..90ff88bc975 100644
--- a/doc/development/fe_guide/style/html.md
+++ b/doc/development/fe_guide/style/html.md
@@ -58,11 +58,9 @@ Button tags requires a `type` attribute according to the [W3C HTML specification
### Blank target
-Avoid forcing links to open in a new window as this reduces the control the user has over the link.
-However, it might be a good idea to use a blank target when replacing the current page with
-the link makes the user lose content or progress.
+Arbitrarily opening links in a new tab is not recommended, so refer to the [Pajamas guidelines on links](https://design.gitlab.com/product-foundations/interaction/#links) when considering adding `target="_blank"` to links.
-Use `rel="noopener noreferrer"` whenever your links open in a new window, that is, `target="_blank"`. This prevents a security vulnerability [documented by JitBit](https://www.jitbit.com/alexblog/256-targetblank---the-most-underestimated-vulnerability-ever/).
+When using `target="_blank"` with `a` tags, you must also add the `rel="noopener noreferrer"` attribute. This prevents a security vulnerability [documented by JitBit](https://www.jitbit.com/alexblog/256-targetblank---the-most-underestimated-vulnerability-ever/).
When using `gl-link`, using `target="_blank"` is sufficient as it automatically adds `rel="noopener noreferrer"` to the link.
diff --git a/doc/development/fe_guide/tooling.md b/doc/development/fe_guide/tooling.md
index 1ab97d8a1f5..1c32647eefd 100644
--- a/doc/development/fe_guide/tooling.md
+++ b/doc/development/fe_guide/tooling.md
@@ -155,6 +155,13 @@ $ grep "eslint-disable.*import/no-deprecated" -r .
./app/assets/javascripts/issuable_form.js: // eslint-disable-next-line import/no-deprecated
```
+### GraphQL schema and operations validation
+
+We use [`@graphql-eslint/eslint-plugin`](https://www.npmjs.com/package/@graphql-eslint/eslint-plugin)
+to lint GraphQL schema and operations. This plugin requires the entire schema to function properly.
+It is thus recommended to generate an up-to-date dump of the schema when running ESLint locally.
+You can do this by running the `./scripts/dump_graphql_schema` script.
+
## Formatting with Prettier
> Support for `.graphql` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227280) in GitLab 13.2.
diff --git a/doc/development/fe_guide/vue3_migration.md b/doc/development/fe_guide/vue3_migration.md
index 8c8bb36d962..068b0c5b475 100644
--- a/doc/development/fe_guide/vue3_migration.md
+++ b/doc/development/fe_guide/vue3_migration.md
@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
The migration from Vue 2 to 3 is tracked in epic [&6252](https://gitlab.com/groups/gitlab-org/-/epics/6252).
-To ease migration to Vue 3.x, we have added [eslint rules](https://gitlab.com/gitlab-org/frontend/eslint-plugin/-/merge_requests/50)
+To ease migration to Vue 3.x, we have added [ESLint rules](https://gitlab.com/gitlab-org/frontend/eslint-plugin/-/merge_requests/50)
that prevent us from using the following deprecated features in the codebase.
## Vue filters
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index f8f03773c12..68c14c1b0c9 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -262,7 +262,7 @@ To disable a feature flag that has been globally enabled you can run:
To disable a feature flag that has been enabled for a specific project you can run:
```shell
-/chatops run feature set --group=gitlab-org some_feature false
+/chatops run feature set --project=gitlab-org/gitlab some_feature false
```
You cannot selectively disable feature flags for a specific project/group/user without applying a [specific method of implementing](index.md#selectively-disable-by-actor) the feature flags.
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 4b417b26381..54158de6893 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -7,7 +7,7 @@ info: "See the Technical Writers assigned to Development Guidelines: https://abo
# Feature flags in the development of GitLab
-**NOTE**:
+NOTE:
The documentation below covers feature flags used by GitLab to deploy its own features, which **is not** the same
as the [feature flags offered as part of the product](../../operations/feature_flags.md).
@@ -154,7 +154,6 @@ This process is meant to ensure consistent feature flag usage in the codebase. A
- Be known. Only use feature flags that are explicitly defined.
- Not be defined twice. They have to be defined either in FOSS or EE, but not both.
- Use a valid and consistent `type:` across all invocations.
-- Use the same `default_enabled:` across all invocations.
- Have an owner.
All feature flags known to GitLab are self-documented in YAML files stored in:
@@ -168,7 +167,7 @@ Each feature flag is defined in a separate YAML file consisting of a number of f
|---------------------|----------|----------------------------------------------------------------|
| `name` | yes | Name of the feature flag. |
| `type` | yes | Type of feature flag. |
-| `default_enabled` | yes | The default state of the feature flag that is strictly validated, with `default_enabled:` passed as an argument. |
+| `default_enabled` | yes | The default state of the feature flag. |
| `introduced_by_url` | no | The URL to the merge request that introduced the feature flag. |
| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
| `milestone` | no | Milestone in which the feature was added. |
@@ -256,42 +255,10 @@ if Feature.disabled?(:my_feature_flag, project)
end
```
-In rare cases you may want to make a feature enabled by default. If so, explain the reasoning
-in the merge request. Use `default_enabled: true` when checking the feature flag state:
-
-```ruby
-if Feature.enabled?(:feature_flag, project, default_enabled: true)
- # execute code if feature flag is enabled
-else
- # execute code if feature flag is disabled
-end
-
-if Feature.disabled?(:my_feature_flag, project, default_enabled: true)
- # execute code if feature flag is disabled
-end
-```
-
-If not specified, `default_enabled` is `false`.
-
-To force reading the `default_enabled` value from the relative YAML definition file, use
-`default_enabled: :yaml`:
-
-```ruby
-if Feature.enabled?(:feature_flag, project, default_enabled: :yaml)
- # execute code if feature flag is enabled
-end
-```
+The default behavior for feature flags that are not configured at runtime is controlled
+by `default_enabled:` in the YAML definition.
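+
+For example, no default is passed in the check below; if the flag is not configured at
+runtime, the result comes from the `default_enabled:` value in its YAML definition file:
+
+```ruby
+# No `default_enabled:` argument: the YAML definition is the single source of
+# truth for the flag's default state.
+if Feature.enabled?(:my_feature_flag, project)
+  # execute code if feature flag is enabled
+end
+```
+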
-```ruby
-if Feature.disabled?(:feature_flag, project, default_enabled: :yaml)
- # execute code if feature flag is disabled
-end
-```
-
-This allows to use the same feature flag check across various parts of the codebase and
-maintain the status of `default_enabled` in the YAML definition file which is the SSOT.
-
-If `default_enabled: :yaml` is used, a YAML definition is expected or an error is raised
+If a feature flag does not have a YAML definition, an error is raised
in development or test environment, while returning `false` on production.
If not specified, the default feature flag type for `Feature.enabled?` and `Feature.disabled?`
@@ -333,6 +300,17 @@ class MyClass
end
```
+#### Recursion detection
+
+When there are many feature flags, it is not always obvious where they are
+called. Avoid cycles where the evaluation of one feature flag requires the
+evaluation of other feature flags. If such a cycle occurs, it is broken
+and the default value is returned.
+
+For this recursion detection to work correctly, always access feature values through
+`Feature::enabled?`, and avoid the low-level use of `Feature::get`. When a cycle is
+detected and broken, we track a `Feature::RecursionError` exception in the error tracker.
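+
+A purely hypothetical sketch of how such a cycle can appear: the actor used in a flag check
+computes its `flipper_id` behind a flag check of its own, so evaluating the flag re-enters
+flag evaluation for the same actor:
+
+```ruby
+# Hypothetical example. Evaluating `:use_new_gate_format` for a Project actor calls
+# `flipper_id`, which checks `:use_new_gate_format` again, producing a cycle.
+# The detection breaks the cycle and returns the default value.
+class Project < ApplicationRecord
+  def flipper_id
+    if Feature.enabled?(:use_new_gate_format, self)
+      "ProjectNext:#{id}"
+    else
+      "Project:#{id}"
+    end
+  end
+end
+```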
+
### Frontend
When using a feature flag for UI elements, make sure to _also_ use a feature
@@ -370,16 +348,6 @@ so checking for `gon.features.vim_bindings` would not work.
See the [Vue guide](../fe_guide/vue.md#accessing-feature-flags) for details about
how to access feature flags in a Vue component.
-In rare cases you may want to make a feature enabled by default. If so, explain the reasoning
-in the merge request. Use `default_enabled: true` when checking the feature flag state:
-
-```ruby
-before_action do
- # Prefer to scope it per project or user e.g.
- push_frontend_feature_flag(:vim_bindings, project, default_enabled: true)
-end
-```
-
If not specified, the default feature flag type for `push_frontend_feature_flag`
is `type: development`. For all other feature flag types, you must specify the `type:`:
@@ -440,6 +408,22 @@ Feature.enabled?(:a_feature, project) && Feature.disabled?(:a_feature_override,
/chatops run feature set --project=gitlab-org/gitlab a_feature_override true
```
+#### Use actors for verifying in production
+
+WARNING:
+Using production as a testing environment is not recommended. Use our testing
+environments for testing features that are not production-ready.
+
+While the staging environment provides a way to test features in an environment
+that resembles production, it doesn't allow you to compare before-and-after
+performance metrics specific to the production environment. It can be useful to have a
+project in production with your development feature flag enabled, so that tools
+like Sitespeed can report on the performance of the new code behind the feature flag.
+
+This approach is even more useful if you're already tracking the old codebase in
+Sitespeed, enabling you to compare performance accurately before and after the
+feature flag's rollout.
+
### Enable additional objects as actors
To use feature gates based on actors, the model needs to respond to
@@ -673,9 +657,7 @@ You can control whether the `Flipper::Adapters::Memory` or `ActiveRecord` mode i
#### `stub_feature_flags: true` (default and preferred)
In this mode Flipper is configured to use `Flipper::Adapters::Memory` and mark all feature
-flags to be on-by-default and persisted on a first use. This overwrites the `default_enabled:`
-of `Feature.enabled?` and `Feature.disabled?` returning always `true` unless feature flag
-is persisted.
+flags to be on-by-default and persisted on a first use.
Make sure behavior under feature flag doesn't go untested in some non-specific contexts.
diff --git a/doc/development/features_inside_dot_gitlab.md b/doc/development/features_inside_dot_gitlab.md
index 7b11b541b5a..ca7dbd6adde 100644
--- a/doc/development/features_inside_dot_gitlab.md
+++ b/doc/development/features_inside_dot_gitlab.md
@@ -16,7 +16,6 @@ When implementing new features, please refer to these existing features to avoid
- [CODEOWNERS](../user/project/code_owners.md#set-up-code-owners): `.gitlab/CODEOWNERS`.
- [Route Maps](../ci/review_apps/#route-maps): `.gitlab/route-map.yml`.
- [Customize Auto DevOps Helm Values](../topics/autodevops/customize.md#customize-values-for-helm-chart): `.gitlab/auto-deploy-values.yaml`.
-- [GitLab managed apps CI/CD](../user/clusters/applications.md#prerequisites): `.gitlab/managed-apps/config.yaml`.
- [Insights](../user/project/insights/index.md#configure-your-insights): `.gitlab/insights.yml`.
- [Service Desk Templates](../user/project/service_desk.md#using-customized-email-templates): `.gitlab/service_desk_templates/`.
- [Web IDE](../user/project/web_ide/#web-ide-configuration-file): `.gitlab/.gitlab-webide.yml`.
diff --git a/doc/development/fips_compliance.md b/doc/development/fips_compliance.md
index 8fe5af56f9d..d4274c6275b 100644
--- a/doc/development/fips_compliance.md
+++ b/doc/development/fips_compliance.md
@@ -97,3 +97,414 @@ virtual machine:
```shell
fips-mode-setup --disable
```
+
+#### Detect FIPS enablement in code
+
+You can query `GitLab::FIPS` in Ruby code to determine if the instance is FIPS-enabled:
+
+```ruby
+def default_min_key_size(name)
+ if Gitlab::FIPS.enabled?
+ Gitlab::SSHPublicKey.supported_sizes(name).select(&:positive?).min || -1
+ else
+ 0
+ end
+end
+```
+
+## Nightly Omnibus FIPS builds
+
+The Distribution team has created [nightly FIPS Omnibus builds](https://packages.gitlab.com/gitlab/nightly-fips-builds). These
+GitLab builds are compiled to use the system OpenSSL instead of the Omnibus-embedded version of OpenSSL.
+
+See [the section on how FIPS builds are created](#how-fips-builds-are-created).
+
+## Runner
+
+See the [documentation on installing a FIPS-compliant GitLab Runner](https://docs.gitlab.com/runner/install/#fips-compliant-gitlab-runner).
+
+## Set up a FIPS-enabled cluster
+
+You can use the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit) to spin
+up a FIPS-enabled cluster for development and testing. These instructions use Amazon Web Services (AWS)
+because that is the first target environment, but you can adapt them for other providers.
+
+### Set up your environment
+
+To get started, your AWS account must subscribe to a FIPS-enabled Amazon
+Machine Image (AMI) in the [AWS Marketplace console](https://aws.amazon.com/premiumsupport/knowledge-center/launch-ec2-marketplace-subscription/).
+
+This example assumes that the `Ubuntu Pro 20.04 FIPS LTS` AMI by
+`Canonical Group Limited` has been added to your account. This operating
+system is used for virtual machines running in Amazon EC2.
+
+### Omnibus
+
+The simplest way to get a FIPS-enabled GitLab cluster is to use an Omnibus reference architecture.
+See the [GET Quick Start Guide](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/docs/environment_quick_start_guide.md)
+for more details. The following instructions build on the Quick Start and are also necessary for [Cloud Native Hybrid](#cloud-native-hybrid) installations.
+
+#### Terraform: Use a FIPS AMI
+
+1. Follow the guide to set up Terraform and Ansible.
+1. After [step 2b](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/docs/environment_quick_start_guide.md#2b-setup-config),
+ create a `data.tf` in your environment (for example, `gitlab-environment-toolkit/terraform/environments/gitlab-10k/inventory/data.tf`):
+
+ ```tf
+ data "aws_ami" "ubuntu_20_04_fips" {
+ count = 1
+
+ most_recent = true
+
+ filter {
+ name = "name"
+ values = ["ubuntu-pro-fips-server/images/hvm-ssd/ubuntu-focal-20.04-amd64-pro-fips-server-*"]
+ }
+
+ filter {
+ name = "virtualization-type"
+ values = ["hvm"]
+ }
+
+ owners = ["aws-marketplace"]
+ }
+ ```
+
+1. Add the custom `ami_id` to use this AMI in `environment.tf`. For
+ example, in `gitlab-environment-toolkit/terraform/environments/gitlab-10k/inventory/environment.tf`:
+
+ ```tf
+ module "gitlab_ref_arch_aws" {
+ source = "../../modules/gitlab_ref_arch_aws"
+
+ prefix = var.prefix
+ ami_id = data.aws_ami.ubuntu_20_04_fips[0].id
+ ...
+ ```
+
+NOTE:
+GET does not allow the AMI to change on EC2 instances after it has
+been deployed via `terraform apply`. Since an AMI change would tear down
+an instance, this would result in data loss: not only would disks be
+destroyed, but also GitLab secrets would be lost. There is a [Terraform lifecycle rule](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/blob/2aaeaff8ac8067f23cd7b6bb5bf131061649089d/terraform/modules/gitlab_aws_instance/main.tf#L40)
+to ignore AMI changes.
+
+#### Ansible: Specify the FIPS Omnibus builds
+
+The standard Omnibus GitLab releases build their own OpenSSL library,
+which is not FIPS-validated. However, we have nightly builds that create
+Omnibus packages that link against the operating system's OpenSSL library. To
+use this package, update the `gitlab_repo_script_url` field in the
+Ansible `vars.yml`. For example, you might modify
+`gitlab-environment-toolkit/ansible/environments/gitlab-10k/inventory/vars.yml`
+in this way:
+
+```yaml
+all:
+ vars:
+ ...
+ gitlab_repo_script_url: "https://packages.gitlab.com/install/repositories/gitlab/nightly-fips-builds/script.deb.sh"
+```
+
+### Cloud Native Hybrid
+
+A Cloud Native Hybrid install uses both Omnibus and Cloud Native GitLab
+(CNG) images. The previous instructions cover the Omnibus part, but two
+additional steps are needed to enable FIPS in CNG:
+
+1. Use a custom Amazon Elastic Kubernetes Service (EKS) AMI.
+1. Use GitLab containers built with RedHat's Universal Base Image (UBI).
+
+#### Build a custom EKS AMI
+
+Because Amazon does not yet publish a FIPS-enabled AMI, you have to
+build one yourself with Packer.
+
+Amazon publishes the following Git repositories with information about custom EKS AMIs:
+
+- [Amazon EKS AMI Build Specification](https://github.com/awslabs/amazon-eks-ami)
+- [Sample EKS custom AMIs](https://github.com/aws-samples/amazon-eks-custom-amis/)
+
+This [GitHub pull request](https://github.com/awslabs/amazon-eks-ami/pull/898) makes
+it possible to create an Amazon Linux 2 EKS AMI with FIPS enabled for Kubernetes v1.21.
+To build an image:
+
+1. [Install Packer](https://learn.hashicorp.com/tutorials/packer/get-started-install-cli).
+1. Run the following:
+
+ ```shell
+ git clone https://github.com/awslabs/amazon-eks-ami
+ cd amazon-eks-ami
+ git fetch origin pull/898/head:fips-ami
+ git checkout fips-ami
+ AWS_DEFAULT_REGION=us-east-1 make 1.21-fips # Be sure to set the region accordingly
+ ```
+
+If you are using a different version of Kubernetes, adjust the `make`
+command and `Makefile` accordingly.
+
+When the AMI build is done, a new AMI should be created with a message
+such as the following:
+
+```plaintext
+==> Builds finished. The artifacts of successful builds are:
+--> amazon-ebs: AMIs were created:
+us-west-2: ami-0a25e760cd00b027e
+```
+
+In this example, the AMI ID is `ami-0a25e760cd00b027e`, but your value may
+be different.
+
+Building a RHEL-based system with FIPS enabled should be possible, but
+there is [an outstanding issue preventing the Packer build from completing](https://github.com/aws-samples/amazon-eks-custom-amis/issues/51).
+
+#### Terraform: Use a custom EKS AMI
+
+Now you can set the custom EKS AMI.
+
+1. In `environment.tf`, add `eks_ami_id = var.eks_ami_id` so you can pass this variable to the
+ AWS reference architecture module. For example, in
+ `gitlab-environment-toolkit/terraform/environments/gitlab-10k/inventory/environment.tf`:
+
+ ```tf
+ module "gitlab_ref_arch_aws" {
+ source = "../../modules/gitlab_ref_arch_aws"
+
+ prefix = var.prefix
+ ami_id = data.aws_ami.ubuntu_20_04_fips[0].id
+ eks_ami_id = var.eks_ami_id
+ ....
+ ```
+
+1. In `variables.tf`, define a `eks_ami_id` with the AMI ID in the
+ previous step:
+
+ ```tf
+ variable "eks_ami_id" {
+ default = "ami-0a25e760cd00b027e"
+ }
+ ```
+
+#### Ansible: Use UBI images
+
+CNG uses a Helm Chart to manage which container images to deploy. By default, GET
+deploys the latest released versions that use Debian-based containers.
+
+To switch to UBI-based containers, edit the Ansible `vars.yml` to use custom
+Charts variables:
+
+```yaml
+all:
+ vars:
+ ...
+ gitlab_charts_custom_config_file: '/path/to/gitlab-environment-toolkit/ansible/environments/gitlab-10k/inventory/charts.yml'
+```
+
+Now create `charts.yml` in the location specified above and specify tags with a `-ubi8` suffix. For example:
+
+```yaml
+global:
+ image:
+ pullPolicy: Always
+ certificates:
+ image:
+ tag: master-ubi8
+
+gitlab:
+ gitaly:
+ image:
+ tag: master-ubi8
+ gitlab-exporter:
+ image:
+ tag: master-ubi8
+ gitlab-shell:
+ image:
+ tag: main-ubi8 # The default branch is main, not master
+ gitlab-mailroom:
+ image:
+ tag: master-ubi8
+ migrations:
+ image:
+ tag: master-ubi8
+ sidekiq:
+ image:
+ tag: master-ubi8
+ toolbox:
+ image:
+ tag: master-ubi8
+ webservice:
+ image:
+ tag: master-ubi8
+ workhorse:
+ tag: master-ubi8
+
+nginx-ingress:
+ controller:
+ image:
+ repository: registry.gitlab.com/stanhu/gitlab-test-images/k8s-staging-ingress-nginx/controller
+ tag: v1.2.0-beta.1
+ pullPolicy: Always
+ digest: sha256:ace38833689ad34db4a46bc1e099242696eb800def88f02200a8615530734116
+```
+
+The above example shows a FIPS-enabled [`nginx-ingress`](https://github.com/kubernetes/ingress-nginx) image.
+See [this issue](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3153#note_917782207) for more details on
+how to build NGINX and the Ingress Controller.
+
+You can also use release tags, but the versioning is tricky because each
+component may use its own versioning scheme. For example, for GitLab v14.10:
+
+```yaml
+global:
+ certificates:
+ image:
+ tag: 20191127-r2-ubi8
+
+gitlab:
+ gitaly:
+ image:
+ tag: v14.10.0-ubi8
+ gitlab-exporter:
+ image:
+ tag: 11.14.0-ubi8
+ gitlab-shell:
+ image:
+ tag: v13.25.1-ubi8
+ gitlab-mailroom:
+ image:
+ tag: v14.10.0-ubi8
+ migrations:
+ image:
+ tag: v14.10.0-ubi8
+ sidekiq:
+ image:
+ tag: v14.10.0-ubi8
+ toolbox:
+ image:
+ tag: v14.10.0-ubi8
+ webservice:
+ image:
+ tag: v14.10.0-ubi8
+ workhorse:
+ tag: v14.10.0-ubi8
+```
+
+## Verify FIPS
+
+The following sections describe ways you can verify if FIPS is enabled.
+
+### Kernel
+
+```shell
+$ cat /proc/sys/crypto/fips_enabled
+1
+```
+
+### Ruby (Omnibus images)
+
+```ruby
+$ /opt/gitlab/embedded/bin/irb
+irb(main):001:0> require 'openssl'; OpenSSL.fips_mode
+=> true
+```
+
+### Ruby (CNG images)
+
+```ruby
+$ irb
+irb(main):001:0> require 'openssl'; OpenSSL.fips_mode
+=> true
+```
+
+### Go
+
+Google maintains a [`dev.boringcrypto` branch](https://github.com/golang/go/tree/dev.boringcrypto) in the Golang compiler
+that makes it possible to statically link BoringSSL, a FIPS-validated module forked from OpenSSL.
+However, BoringSSL is not intended for public use.
+
+We use [`golang-fips`](https://github.com/golang-fips/go), [a fork of the `dev.boringcrypto` branch](https://github.com/golang/go/blob/2fb6bf8a4a51f92f98c2ae127eff2b7ac392c08f/README.boringcrypto.md) to build Go programs that
+[dynamically link OpenSSL via `dlopen`](https://github.com/golang-fips/go/blob/go1.18.1-1-openssl-fips/src/crypto/internal/boring/boring.go#L47-L65). This has several advantages:
+
+- Using a FIPS-validated, system OpenSSL is straightforward.
+- This is the source code used by [Red Hat's go-toolset package](https://gitlab.com/redhat/centos-stream/rpms/golang#sources).
+- Unlike [go-toolset](https://developers.redhat.com/blog/2019/06/24/go-and-fips-140-2-on-red-hat-enterprise-linux#), this fork appears to keep up with the latest Go releases.
+
+However, [cgo](https://pkg.go.dev/cmd/cgo) must be enabled via `CGO_ENABLED=1` for this to work. There
+is a performance hit when calling into C code.
+
+Projects that are compiled with `golang-fips` on Linux x86 are automatically
+built with the crypto routines that use OpenSSL. While the `boringcrypto`
+build tag is automatically present, no extra build tags are actually
+needed. There are [specific build tags](https://github.com/golang-fips/go/blob/go1.18.1-1-openssl-fips/src/crypto/internal/boring/boring.go#L6)
+that disable these crypto hooks.
+
+We can [check whether a given binary is using OpenSSL](https://go.googlesource.com/go/+/dev.boringcrypto/misc/boring/#caveat) via `go tool nm`
+and look for symbols named `Cfunc__goboringcrypto`. For example:
+
+```plaintext
+$ go tool nm nginx-ingress-controller | grep Cfunc__goboringcrypto | tail
+ 2a0b650 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA384_Final
+ 2a0b658 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA384_Init
+ 2a0b660 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA384_Update
+ 2a0b668 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA512_Final
+ 2a0b670 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA512_Init
+ 2a0b678 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_SHA512_Update
+ 2a0b680 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_internal_ECDSA_sign
+ 2a0b688 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_internal_ECDSA_verify
+ 2a0b690 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_internal_ERR_error_string_n
+ 2a0b698 D crypto/internal/boring._cgo_71ae3cd1ca33_Cfunc__goboringcrypto_internal_ERR_get_error
+```
+
+In addition, LabKit contains routines to [check whether FIPS is enabled](https://gitlab.com/gitlab-org/labkit/-/tree/master/fips).
+
+## How FIPS builds are created
+
+Many GitLab projects (for example: Gitaly, GitLab Pages) have
+standardized on using `FIPS_MODE=1 make` to build FIPS binaries locally.
+
+### Omnibus
+
+The Omnibus FIPS builds are triggered with the `USE_SYSTEM_SSL`
+environment variable set to `true`. When this environment variable is
+set, the Omnibus recipes' dependencies, such as `curl`, NGINX, and libgit2,
+link against the system OpenSSL. OpenSSL is **not** included in
+the Omnibus build.
+
+The Omnibus builds are created using container images [that use the `golang-fips` compiler](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/blob/master/docker/snippets/go_fips). For
+example, [this job](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/jobs/2363742108) created
+the `registry.gitlab.com/gitlab-org/gitlab-omnibus-builder/centos_8_fips:3.3.1` image used to
+build packages for RHEL 8.
+
+#### Add a new FIPS build for another Linux distribution
+
+First, you need to make sure there is an Omnibus builder image for the
+desired Linux distribution. The images used to build Omnibus packages are
+created with [Omnibus Builder images](https://gitlab.com/gitlab-org/gitlab-omnibus-builder).
+
+Review [this merge request](https://gitlab.com/gitlab-org/gitlab-omnibus-builder/-/merge_requests/218). A
+new image can be added by:
+
+1. Adding CI jobs with the `_fips` suffix (for example: `ubuntu_18.04_fips`).
+1. Making sure the `Dockerfile` uses `Snippets.new(fips: fips).populate` instead of `Snippets.new.populate`.
+
+After this image has been tagged, add a new [CI job to Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/911fbaccc08398dfc4779be003ea18014b3e30e9/gitlab-ci-config/dev-gitlab-org.yml#L594-602).
+
+### Cloud Native GitLab (CNG)
+
+The Cloud Native GitLab CI pipeline generates images using several base images:
+
+- Debian
+- [Red Hat's Universal Base Image (UBI)](https://developers.redhat.com/products/rhel/ubi)
+
+UBI images ship with the same OpenSSL package as those used by
+RHEL. This makes it possible to build FIPS-compliant binaries without
+needing RHEL. Note that RHEL 8.2 ships a [FIPS-validated OpenSSL](https://access.redhat.com/articles/2918071), but 8.5 is in
+review for FIPS validation.
+
+[This merge request](https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/981)
+introduces a FIPS pipeline for CNG images. Images tagged for FIPS have the `-fips` suffix. For example,
+the `webservice` container has the following tags:
+
+- `master`
+- `master-ubi8`
+- `master-fips`
diff --git a/doc/development/foreign_keys.md b/doc/development/foreign_keys.md
index db8367fe5f5..c20c70623ae 100644
--- a/doc/development/foreign_keys.md
+++ b/doc/development/foreign_keys.md
@@ -123,3 +123,7 @@ class UserConfig < ActiveRecord::Base
belongs_to :user
end
```
+
+Using a foreign key as primary key saves space but can make
+[batch counting](service_ping/implement.md#batch-counters) in [Service Ping](service_ping/index.md) less efficient.
+Consider using a regular `id` column if the table will be relevant for Service Ping.
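+
+For illustration only (the table and column names are hypothetical, and real GitLab
+migrations must follow the GitLab migration style guide), keeping the regular `id`
+primary key with a separate, indexed foreign key can look like this:
+
+```ruby
+# Plain Rails migration shown for illustration only.
+class CreateUserConfigs < ActiveRecord::Migration[6.1]
+  def change
+    create_table :user_configs do |t|  # keeps the default `id` primary key
+      t.references :user,
+                   null: false,
+                   index: { unique: true },  # one config row per user
+                   foreign_key: { on_delete: :cascade }
+      t.timestamps
+    end
+  end
+end
+```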
diff --git a/doc/development/geo.md b/doc/development/geo.md
index f37901754aa..f62b2de30db 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -97,7 +97,7 @@ projects that need updating. Those projects can be:
timestamp that is more recent than the `last_repository_successful_sync_at`
timestamp in the `Geo::ProjectRegistry` model.
- Manual: The administrator can manually flag a repository to resync in the
- [Geo admin panel](../user/admin_area/geo_nodes.md).
+ [Geo Admin Area](../user/admin_area/geo_nodes.md).
When we fail to fetch a repository on the secondary `RETRIES_BEFORE_REDOWNLOAD`
times, Geo does a so-called _re-download_. It will do a clean clone
@@ -118,17 +118,6 @@ CI Job Artifacts and LFS objects are synced in a similar way as uploads,
but they are tracked by `Geo::JobArtifactRegistry`, and `Geo::LfsObjectRegistry`
models respectively.
-#### File Download Dispatch worker
-
-Also similar to the [Repository Sync worker](#repository-sync-worker),
-there is a `Geo::FileDownloadDispatchWorker` class that is run
-periodically to sync all uploads that aren't synced to the Geo
-**secondary** site yet.
-
-Files are copied via HTTP(s) and initiated via the
-`/api/v4/geo/transfers/:type/:id` endpoint,
-for example, `/api/v4/geo/transfers/lfs/123`.
-
## Authentication
To authenticate file transfers, each `GeoNode` record has two fields:
@@ -212,7 +201,7 @@ rails g geo_migration [args] [options]
To migrate the tracking database, run:
```shell
-bundle exec rake geo:db:migrate
+bundle exec rake db:migrate:geo
```
## Finders
@@ -259,7 +248,7 @@ basically hashes all Git refs together and stores that hash in the
The **secondary** site does the same to calculate the hash of its
clone, and compares the hash with the value the **primary** site
calculated. If there is a mismatch, Geo will mark this as a mismatch
-and the administrator can see this in the [Geo admin panel](../user/admin_area/geo_nodes.md).
+and the administrator can see this in the [Geo Admin Area](../user/admin_area/geo_nodes.md).
## Glossary
diff --git a/doc/development/geo/framework.md b/doc/development/geo/framework.md
index 778387986d8..055c2cd4ea8 100644
--- a/doc/development/geo/framework.md
+++ b/doc/development/geo/framework.md
@@ -93,12 +93,6 @@ module Geo
def self.model
::Packages::PackageFile
end
-
- # The feature flag follows the format `geo_#{replicable_name}_replication`,
- # so here it would be `geo_package_file_replication`
- def self.replication_enabled_by_default?
- false
- end
end
end
```
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 275e9421983..0743a03ddac 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -63,8 +63,7 @@ in
This should make it easier to contribute for developers who are less
comfortable writing Go code.
-There is documentation for this approach in [the Gitaly
-repository](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/ruby_endpoint.md).
+For more information, see the [Beginner's guide to Gitaly contributions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/beginners_guide.md).
## Gitaly-Related Test Failures
@@ -372,3 +371,20 @@ the integration by using GDK:
```shell
curl --silent "http://localhost:9236/metrics" | grep go_find_all_tags
```
+
+## Using Praefect in test
+
+By default, Praefect in tests uses an in-memory election strategy. This strategy
+is deprecated and no longer used in production. It is kept mainly for
+unit-testing purposes.
+
+A more modern election strategy requires a connection with a PostgreSQL
+database. This behavior is disabled by default when running tests, but you can
+enable it by setting `GITALY_PRAEFECT_WITH_DB=1` in your environment.
+
+This requires that PostgreSQL is running and that the database has been created.
+When you are using GDK, you can set it up with:
+
+1. Start the database: `gdk start db`
+1. Load the environment from GDK: `eval $(cd ../gitaly && gdk env)`
+1. Create the database: `createdb --encoding=UTF8 --locale=C --echo praefect_test`
diff --git a/doc/development/gitlab_flavored_markdown/index.md b/doc/development/gitlab_flavored_markdown/index.md
index 682d8011cd8..7f7781cbc62 100644
--- a/doc/development/gitlab_flavored_markdown/index.md
+++ b/doc/development/gitlab_flavored_markdown/index.md
@@ -4,13 +4,13 @@ group: Editor
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Markdown developer documentation **(FREE)**
+# GitLab Flavored Markdown (GLFM) developer documentation **(FREE)**
-This page contains the MVC for the developer documentation for GitLab Flavored Markdown.
+This page contains the MVC for the developer documentation for GitLab Flavored Markdown (GLFM).
For the user documentation about Markdown in GitLab, refer to
[GitLab Flavored Markdown](../../user/markdown.md).
-## GitLab Flavored Markdown specification guide
+## GitLab Flavored Markdown (GLFM) specification guide
The [specification guide](specification_guide/index.md) includes:
@@ -18,3 +18,4 @@ The [specification guide](specification_guide/index.md) includes:
- [Parsing and rendering](specification_guide/index.md#parsing-and-rendering).
- [Goals](specification_guide/index.md#goals).
- [Implementation](specification_guide/index.md#implementation) of the spec.
+- [Workflows](specification_guide/index.md#workflows).
diff --git a/doc/development/gitlab_flavored_markdown/specification_guide/index.md b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
index 021f7bafce9..397d555c54f 100644
--- a/doc/development/gitlab_flavored_markdown/specification_guide/index.md
+++ b/doc/development/gitlab_flavored_markdown/specification_guide/index.md
@@ -64,7 +64,7 @@ serve as input to automated conformance tests. It is
[explained in the CommonMark specification](https://spec.commonmark.org/0.30/#about-this-document):
> This document attempts to specify Markdown syntax unambiguously. It contains many
-> examples with side-by-side Markdown and HTML. These are intended to double as conformance tests.
+> examples with side-by-side Markdown and HTML. These examples are intended to double as conformance tests.
The HTML-rendered versions of the specifications:
@@ -385,20 +385,61 @@ subgraph output:<br/>GLFM specification files
end
```
+#### `canonicalize-html.rb` script
+
+The `scripts/glfm/canonicalize-html.rb` script handles the
+["canonicalization" of HTML](#canonicalization-of-html). It is a pipe-through
+helper script which takes as input a static or WYSIWYG HTML string containing
+extra HTML, and outputs a canonical HTML string.
+
+It is implemented as a standalone, modular, single-purpose script, based on the
+[Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy#:~:text=The%20Unix%20philosophy%20emphasizes%20building,developers%20other%20than%20its%20creators.).
+It's easy to use when running the standard CommonMark `spec_tests.py`
+script, which expects canonical HTML, against the GitLab renderer implementations.
+
+#### `run-spec-tests.sh` script
+
+`scripts/glfm/run-spec-tests.sh` is a convenience shell script which runs
+conformance specs via the CommonMark standard `spec_tests.py` script,
+which uses the `glfm_specification/output/spec.txt` file and `scripts/glfm/canonicalize-html.rb`
+helper script to test the GLFM renderer implementations' support for rendering Markdown
+specification examples to canonical HTML.
+
+```mermaid
+graph LR
+subgraph scripts:
+ A{run-spec-tests.sh} --> C
+ subgraph specification testing process
+ B[canonicalize-html.sh] --> C
+ C[spec_tests.py]
+ end
+end
+subgraph input
+ D[spec.txt GLFM specification] --> C
+ E((GLFM static<br/>renderer implementation)) --> B
+ F((GLFM WYSIWYG<br/>renderer implementation)) --> B
+end
+subgraph output:<br/>test results/output
+ C --> G[spec_tests.py output]
+end
+```
+
#### `update-example-snapshots.rb` script
-The `scripts/glfm/update-example-snapshots.rb` script uses input specification
-files to update example snapshots:
+The `scripts/glfm/update-example-snapshots.rb` script uses the GLFM
+`glfm_specification/output/spec.txt` specification file and the
+`glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml`
+file to create and update the [example snapshot](#example-snapshot-files)
+YAML files:
```mermaid
graph LR
subgraph script:
A{update-example-snapshots.rb}
end
-subgraph input:<br/>input specification files
- B[downloaded gfm_spec_v_0.29.txt] --> A
- C[glfm_canonical_examples.txt] --> A
- D[glfm_example_status.yml] --> A
+subgraph input:<br/>input specification file
+ B[spec.txt] --> A
+ C[glfm_example_status.yml] --> A
end
subgraph output:<br/>example snapshot files
A --> E[examples_index.yml]
@@ -435,9 +476,11 @@ code. It contains only shell scripting commands for the relevant
```mermaid
graph LR
+subgraph tests:
+ B[relevant rspec+jest test files]
+end
subgraph script:
- A{run-snapshopt-tests.sh} --> B
- B[relevant rspec/jest test files]
+ A{run-snapshopt-tests.sh} -->|invokes| B
end
subgraph input:<br/>YAML
C[examples_index.yml] --> B
@@ -446,46 +489,7 @@ subgraph input:<br/>YAML
F[prosemirror_json.yml] --> B
end
subgraph output:<br/>test results/output
- B --> G[rspec/jest output]
-end
-```
-
-#### `canonicalize-html.rb` script
-
-The `scripts/glfm/canonicalize-html.rb` handles the
-["canonicalization" of HTML](#canonicalization-of-html). It is a pipe-through
-helper script which takes as input a static or WYSIWYG HTML string containing
-extra HTML, and outputs a canonical HTML string.
-
-It is implemented as a standalone, modular, single-purpose script, based on the
-[Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy#:~:text=The%20Unix%20philosophy%20emphasizes%20building,developers%20other%20than%20its%20creators.).
-It's easy to use when running the standard CommonMark `spec_tests.py`
-script, which expects canonical HTML, against the GitLab renderer implementations.
-
-#### `run-spec-tests.sh` script
-
-`scripts/glfm/run-spec-tests.sh` is a convenience shell script which runs
-conformance specs via the CommonMark standard `spec_tests.py` script,
-which uses the `glfm_specification/output/spec.txt` file and `scripts/glfm/canonicalize-html.rb`
-helper script to test the GLFM renderer implementations' support for rendering Markdown
-specification examples to canonical HTML.
-
-```mermaid
-graph LR
-subgraph scripts:
- A{run-spec-tests.sh} --> C
- subgraph specification testing process
- B[canonicalize-html.sh] --> C
- C[spec_tests.py]
- end
-end
-subgraph input
- D[spec.txt GLFM specification] --> C
- E((GLFM static<br/>renderer implementation)) --> B
- F((GLFM WYSIWYG<br/>renderer implementation)) --> B
-end
-subgraph output:<br/>test results/output
- C --> G[spec_tests.py output]
+ B --> H[rspec+jest output]
end
```
@@ -506,21 +510,76 @@ They are either downloaded, as in the case of the
GFM `spec.txt` file, or manually
updated, as in the case of all GFM files.
-- `glfm_specification/input/github_flavored_markdown/gfm_spec_v_0.29.txt` -
- official latest [GFM spec.txt](https://github.com/github/cmark-gfm/blob/master/test/spec.txt),
- automatically downloaded and updated by `update-specification.rb` script.
-- `glfm_specification/input/gitlab_flavored_markdown/glfm_intro.txt` -
- Manually updated text of intro section for generated GLFM `spec.txt`.
- - Replaces GFM version of introductory
- section in `spec.txt`.
-- `glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt` -
- Manually updated canonical Markdown+HTML examples for GLFM extensions.
- - Standard backtick-delimited `spec.txt` examples format with Markdown + canonical HTML.
- - Inserted as a new section before the appendix of generated `spec.txt`.
-- `glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml` -
- Manually updated status of automatic generation of files based on Markdown
- examples.
- - Allows example snapshot generation, Markdown conformance tests, or
+##### GitHub Flavored Markdown specification
+
+[`glfm_specification/input/github_flavored_markdown/gfm_spec_v_0.29.txt`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/input/github_flavored_markdown/gfm_spec_v_0.29.txt)
+is the official latest [GFM `spec.txt`](https://github.com/github/cmark-gfm/blob/master/test/spec.txt).
+
+- It is automatically downloaded and updated by the `update-specification.rb` script.
+- When it is downloaded, the version number is added to the filename.
+
+##### `glfm_intro.txt`
+
+[`glfm_specification/input/gitlab_flavored_markdown/glfm_intro.txt`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/input/gitlab_flavored_markdown/glfm_intro.txt)
+is the GitLab-specific version of the prose in the introduction section of the GLFM specification.
+
+- It is manually updated.
+- The `update-specification.rb` script inserts it into the generated GLFM `spec.txt` to replace
+ the GitHub-specific GFM version of the introductory section.
+
+##### `glfm_canonical_examples.txt`
+
+[`glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt)
+contains the manually updated canonical Markdown + HTML examples for GLFM extensions.
+
+- It contains examples in the [standard backtick-delimited `spec.txt` format](#various-markdown-specifications),
+  each of which contains a Markdown example and the corresponding canonical HTML.
+- The `update-specification.rb` script inserts it as new sections before the appendix
+  of the generated `spec.txt`.
+- It should consist of `H1` header sections, with all examples nested exactly 2 levels deep within `H2`
+ header sections.
+
+`glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt` sample entries:
+
+NOTE:
+All lines in this example are prefixed with a `|` character. This prefix helps avoid false
+errors when this file is checked by `markdownlint`, and possible errors in other Markdown editors.
+The actual file should not have these prefixed `|` characters.
+
+```plaintext
+|# First GitLab-Specific Section with Examples
+|
+|## Strong but with two asterisks
+|
+|```````````````````````````````` example
+|**bold**
+|.
+|<p><strong>bold</strong></p>
+|````````````````````````````````
+|
+|# Second GitLab-Specific Section with Examples
+|
+|## Strong but with HTML
+|
+|```````````````````````````````` example
+|<strong>
+|bold
+|</strong>
+|.
+|<p><strong>
+|bold
+|</strong></p>
+|````````````````````````````````
+```
+
+##### `glfm_example_status.yml`
+
+[`glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml)
+controls the behavior of the [scripts](#scripts) and [tests](#types-of-markdown-tests-driven-by-the-glfm-specification).
+
+- It is manually updated.
+- It controls the status of automatic generation of files based on Markdown examples.
+- It allows example snapshot generation, Markdown conformance tests, or
Markdown snapshot tests to be skipped for individual examples. For example, if
they are unimplemented, broken, or cannot be tested for some reason.
@@ -528,12 +587,14 @@ updated, as in the case of all GFM files.
```yaml
07_99_an_example_with_incomplete_wysiwyg_implementation_1:
- skip_update_example_snapshots: true
- skip_running_snapshot_static_html_tests: false
- skip_running_snapshot_wysiwyg_html_tests: true
- skip_running_snapshot_prosemirror_json_tests: true
+ skip_update_example_snapshots: false
+ skip_update_example_snapshot_html_static: false
+ skip_update_example_snapshot_html_wysiwyg: false
skip_running_conformance_static_tests: false
- skip_running_conformance_wysiwyg_tests: true
+ skip_running_conformance_wysiwyg_tests: false
+ skip_running_snapshot_static_html_tests: false
+ skip_running_snapshot_wysiwyg_html_tests: false
+ skip_running_snapshot_prosemirror_json_tests: false
```
#### Output specification files
@@ -541,28 +602,39 @@ updated, as in the case of all GFM files.
The `glfm_specification/output` directory contains the CommonMark standard format
`spec.txt` file which represents the canonical GLFM specification which is generated
by the `update-specification.rb` script. It also contains the rendered `spec.html`
-and `spec.pdf` which are generated from with the `spec.txt` as input.
-
-- `glfm_specification/output/spec.txt` - A Markdown file, in the standard format
- with prose and Markdown + canonical HTML examples, generated (or updated) by the
- `update-specification.rb` script.
-- `glfm_specification/output/spec.html` - An HTML file, rendered based on `spec.txt`,
- also generated (or updated) by the `update-specification.rb` script at the same time as
- `spec.txt`. It corresponds to the HTML-rendered versions of the
- "GitHub Flavored Markdown" (<abbr title="GitHub Flavored Markdown">GFM</abbr>)
- [specification](https://github.github.com/gfm/)
- and the [CommonMark specification](https://spec.commonmark.org/0.30/).
-
-These output `spec.**` files, which represent the official, canonical GLFM specification
+which is generated using the `spec.txt` file as input.
+
+These output `spec.*` files, which represent the official, canonical GLFM specification,
are colocated under the same parent folder `glfm_specification` with the other
`input` specification files. They're located here both for convenience, and because they are all
-a mix of manually edited and generated files. In GFM,
-`spec.txt` is [located in the test dir](https://github.com/github/cmark-gfm/blob/master/test/spec.txt),
-and in CommonMark it's located
-[in the project root](https://github.com/github/cmark-gfm/blob/master/test/spec.txt).
-No precedent exists for a standard location. In the future, we may decide to
+a mix of manually edited and generated files.
+
+In GFM, `spec.txt` is [located in the test dir](https://github.com/github/cmark-gfm/blob/master/test/spec.txt),
+and in CommonMark it's located [in the project root](https://github.com/commonmark/commonmark-spec/blob/master/spec.txt). No precedent exists for a standard location. In the future, we may decide to
move or copy a hosted version of the rendered HTML `spec.html` version to another location or site.
+##### `spec.txt`
+
+[`glfm_specification/output/spec.txt`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/output/spec.txt)
+is a Markdown specification file, in the standard format
+with prose and Markdown + canonical HTML examples. It is generated or updated by the
+`update-specification.rb` script.
+
+It also serves as input for other scripts such as `update-example-snapshots.rb`
+and `run-spec-tests.sh`.
+
+##### `spec.html`
+
+[`glfm_specification/output/spec.html`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/glfm_specification/output/spec.html)
+is an HTML file, rendered based on `spec.txt`. It is
+also generated (or updated) by the `update-specification.rb` script at the same time as
+`spec.txt`.
+
+It corresponds to the HTML-rendered versions of the
+"GitHub Flavored Markdown" (<abbr title="GitHub Flavored Markdown">GFM</abbr>)
+[specification](https://github.github.com/gfm/)
+and the [CommonMark specification](https://spec.commonmark.org/0.30/).
+
### Example snapshot files
The `example_snapshots` directory contains files which are generated by the
@@ -574,12 +646,13 @@ After the entire GLFM implementation is complete for both backend (Ruby) and
frontend (JavaScript), all of these YAML files can be automatically generated.
However, while the implementations are still in progress, the `skip_update_example_snapshots`
key in `glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml`
-can be used to disable automatic generation of some examples, and they can instead
+can be used to disable automatic generation of some examples. They can instead
be manually edited as necessary to help drive the implementations.
#### `spec/fixtures/glfm/example_snapshots/examples_index.yml`
-`spec/fixtures/glfm/example_snapshots/examples_index.yml` is the main list of all
+[`spec/fixtures/glfm/example_snapshots/examples_index.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/examples_index.yml)
+is the main list of all
CommonMark, GFM, and GLFM example names, each with a unique canonical name.
- It is generated from the hierarchical sections and examples in the
@@ -590,10 +663,15 @@ CommonMark, GFM, and GLFM example names, each with a unique canonical name.
the additional Section 7 in the GLFM `spec.txt`.
- It also contains extra metadata about each example, such as:
1. `spec_txt_example_position` - The position of the example in the generated GLFM `spec.txt` file.
+ - This value is the index order of each individual Markdown + HTML5 example in the file. It is _not_
+ the line number in the file.
+ - This value can be used to locate the example in the rendered `spec.html` file, because the standard
+ CommonMark tooling includes the index number for each example in the rendered HTML file.
+ For example: [https://spec.commonmark.org/0.30/#example-42](https://spec.commonmark.org/0.30/#example-42)
1. `source_specification` - Which specification the example originally came from:
`commonmark`, `github`, or `gitlab`.
- The naming convention for example entry names is based on nested header section
- names and example index within the header.
+ names and example index in the header.
- This naming convention should result in fairly stable names and example positions.
The CommonMark / GLFM specification rarely changes, and most GLFM
examples where multiple examples exist for the same Section 7 subsection are
@@ -621,7 +699,7 @@ CommonMark, GFM, and GLFM example names, each with a unique canonical name.
#### `spec/fixtures/glfm/example_snapshots/markdown.yml`
-`spec/fixtures/glfm/example_snapshots/markdown.yml` contains the original Markdown
+[`spec/fixtures/glfm/example_snapshots/markdown.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/markdown.yml) contains the original Markdown
for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
- For CommonMark and GFM Markdown,
@@ -634,14 +712,14 @@ for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
`spec/fixtures/glfm/example_snapshots/markdown.yml` sample entry:
```yaml
-06_04_inlines_emphasis_and_strong_emphasis_1: |-
+06_04_inlines_emphasis_and_strong_emphasis_1: |
*foo bar*
```
#### `spec/fixtures/glfm/example_snapshots/html.yml`
-`spec/fixtures/glfm/example_snapshots/html.yml` contains the HTML for each entry in
-`spec/fixtures/glfm/example_snapshots/examples_index.yml`
+[`spec/fixtures/glfm/example_snapshots/html.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/html.yml)
+contains the HTML for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
Three types of entries exist, with different HTML for each:
@@ -670,11 +748,11 @@ Any exceptions or failures which occur when generating HTML are replaced with an
```yaml
06_04_inlines_emphasis_and_strong_emphasis_1:
- canonical: |-
+ canonical: |
<p><em>foo bar</em></p>
- static: |-
+ static: |
<p data-sourcepos="1:1-1:9" dir="auto"><strong>foo bar</strong></p>
- wysiwyg: |-
+ wysiwyg: |
<p><strong>foo bar</strong></p>
```
@@ -684,8 +762,8 @@ depending on how the implementations evolve.
#### `spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`
-`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml` contains the ProseMirror
-JSON for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
+[`spec/fixtures/glfm/example_snapshots/prosemirror_json.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/fixtures/glfm/example_snapshots/prosemirror_json.yml)
+contains the ProseMirror JSON for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
- It is generated (or updated) from the frontend code via the `update-example-snapshots.rb`
script, but can be manually updated for examples with incomplete implementations.
@@ -715,3 +793,28 @@ JSON for each entry in `spec/fixtures/glfm/example_snapshots/examples_index.yml`
]
}
```
+
+## Workflows
+
+This section describes how the scripts can be used to manage the GLFM specification and tests.
+
+### Update the GLFM specification and run conformance tests
+
+1. Run [`update-specification.rb`](#update-specificationrb-script) to update the GLFM specification [output specification files](#output-specification-files).
+1. Visually inspect and confirm any resulting changes to the [output specification files](#output-specification-files).
+1. Run [`run-spec-tests.sh`](#run-spec-testssh-script) to run the conformance tests against the canonicalized GLFM specification.
+1. Commit any changes to the [output specification files](#output-specification-files).
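+
+These steps correspond roughly to the following commands. This is a sketch: it assumes the
+scripts are run from the root of the `gitlab` repository, and that the specification scripts
+live under `scripts/glfm/` as described in [scripts](#scripts).
+
+```shell
+# Regenerate the output specification files (spec.txt and spec.html)
+scripts/glfm/update-specification.rb
+
+# Inspect the resulting changes
+git diff glfm_specification/output
+
+# Run the CommonMark conformance tests against the GLFM renderer implementations
+scripts/glfm/run-spec-tests.sh
+```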
+
+### Update the example snapshots and run snapshot tests
+
+1. If you are working on an in-progress feature or bug, make any necessary manual updates to the [input specification files](#input-specification-files). This may include:
+ 1. Updating the canonical Markdown or HTML examples in `glfm_specification/input/gitlab_flavored_markdown/glfm_canonical_examples.txt`.
+ 1. Updating `glfm_specification/input/gitlab_flavored_markdown/glfm_example_status.yml` to reflect the current status of the examples or tests.
+1. Run [`update-specification.rb`](#update-specificationrb-script) to update the `spec.txt` to reflect any changes which were made to the [input specification files](#input-specification-files).
+1. Visually inspect and confirm any resulting changes to the [output specification files](#output-specification-files).
+1. Run [`update-example-snapshots.rb`](#update-example-snapshotsrb-script) to update the [example snapshot files](#example-snapshot-files).
+1. Visually inspect and confirm any resulting changes to the [example snapshot files](#example-snapshot-files).
+1. Run [`run-snapshot-tests.sh`](#run-snapshot-testssh-script) as a convenience script to run all relevant backend (RSpec) and frontend (Jest) tests which use the example snapshots.
+ 1. Any frontend or backend snapshot test may also be run individually.
+ 1. All frontend and backend tests are also run as part of the continuous integration suite, as they normally are.
+1. Commit any changes to the [input specification files](#input-specification-files), [output specification files](#output-specification-files), or [example snapshot files](#example-snapshot-files).
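+
+As with the previous workflow, these steps map to a command sequence along the following lines
+(again a sketch, assuming the `scripts/glfm/` paths described in [scripts](#scripts)):
+
+```shell
+# Regenerate spec.txt after editing the input specification files
+scripts/glfm/update-specification.rb
+
+# Regenerate the example snapshot YAML files under spec/fixtures/glfm/example_snapshots/
+scripts/glfm/update-example-snapshots.rb
+
+# Run the backend (RSpec) and frontend (Jest) snapshot tests
+scripts/glfm/run-snapshot-tests.sh
+```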
diff --git a/doc/development/go_guide/dependencies.md b/doc/development/go_guide/dependencies.md
index 8aa8f286edc..0c2ce4f2b48 100644
--- a/doc/development/go_guide/dependencies.md
+++ b/doc/development/go_guide/dependencies.md
@@ -102,7 +102,7 @@ malicious party without causing build failures.
Go 1.12+ can be configured to use a checksum database. If configured to do so,
when Go fetches a dependency and there is no corresponding entry in `go.sum`, Go
-queries the configured checksum database(s) for the checksum of the
+queries the configured checksum databases for the checksum of the
dependency instead of calculating it from the downloaded dependency. If the
dependency cannot be found in the checksum database, the build fails. If the
downloaded dependency's checksum does not match the result from the checksum
diff --git a/doc/development/go_guide/go_upgrade.md b/doc/development/go_guide/go_upgrade.md
index 3267d1262f0..4e2a0d95910 100644
--- a/doc/development/go_guide/go_upgrade.md
+++ b/doc/development/go_guide/go_upgrade.md
@@ -158,7 +158,7 @@ if you need help finding the correct person or labels:
| GitLab Quality Images | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-build-images/-/issues) |
| GitLab Shell | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab-shell/-/issues) |
| GitLab Workhorse | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
-| Labkit | [Issue Tracker](https://gitlab.com/gitlab-org/labkit/-/issues) |
+| LabKit | [Issue Tracker](https://gitlab.com/gitlab-org/labkit/-/issues) |
| [Node Exporter](https://github.com/prometheus/node_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [PgBouncer Exporter](https://github.com/prometheus-community/pgbouncer_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
| [Postgres Exporter](https://github.com/prometheus-community/postgres_exporter) | [Issue Tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) |
diff --git a/doc/development/index.md b/doc/development/index.md
index 5c0cc7f9718..3d5ec24d3e2 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -21,7 +21,7 @@ For information on using GitLab to work on your own software projects, see the
For information on working with the GitLab APIs, see the [API documentation](../api/index.md).
For information about how to install, configure, update, and upgrade your own
-GitLab instance, see the [administration documentation](../administration/index.md).
+GitLab instance, see the [Administrator documentation](../administration/index.md).
## Get started
@@ -144,7 +144,7 @@ In these cases, use the following workflow:
If the page is not assigned to a specific group, follow the
[Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
The Technical Writer may ask for additional approvals as previously suggested before merging the MR.
-
+
### Reviewer values
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/57293) in GitLab 14.1.
diff --git a/doc/development/integrations/index.md b/doc/development/integrations/index.md
index 34ac307c98a..e595fea6d96 100644
--- a/doc/development/integrations/index.md
+++ b/doc/development/integrations/index.md
@@ -134,7 +134,7 @@ By default, the integration form provides:
- Checkboxes for each of the trigger events returned from `Integration#configurable_events`.
You can also add help text at the top of the form by either overriding `Integration#help`,
-or providing a template in `app/views/projects/services/$INTEGRATION_NAME/_help.html.haml`.
+or providing a template in `app/views/shared/integrations/$INTEGRATION_NAME/_help.html.haml`.
To add your custom properties to the form, you can define the metadata for them in `Integration#fields`.
@@ -275,7 +275,7 @@ as described above in [Customize the frontend form](#customize-the-frontend-form
our [usability guidelines](https://design.gitlab.com/usability/helping-users) for help text.
For more detailed documentation, provide a page in `doc/user/project/integrations`,
-and link it from the [Integrations overview](../../user/project/integrations/overview.md).
+and link it from the [Integrations overview](../../user/project/integrations/index.md).
You can also refer to our general [documentation guidelines](../documentation/index.md).
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 5f7cccdab64..0f4fa1a97a8 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -290,9 +290,6 @@ useful when debugging. The default value for `SECURE_LOG_LEVEL` should be set
to `info`.
When executing command lines, scanners should use the `debug` level to log the command line and its output.
-For instance, the [bundler-audit](https://gitlab.com/gitlab-org/security-products/analyzers/bundler-audit) scanner
-uses the `debug` level to log the command line `bundle audit check --quiet`,
-and what `bundle audit` writes to the standard output.
If the command line fails, then it should be logged with the `error` log level;
this makes it possible to debug the problem without having to change the log level to `debug` and rerun the scanning job.
@@ -679,7 +676,7 @@ The confidence ranges from `Low` to `Confirmed`, but it can also be `Unknown`,
Valid values are: `Ignore`, `Unknown`, `Experimental`, `Low`, `Medium`, `High`, or `Confirmed`
An `Unknown` value means that data is unavailable to determine its actual value. Therefore, it may be `high`, `medium`, or `low`,
-and needs to be investigated. We have [provided a chart](../../user/application_security/sast/analyzers.md#analyzers-data)
+and needs to be investigated. We have [provided a chart](../../user/application_security/sast/analyzers.md#data-provided-by-analyzers)
of the available SAST Analyzers and what data is currently available.
#### Remediations
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index cdbc674e0a5..dca71413564 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -478,28 +478,6 @@ curl --request POST --header "Gitlab-Kas-Api-Request: <JWT token>" --header "Con
--data '{"gitops_sync_count":1}' "http://localhost:3000/api/v4/internal/kubernetes/usage_metrics"
```
-### GitLab agent alert metrics
-
-Called from GitLab agent server (KAS) to save alerts derived from Cilium on Kubernetes
-Cluster.
-
-| Attribute | Type | Required | Description |
-|:----------|:-------|:---------|:------------|
-| `alert` | Hash | yes | Alerts detail. Same format as [3rd party alert](../../operations/incident_management/integrations.md#customize-the-alert-payload-outside-of-gitlab). |
-
-```plaintext
-POST internal/kubernetes/modules/cilium_alert
-```
-
-Example Request:
-
-```shell
-curl --request POST --header "Gitlab-Kas-Api-Request: <JWT token>" \
- --header "Authorization: Bearer <agent token>" --header "Content-Type: application/json" \
- --data '"{\"alert\":{\"title\":\"minimal\",\"message\":\"network problem\",\"evalMatches\":[{\"value\":1,\"metric\":\"Count\",\"tags\":{}}]}}"' \
- "http://localhost:3000/api/v4/internal/kubernetes/modules/cilium_alert"
-```
-
### Create Starboard vulnerability
Called from the GitLab agent server (`kas`) to create a security vulnerability
diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md
index a6d9c754838..ee261769d82 100644
--- a/doc/development/kubernetes.md
+++ b/doc/development/kubernetes.md
@@ -54,7 +54,7 @@ webserver, and can lead to a denial-of-service (DoS) attack in GitLab as
the Kubernetes cluster response times are outside of our control.
The easiest way to ensure your calls happen in a background process is to
-delegate any such work to happen in a [Sidekiq worker](sidekiq_style_guide.md).
+delegate any such work to happen in a [Sidekiq worker](sidekiq/index.md).
You may want to make calls to Kubernetes and return the response, but a background
worker isn't a good fit. Consider using
diff --git a/doc/development/maintenance_mode.md b/doc/development/maintenance_mode.md
index c2fd4bab605..a118d9cf0ad 100644
--- a/doc/development/maintenance_mode.md
+++ b/doc/development/maintenance_mode.md
@@ -11,8 +11,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
GitLab Maintenance Mode **only** blocks writes from HTTP and SSH requests at the application level in a few key places within the rails application.
[Search the codebase for `maintenance_mode?`.](https://gitlab.com/search?search=maintenance_mode%3F&group_id=9970&project_id=278964&scope=blobs&search_code=false&snippets=false&repository_ref=)
-- [the read-only database method](https://gitlab.com/gitlab-org/gitlab/-/blob/2425e9de50c678413ceaad6ee3bf66f42b7e228c/ee/lib/ee/gitlab/database.rb#L13), which toggles special behavior when we are not allowed to write to the database. [Search the codebase for `Gitlab::Database.read_only?`.](https://gitlab.com/search?search=Gitlab%3A%3ADatabase.read_only%3F&group_id=9970&project_id=278964&scope=blobs&search_code=false&snippets=false&repository_ref=)
-- [the read-only middleware](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/middleware/read_only/controller.rb), where HTTP requests that cause database writes are blocked, unless explicitly allowed.
+- [the read-only database method](https://gitlab.com/gitlab-org/gitlab/-/blob/2425e9de50c678413ceaad6ee3bf66f42b7e228c/ee/lib/ee/gitlab/database.rb#L13), which toggles special behavior when we are not allowed to write to the database. We use this method for possible places where writes could occur in GET requests. [Search the codebase for `Gitlab::Database.read_only?`.](https://gitlab.com/search?search=Gitlab%3A%3ADatabase.read_only%3F&group_id=9970&project_id=278964&scope=blobs&search_code=false&snippets=false&repository_ref=)
+- [the read-only middleware](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/ee/gitlab/middleware/read_only/controller.rb), where HTTP requests that cause database writes are blocked, unless explicitly allowed (for example, GET requests).
- [Git push access via SSH is denied](https://gitlab.com/gitlab-org/gitlab/-/blob/2425e9de50c678413ceaad6ee3bf66f42b7e228c/ee/lib/ee/gitlab/git_access.rb#L13) by returning 401 when `gitlab-shell` POSTs to [`/internal/allowed`](internal_api/index.md) to [check if access is allowed](internal_api/index.md#git-authentication).
- [Container registry authentication service](https://gitlab.com/gitlab-org/gitlab/-/blob/2425e9de50c678413ceaad6ee3bf66f42b7e228c/ee/app/services/ee/auth/container_registry_authentication_service.rb#L12), where updates to the container registry are blocked.
diff --git a/doc/development/merge_request_application_and_rate_limit_guidelines.md b/doc/development/merge_request_application_and_rate_limit_guidelines.md
index 94ae126802a..62bf62f6275 100644
--- a/doc/development/merge_request_application_and_rate_limit_guidelines.md
+++ b/doc/development/merge_request_application_and_rate_limit_guidelines.md
@@ -14,7 +14,7 @@ Every new feature should have safe usage limits included in its implementation.
Limits are applicable for:
- System-level resource pools such as API requests, SSHD connections, database connections, storage, and so on.
-- Domain-level objects such as CI minutes, groups, sign-in attempts, and so on.
+- Domain-level objects such as CI/CD minutes, groups, sign-in attempts, and so on.
## When limits are required
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index fe8e730d64e..5e7fe9cc8fb 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -206,7 +206,7 @@ By default, this `Gitlab::SQL::CTE` class forces materialization through adding
(this behavior is implemented using a custom Arel node `Gitlab::Database::AsWithMaterialized` under the surface).
WARNING:
-We plan to drop the support for PostgreSQL 11. Upgrading to GitLab 14.0 requires PostgreSQL 12 or higher.
+Upgrading to GitLab 14.0 requires PostgreSQL 12 or higher.
## Cached Queries
@@ -526,7 +526,7 @@ end
The use of shared temporary storage is required if your intent
is to persist the file to disk-based storage, and not Object Storage.
-[Workhorse direct_upload](uploads/implementation.md#direct-upload) when accepting file
+[Workhorse direct_upload](uploads/index.md#direct-upload) when accepting a file
can write it to shared storage, and later GitLab Rails can perform a move operation.
The move operation on the same destination is instantaneous.
The system, instead of performing a `copy` operation, just re-attaches the file in a new place.
@@ -550,7 +550,7 @@ that implements a seamless support for Shared and Object Storage-based persisten
#### Data access
Each feature that accepts data uploads or allows to download them needs to use
-[Workhorse direct_upload](uploads/implementation.md#direct-upload). It means that uploads needs to be
+[Workhorse direct_upload](uploads/index.md#direct-upload). It means that uploads need to be
saved directly to Object Storage by Workhorse, and all downloads need to be
by Workhorse.
@@ -562,5 +562,5 @@ can time out, which is especially problematic for slow clients. If clients take
to upload/download the processing slot might be killed due to request processing
timeout (usually between 30s-60s).
-For the above reasons it is required that [Workhorse direct_upload](uploads/implementation.md#direct-upload) is implemented
+For the above reasons it is required that [Workhorse direct_upload](uploads/index.md#direct-upload) is implemented
for all file uploads and downloads.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 086e061452b..aebecd90574 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -110,6 +110,11 @@ table, that column is added at the bottom. Please do not reorder
columns manually for existing tables as this causes confusion to
other people using `db/structure.sql` generated by Rails.
+NOTE:
+[Creating an index asynchronously requires two merge requests.](adding_database_indexes.md#add-a-migration-to-create-the-index-synchronously)
+When done, commit the schema change in the merge request
+that adds the index with `add_concurrent_index`.
+
When your local database in your GDK is diverging from the schema from
`main` it might be hard to cleanly commit the schema changes to
Git. In that case you can use the `scripts/regenerate-schema` script to
@@ -127,6 +132,24 @@ scripts/regenerate-schema
TARGET=12-9-stable-ee scripts/regenerate-schema
```
+The `scripts/regenerate-schema` script can create additional differences.
+If this happens, use the following manual procedure, where `<migration ID>` is the `DATETIME`
+part of the migration filename.
+
+```shell
+# Rebase against master
+git rebase master
+
+# Rollback changes
+VERSION=<migration ID> bundle exec rails db:rollback:main
+
+# Checkout db/structure.sql from master
+git checkout origin/master db/structure.sql
+
+# Migrate changes
+VERSION=<migration ID> bundle exec rails db:migrate:main
+```
+
## Avoiding downtime
The document ["Avoiding downtime in migrations"](database/avoiding_downtime_in_migrations.md) specifies
@@ -487,7 +510,7 @@ end
### When to use the helper method
You can **only** use the `with_lock_retries` helper method when the execution is not already inside
-an open transaction (using Postgres subtransactions is discouraged). It can be used with
+an open transaction (using PostgreSQL subtransactions is discouraged). It can be used with
standard Rails migration helper methods. Calling more than one migration
helper is not a problem if they're executed on the same table.
@@ -602,7 +625,7 @@ end
```
You must explicitly name indexes that are created with more complex
-definitions beyond table name, column name(s) and uniqueness constraint.
+definitions beyond table name, column names, and uniqueness constraint.
Consult the [Adding Database Indexes](adding_database_indexes.md#requirements-for-naming-indexes)
guide for more details.
diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md
index f34c407da84..ee853942cb9 100644
--- a/doc/development/new_fe_guide/development/performance.md
+++ b/doc/development/new_fe_guide/development/performance.md
@@ -8,15 +8,15 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Monitoring
-We have a performance dashboard available in one of our [Grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 6 hours. These changes are displayed after a set number of pages are aggregated.
+We have a performance dashboard available in one of our [Grafana instances](https://dashboards.gitlab.net/d/000000043/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 4 hours. These changes are displayed after a set number of pages are aggregated.
-These pages can be found inside a text file in the [`gitlab-build-images` repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [`gitlab.txt`](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt)
-Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing URLs of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes are pushed live on the next scheduled run after the changes are merged into `main`.
+These pages can be found inside text files in the [`sitespeed-measurement-setup` repository](https://gitlab.com/gitlab-org/frontend/sitespeed-measurement-setup) called [`gitlab`](https://gitlab.com/gitlab-org/frontend/sitespeed-measurement-setup/-/tree/master/gitlab)
+Any frontend engineer can contribute to this dashboard by adding or removing page URLs in these text files. The changes are pushed live on the next scheduled run after they are merged into `main`.
-There are 3 recommended high impact metrics to review on each page:
+There are 3 recommended high impact metrics (Core Web Vitals) to review on each page:
-- [First visual change](https://web.dev/first-meaningful-paint/)
-- [Speed Index](https://github.com/WPO-Foundation/webpagetest-docs/blob/master/user/Metrics/SpeedIndex.md)
-- [Visual Complete 95%](https://github.com/WPO-Foundation/webpagetest-docs/blob/master/user/Metrics/SpeedIndex.md)
+- [Largest Contentful Paint](https://web.dev/lcp/)
+- [First Input Delay](https://web.dev/fid/)
+- [Cumulative Layout Shift](https://web.dev/cls/)
For these metrics, lower numbers are better as it means that the website is more performant.
diff --git a/doc/development/new_fe_guide/modules/widget_extensions.md b/doc/development/new_fe_guide/modules/widget_extensions.md
index 638a0a2a85b..d3be8981abb 100644
--- a/doc/development/new_fe_guide/modules/widget_extensions.md
+++ b/doc/development/new_fe_guide/modules/widget_extensions.md
@@ -36,6 +36,7 @@ export default {
},
expandEvent: '', // Optional: RedisHLL event name to track expanding content
enablePolling: false, // Optional: Tells extension to poll for data
+ modalComponent: null, // Optional: The component to use for the modal
computed: {
summary(data) {}, // Required: Level 1 summary text
statusIcon(data) {}, // Required: Level 1 status icon
@@ -128,6 +129,14 @@ mentioned below:
text: '', // Required: Text to be displayed inside badge
variant: '', // Optional: GitLab UI badge variant, defaults to info
},
+ link: { // Optional: Link to a URL displayed after text
+ text: '', // Required: Text of the link
+ href: '', // Optional: URL for the link
+ },
+ modal: { // Optional: Link to open a modal displayed after text
+ text: '', // Required: Text of the link
+ onClick: () => {} // Optional: Function to run when link is clicked, i.e. to set this.modalData
+      },
actions: [], // Optional: Action button for row
children: [], // Optional: Child content to render, structure matches the same structure
}
diff --git a/doc/development/packages.md b/doc/development/packages.md
index 35a93c77c7f..6526bdd45a1 100644
--- a/doc/development/packages.md
+++ b/doc/development/packages.md
@@ -151,7 +151,7 @@ During this phase, the idea is to collect as much information as possible about
1. Empty file structure (API file, base service for this package)
1. Authentication system for "logging in" to the package manager
1. Identify metadata and create applicable tables
- 1. Workhorse route for [object storage direct upload](uploads/implementation.md#direct-upload)
+ 1. Workhorse route for [object storage direct upload](uploads/index.md#direct-upload)
1. Endpoints required for upload/publish
1. Endpoints required for install/download
1. Endpoints required for required actions
@@ -210,7 +210,7 @@ File uploads should be handled by GitLab Workhorse using object accelerated uplo
the Workhorse proxy that checks all incoming requests to GitLab intercepts the upload request,
uploads the file, and forwards a request to the main GitLab codebase only containing the metadata
and file location rather than the file itself. An overview of this process can be found in the
-[development documentation](uploads/implementation.md#direct-upload).
+[development documentation](uploads/index.md#direct-upload).
In terms of code, this means a route must be added to the
[GitLab Workhorse project](https://gitlab.com/gitlab-org/gitlab-workhorse) for each upload endpoint being added
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 1e3e0570206..6d0b833a2da 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -75,7 +75,6 @@ GitLab provides built-in tools to help improve performance and availability:
- [Profiling](profiling.md).
- [Distributed Tracing](distributed_tracing.md)
- [GitLab Performance Monitoring](../administration/monitoring/performance/index.md).
-- [Request Profiling](../administration/monitoring/performance/request_profiling.md).
- [QueryRecoder](query_recorder.md) for preventing `N+1` regressions.
- [Chaos endpoints](chaos_endpoints.md) for testing failure scenarios. Intended mainly for testing availability.
- [Service measurement](service_measurement.md) for measuring and logging service execution.
@@ -319,7 +318,7 @@ You can do this when using the [performance bar](profiling.md#speedscope-flamegr
and when [profiling code blocks](https://github.com/jlfwong/speedscope/wiki/Importing-from-stackprof-(ruby)).
This option isn't supported by `bin/rspec-stackprof`.
-You can profile speciific methods by using `--method method_name`:
+You can profile specific methods by using `--method method_name`:
```shell
$ stackprof tmp/project_policy_spec.rb.dump --method access_allowed_to
diff --git a/doc/development/permissions.md b/doc/development/permissions.md
index 47aebc2f4d2..f3818e92fec 100644
--- a/doc/development/permissions.md
+++ b/doc/development/permissions.md
@@ -86,10 +86,10 @@ module):
- Maintainer (`40`)
- Owner (`50`)
-If a user is the member of both a project and the project parent group(s), the
+If a user is a member of both a project and the project's parent groups, the
higher permission is taken into account for the project.
-If a user is the member of a project, but not the parent group(s), they
+If a user is a member of a project, but not the parent groups, they
can still view the groups and their entities (like epics).
Project membership (where the group membership is already taken into account)
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index e0b236bc5fc..b70f07ea7d9 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -12,7 +12,7 @@ which itself includes files under
[`.gitlab/ci/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/.gitlab/ci)
for easier maintenance.
-We're striving to [dogfood](https://about.gitlab.com/handbook/engineering/principles/#dogfooding)
+We're striving to [dogfood](https://about.gitlab.com/handbook/engineering/development/principles/#dogfooding)
GitLab [CI/CD features and best-practices](../ci/yaml/index.md)
as much as possible.
@@ -53,7 +53,7 @@ In summary:
To identify the minimal set of tests needed, we use the [`test_file_finder` gem](https://gitlab.com/gitlab-org/ci-cd/test_file_finder), with two strategies:
-- dynamic mapping from test coverage tracing (generated via the [Crystalball gem](https://github.com/toptal/crystalball))
+- dynamic mapping from test coverage tracing (generated via the [`Crystalball` gem](https://github.com/toptal/crystalball))
([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/47d507c93779675d73a05002e2ec9c3c467cd698/tooling/bin/find_tests#L15))
- static mapping maintained in the [`tests.yml` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/tests.yml) for special cases that cannot
be mapped via coverage tracing ([see where it's used](https://gitlab.com/gitlab-org/gitlab/-/blob/47d507c93779675d73a05002e2ec9c3c467cd698/tooling/bin/find_tests#L12))
@@ -93,7 +93,7 @@ In addition, there are a few circumstances where we would always run the full Je
### Fork pipelines
We only run the minimal RSpec & Jest jobs for fork pipelines unless the `pipeline:run-all-rspec`
-label is set on the MR. The goal is to reduce the CI minutes consumed by fork pipelines.
+label is set on the MR. The goal is to reduce the CI/CD minutes consumed by fork pipelines.
See the [experiment issue](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/1170).
@@ -160,7 +160,7 @@ Our current RSpec tests parallelization setup is as follows:
`knapsack/report-master.json` file:
- The `knapsack/report-master.json` file is fetched from the latest `main` pipeline which runs `update-tests-metadata`
(for now it's the 2-hourly scheduled master pipeline), if it's not here we initialize the file with `{}`.
-1. Each `[rspec|rspec-ee] [unit|integration|system|geo] n m` job are run with
+1. Each `[rspec|rspec-ee] [migration|unit|integration|system|geo] n m` job is run with
`knapsack rspec` and should have an evenly distributed share of tests:
- It works because the jobs have access to the `knapsack/report-master.json`
since the "artifacts from all previous stages are passed by default".
@@ -170,7 +170,7 @@ Our current RSpec tests parallelization setup is as follows:
`Report specs`, not under `Leftover specs`.
1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
  [the canonical project](https://gitlab.com/gitlab-org/gitlab)) takes all the
- `knapsack/rspec*_pg_*.json` files and merge them all together into a single
+  `knapsack/rspec*.json` files and merges them all together into a single
`knapsack/report-master.json` file that is saved as artifact.
After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
@@ -247,12 +247,18 @@ The `* as-if-jh` jobs are run in addition to the regular EE-context jobs. The `j
The intent is to ensure that a change doesn't introduce a failure after `gitlab-org/gitlab` is synced to [GitLab JH](https://jihulab.com/gitlab-cn/gitlab).
+### When to consider applying `pipeline:run-as-if-jh` label
+
+If a Ruby file is renamed and there's a corresponding [`prepend_mod` line](jh_features_review.md#jh-features-based-on-ce-or-ee-features),
+it's likely that GitLab JH is relying on it and requires a corresponding
+change to rename the module or class it's prepending.
+
### Corresponding JH branch
You can create a corresponding JH branch on [GitLab JH](https://jihulab.com/gitlab-cn/gitlab) by
appending `-jh` to the branch name. If a corresponding JH branch is found,
`* as-if-jh` jobs grab the `jh` folder from the respective branch,
-rather than from the default branch.
+rather than from the default branch `main-jh`.
NOTE:
For now, CI will try to fetch the branch on the [GitLab JH mirror](https://gitlab.com/gitlab-org/gitlab-jh-mirrors/gitlab), so it might take some time for the new JH branch to propagate to the mirror.
@@ -345,7 +351,7 @@ We use the [`rules:`](../ci/yaml/index.md#rules) and [`needs:`](../ci/yaml/index
to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would
have pipelines that include jobs from multiple types (for example, a combination of docs-only and code-only pipelines).
-Following are graphs of the critical paths for each pipeline type. Jobs that aren't part of the critical path are ommitted.
+Following are graphs of the critical paths for each pipeline type. Jobs that aren't part of the critical path are omitted.
### Documentation pipeline
@@ -508,12 +514,9 @@ The current stages are:
- `test`: This stage includes most of the tests, and DB/migration jobs.
- `post-test`: This stage includes jobs that build reports or gather data from
the `test` stage's jobs (for example, coverage, Knapsack metadata, and so on).
-- `review-prepare`: This stage includes a job that build the CNG images that are
- later used by the (Helm) Review App deployment (see
- [Review Apps](testing_guide/review_apps.md) for details).
-- `review`: This stage includes jobs that deploy the GitLab and Docs Review Apps.
-- `dast`: This stage includes jobs that run a DAST full scan against the Review App
-that is deployed in stage `review`.
+- `review`: This stage includes jobs that build the CNG images, deploy them, and
+ run end-to-end tests against Review Apps (see [Review Apps](testing_guide/review_apps.md) for details).
+ It also includes Docs Review App jobs.
- `qa`: This stage includes jobs that perform QA tasks against the Review App
that is deployed in stage `review`.
- `post-qa`: This stage includes jobs that build reports or gather data from
diff --git a/doc/development/product_qualified_lead_guide/index.md b/doc/development/product_qualified_lead_guide/index.md
index 2395689ada2..dcd8b33e5c5 100644
--- a/doc/development/product_qualified_lead_guide/index.md
+++ b/doc/development/product_qualified_lead_guide/index.md
@@ -16,8 +16,8 @@ A hand-raise PQL is a user who requests to speak to sales from within the produc
1. Set up CustomersDot to talk to a staging instance of Platypus.
1. Set up CustomersDot using the [normal install instructions](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/staging/doc/setup/installation_steps.md).
-1. Set the `CUSTOMER_PORTAL_URL` env var to your local (or ngrok) URL of your CustomersDot instance.
-1. Place `export CUSTOMER_PORTAL_URL='https://XXX.ngrok.io/'` in your shell rc script (~/.zshrc or ~/.bash_profile or ~/.bashrc) and restart GDK.
+1. Set the `CUSTOMER_PORTAL_URL` environment variable to the local (or ngrok) URL of your CustomersDot instance.
+1. Place `export CUSTOMER_PORTAL_URL='https://XXX.ngrok.io/'` in your shell rc script (`~/.zshrc` or `~/.bash_profile` or `~/.bashrc`) and restart GDK.
1. Enter the credentials on CustomersDot development to Platypus in your `/config/secrets.yml` and restart. Credentials for the Platypus Staging are in the 1Password Growth vault. The URL for staging is `https://staging.ci.nexus.gitlabenvironment.cloud`.
```yaml
diff --git a/doc/development/python_guide/index.md b/doc/development/python_guide/index.md
index fe5492c3bd8..77dd328b513 100644
--- a/doc/development/python_guide/index.md
+++ b/doc/development/python_guide/index.md
@@ -25,6 +25,18 @@ To install `pyenv` on macOS, you can use [Homebrew](https://brew.sh/) with:
brew install pyenv
```
+### Windows
+
+`pyenv` does not officially support Windows and does not work on Windows outside the Windows Subsystem for Linux. If you are a Windows user, you can use `pyenv-win`.
+
+To install `pyenv-win` on Windows, run the following PowerShell command:
+
+```shell
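+# Download the official pyenv-win install script and run it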
+Invoke-WebRequest -UseBasicParsing -Uri "https://raw.githubusercontent.com/pyenv-win/pyenv-win/master/pyenv-win/install-pyenv-win.ps1" -OutFile "./install-pyenv-win.ps1"; &"./install-pyenv-win.ps1"
+```
+
+[Learn more about `pyenv-win`](https://github.com/pyenv-win/pyenv-win).
+
### Linux
To install `pyenv` on Linux, you can run the command below:
diff --git a/doc/development/rails_initializers.md b/doc/development/rails_initializers.md
index ee73dac2b72..9bf4109f1cb 100644
--- a/doc/development/rails_initializers.md
+++ b/doc/development/rails_initializers.md
@@ -20,3 +20,21 @@ Some examples where you would need to do this are:
1. Modifying Rails' `config.autoload_paths`
1. Changing configuration that Zeitwerk uses, for example, inflections
+
+## Database connections in initializers
+
+Ideally, database connections are not opened from Rails initializers. Opening a
+database connection (for example, checking that the database exists, or making a
+database query) from an initializer means that tasks like `db:drop` and
+`db:test:prepare` fail, because an active session prevents the database from
+being dropped.
+
+To help detect when database connections are opened from initializers, we now
+print a warning to stderr. For example:
+
+```shell
+DEPRECATION WARNING: Database connection should not be called during initializers (called from block in <module:HasVariable> at app/models/concerns/ci/has_variable.rb:22)
+```
+
+If you wish to print out the full backtrace, set the
+`DEBUG_INITIALIZER_CONNECTIONS` environment variable.
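+
+For example, this hypothetical invocation prints the backtraces for any connections opened
+while the initializers load during a Rake task:
+
+```shell
+DEBUG_INITIALIZER_CONNECTIONS=1 bundle exec rails db:test:prepare
+```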
diff --git a/doc/development/rails_update.md b/doc/development/rails_update.md
index 1a30e606c17..8999ac90f4c 100644
--- a/doc/development/rails_update.md
+++ b/doc/development/rails_update.md
@@ -88,7 +88,7 @@ To efficiently and quickly find which Rails change caused the spec failure you c
For example, `git bisect start v6.1.4.1 v6.1.3.2` if we're upgrading from version 6.1.3.2 to 6.1.4.1.
Replace `<NEW_VERSION_TAG>` with the tag where the spec is red and `<OLD_VERSION_TAG>` with the one with the green spec. For example, `git bisect start v6.1.4.1 v6.1.3.2` if we're upgrading from version 6.1.3.2 to 6.1.4.1.
In the output, you can see how many steps approximately it takes to find the commit.
-1. Start the `git bisect` process and pass spec's file name(s) to `scripts/rails-update-bisect` as an argument or arguments. It can be faster to pick only one example instead of an entire spec file.
+1. Start the `git bisect` process and pass the spec filenames to `scripts/rails-update-bisect` as arguments. It can be faster to pick only one example instead of an entire spec file.
```shell
git bisect run <GDK_FOLDER>/gitlab/scripts/rails-update-bisect spec/models/ability_spec.rb
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index 1e9367ecee4..0538add59b5 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -366,8 +366,8 @@ The docs generator code comes from our side giving us more flexibility, like usi
To edit the content, you may need to edit the following:
-- The template. You can edit the template at `lib/gitlab/graphql/docs/templates/default.md.haml`.
- The actual renderer is at `Gitlab::Graphql::Docs::Renderer`.
+- The template. You can edit the template at `tooling/graphql/docs/templates/default.md.haml`.
+ The actual renderer is at `Tooling::Graphql::Docs::Renderer`.
- The applicable `description` field in the code, which
[Updates machine-readable schema files](#update-machine-readable-schema-files),
which is then used by the `rake` task described earlier.
diff --git a/doc/development/redis.md b/doc/development/redis.md
index 75170b8c746..d5f526f2d32 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -11,7 +11,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
GitLab uses [Redis](https://redis.io) for the following distinct purposes:
- Caching (mostly via `Rails.cache`).
-- As a job processing queue with [Sidekiq](sidekiq_style_guide.md).
+- As a job processing queue with [Sidekiq](sidekiq/index.md).
- To manage the shared application state.
- To store CI trace chunks.
- As a Pub/Sub queue backend for ActionCable.
@@ -147,12 +147,11 @@ mostly for fine-grained control of Redis usage, so they wouldn't be used
in combination with the `Rails.cache` wrapper: we'd either use
`Rails.cache` or these classes and literal Redis commands.
-`Rails.cache` or these classes and literal Redis commands. We prefer
-using `Rails.cache` so we can reap the benefits of future optimizations
-done to Rails. It is worth noting that Ruby objects are
+We prefer using `Rails.cache` so we can reap the benefits of future
+optimizations done to Rails. Ruby objects are
[marshalled](https://github.com/rails/rails/blob/v6.0.3.1/activesupport/lib/active_support/cache/redis_cache_store.rb#L447)
-when written to Redis, so we need to pay attention to not to store huge
-objects, or untrusted user input.
+when written to Redis, so we must take care not to store huge objects or
+untrusted user input.
Typically we would only use these classes when at least one of the
following is true:
diff --git a/doc/development/redis/new_redis_instance.md b/doc/development/redis/new_redis_instance.md
index 96f860f3890..389cddbb4e5 100644
--- a/doc/development/redis/new_redis_instance.md
+++ b/doc/development/redis/new_redis_instance.md
@@ -232,8 +232,8 @@ a developer will need to add an implementation for missing Redis commands before
| metrics name | type | labels | description |
|-------------------------------------------------|--------------------|------------------------|----------------------------------------------------|
-| gitlab_redis_multi_store_read_fallback_total | Prometheus Counter | command, instance_name | Client side Redis MultiStore reading fallback total|
-| gitlab_redis_multi_store_method_missing_total | Prometheus Counter | command, instance_name | Client side Redis MultiStore method missing total |
+| `gitlab_redis_multi_store_read_fallback_total` | Prometheus Counter | command, instance_name | Client side Redis MultiStore reading fallback total|
+| `gitlab_redis_multi_store_method_missing_total` | Prometheus Counter | command, instance_name | Client side Redis MultiStore method missing total |
## Step 4: clean up after the migration
diff --git a/doc/development/routing.md b/doc/development/routing.md
index 8fca9b00157..41961c2288f 100644
--- a/doc/development/routing.md
+++ b/doc/development/routing.md
@@ -31,6 +31,16 @@ we introduced the `/-/` scope. The purpose of it is to separate group or
project paths from the rest of the routes. Also it helps to reduce the
number of [reserved names](../user/reserved_names.md).
+## View all available routes
+
+You can view the available routes, or search for specific ones, from the command line by running:
+
+```shell
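+# List the application's routes, filtered to those containing "crm"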
+rails routes | grep crm
+```
+
+You can also view routes in your browser by going to [http://gdk.test:3000/rails/info/routes](http://gdk.test:3000/rails/info/routes).
+
## Global routes
We have a number of global routes. For example:
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
index a208a93e300..3b89a6fd1ea 100644
--- a/doc/development/ruby_upgrade.md
+++ b/doc/development/ruby_upgrade.md
@@ -144,7 +144,7 @@ A [build matrix definition](../ci/yaml/index.md#parallelmatrix) can do this effi
When upgrading Ruby, consider updating the following repositories:
- [Gitaly](https://gitlab.com/gitlab-org/gitaly) ([example](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3771))
-- [GitLab Labkit](https://gitlab.com/gitlab-org/labkit-ruby) ([example](https://gitlab.com/gitlab-org/labkit-ruby/-/merge_requests/79))
+- [GitLab LabKit](https://gitlab.com/gitlab-org/labkit-ruby) ([example](https://gitlab.com/gitlab-org/labkit-ruby/-/merge_requests/79))
- [GitLab Exporter](https://gitlab.com/gitlab-org/gitlab-exporter) ([example](https://gitlab.com/gitlab-org/gitlab-exporter/-/merge_requests/150))
- [GitLab Experiment](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment) ([example](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment/-/merge_requests/128))
- [Gollum Lib](https://gitlab.com/gitlab-org/gollum-lib) ([example](https://gitlab.com/gitlab-org/gollum-lib/-/merge_requests/21))
@@ -272,4 +272,4 @@ and merged back independently.
- **Give yourself enough time to fix problems ahead of a milestone release.** GitLab moves fast.
As a Ruby upgrade requires many MRs to be sent and reviewed, make sure all changes are merged at least a week
before the 22nd. This gives us extra time to act if something breaks. If in doubt, it is better to
-postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/principles/#prioritizing-technical-decisions).
+postpone the upgrade to the following month, as we [prioritize availability over velocity](https://about.gitlab.com/handbook/engineering/development/principles/#prioritizing-technical-decisions).
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index 8a86a46d1d3..3e46891d20e 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -461,7 +461,9 @@ References:
### Description
-Path Traversal vulnerabilities grant attackers access to arbitrary directories and files on the server that is executing an application, including data, code or credentials.
+Path Traversal vulnerabilities grant attackers access to arbitrary directories and files on the server that is executing an application. This can include data, code, or credentials.
+
+Traversal can occur when a path includes directories. A typical malicious example includes one or more `../`, which tells the file system to look in the parent directory. Supplying many of them in a path, for example `../../../../../../../etc/passwd`, usually resolves to `/etc/passwd`. If the file system is instructed to look back to the root directory and can't go back any further, then extra `../` are ignored. The file system then looks from the root, resulting in `/etc/passwd` - a file you definitely do not want exposed to a malicious attacker!
### Impact
@@ -510,6 +512,44 @@ requires :file_path, type: String, file_path: true
Absolute paths are not allowed by default. If allowing an absolute path is required, you
need to provide an array of paths to the parameter `allowlist`.
+### Misleading behavior
+
+Some methods used to construct file paths can have non-intuitive behavior. To properly validate user input, be aware
+of these behaviors.
+
+#### Ruby
+
+The Ruby method [`Pathname.join`](https://ruby-doc.org/stdlib-2.7.4/libdoc/pathname/rdoc/Pathname.html#method-i-join)
+joins path names. Depending on how it is called, it can produce a path name that would normally be
+prohibited. In the examples below, we see attempts to access `/etc/passwd`, which is a sensitive file:
+
+```ruby
+require 'pathname'
+
+p = Pathname.new('tmp')
+print(p.join('log', 'etc/passwd', 'foo'))
+# => tmp/log/etc/passwd/foo
+```
+
+Assuming the second parameter is user-supplied and not validated, submitting a new absolute path
+results in a different path:
+
+```ruby
+print(p.join('log', '/etc/passwd', ''))
+# renders the path to "/etc/passwd", which is not what we expect!
+```
+
+#### Golang
+
+Golang has similar behavior with [`path.Clean`](https://pkg.go.dev/path#example-Clean). Remember that with many file systems, using `../../../../` traverses up to the root directory. Any remaining `../` are ignored. This example may give an attacker access to `/etc/passwd`:
+
+```golang
+path.Clean("/../../etc/passwd")
+// renders the path to "/etc/passwd"; for a rooted path, the leading "../" elements are dropped
+path.Clean("../../etc/passwd")
+// renders the path to "../../etc/passwd"; the file path will look back up to two parent directories!
+```
+
## OS command injection guidelines
Command injection is an issue in which an attacker is able to execute arbitrary commands on the host
@@ -620,7 +660,7 @@ cfg := &tls.Config{
}
```
-For **Ruby**, you can use [HTTParty](https://github.com/jnunemaker/httparty) and specify TLS 1.3 version as well as ciphers:
+For **Ruby**, you can use [`HTTParty`](https://github.com/jnunemaker/httparty) and specify TLS 1.3 version as well as ciphers:
Whenever possible this example should be **avoided** for security purposes:
@@ -665,7 +705,7 @@ tls.Config{
This example was taken [here](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/871b52dc700f1a66f6644fbb1e78a6d463a6ff83/internal/tool/tlstool/tlstool.go#L72).
-For **Ruby**, you can use again [HTTParty](https://github.com/jnunemaker/httparty) and specify this time TLS 1.2 version alongside with the recommended ciphers:
+For **Ruby**, you can use again [`HTTParty`](https://github.com/jnunemaker/httparty) and specify this time TLS 1.2 version alongside with the recommended ciphers:
```ruby
response = GitLab::HTTP.perform_request(Net::HTTP::Get, 'https://gitlab.com', ssl_version: :TLSv1_2, ciphers: ['ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'ECDHE-ECDSA-CHACHA20-POLY1305', 'ECDHE-RSA-CHACHA20-POLY1305'])
@@ -833,7 +873,7 @@ If a vulnerable application extracts an archive file with any of these file name
#### Ruby
-For zip files, the [rubyzip](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against the Zip Slip vulnerability and will refuse to extract files that try to perform directory traversal, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
+For zip files, the [`rubyzip`](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against the Zip Slip vulnerability and will refuse to extract files that try to perform directory traversal, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
```ruby
# Vulnerable tar.gz extraction example!
@@ -1032,7 +1072,7 @@ Symlink attacks makes it possible for an attacker to read the contents of arbitr
#### Ruby
-For zip files, the [rubyzip](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against symlink attacks as it simply ignores symbolic links, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
+For zip files, the [`rubyzip`](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against symlink attacks as it simply ignores symbolic links, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
```ruby
# Vulnerable tar.gz extraction example!
@@ -1210,3 +1250,36 @@ An example of well implemented `Gitlab::UrlBlocker.validate!` call that prevents
### Resources
- [CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition](https://cwe.mitre.org/data/definitions/367.html)
+
+## Handling credentials
+
+Credentials can be:
+
+- Login details like username and password.
+- Private keys.
+- Tokens (such as personal access tokens, runner tokens, JWTs, CSRF tokens, and project access tokens).
+- Session cookies.
+- Any other piece of information that can be used for authentication or authorization purposes.
+
+This sensitive data must be handled carefully to avoid leaks which could lead to unauthorized access. If you have questions or need help with any of the following guidance, talk to the GitLab AppSec team on Slack (`#sec-appsec`).
+
+### At rest
+
+- Credentials must be encrypted while at rest (database or file) with `attr_encrypted`. See [issue #26243](https://gitlab.com/gitlab-org/gitlab/-/issues/26243) before using `attr_encrypted`.
+  - Store the encryption keys separately from the encrypted credentials with proper access control. For instance, store the keys in a vault, KMS, or file. Here is an [example](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/user.rb#L70-74) of using `attr_encrypted` with keys stored in a separate, access-controlled file; a minimal sketch follows this list.
+ - When the intention is to only compare secrets, store only the salted hash of the secret instead of the encrypted value.
+- Never commit credentials to repositories.
+ - The [Gitleaks Git hook](https://gitlab.com/gitlab-com/gl-security/security-research/gitleaks-endpoint-installer) is recommended for preventing credentials from being committed.
+- Never log credentials under any circumstance. Issue [#353857](https://gitlab.com/gitlab-org/gitlab/-/issues/353857) is an example of credentials leaking through log files.
+- When credentials are required in a CI/CD job, use [masked variables](../ci/variables/index.md#mask-a-cicd-variable) to help prevent accidental exposure in the job logs. Be aware that when [debug logging](../ci/variables/index.md#debug-logging) is enabled, all masked CI/CD variables are visible in job logs. Also consider using [protected variables](../ci/variables/index.md#protected-cicd-variables) when possible so that sensitive CI/CD variables are only available to pipelines running on protected branches or protected tags.
+- Proper scanners must be enabled depending on what data those credentials are protecting. See the [Application Security Inventory Policy](https://about.gitlab.com/handbook/engineering/security/security-engineering-and-research/application-security/inventory.html#policies) and our [Data Classification Standards](https://about.gitlab.com/handbook/engineering/security/data-classification-standard.html#data-classification-standards).
+- To store and/or share credentials between teams, refer to [1Password for Teams](https://about.gitlab.com/handbook/security/#1password-for-teams) and follow [the 1Password Guidelines](https://about.gitlab.com/handbook/security/#1password-guidelines).
+- If you need to share a secret with a team member, use 1Password. Do not share a secret over email, Slack, or any other service on the internet.
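+
+A minimal, hypothetical sketch of the `attr_encrypted` guidance above; the model, attribute, and key source are illustrative rather than a prescribed implementation:
+
+```ruby
+# Hypothetical model: encrypt a token at rest, with key material loaded from a
+# source that is access controlled and stored separately from the database.
+class DeployIntegration < ApplicationRecord
+  attr_encrypted :api_token,
+    mode: :per_attribute_iv,
+    algorithm: 'aes-256-gcm',
+    key: Rails.application.credentials.api_token_encryption_key # 32-byte key, never committed
+end
+```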
+
+### In transit
+
+- Use an encrypted channel like TLS to transmit credentials. See [our TLS minimum recommendation guidelines](#tls-minimum-recommended-version).
+- Avoid including credentials as part of an HTTP response unless it is absolutely necessary to the workflow, for example, when generating a PAT for users.
+- Avoid sending credentials in URL parameters, as these can be more easily logged inadvertently during transit.
+
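+As a hedged illustration of the last two points, prefer request headers over URL parameters when a credential has to be sent; the host and environment variable below are placeholders:
+
+```ruby
+require 'net/http'
+require 'uri'
+
+# Avoid: https://gitlab.example.com/api/v4/projects?private_token=SECRET (easily ends up in logs)
+uri = URI('https://gitlab.example.com/api/v4/projects')
+request = Net::HTTP::Get.new(uri)
+request['PRIVATE-TOKEN'] = ENV.fetch('GITLAB_TOKEN') # the token travels in a header, over TLS
+
+response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
+```
+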
+In the event of a credential leak through an MR, issue, or any other medium, [reach out to the SIRT team](https://about.gitlab.com/handbook/engineering/security/security-operations/sirt/#-engaging-sirt).
diff --git a/doc/development/service_ping/implement.md b/doc/development/service_ping/implement.md
index ca4a0158051..27bc4d2e8ca 100644
--- a/doc/development/service_ping/implement.md
+++ b/doc/development/service_ping/implement.md
@@ -268,10 +268,9 @@ Arguments:
#### Ordinary Redis counters
-Examples of implementation:
+Example of implementation:
-- Using Redis methods [`INCR`](https://redis.io/commands/incr), [`GET`](https://redis.io/commands/get), and [`Gitlab::UsageDataCounters::WikiPageCounter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/wiki_page_counter.rb)
-- Using Redis methods [`HINCRBY`](https://redis.io/commands/hincrby), [`HGETALL`](https://redis.io/commands/hgetall), and [`Gitlab::UsageCounters::PodLogs`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_counters/pod_logs.rb)
+Using Redis methods [`INCR`](https://redis.io/commands/incr), [`GET`](https://redis.io/commands/get), and [`Gitlab::UsageDataCounters::WikiPageCounter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/wiki_page_counter.rb)
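+
+At its core, an ordinary Redis counter is an `INCR` on write and a `GET` on read. A rough, hypothetical sketch (the key name is illustrative, not the one `WikiPageCounter` uses):
+
+```ruby
+Gitlab::Redis::SharedState.with do |redis|
+  redis.incr('usage_wiki_pages_create')     # bump the counter when the event happens
+  redis.get('usage_wiki_pages_create').to_i # read the running total when building the payload
+end
+```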
##### `UsageData` API
@@ -287,9 +286,7 @@ Enabled by default in GitLab 13.7 and later.
Increment event count using an ordinary Redis counter, for a given event name.
API requests are protected by checking for a valid CSRF token.
-
- To increment the values, the related feature `usage_data_<event_name>` must be enabled.
-
+
```plaintext
POST /usage_data/increment_counter
```
@@ -366,7 +363,7 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PF
aggregation.
- `aggregation`: may be set to a `:daily` or `:weekly` key. Defines how counting data is stored in Redis.
Aggregation on a `daily` basis does not pull more fine grained data.
- - `feature_flag`: optional `default_enabled: :yaml`. If no feature flag is set then the tracking is enabled. One feature flag can be used for multiple events. For details, see our [GitLab internal Feature flags](../feature_flags/index.md) documentation. The feature flags are owned by the group adding the event tracking.
+ - `feature_flag`: if no feature flag is set then the tracking is enabled. One feature flag can be used for multiple events. For details, see our [GitLab internal Feature flags](../feature_flags/index.md) documentation. The feature flags are owned by the group adding the event tracking.
1. Use one of the following methods to track the event:
@@ -580,7 +577,6 @@ Example:
```ruby
# Redis Counters
redis_usage_data(Gitlab::UsageDataCounters::WikiPageCounter)
-redis_usage_data { ::Gitlab::UsageCounters::PodLogs.usage_totals[:total] }
# Define events in common.yml https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/known_events/common.yml
diff --git a/doc/development/service_ping/index.md b/doc/development/service_ping/index.md
index 14bb90537e7..1e09dada36e 100644
--- a/doc/development/service_ping/index.md
+++ b/doc/development/service_ping/index.md
@@ -22,7 +22,7 @@ and sales teams understand how GitLab is used. The data helps to:
Service Ping information is not anonymous. It's linked to the instance's hostname, but does
not contain project names, usernames, or any other specific data.
-Sending a Service Ping payload is optional and you can [disable](#disable-service-ping) it on any
+Sending a Service Ping payload is optional and you can [disable](../../user/admin_area/settings/usage_statistics.md#enable-or-disable-usage-statistics) it on any
self-managed instance. When Service Ping is enabled, GitLab gathers data from the other instances
and can show your instance's usage statistics to your users.
@@ -38,23 +38,6 @@ We use the following terminology to describe the Service Ping components:
- **MAU**: monthly active users.
- **WAU**: weekly active users.
-### Why enable Service Ping?
-
-The main purpose of Service Ping is to build a better GitLab. We collect data about how GitLab is used
-to understand feature or stage adoption and usage. This data gives an insight into how GitLab adds
-value and helps our team understand the reasons why people use GitLab, and with this knowledge we're able to
-make better product decisions.
-
-There are several other benefits to enabling Service Ping:
-
-- As a benefit of having Service Ping active, GitLab lets you analyze the users' activities over time of your GitLab installation.
-- As a benefit of having Service Ping active, GitLab provides you with [DevOps Score](../../user/admin_area/analytics/dev_ops_reports.md#devops-score), which gives you an overview of your entire instance's adoption of Concurrent DevOps from planning to monitoring.
-- You get better, more proactive support (assuming that our TAMs and support organization used the data to deliver more value).
-- You get insight and advice into how to get the most value out of your investment in GitLab. Wouldn't you want to know that a number of features or values are not being adopted in your organization?
-- You get a report that illustrates how you compare against other similar organizations (anonymized), with specific advice and recommendations on how to improve your DevOps processes.
-- Service Ping is enabled by default. To disable it, see [Disable Service Ping](#disable-service-ping).
-- When Service Ping is enabled, you have the option to participate in our [Registration Features Program](#registration-features-program) and receive free paid features.
-
### Limitations
- Service Ping does not track frontend events, like page views, link clicks, or user sessions.
@@ -65,107 +48,6 @@ Because of these limitations we recommend you:
- Instrument your products with Snowplow for more detailed analytics on GitLab.com.
- Use Service Ping to track aggregated backend events on self-managed instances.
-### Registration Features Program
-
-> Introduced in GitLab 14.1.
-
-In GitLab versions 14.1 and later, GitLab Free customers with a self-managed instance running
-[GitLab EE](../ee_features.md) can receive paid features by registering with GitLab and sending us
-activity data through Service Ping. Features introduced here do not remove the feature from its paid
-tier. Users can continue to access the features in a paid tier without sharing usage data.
-
-#### Features available in 14.1 and later
-
-1. [Email from GitLab](../../user/admin_area/email_from_gitlab.md).
-
-#### Features available in 14.4 and later
-
-1. [Repository size limit](../../user/admin_area/settings/account_and_limit_settings.md#repository-size-limit).
-1. [Restrict group access by IP address](../../user/group/index.md#restrict-group-access-by-ip-address).
-
-NOTE:
-Registration is not yet required for participation, but will be added in a future milestone.
-
-#### Enable Registration Features
-
-1. Sign in as a user with administrator access.
-1. On the top bar, select **Menu > Admin**.
-1. On the left sidebar, select **Settings > Metrics and profiling**.
-1. Expand the **Usage statistics** section.
-1. If not enabled, select the **Enable Service Ping** checkbox.
-1. Select the **Enable Registration Features** checkbox.
-1. Select **Save changes**.
-
-## View the Service Ping payload **(FREE SELF)**
-
-You can view the exact JSON payload sent to GitLab Inc. in the Admin Area. To view the payload:
-
-1. Sign in as a user with administrator access.
-1. On the top bar, select **Menu > Admin**.
-1. On the left sidebar, select **Settings > Metrics and profiling**.
-1. Expand the **Usage statistics** section.
-1. Select **Preview payload**.
-
-For an example payload, see [Example Service Ping payload](#example-service-ping-payload).
-
-## Disable Service Ping **(FREE SELF)**
-
-NOTE:
-The method to disable Service Ping in the GitLab configuration file does not work in
-GitLab versions 9.3 to 13.12.3. See the [troubleshooting section](#cannot-disable-service-ping-using-the-configuration-file)
-on how to disable it.
-
-You can disable Service Ping either using the GitLab UI, or editing the GitLab
-configuration file.
-
-### Disable Service Ping using the UI
-
-To disable Service Ping in the GitLab UI:
-
-1. Sign in as a user with administrator access.
-1. On the top bar, select **Menu > Admin**.
-1. On the left sidebar, select **Settings > Metrics and profiling**.
-1. Expand the **Usage statistics** section.
-1. Clear the **Enable Service Ping** checkbox.
-1. Select **Save changes**.
-
-### Disable Service Ping using the configuration file
-
-To disable Service Ping and prevent it from being configured in the future through
-the Admin Area:
-
-**For installations using the Linux package:**
-
-1. Edit `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- gitlab_rails['usage_ping_enabled'] = false
- ```
-
-1. Reconfigure GitLab:
-
- ```shell
- sudo gitlab-ctl reconfigure
- ```
-
-**For installations from source:**
-
-1. Edit `/home/git/gitlab/config/gitlab.yml`:
-
- ```yaml
- production: &base
- # ...
- gitlab:
- # ...
- usage_ping_enabled: false
- ```
-
-1. Restart GitLab:
-
- ```shell
- sudo service gitlab restart
- ```
-
## Service Ping request flow
The following example shows a basic request/response flow between a GitLab instance, the Versions Application, the License Application, Salesforce, the GitLab S3 Bucket, the GitLab Snowflake Data Warehouse, and Sisense:
@@ -211,23 +93,53 @@ sequenceDiagram
the required URL is <https://version.gitlab.com/>.
1. In case of an error, it will be reported to the Version application along with following pieces of information:
-- `uuid` - GitLab instance unique identifier
-- `hostname` - GitLab instance hostname
-- `version` - GitLab instance current versions
-- `elapsed` - Amount of time which passed since Service Ping report process started and moment of error occurrence
-- `message` - Error message
-
-<pre>
-<code>
-{
- "uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
- "hostname"=>"127.0.0.1",
- "version"=>"14.7.0-pre",
- "elapsed"=>0.006946,
- "message"=>'PG::UndefinedColumn: ERROR: column \"non_existent_attribute\" does not exist\nLINE 1: SELECT COUNT(non_existent_attribute) FROM \"issues\" /*applica...'
-}
-</code>
-</pre>
+ - `uuid` - GitLab instance unique identifier
+ - `hostname` - GitLab instance hostname
+ - `version` - GitLab instance current versions
+ - `elapsed` - Amount of time which passed since Service Ping report process started and moment of error occurrence
+ - `message` - Error message
+
+ <pre>
+ <code>
+ {
+ "uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
+ "hostname"=>"127.0.0.1",
+ "version"=>"14.7.0-pre",
+ "elapsed"=>0.006946,
+ "message"=>'PG::UndefinedColumn: ERROR: column \"non_existent_attribute\" does not exist\nLINE 1: SELECT COUNT(non_existent_attribute) FROM \"issues\" /*applica...'
+ }
+ </code>
+ </pre>
+
+1. Finally, the timing metadata information that is used for diagnostic purposes is submitted to the Versions application. It consists of a list of metric identifiers and the time it took to calculate the metrics:
+
+   > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37911) in GitLab 15.0 [with a flag](../../user/feature_flags.md), enabled by default.
+
+FLAG:
+On self-managed GitLab, by default this feature is available. To hide the feature, ask an administrator to [disable the feature flag](../../administration/feature_flags.md) named `measure_service_ping_metric_collection`.
+On GitLab.com, this feature is available.
+
+```ruby
+ {"metadata"=>
+ {"metrics"=>
+ [{"name"=>"version", "time_elapsed"=>1.1811964213848114e-05},
+ {"name"=>"installation_type", "time_elapsed"=>0.00017242692410945892},
+ {"name"=>"license_billable_users", "time_elapsed"=>0.009520471096038818},
+ ....
+ {"name"=>"counts.clusters_platforms_eks",
+ "time_elapsed"=>0.05638605775311589},
+ {"name"=>"counts.clusters_platforms_gke",
+ "time_elapsed"=>0.40995341585949063},
+ {"name"=>"counts.clusters_platforms_user",
+ "time_elapsed"=>0.06410990096628666},
+ {"name"=>"counts.clusters_management_project",
+ "time_elapsed"=>0.24020783510059118},
+ {"name"=>"counts.clusters_integrations_elastic_stack",
+ "time_elapsed"=>0.03484998410567641}
+ ]
+ }
+ }
+ ```
### On a Geo secondary site
@@ -251,6 +163,25 @@ We also collect metrics specific to [Geo](../../administration/geo/index.md) sec
]
```
+### Enable or disable Service Ping metadata reporting
+
+Service Ping timing metadata reporting is under development but ready for production use.
+It is deployed behind a feature flag that is **enabled by default**.
+[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
+can opt to disable it.
+
+To enable it:
+
+```ruby
+Feature.enable(:measure_service_ping_metric_collection)
+```
+
+To disable it:
+
+```ruby
+Feature.disable(:measure_service_ping_metric_collection)
+```
+
## Implementing Service Ping
See the [implement Service Ping](implement.md) guide.
@@ -513,7 +444,7 @@ To generate Service Ping, use [Teleport](https://goteleport.com/docs/) or a deta
```
1. Connect to console host:
-
+
```shell
ssh $USER-rails@console-01-sv-gprd.c.gitlab-production.internal
```
@@ -526,11 +457,15 @@ To generate Service Ping, use [Teleport](https://goteleport.com/docs/) or a deta
1. To detach from screen, press `ctrl + A`, `ctrl + D`.
1. Exit from bastion:
-
+
```shell
exit
```
+1. Get the metrics duration from logs:
+
+Search the Google Console logs for `time_elapsed`. An example query is available [here](https://cloudlogging.app.goo.gl/nWheZvD8D3nWazNe6).
+
### Verification (After approx 30 hours)
#### Verify with Teleport
@@ -560,7 +495,7 @@ To generate Service Ping, use [Teleport](https://goteleport.com/docs/) or a deta
```
1. Check the last payload in `raw_usage_data` table:
-
+
```shell
RawUsageData.last.payload
```
@@ -580,115 +515,10 @@ skip_db_write:
ServicePing::SubmitService.new(skip_db_write: true).execute
```
-## Manually upload Service Ping payload
-
-> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/7388) in GitLab 14.8 with a flag named `admin_application_settings_service_usage_data_center`. Disabled by default.
-> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/83265) in GitLab 14.10.
-
-Service Ping payload can be uploaded to GitLab even if your application instance doesn't have access to the internet,
-or you don't have Service Ping [cron job](#how-service-ping-works) enabled.
-
-To upload payload manually:
-
-1. Sign in as a user with administrator access.
-1. On the top bar, select **Menu > Admin**.
-1. On the left sidebar, select **Settings > Service** usage data.
-1. Select **Download payload**.
-1. Save the JSON file.
-1. Visit [Service usage data center](https://version.gitlab.com/usage_data/new).
-1. Select **Choose file** and choose the file from p5.
-1. Select **Upload**.
-
-The uploaded file is encrypted and sent using secure [HTTPS protocol](https://en.wikipedia.org/wiki/HTTPS). HTTPS creates a secure
-communication channel between web browser and the server, and protects transmitted data against man-in-the-middle attacks.
-
## Monitoring
Service Ping reporting process state is monitored with [internal SiSense dashboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
-## Troubleshooting
-
-### Cannot disable Service Ping using the configuration file
-
-The method to disable Service Ping using the GitLab configuration file does not work in
-GitLab versions 9.3.0 to 13.12.3. To disable it, you must use the Admin Area in
-the GitLab UI instead. For more information, see
-[this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/333269).
-
-GitLab functionality and application settings cannot override or circumvent
-restrictions at the network layer. If Service Ping is blocked by your firewall,
-you are not impacted by this bug.
-
-#### Check if you are affected
-
-You can check if you were affected by this bug by using the Admin Area or by
-checking the configuration file of your GitLab instance:
-
-- Using the Admin Area:
-
- 1. On the top bar, select **Menu > Admin**.
- 1. On the left sidebar, select **Settings > Metrics and profiling**.
- 1. Expand **Usage Statistics**.
- 1. Are you able to check or uncheck the checkbox to disable Service Ping?
-
- - If _yes_, your GitLab instance is not affected by this bug.
- - If you can't check or uncheck the checkbox, you are affected by this bug.
- See the steps on [how to fix this](#how-to-fix-the-cannot-disable-service-ping-bug).
-
-- Checking your GitLab instance configuration file:
-
- To check whether you're impacted by this bug, check your instance configuration
- settings. The configuration file in which Service Ping can be disabled depends
- on your installation and deployment method, but is typically one of the following:
-
- - `/etc/gitlab/gitlab.rb` for Omnibus GitLab Linux Package and Docker.
- - `charts.yaml` for GitLab Helm and cloud-native Kubernetes deployments.
- - `gitlab.yml` for GitLab installations from source.
-
- To check the relevant configuration file for strings that indicate whether
- Service Ping is disabled, you can use `grep`:
-
- ```shell
- # Linux package
- grep "usage_ping_enabled'\] = false" /etc/gitlab/gitlab.rb
-
- # Kubernetes charts
- grep "enableUsagePing: false" values.yaml
-
- # From source
- grep "usage_ping_enabled'\] = false" gitlab/config.yml
- ```
-
- If you see any output after running the relevant command, your GitLab instance
- may be affected by the bug. Otherwise, your instance is not affected.
-
-#### How to fix the "Cannot disable Service Ping" bug
-
-To work around this bug, you have two options:
-
-- [Update](../../update/index.md) to GitLab 13.12.4 or newer to fix this bug.
-- If you can't update to GitLab 13.12.4 or newer, enable Service Ping in the
- configuration file, then disable Service Ping in the UI. For example, if you're
- using the Linux package:
-
- 1. Edit `/etc/gitlab/gitlab.rb`:
-
- ```ruby
- gitlab_rails['usage_ping_enabled'] = true
- ```
-
- 1. Reconfigure GitLab:
-
- ```shell
- sudo gitlab-ctl reconfigure
- ```
-
- 1. In GitLab, on the top bar, select **Menu > Admin**.
- 1. On the left sidebar, select **Settings > Metrics and profiling**.
- 1. Expand **Usage Statistics**.
- 1. Clear the **Enable Service Ping** checkbox.
- 1. Select **Save Changes**.
-
## Related topics
- [Product Intelligence Guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
diff --git a/doc/development/service_ping/metrics_dictionary.md b/doc/development/service_ping/metrics_dictionary.md
index ab3d301908b..ead11a412fa 100644
--- a/doc/development/service_ping/metrics_dictionary.md
+++ b/doc/development/service_ping/metrics_dictionary.md
@@ -25,7 +25,7 @@ All metrics are stored in YAML files:
- [`config/metrics`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/metrics)
WARNING:
-Only metrics with a metric definition YAML are added to the Service Ping JSON payload.
+Only metrics with a metric definition YAML and whose status is not `removed` are added to the Service Ping JSON payload.
Each metric is defined in a separate YAML file consisting of a number of fields:
@@ -50,6 +50,7 @@ Each metric is defined in a separate YAML file consisting of a number of fields:
| `milestone` | no | The milestone when the metric is introduced and when it's available to self-managed instances with the official GitLab release. |
| `milestone_removed` | no | The milestone when the metric is removed. |
| `introduced_by_url` | no | The URL to the merge request that introduced the metric to be available for self-managed instances. |
+| `removed_by_url` | no | The URL to the merge request that removed the metric. |
| `repair_issue_url` | no | The URL of the issue that was created to repair a metric with a `broken` status. |
| `options` | no | `object`: options information needed to calculate the metric value. |
| `skip_validation` | no | This should **not** be set. [Used for imported metrics until we review, update and make them valid](https://gitlab.com/groups/gitlab-org/-/epics/5425). |
@@ -131,7 +132,7 @@ which has a related schema in `/config/metrics/objects_schemas/topology_schema.j
We use the following categories to classify a metric:
- `operational`: Required data for operational purposes.
-- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../service_ping/index.md#disable-service-ping) in the Admin Area.
+- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../user/admin_area/settings/usage_statistics.md#enable-or-disable-usage-statistics) in the Admin Area.
- `subscription`: Data related to licensing.
- `standard`: Standard set of identifiers that are included when collecting data.
diff --git a/doc/development/service_ping/metrics_instrumentation.md b/doc/development/service_ping/metrics_instrumentation.md
index 3d56f3e777f..e718d972fba 100644
--- a/doc/development/service_ping/metrics_instrumentation.md
+++ b/doc/development/service_ping/metrics_instrumentation.md
@@ -26,7 +26,7 @@ A metric definition has the [`instrumentation_class`](metrics_dictionary.md) fie
The defined instrumentation class should inherit one of the existing metric classes: `DatabaseMetric`, `RedisMetric`, `RedisHLLMetric`, or `GenericMetric`.
-The current convention is that a single instrumentation class corresponds to a single metric. On a rare occasions, there are exceptions to that convention like [Redis metrics](#redis-metrics). To use a single instrumentation class for more than one metric, please reach out to one of the `@gitlab-org/growth/product-intelligence/engineers` members to consult about your case.
+The current convention is that a single instrumentation class corresponds to a single metric. On rare occasions, there are exceptions to that convention, like [Redis metrics](#redis-metrics). To use a single instrumentation class for more than one metric, reach out to one of the `@gitlab-org/growth/product-intelligence/engineers` members to consult about your case.
Using the instrumentation classes ensures that metrics can fail safe individually, without breaking the entire
process of Service Ping generation.
@@ -40,6 +40,7 @@ We have built a domain-specific language (DSL) to define the metrics instrumenta
- `start`: Specifies the start value of the batch counting, by default is `relation.minimum(:id)`.
- `finish`: Specifies the end value of the batch counting, by default is `relation.maximum(:id)`.
- `cache_start_and_finish_as`: Specifies the cache key for `start` and `finish` values and sets up caching them. Use this call when `start` and `finish` are expensive queries that should be reused between different metric calculations.
+- `available?`: Specifies whether the metric should be reported. The default is `true`.
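+
+As a rough sketch of how these options can fit together (the class, relation, and feature flag names are illustrative):
+
+```ruby
+# frozen_string_literal: true
+
+module Gitlab
+  module Usage
+    module Metrics
+      module Instrumentations
+        class CountIssuesMetric < DatabaseMetric
+          operation :count
+
+          relation { Issue }
+
+          # Batch counting boundaries; expensive boundary queries can be shared
+          # between metrics with `cache_start_and_finish_as`.
+          start { Issue.minimum(:id) }
+          finish { Issue.maximum(:id) }
+
+          # Only report the metric when this block evaluates to true.
+          available? { Feature.enabled?(:some_feature_flag) }
+        end
+      end
+    end
+  end
+end
+```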
[Example of a merge request that adds a database metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60022).
@@ -123,6 +124,37 @@ options:
counter_class: SourceCodeCounter
```
+### Availability-restrained Redis metrics
+
+If the Redis metric should only be available in the report under some conditions, then you must specify these conditions in a new class that is a child of the `RedisMetric` class.
+
+```ruby
+# frozen_string_literal: true
+
+module Gitlab
+ module Usage
+ module Metrics
+ module Instrumentations
+ class MergeUsageCountRedisMetric < RedisMetric
+ available? { Feature.enabled?(:merge_usage_data_missing_key_paths) }
+ end
+ end
+ end
+ end
+end
+```
+
+You must also use the class's name in the YAML setup.
+
+```yaml
+time_frame: all
+data_source: redis
+instrumentation_class: 'MergeUsageCountRedisMetric'
+options:
+ event: pushes
+ counter_class: SourceCodeCounter
+```
+
## Redis HyperLogLog metrics
[Example of a merge request that adds a `RedisHLL` metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/61685).
@@ -138,8 +170,42 @@ options:
- i_quickactions_approve
```
+### Availability-restrained Redis HyperLogLog metrics
+
+If the Redis HyperLogLog metric should only be available in the report under some conditions, then you must specify these conditions in a new class that is a child of the `RedisHLLMetric` class.
+
+```ruby
+# frozen_string_literal: true
+
+module Gitlab
+ module Usage
+ module Metrics
+ module Instrumentations
+ class MergeUsageCountRedisHLLMetric < RedisHLLMetric
+ available? { Feature.enabled?(:merge_usage_data_missing_key_paths) }
+ end
+ end
+ end
+ end
+end
+```
+
+You must also use the class's name in the YAML setup.
+
+```yaml
+time_frame: 28d
+data_source: redis_hll
+instrumentation_class: 'MergeUsageCountRedisHLLMetric'
+options:
+ events:
+ - i_quickactions_approve
+```
+
## Generic metrics
+- `value`: Specifies the value of the metric.
+- `available?`: Specifies whether the metric should be reported. The default is `true`.
+
[Example of a merge request that adds a generic metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60256).
```ruby
diff --git a/doc/development/service_ping/metrics_lifecycle.md b/doc/development/service_ping/metrics_lifecycle.md
index 844c989c640..c9cc9a4f2d2 100644
--- a/doc/development/service_ping/metrics_lifecycle.md
+++ b/doc/development/service_ping/metrics_lifecycle.md
@@ -113,6 +113,7 @@ To remove a metric:
update the attributes of the metric's YAML definition:
- Set the `status:` to `removed`.
+ - Set `removed_by_url:` to the URL of the MR removing the metric
- Set `milestone_removed:` to the number of the
milestone in which the metric was removed.
diff --git a/doc/development/service_ping/troubleshooting.md b/doc/development/service_ping/troubleshooting.md
index 15bc01f1270..2764ef41f98 100644
--- a/doc/development/service_ping/troubleshooting.md
+++ b/doc/development/service_ping/troubleshooting.md
@@ -22,10 +22,91 @@ The alert compares the current daily value with the daily value from previous we
You can use [this query](https://gitlab.com/gitlab-org/gitlab/-/issues/347298#note_836685350) as an example, to start detecting when the drop started.
-### Troubleshooting GitLab application layer
+### Troubleshoot the GitLab application layer
For results about an investigation conducted into an unexpected drop in Service ping Payload events volume, see [this issue](https://gitlab.com/gitlab-data/analytics/-/issues/11071).
-### Troubleshooting data warehouse layer
+### Troubleshoot the data warehouse layer
Reach out to the [Data team](https://about.gitlab.com/handbook/business-technology/data-team/) to ask about current state of data warehouse. On their handbook page there is a [section with contact details](https://about.gitlab.com/handbook/business-technology/data-team/#how-to-connect-with-us).
+
+### Cannot disable Service Ping with the configuration file
+
+The method to disable Service Ping with the GitLab configuration file does not work in
+GitLab versions 9.3.0 to 13.12.3. To disable it, you must use the Admin Area in
+the GitLab UI instead. For more information, see
+[this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/333269).
+
+GitLab functionality and application settings cannot override or circumvent
+restrictions at the network layer. If Service Ping is blocked by your firewall,
+you are not impacted by this bug.
+
+#### Check if you are affected
+
+You can check if you were affected by this bug by using the Admin Area or by
+checking the configuration file of your GitLab instance:
+
+- Using the Admin Area:
+
+ 1. On the top bar, select **Menu > Admin**.
+ 1. On the left sidebar, select **Settings > Metrics and profiling**.
+ 1. Expand **Usage Statistics**.
+ 1. Are you able to check or uncheck the checkbox to disable Service Ping?
+
+ - If _yes_, your GitLab instance is not affected by this bug.
+ - If you can't check or uncheck the checkbox, you are affected by this bug.
+ See the steps on [how to fix this](#how-to-fix-the-cannot-disable-service-ping-bug).
+
+- Checking your GitLab instance configuration file:
+
+ To check whether you're impacted by this bug, check your instance configuration
+ settings. The configuration file in which Service Ping can be disabled depends
+ on your installation and deployment method, but is typically one of the following:
+
+ - `/etc/gitlab/gitlab.rb` for Omnibus GitLab Linux Package and Docker.
+ - `charts.yaml` for GitLab Helm and cloud-native Kubernetes deployments.
+ - `gitlab.yml` for GitLab installations from source.
+
+ To check the relevant configuration file for strings that indicate whether
+ Service Ping is disabled, you can use `grep`:
+
+ ```shell
+ # Linux package
+ grep "usage_ping_enabled'\] = false" /etc/gitlab/gitlab.rb
+
+ # Kubernetes charts
+ grep "enableUsagePing: false" values.yaml
+
+ # From source
+ grep "usage_ping_enabled'\] = false" gitlab/config.yml
+ ```
+
+ If you see any output after running the relevant command, your GitLab instance
+ may be affected by the bug. Otherwise, your instance is not affected.
+
+#### How to fix the "Cannot disable Service Ping" bug
+
+To work around this bug, you have two options:
+
+- [Update](../../update/index.md) to GitLab 13.12.4 or newer to fix this bug.
+- If you can't update to GitLab 13.12.4 or newer, enable Service Ping in the
+ configuration file, then disable Service Ping in the UI. For example, if you're
+ using the Linux package:
+
+ 1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ gitlab_rails['usage_ping_enabled'] = true
+ ```
+
+ 1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+ 1. In GitLab, on the top bar, select **Menu > Admin**.
+ 1. On the left sidebar, select **Settings > Metrics and profiling**.
+ 1. Expand **Usage Statistics**.
+ 1. Clear the **Enable Service Ping** checkbox.
+ 1. Select **Save Changes**.
diff --git a/doc/development/sidekiq/idempotent_jobs.md b/doc/development/sidekiq/idempotent_jobs.md
index 38db22f8467..a5ae8737ad1 100644
--- a/doc/development/sidekiq/idempotent_jobs.md
+++ b/doc/development/sidekiq/idempotent_jobs.md
@@ -135,7 +135,7 @@ happened. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/342123)
GitLab doesn't skip jobs scheduled in the future, as we assume that
the state has changed by the time the job is scheduled to
-execute. Deduplication of jobs scheduled in the feature is possible
+execute. Deduplication of jobs scheduled in the future is possible
for both `until_executed` and `until_executing` strategies.
If you do want to deduplicate jobs scheduled in the future,
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
deleted file mode 100644
index 1b5e7addf29..00000000000
--- a/doc/development/sidekiq_style_guide.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-redirect_to: 'sidekiq/index.md'
-remove_date: '2022-04-13'
----
-
-This document was moved to [another location](sidekiq/index.md).
-
-<!-- This redirect file can be deleted after <2022-04-13>. -->
-<!-- Redirects that point to other docs in the same project expire in three months. -->
-<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/snowplow/implementation.md b/doc/development/snowplow/implementation.md
index 162b77772f9..f4123e3ba86 100644
--- a/doc/development/snowplow/implementation.md
+++ b/doc/development/snowplow/implementation.md
@@ -36,7 +36,7 @@ as base:
_\* Undergoes a pseudonymization process at the collector level._
-These properties [are overriden](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/get_standard_context.js)
+These properties [are overridden](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/get_standard_context.js)
with frontend-specific values, like `source` (`gitlab-javascript`), `google_analytics_id`
and the custom `extra` object. You can modify this object for any subsequent
structured event that fires, although this is not recommended.
@@ -83,7 +83,7 @@ The following example shows `data-track-*` attributes assigned to a button:
| `data-track-action` | true | Action the user is taking. Clicks must be prepended with `click` and activations must be prepended with `activate`. For example, focusing a form field is `activate_form_input` and clicking a button is `click_button`. Replaces `data-track-event`, which was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/290962) in GitLab 13.11. |
| `data-track-label` | false | The specific element or object to act on. This can be: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown in the top bar; or the name or title attribute of a record being created. |
| `data-track-property` | false | Any additional property of the element, or object being acted on. |
-| `data-track-value` | false | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. If omitted, this is the element's `value` property or `undefined`. For checkboxes, the default value is the element's checked attribute or `0` when unchecked. The value is parsed as numeric before sendind the event. |
+| `data-track-value` | false | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. If omitted, this is the element's `value` property or `undefined`. For checkboxes, the default value is the element's checked attribute or `0` when unchecked. The value is parsed as numeric before sending the event. |
| `data-track-extra` | false | A key-value pair object passed as a valid JSON string. This attribute is added to the `extra` property in our [`gitlab_standard`](schemas.md#gitlab_standard) schema. |
| `data-track-context` | false | To append a custom context object, passed as a valid JSON string. |
@@ -97,10 +97,12 @@ If click events stop propagating, you must implement listeners and [Vue componen
#### Helper methods
-You can use the following Ruby helper:
+You can use the following Ruby helpers:
```ruby
tracking_attrs(label, action, property) # { data: { track_label... } }
+
+tracking_attrs_data(label, action, property) # { track_label... }
```
You can also use it on HAML templates:
@@ -108,8 +110,8 @@ You can also use it on HAML templates:
```haml
%button{ **tracking_attrs('main_navigation', 'click_button', 'navigation') }
-// When adding additional data
-// %button{ data: { platform: "...", **tracking_attrs('main_navigation', 'click_button', 'navigation') } }
+// When merging with additional data
+// %button{ data: { platform: "...", **tracking_attrs_data('main_navigation', 'click_button', 'navigation') } }
```
If you use the GitLab helper method [`nav_link`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/helpers/tab_helper.rb#L76), you must wrap `html_options` under the `html_options` keyword argument. If you
diff --git a/doc/development/snowplow/troubleshooting.md b/doc/development/snowplow/troubleshooting.md
index 47d775d89aa..2a6db80a6f2 100644
--- a/doc/development/snowplow/troubleshooting.md
+++ b/doc/development/snowplow/troubleshooting.md
@@ -21,7 +21,7 @@ While on CloudWatch dashboard set time range to last 4 weeks, to get better pict
1. `ELB New Flow Count` and `Collector Auto Scaling Group Network In/Out` - they show in order: number of connections to collectors via load balancers and data volume (in bytes) processed by collectors. If there is drop visible there, it means less events were fired from the GitLab application. Proceed to [application layer guide](#troubleshooting-gitlab-application-layer) for more details
1. `Firehose Records to S3` - it shows how many event records were saved to S3 bucket, if there was drop on this chart but not on the charts from 1. it means that problem is located at AWS infrastructure layer, please refer to [AWS layer guide](#troubleshooting-aws-layer)
-1. If drop wasn't visible on any of previous charts it means that probelm is at data warehouse layer, please refer to [data warehouse layer guide](#troubleshooting-data-warehouse-layer)
+1. If drop wasn't visible on any of previous charts it means that problem is at data warehouse layer, please refer to [data warehouse layer guide](#troubleshooting-data-warehouse-layer)
### Troubleshooting GitLab application layer
@@ -48,3 +48,27 @@ Already conducted investigations:
### Troubleshooting data warehouse layer
Reach out to [Data team](https://about.gitlab.com/handbook/business-technology/data-team/) to ask about current state of data warehouse. On their handbook page there is a [section with contact details](https://about.gitlab.com/handbook/business-technology/data-team/#how-to-connect-with-us)
+
+## Delay in Snowplow Enrichers
+
+If there is an alert for **Snowplow Raw Good Stream Backing Up**, we receive an email notification. This sometimes happens because Snowplow Enrichers don't scale well enough for the number of Snowplow events.
+
+If the delay goes over 48 hours, we lose data.
+
+### Contact SRE on-call
+
+Send a message in the [#infrastructure_lounge](https://gitlab.slack.com/archives/CB3LSMEJV) Slack channel using the following template:
+
+```markdown
+Hello team!
+
+We received an alert for [Snowplow Raw Good Stream Backing Up](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#alarmsV2:alarm/SnowPlow+Raw+Good+Stream+Backing+Up?).
+
+Enrichers are not scaling well for the number of events we receive.
+
+See the [dashboard](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=SnowPlow).
+
+Could we get assistance in order to fix the delay?
+
+Thank you!
+```
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index 27a87d25170..39a3e3445ea 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -49,7 +49,7 @@ For information about the distribution of tests per level in GitLab, see
- Review how often the feature changes. Stable features that don't change very often
might not be worth covering with end-to-end tests if they are already covered
in lower level tests.
-- Finally, discuss the proposed test with the developer(s) involved in implementing
+- Finally, discuss the proposed test with the developers involved in implementing
the feature and the lower-level tests.
WARNING:
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index 47ebef37a4d..b4ec9e8ccd3 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -23,7 +23,7 @@ Please be sure to include the `feature_flag` tag so that the test can be skipped
`name`
- Format: `feature_flag: { name: 'feature_flag_name' }`
-- Used only for informational purposes at this time. It should be included to help quickly determine what
+- Used only for informational purposes at this time. It should be included to help quickly determine what
feature flag is under test.
`scope`
@@ -31,28 +31,28 @@ feature flag is under test.
- Format: `feature_flag: { name: 'feature_flag_name', scope: :project }`
- When `scope` is set to `:global`, the test will be **skipped on all live .com environments**. This is to avoid issues with feature flag changes affecting other tests or users on that environment.
- When `scope` is set to any other value (such as `:project`, `:group` or `:user`), or if no `scope` is specified, the test will only be **skipped on canary and production**.
-This is due to the fact that admin access is not available there.
+This is due to the fact that administrator access is not available there.
**WARNING:** You are strongly advised to first try and [enable feature flags only for a group, project, user](../../feature_flags/index.md#feature-actors),
or [feature group](../../feature_flags/index.md#feature-groups).
-- If a global feature flag must be used, it is strongly recommended to apply `scope: :global` to the `feature_flag` metadata. This is, however, left up to the SET's discretion to determine the level of risk.
- - For example, a test uses a global feature flag that only affects a small area of the application and is also needed to check for critical issues on live environments.
- In such a scenario, it would be riskier to skip running the test. For cases like this, `scope` can be left out of the metadata so that it can still run in live environments
- with admin access, such as staging.
+- If a global feature flag must be used, it is strongly recommended to apply `scope: :global` to the `feature_flag` metadata. This is, however, left up to the SET's discretion to determine the level of risk.
+ - For example, a test uses a global feature flag that only affects a small area of the application and is also needed to check for critical issues on live environments.
+ In such a scenario, it would be riskier to skip running the test. For cases like this, `scope` can be left out of the metadata so that it can still run in live environments
+ with administrator access, such as staging.
-**Note on `requires_admin`:** This tag should still be applied if there are other actions within the test that require admin access that are unrelated to updating a
+**Note on `requires_admin`:** This tag should still be applied if there are other actions within the test that require administrator access that are unrelated to updating a
feature flag (ex: creating a user via the API).
The code below would enable a feature flag named `:feature_flag_name` for the project
created by the test:
```ruby
-RSpec.describe "with feature flag enabled", feature_flag: {
- name: 'feature_flag_name',
- scope: :project
+RSpec.describe "with feature flag enabled", feature_flag: {
+ name: 'feature_flag_name',
+ scope: :project
} do
-
+
let(:project) { Resource::Project.fabricate_via_api! }
before do
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 1e7cba9d247..9730115fd9f 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -147,11 +147,11 @@ as well as these:
| Variable | Description |
|-|-|
| `QA_SCENARIO` | The scenario to run (default `Test::Instance::Image`) |
-| `QA_TESTS` | The test(s) to run (no default, which means run all the tests in the scenario). Use file paths as you would when running tests via RSpec, for example, `qa/specs/features/ee/browser_ui` would include all the `EE` UI tests. |
+| `QA_TESTS` | The tests to run (no default, which means run all the tests in the scenario). Use file paths as you would when running tests via RSpec, for example, `qa/specs/features/ee/browser_ui` would include all the `EE` UI tests. |
| `QA_RSPEC_TAGS` | The RSpec tags to add (no default) |
-For now [manual jobs with custom variables don't use the same variable
-when retried](https://gitlab.com/gitlab-org/gitlab/-/issues/31367), so if you want to run the same test(s) multiple times,
+For now, [manual jobs with custom variables don't use the same variable
+when retried](https://gitlab.com/gitlab-org/gitlab/-/issues/31367), so if you want to run the same tests multiple times,
specify the same variables in each `custom-parallel` job (up to as
many of the 10 available jobs that you want to run).
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index 45161404c73..0163f2e648c 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -21,8 +21,8 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:github` | The test requires a GitHub personal access token. |
| `:group_saml` | The test requires a GitLab instance that has SAML SSO enabled at the group level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:instance_saml` | The test requires a GitLab instance that has SAML SSO enabled at the instance level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
-| `:integrations` | This aims to test the available [integrations](../../../user/project/integrations/overview.md#integrations-listing). The test requires Docker to be installed in the run context. It will provision the containers and can be run against a local instance or using the `gitlab-qa` scenario `Test::Integration::Integrations` |
-| `:service_ping_disabled` | The test interacts with the GitLab configuration service ping at the instance level to turn admin setting service ping checkbox on or off. This tag will have the test run only in the `service_ping_disabled` job and must be paired with the `:orchestrated` and `:requires_admin` tags. |
+| `:integrations` | This aims to test the available [integrations](../../../user/project/integrations/index.md#available-integrations). The test requires Docker to be installed in the run context. It will provision the containers and can be run against a local instance or using the `gitlab-qa` scenario `Test::Integration::Integrations` |
+| `:service_ping_disabled` | The test interacts with the GitLab configuration service ping at the instance level to turn Admin Area setting service ping checkbox on or off. This tag will have the test run only in the `service_ping_disabled` job and must be paired with the `:orchestrated` and `:requires_admin` tags. |
| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) provisions the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run. |
| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test also includes provisioning of at least one Kubernetes cluster to test against. _This tag is often be paired with `:orchestrated`._ |
| `:ldap_no_server` | The test requires a GitLab instance to be configured to use LDAP. To be used with the `:orchestrated` tag. It does not spin up an LDAP server at orchestration time. Instead, it creates the LDAP server at runtime. |
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index d03a4976a8c..d91c53823e2 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -297,7 +297,7 @@ it('tests a promise rejection', async () => {
You can also simply return a promise from the test function.
Using the `done` and `done.fail` callbacks is discouraged when working with
-promises. They should only be used when testing callback-based code.
+promises. They should not be used.
**Bad**:
@@ -466,18 +466,22 @@ it('waits for an Ajax call', () => {
#### Vue rendering
-To wait until a Vue component is re-rendered, use either of the equivalent
-[`Vue.nextTick()`](https://vuejs.org/v2/api/#Vue-nextTick) or `vm.$nextTick()`.
+Use [`nextTick()`](https://vuejs.org/v2/api/#Vue-nextTick) to wait until a Vue component is
+re-rendered.
**in Jest:**
```javascript
-it('renders something', () => {
+import { nextTick } from 'vue';
+
+// ...
+
+it('renders something', async () => {
wrapper.setProps({ value: 'new value' });
- return wrapper.vm.$nextTick().then(() => {
- expect(wrapper.text()).toBe('new value');
- });
+ await nextTick();
+
+ expect(wrapper.text()).toBe('new value');
});
```
@@ -487,15 +491,17 @@ If the application triggers an event that you need to wait for in your test, reg
the assertions:
```javascript
-it('waits for an event', done => {
-  eventHub.$once('someEvent', eventHandler);
-  someFunction();
-  function eventHandler() {
-    expect(something).toBe('done');
-    done();
-  }
+it('waits for an event', () => {
+  const eventTriggered = new Promise((resolve) => {
+    eventHub.$once('someEvent', () => {
+      expect(something).toBe('done');
+      resolve();
+    });
+  });
+
+  someFunction();
+
+  return eventTriggered;
});
```
@@ -807,11 +813,14 @@ The following are examples of tests that work for Jest:
```javascript
it('uses some HTML element', () => {
- loadFixtures('some/page.html'); // loads spec/frontend/fixtures/some/page.html and adds it to the DOM
+ loadHTMLFixture('some/page.html'); // loads spec/frontend/fixtures/some/page.html and adds it to the DOM
const element = document.getElementById('#my-id');
// ...
+
+ // Jest does not clean up the DOM automatically
+ resetHTMLFixture();
});
```
diff --git a/doc/development/testing_guide/img/k9s.png b/doc/development/testing_guide/img/k9s.png
deleted file mode 100644
index 34585b2a43a..00000000000
--- a/doc/development/testing_guide/img/k9s.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index f5483a4b79c..ff4b77dec2c 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -16,6 +16,7 @@ For any of the following scenarios, the `start-review-app-pipeline` job would be
- for merge requests with frontend changes
- for merge requests with changes to `{,ee/,jh/}{app/controllers}/**/*`
- for merge requests with changes to `{,ee/,jh/}{app/models}/**/*`
+- for merge requests with changes to `{,ee/,jh/}lib/{,ee/,jh/}gitlab/**/*`
- for merge requests with QA changes
- for scheduled pipelines
- the MR has the `pipeline:run-review-app` label set
@@ -198,7 +199,7 @@ subgraph "CNG-mirror pipeline"
issue with a link to your merge request. Note that the deployment failure can
reveal an actual problem introduced in your merge request (that is, this isn't
necessarily a transient failure)!
-- If the `review-qa-smoke` or `review-qa-reliable` job keeps failing (note that we already retry them once),
+- If the `review-qa-smoke` or `review-qa-reliable` job keeps failing,
please check the job's logs: you could discover an actual problem introduced in
your merge request. You can also download the artifacts to see screenshots of
the page at the time the failures occurred. If you don't find the cause of the
diff --git a/doc/development/testing_guide/testing_migrations_guide.md b/doc/development/testing_guide/testing_migrations_guide.md
index 4092c1a2f6d..d71788e21f3 100644
--- a/doc/development/testing_guide/testing_migrations_guide.md
+++ b/doc/development/testing_guide/testing_migrations_guide.md
@@ -227,6 +227,18 @@ expect('MigrationClass').to have_scheduled_batched_migration(
)
```
+#### `be_finalize_background_migration_of`
+
+Verifies that a migration calls `finalize_background_migration` with the expected background migration class.
+
+```ruby
+# Migration
+finalize_background_migration('MigrationClass')
+
+# Spec
+expect(described_class).to be_finalize_background_migration_of('MigrationClass')
+```
+
### Examples of migration tests
Migration tests depend on what the migration does exactly; the most common types are data migrations and scheduling background migrations.
diff --git a/doc/development/uploads/background.md b/doc/development/uploads/background.md
index e68e4127b57..1ad1aec23f2 100644
--- a/doc/development/uploads/background.md
+++ b/doc/development/uploads/background.md
@@ -1,154 +1,11 @@
---
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+redirect_to: 'index.md'
+remove_date: '2022-07-25'
---
-# Uploads guide: Why GitLab uses custom upload logic
+This document was moved to [another location](index.md).
-This page is for developers trying to better understand the history behind GitLab uploads and the
-technical challenges associated with uploads.
-
-## Problem description
-
-GitLab and [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) use special rules for handling file uploads,
-because in an ordinary Rails application file uploads can become expensive as files grow in size.
-Rails often sacrifices performance to provide a better developer experience, including how it handles
-`multipart/form-post` uploads. In any Rack server, Rails applications included, when such a request arrives at the application server,
-several things happen:
-
-1. A [Rack middleware](https://github.com/rack/rack/blob/main/lib/rack/multipart.rb) intercepts the request and parses the request body.
-1. The middleware writes each file in the multipart request to a temporary directory on disk.
-1. A `params` hash is constructed with entries pointing to the respective files on disk.
-1. A Rails controller acts on the file contents.
-
-While this is convenient for developers, it is costly for the Ruby server process to buffer large files on disk.
-Because of Ruby's [global interpreter lock](https://en.wikipedia.org/wiki/Global_interpreter_lock),
-only a single thread of execution of a given Ruby process can be on CPU. This means the amount of CPU
-time spent doing this is not available to other worker threads serving user requests.
-Buffering files to disk also means spending more time in I/O routines and mode switches, which are expensive operations.
-
-The following diagram shows how GitLab handled such a request prior to putting optimizations in place.
-
-```mermaid
-graph TB
- subgraph "load balancers"
- LB(Proxy)
- end
-
- subgraph "Shared storage"
- nfs(NFS)
- end
-
- subgraph "redis cluster"
- r(persisted redis)
- end
- LB-- 1 -->Workhorse
-
- subgraph "web or API fleet"
- Workhorse-- 2 -->rails
- end
- rails-- "3 (write files)" -->nfs
- rails-- "4 (schedule a job)" -->r
-
- subgraph sidekiq
- s(sidekiq)
- end
- s-- "5 (fetch a job)" -->r
- s-- "6 (read files)" -->nfs
-```
-
-We went through two major iterations of our uploads architecture to improve on these problems:
-
-1. [Moving disk buffering to Workhorse.](#moving-disk-buffering-to-workhorse)
-1. [Uploading to Object Storage from Workhorse.](#moving-to-object-storage-and-direct-uploads)
-
-### Moving disk buffering to Workhorse
-
-To address the performance issues resulting from buffering files in Ruby, we moved this logic to Workhorse instead,
-our reverse proxy fronting the GitLab Rails application.
-Workhorse is written in Go, and is much better at dealing with stream processing and I/O than Rails.
-
-There are two parts to this implementation:
-
-1. In Workhorse, a request handler detects `multipart/form-data` content in an incoming user request.
- If such a request is detected, Workhorse hijacks the request body before forwarding it to Rails.
- Workhorse writes all files to disk, rewrites the multipart form fields to point to the new locations, signs the
- request, then forwards it to Rails.
-1. In Rails, a [custom multipart Rack middleware](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/middleware/multipart.rb)
- identifies any signed multipart requests coming from Workhorse and prepares the `params` hash Rails
- would expect, now pointing to the files cached by Workhorse. This makes it a drop-in replacement for `Rack::Multipart`.
-
-The diagram below shows how GitLab handles such a request today:
-
-```mermaid
-graph TB
- subgraph "load balancers"
- LB(HA Proxy)
- end
-
- subgraph "Shared storage"
- nfs(NFS)
- end
-
- subgraph "redis cluster"
- r(persisted redis)
- end
- LB-- 1 -->Workhorse
-
- subgraph "web or API fleet"
- Workhorse-- "3 (without files)" -->rails
- end
- Workhorse -- "2 (write files)" -->nfs
- rails-- "4 (schedule a job)" -->r
-
- subgraph sidekiq
- s(sidekiq)
- end
- s-- "5 (fetch a job)" -->r
- s-- "6 (read files)" -->nfs
-```
-
-While this "one-size-fits-all" solution greatly improves performance for multipart uploads without compromising
-developer ergonomics, it severely limits GitLab [availability](#availability-challenges)
-and [scalability](#scalability-challenges).
-
-#### Availability challenges
-
-Moving file buffering to Workhorse addresses the immediate performance problems stemming from Ruby not being good at
-handling large file uploads. However, a remaining issue of this solution is its reliance on attached storage,
-whether via ordinary hard drives or network attached storage like NFS.
-NFS is a [single point of failure](https://en.wikipedia.org/wiki/Single_point_of_failure), and is unsuitable for
-deploying GitLab in highly available, cloud native environments.
-
-#### Scalability challenges
-
-NFS is not a part of cloud native installations, such as those running in Kubernetes.
-In Kubernetes, machine boundaries translate to pods, and without network-attached storage, disk-buffered uploads
-must be written directly to the pod's file system.
-
-Using disk buffering presents us with a scalability challenge here. If Workhorse can only
-write files to a pod's private file system, then these files are inaccessible outside of this particular pod.
-With disk buffering, a Rails controller will accept a file upload and enqueue it for upload in a Sidekiq
-background job. Therefore, Sidekiq requires access to these files.
-However, in a cloud native environment all Sidekiq instances run on separate pods, so they are
-not able to access files buffered to disk on a web server pod.
-
-Therefore, all features that involve Sidekiq uploading disk-buffered files severely limit the scalability of GitLab.
-
-## Moving to object storage and direct uploads
-
-To address these availability and scalability problems,
-instead of buffering files to disk, we have added support for uploading files directly
-from Workhorse to a given destination. While it remains possible to upload to local or network-attached storage
-this way, you should use a highly available
-[object store](https://en.wikipedia.org/wiki/Object_storage),
-such as AWS S3, Google GCS, or Azure, for scalability reasons.
-
-With direct uploads, Workhorse does not buffer files to disk. Instead, it first authorizes the request with
-the Rails application to find out where to upload it, then streams the file directly to its ultimate destination.
-
-To learn more about how disk buffering and direct uploads are implemented, see:
-
-- [How uploads work technically](implementation.md)
-- [Adding new uploads](working_with_uploads.md)
+<!-- This redirect file can be deleted after <2022-07-25>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/uploads/implementation.md b/doc/development/uploads/implementation.md
index 13a875cd1af..1ad1aec23f2 100644
--- a/doc/development/uploads/implementation.md
+++ b/doc/development/uploads/implementation.md
@@ -1,190 +1,11 @@
---
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+redirect_to: 'index.md'
+remove_date: '2022-07-25'
---
-# Uploads guide: How uploads work technically
+This document was moved to [another location](index.md).
-This page is for developers trying to better understand what kinds of uploads exist in GitLab and how they are implemented.
-
-## Kinds of uploads and how to choose between them
-
-We can identify three major use-cases for an upload:
-
-1. **storage:** if we are uploading for storing a file (like artifacts, packages, or discussion attachments). In this case [direct upload](#direct-upload) is the proper level as it's the less resource-intensive operation. Additional information can be found on [File Storage in GitLab](../file_storage.md).
-1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
-1. **Sidekiq/asynchronous processing:** Asynchronous processing must implement [direct upload](#direct-upload), the reason being that it's the only way to support Cloud Native deployments without a shared NFS.
-
-Selecting the proper acceleration is a tradeoff between speed of development and operational costs.
-
-For more details about currently broken feature see [epic &1802](https://gitlab.com/groups/gitlab-org/-/epics/1802).
-
-### Handling repository uploads
-
-Some features involves Git repository uploads without using a regular Git client.
-Some examples are uploading a repository file from the web interface and [design management](../../user/project/issues/design_management.md).
-
-Those uploads requires the rails controller to act as a Git client in lieu of the user.
-Those operation falls into _in-controller/synchronous processing_ category, but we have no warranties on the file size.
-
-In case of a LFS upload, the file pointer is committed synchronously, but file upload to object storage is performed asynchronously with Sidekiq.
-
-## Upload encodings
-
-By upload encoding we mean how the file is included within the incoming request.
-
-We have three kinds of file encoding in our uploads:
-
-1. <i class="fa fa-check-circle"></i> **multipart**: `multipart/form-data` is the most common, a file is encoded as a part of a multipart encoded request.
-1. <i class="fa fa-check-circle"></i> **body**: some APIs uploads files as the whole request body.
-1. <i class="fa fa-times-circle"></i> **JSON**: some JSON APIs upload files as base64-encoded strings. This requires a change to GitLab Workhorse,
- which is tracked [in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/325068).
-
-## Uploading technologies
-
-By uploading technologies we mean how all the involved services interact with each other.
-
-GitLab supports 3 kinds of uploading technologies, here follows a brief description with a sequence diagram for each one. Diagrams are not meant to be exhaustive.
-
-### Rack Multipart upload
-
-This is the default kind of upload, and it's the most expensive in terms of resources.
-
-In this case, Workhorse is unaware of files being uploaded and acts as a regular proxy.
-
-When a multipart request reaches the rails application, `Rack::Multipart` leaves behind temporary files in `/tmp` and uses valuable Ruby process time to copy files around.
-
-```mermaid
-sequenceDiagram
- participant c as Client
- participant w as Workhorse
- participant r as Rails
-
- activate c
- c ->>+w: POST /some/url/upload
- w->>+r: POST /some/url/upload
-
- r->>r: save the incoming file on /tmp
- r->>r: read the file for processing
-
- r-->>-c: request result
- deactivate c
- deactivate w
-```
-
-### Disk buffered upload
-
-This kind of upload avoids wasting resources caused by handling upload writes to `/tmp` in rails.
-
-This optimization is not active by default on REST API requests.
-
-When enabled, Workhorse looks for files in multipart MIME requests, uploading
-any it finds to a temporary file on shared storage. The MIME data in the request
-is replaced with the path to the corresponding file before it is forwarded to
-Rails.
-
-To prevent abuse of this feature, Workhorse signs the modified request with a
-special header, stating which entries it modified. Rails ignores any
-unsigned path entries.
-
-```mermaid
-sequenceDiagram
- participant c as Client
- participant w as Workhorse
- participant r as Rails
- participant s as NFS
-
- activate c
- c ->>+w: POST /some/url/upload
-
- w->>+s: save the incoming file on a temporary location
- s-->>-w: request result
-
- w->>+r: POST /some/url/upload
- Note over w,r: file was replaced with its location<br>and other metadata
-
- opt requires async processing
- r->>+redis: schedule a job
- redis-->>-r: job is scheduled
- end
-
- r-->>-c: request result
- deactivate c
- w->>-w: cleanup
-
- opt requires async processing
- activate sidekiq
- sidekiq->>+redis: fetch a job
- redis-->>-sidekiq: job
-
- sidekiq->>+s: read file
- s-->>-sidekiq: file
-
- sidekiq->>sidekiq: process file
-
- deactivate sidekiq
- end
-```
-
-### Direct upload
-
-This is the more advanced acceleration technique we have in place.
-
-Workhorse asks Rails for temporary pre-signed object storage URLs and directly uploads to object storage.
-
-In this setup, an extra Rails route must be implemented in order to handle authorization. Examples of this can be found in:
-
-- [`Projects::LfsStorageController`](https://gitlab.com/gitlab-org/gitlab/-/blob/cc723071ad337573e0360a879cbf99bc4fb7adb9/app/controllers/projects/lfs_storage_controller.rb)
- and [its routes](https://gitlab.com/gitlab-org/gitlab/-/blob/cc723071ad337573e0360a879cbf99bc4fb7adb9/config/routes/git_http.rb#L31-32).
-- [API endpoints for uploading packages](../packages.md#file-uploads).
-
-Direct upload falls back to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../../administration/uploads.md#object-storage-settings).
-The answer to the `/authorize` call contains only a file system path.
-
-```mermaid
-sequenceDiagram
- participant c as Client
- participant w as Workhorse
- participant r as Rails
- participant os as Object Storage
-
- activate c
- c ->>+w: POST /some/url/upload
-
- w ->>+r: POST /some/url/upload/authorize
- Note over w,r: this request has an empty body
- r-->>-w: presigned OS URL
-
- w->>+os: PUT file
- Note over w,os: file is stored on a temporary location. Rails select the destination
- os-->>-w: request result
-
- w->>+r: POST /some/url/upload
- Note over w,r: file was replaced with its location<br>and other metadata
-
- r->>+os: move object to final destination
- os-->>-r: request result
-
- opt requires async processing
- r->>+redis: schedule a job
- redis-->>-r: job is scheduled
- end
-
- r-->>-c: request result
- deactivate c
- w->>-w: cleanup
-
- opt requires async processing
- activate sidekiq
- sidekiq->>+redis: fetch a job
- redis-->>-sidekiq: job
-
- sidekiq->>+os: get object
- os-->>-sidekiq: file
-
- sidekiq->>sidekiq: process file
-
- deactivate sidekiq
- end
-```
+<!-- This redirect file can be deleted after <2022-07-25>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/development/uploads/index.md b/doc/development/uploads/index.md
index c486f2d3689..b8326489d40 100644
--- a/doc/development/uploads/index.md
+++ b/doc/development/uploads/index.md
@@ -6,9 +6,159 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Uploads development guide
-Uploads are an integral part of many GitLab features. To understand how GitLab handles uploads, refer to
-the following pages:
+Uploads are an integral part of many GitLab features. This page provides an overview of the key
+mechanisms GitLab uses to transfer uploaded files to their storage destination.
-- [Why GitLab uses custom upload logic.](background.md)
-- [How uploads work technically.](implementation.md)
-- [How to add new uploads.](working_with_uploads.md)
+GitLab uploads are configured per feature. All features that involve uploads provide the same configuration options,
+but they can be configured independently of one another. For example, Git LFS uploads can be configured
+independently of CI/CD build artifact uploads, but they both offer the same set of settings keys. These settings
+govern how an upload is processed, which can have a dramatic impact on performance and scalability.
+
+This page summarizes the upload settings that are important in deciding how such files are handled. The sections
+that follow then describe each of these mechanisms in more detail.
+
+## How upload settings drive upload flow
+
+Before we examine individual upload strategies in more detail, let's examine a high-level
+breakdown of which upload settings map to each of these strategies.
+
+Upload settings themselves are documented in [Uploads administration](../../administration/uploads.md).
+Here, we focus on how these settings drive the internals of GitLab upload logic.
+At the top level, we distinguish between two **destinations** for uploaded files:
+
+- [**Local storage**](#local-storage) - Files are stored on a volume attached to the web server node.
+- [**Object storage**](#object-storage) - Files are stored in a remote object store bucket.
+
+In the following table, a dotted setting name such as `x.y.z` refers to the nested key path through `gitlab.yml` (a short configuration sketch follows the table):
+
+| Setting | Value | Behavior |
+| -------------------------------------- | ------- | ------------------------------- |
+| `<feature>.object_store.enabled` | `false` | Files are stored locally in `<feature>.storage_path` |
+| `<feature>.object_store.enabled` | `true` | Files are stored remotely in `<feature>.object_store.remote_directory` |
+
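+For illustration, a minimal `gitlab.yml` sketch for a hypothetical `<feature>` block could look like the
+following. Treat the values as placeholders; the exact keys supported by each feature are listed in
+[Uploads administration](../../administration/uploads.md).
+
+```yaml
+# Illustrative sketch only - not a copy-paste configuration.
+<feature>:
+  storage_path: shared/uploads       # destination used while object storage is disabled
+  object_store:
+    enabled: true                    # switch the destination to a remote bucket
+    remote_directory: gitlab-uploads # name of the target bucket
+```
+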
+When using object storage, administrators can control how those files are moved into the respective bucket.
+This move can happen in one of these ways:
+
+- [Rails controller upload](#rails-controller-upload).
+- [Background upload](#background-upload).
+- [Direct upload](#direct-upload).
+
+Which strategy is used is governed by the following `<feature>.object_store.*` settings (a configuration sketch follows the table):
+
+| | `background_upload` = `false` | `background_upload` = `true` |
+| ------------------------- | ----------------------------- | ------------------------------- |
+| `direct_upload` = `false` | Controller upload | Background upload |
+| `direct_upload` = `true` | Direct upload | Direct upload (takes precedence)|
+
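+Continuing the sketch above, the strategy flags live under the same `object_store` block. Again, this is
+only an illustration of how the settings relate, not a recommended configuration:
+
+```yaml
+<feature>:
+  object_store:
+    enabled: true
+    remote_directory: gitlab-uploads
+    direct_upload: true      # Workhorse uploads straight to the bucket
+    background_upload: false # ignored when direct_upload is true
+```
+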
+Individual Sidekiq workers might also store files in object storage, which is not something we cover here.
+More importantly, `background_upload` does not imply _all files are uploaded by Sidekiq._
+Sidekiq workers that store files in object storage could still exist when this setting is `false`.
+Those cases are never user-initiated uploads, but they might occur in response to another user-initiated
+action, such as exporting a GitLab repository.
+
+Finally, Workhorse assists most user-initiated uploads with an upload buffering mechanism that keeps slow work out of Rails controllers.
+Because this mechanism is orthogonal to the strategies above, it is explained separately
+in [Workhorse assisted uploads](#workhorse-assisted-uploads).
+
+We now look at each case in more detail.
+
+## Local storage
+
+Local storage is the simplest path an upload can take. It was how GitLab treated uploads in its early days.
+It assumes a storage volume (like a disk or network attached storage) is accessible
+to the Rails application at `storage_path`. This file path is relative to the Rails root directory and,
+like any upload setting, configurable per feature.
+
+When a client sends a file upload, Workhorse first buffers the file to disk, a mechanism explained in more
+detail in [Workhorse assisted uploads](#workhorse-assisted-uploads). When the request reaches the Rails
+application, the file already exists on local storage, so Rails merely has to move it to the specified
+directory to finalize the transaction.
+
+Local storage cannot be used with cloud-native GitLab (CNG) installations. It is therefore not used for
+GitLab SaaS either.
+
+## Object storage
+
+To provide horizontally scalable storage, you must use an object store provider such as:
+
+- Amazon S3.
+- Google Cloud Storage (GCS).
+- Azure Blob Storage.
+
+Using object storage provides two main benefits:
+
+- Ease of adding more storage capacity: cloud providers do this for you automatically.
+- Enabling horizontal scaling of your GitLab installation: multiple GitLab application servers can access the same data
+ when it is stored in object storage.
+
+CNG installations, including GitLab SaaS, always use object storage (GCS in the case of GitLab SaaS).
+
+A challenge with uploading to a remote object store is that it involves an outgoing HTTP request from
+GitLab to the object store provider. As mentioned above, there are three strategies for how
+this HTTP request is sent:
+
+- [Rails controller upload](#rails-controller-upload).
+- [Background upload](#background-upload).
+- [Direct upload](#direct-upload).
+
+### Rails controller upload
+
+When neither background upload nor direct upload is available, Rails uploads the file to object storage
+as part of the controller `create` action. Which controller is responsible depends on the kind of file uploaded.
+
+A Rails controller upload is very similar to uploading to local storage. The main difference: Rails must
+send an HTTP request to the object store. This happens via the [CarrierWave Fog](https://github.com/carrierwaveuploader/carrierwave#fog)
+uploader.
+
+As with local storage, this strategy benefits from [Workhorse assistance](#workhorse-assisted-uploads) to
+keep some of the costly I/O work out of Ruby and Rails. Direct upload does a better job at this because it also keeps the HTTP PUT requests to object storage outside Puma.
+
+This strategy is only suitable for small file uploads, because it is subject to Puma's 60-second request timeout.
+
+### Background upload
+
+WARNING:
+This strategy is deprecated in GitLab 14.9 and later, and is scheduled to [be removed in GitLab 15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/26600).
+
+With background uploads enabled:
+
+1. Files are uploaded as if they were to reside in local storage.
+1. When Rails saves the upload metadata and the transaction completes, a Sidekiq job is scheduled.
+1. The Sidekiq job transfers the file to the object store bucket.
+ - If the job completes, the upload record is updated to reflect the file's new location.
+ - If the job fails or gets lost, the upload stays in local storage and has the lifecycle of a normal local storage upload.
+
+Because Rails and Sidekiq must cooperate to move the file to its final destination, this strategy requires shared
+storage and is therefore unsuitable for CNG installations. We do not use background upload in GitLab SaaS.
+
+As background upload is an extension of local storage, it benefits from the same [Workhorse assistance](#workhorse-assisted-uploads) to
+keep costly I/O work out of Ruby and Rails.
+
+### Direct upload
+
+Direct upload is the recommended way to move large files into object storage in CNG installations like GitLab SaaS.
+
+With direct upload enabled, Workhorse:
+
+1. Authorizes the request with Rails.
+1. Establishes a connection with the object store itself to transfer the file to a temporary location.
+1. When the transfer is complete, Workhorse finalizes the request with Rails. Rails issues an object store copy operation to put the file in its final location.
+1. Completes the upload by deleting the temporary file in object storage.
+
+This strategy is a different form of [Workhorse assistance](#workhorse-assisted-uploads). It does not rely on shared storage that is accessible by both Workhorse and Puma.
+
+Of all existing upload strategies, direct upload is best able to handle large (gigabyte) uploads. However, because Puma still does an object storage copy operation, which takes time proportional to the size of the upload, there remains a possibility of hitting Puma timeouts.
+
+## Workhorse assisted uploads
+
+Most uploads receive assistance from Workhorse in some way.
+
+- Often, Workhorse buffers the upload to a temporary file. Workhorse adds metadata to the request to tell
+ Puma the name and location of the temporary file. This requires shared temporary storage between Workhorse and Puma.
+ All GitLab installations (including CNG) have this shared temporary storage.
+- Workhorse sometimes pre-processes the file. For example, for CI artifact uploads, Workhorse creates a separate index
+ of the contents of the ZIP file. By doing this in Workhorse we bypass the Puma request timeout.
+ Compared to Sidekiq background processing, this has the advantage that the user does not see an intermediate state
+ where GitLab accepts the file but has not yet processed it.
+- With direct upload, Workhorse can both pre-process the file and upload it to object storage.
+ Uploading a large file to object storage takes time; by doing this in Workhorse we avoid the Puma request timeout.
diff --git a/doc/development/uploads/working_with_uploads.md b/doc/development/uploads/working_with_uploads.md
index 99c04888804..4e907530a9f 100644
--- a/doc/development/uploads/working_with_uploads.md
+++ b/doc/development/uploads/working_with_uploads.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Uploads guide: Adding new uploads
-In this section, we describe how to add a new upload route [accelerated](implementation.md#uploading-technologies) by Workhorse for [body and multipart](implementation.md#upload-encodings) encoded uploads.
+Here, we describe how to add a new upload route [accelerated](index.md#workhorse-assisted-uploads) by Workhorse.
Upload routes belong to one of these categories:
@@ -15,31 +15,31 @@ Upload routes belong to one of these categories:
1. GraphQL API: uploads handled by a GraphQL resolve function.
WARNING:
-GraphQL uploads do not support [direct upload](implementation.md#direct-upload) yet. Depending on the use case, the feature may not work on installations without NFS (like GitLab.com or Kubernetes installations). Uploading to object storage inside the GraphQL resolve function may result in timeout errors. For more details please follow [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
+GraphQL uploads do not support [direct upload](index.md#direct-upload). Depending on the use case, the feature may not work on installations without NFS (like GitLab.com or Kubernetes installations). Uploading to object storage inside the GraphQL resolve function may result in timeout errors. For more details, follow [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
## Update Workhorse for the new route
-For both the Rails controller and Grape API uploads, Workhorse has to be updated in order to get the
+For both the Rails controller and Grape API uploads, Workhorse must be updated to add
support for the new upload route.
1. Open a new issue in the [Workhorse tracker](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/new) describing precisely the new upload route:
- The route's URL.
- - The [upload encoding](implementation.md#upload-encodings).
+   - The upload encoding (for example, `multipart/form-data` or the whole request body).
- If possible, provide a dump of the upload request.
1. Implement and get the MR merged for this issue above.
-1. Ask the Maintainers of [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) to create a new release. You can do that in the MR
- directly during the maintainer review or ask for it in the `#workhorse` Slack channel.
+1. Ask the Maintainers of [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) to create a new release. You can do that in the merge request
+ directly during the maintainer review, or ask for it in the `#workhorse` Slack channel.
1. Bump the [Workhorse version file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/GITLAB_WORKHORSE_VERSION)
to the version you have from the previous points, or bump it in the same merge request that contains
- the Rails changes (see [Implementing the new route with a Rails controller](#implementing-the-new-route-with-a-rails-controller) or [Implementing the new route with a Grape API endpoint](#implementing-the-new-route-with-a-grape-api-endpoint) below).
+ the Rails changes. Refer to [Implementing the new route with a Rails controller](#implementing-the-new-route-with-a-rails-controller) or [Implementing the new route with a Grape API endpoint](#implementing-the-new-route-with-a-grape-api-endpoint) below.
## Implementing the new route with a Rails controller
-For a Rails controller upload, we usually have a [multipart](implementation.md#upload-encodings) upload and there are a
+For a Rails controller upload, we usually have a `multipart/form-data` upload and there are a
few things to do:
1. The upload is available under the parameter name you're using. For example, it could be an `artifact`
- or a nested parameter such as `user[avatar]`. Let's say that we have the upload under the
+ or a nested parameter such as `user[avatar]`. If you have the upload under the
`file` parameter, reading `params[:file]` should get you an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) instance.
1. Generally speaking, it's a good idea to check if the instance is from the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) class. For example, see how we checked
[that the parameter is indeed an `UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/commit/ea30fe8a71bf16ba07f1050ab4820607b5658719#51c0cc7a17b7f12c32bc41cfab3649ff2739b0eb_79_77).
@@ -53,7 +53,7 @@ builds automatically for you.
## Implementing the new route with a Grape API endpoint
-For a Grape API upload, we can have [body or a multipart](implementation.md#upload-encodings) upload. Things are slightly more complicated: two endpoints are needed. One for the
+For a Grape API upload, we can have a body or multipart upload. Things are slightly more complicated: two endpoints are needed. One for the
Workhorse pre-upload authorization and one for accepting the upload metadata from Workhorse:
1. Implement an endpoint with the URL + `/authorize` suffix that will:
@@ -70,8 +70,8 @@ use `requires :file, type: ::API::Validations::Types::WorkhorseFile`.
- Check that the request is coming from Workhorse with the `require_gitlab_workhorse!` from the
[API helpers](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/helpers.rb).
- Check the user permissions.
- - The remaining code of the processing. This is where the code must be reading the parameter (for
-our example, it would be `params[:file]`).
+ - The remaining code of the processing. In this step, the code must read the parameter. For
+our example, it would be `params[:file]`.
WARNING:
**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
@@ -124,40 +124,40 @@ Therefore, document new uploads here by slotting them into the following tables:
### CarrierWave integration
-| File | Carrierwave usage | Categorized |
+| File | CarrierWave usage | Categorized |
|---------------------------------------------------------|----------------------------------------------------------------------------------|---------------------|
-| `app/models/project.rb` | `include Avatarable` | :white_check_mark: |
-| `app/models/projects/topic.rb` | `include Avatarable` | :white_check_mark: |
-| `app/models/group.rb` | `include Avatarable` | :white_check_mark: |
-| `app/models/user.rb` | `include Avatarable` | :white_check_mark: |
-| `app/models/terraform/state_version.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/ci/job_artifact.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/ci/pipeline_artifact.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/pages_deployment.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/lfs_object.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/dependency_proxy/blob.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/dependency_proxy/manifest.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/packages/composer/cache_file.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/packages/package_file.rb` | `include FileStoreMounter` | :white_check_mark: |
-| `app/models/concerns/packages/debian/component_file.rb` | `include FileStoreMounter` | :white_check_mark: |
+| `app/models/project.rb` | `include Avatarable` | **{check-circle}** Yes |
+| `app/models/projects/topic.rb` | `include Avatarable` | **{check-circle}** Yes |
+| `app/models/group.rb` | `include Avatarable` | **{check-circle}** Yes |
+| `app/models/user.rb` | `include Avatarable` | **{check-circle}** Yes |
+| `app/models/terraform/state_version.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/ci/job_artifact.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/ci/pipeline_artifact.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/pages_deployment.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/lfs_object.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/dependency_proxy/blob.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/dependency_proxy/manifest.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/packages/composer/cache_file.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/packages/package_file.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
+| `app/models/concerns/packages/debian/component_file.rb` | `include FileStoreMounter` | **{check-circle}** Yes |
| `ee/app/models/issuable_metric_image.rb` | `include FileStoreMounter` | |
| `ee/app/models/vulnerabilities/remediation.rb` | `include FileStoreMounter` | |
| `ee/app/models/vulnerabilities/export.rb` | `include FileStoreMounter` | |
-| `app/models/packages/debian/project_distribution.rb` | `include Packages::Debian::Distribution` | :white_check_mark: |
-| `app/models/packages/debian/group_distribution.rb` | `include Packages::Debian::Distribution` | :white_check_mark: |
-| `app/models/packages/debian/project_component_file.rb` | `include Packages::Debian::ComponentFile` | :white_check_mark: |
-| `app/models/packages/debian/group_component_file.rb` | `include Packages::Debian::ComponentFile` | :white_check_mark: |
-| `app/models/merge_request_diff.rb` | `mount_uploader :external_diff, ExternalDiffUploader` | :white_check_mark: |
-| `app/models/note.rb` | `mount_uploader :attachment, AttachmentUploader` | :white_check_mark: |
-| `app/models/appearance.rb` | `mount_uploader :logo, AttachmentUploader` | :white_check_mark: |
-| `app/models/appearance.rb` | `mount_uploader :header_logo, AttachmentUploader` | :white_check_mark: |
-| `app/models/appearance.rb` | `mount_uploader :favicon, FaviconUploader` | :white_check_mark: |
+| `app/models/packages/debian/project_distribution.rb` | `include Packages::Debian::Distribution` | **{check-circle}** Yes |
+| `app/models/packages/debian/group_distribution.rb` | `include Packages::Debian::Distribution` | **{check-circle}** Yes |
+| `app/models/packages/debian/project_component_file.rb` | `include Packages::Debian::ComponentFile` | **{check-circle}** Yes |
+| `app/models/packages/debian/group_component_file.rb` | `include Packages::Debian::ComponentFile` | **{check-circle}** Yes |
+| `app/models/merge_request_diff.rb` | `mount_uploader :external_diff, ExternalDiffUploader` | **{check-circle}** Yes |
+| `app/models/note.rb` | `mount_uploader :attachment, AttachmentUploader` | **{check-circle}** Yes |
+| `app/models/appearance.rb` | `mount_uploader :logo, AttachmentUploader` | **{check-circle}** Yes |
+| `app/models/appearance.rb` | `mount_uploader :header_logo, AttachmentUploader` | **{check-circle}** Yes |
+| `app/models/appearance.rb` | `mount_uploader :favicon, FaviconUploader` | **{check-circle}** Yes |
| `app/models/project.rb` | `mount_uploader :bfg_object_map, AttachmentUploader` | |
-| `app/models/import_export_upload.rb` | `mount_uploader :import_file, ImportExportUploader` | :white_check_mark: |
-| `app/models/import_export_upload.rb` | `mount_uploader :export_file, ImportExportUploader` | :white_check_mark: |
+| `app/models/import_export_upload.rb` | `mount_uploader :import_file, ImportExportUploader` | **{check-circle}** Yes |
+| `app/models/import_export_upload.rb` | `mount_uploader :export_file, ImportExportUploader` | **{check-circle}** Yes |
| `app/models/ci/deleted_object.rb` | `mount_uploader :file, DeletedObjectUploader` | |
-| `app/models/design_management/action.rb` | `mount_uploader :image_v432x230, DesignManagement::DesignV432x230Uploader` | :white_check_mark: |
-| `app/models/concerns/packages/debian/distribution.rb` | `mount_uploader :signed_file, Packages::Debian::DistributionReleaseFileUploader` | :white_check_mark: |
-| `app/models/bulk_imports/export_upload.rb` | `mount_uploader :export_file, ExportUploader` | :white_check_mark: |
+| `app/models/design_management/action.rb` | `mount_uploader :image_v432x230, DesignManagement::DesignV432x230Uploader` | **{check-circle}** Yes |
+| `app/models/concerns/packages/debian/distribution.rb` | `mount_uploader :signed_file, Packages::Debian::DistributionReleaseFileUploader` | **{check-circle}** Yes |
+| `app/models/bulk_imports/export_upload.rb` | `mount_uploader :export_file, ExportUploader` | **{check-circle}** Yes |
| `ee/app/models/user_permission_export_upload.rb` | `mount_uploader :file, AttachmentUploader` | |
| `app/models/ci/secure_file.rb` | `include FileStoreMounter` | |
diff --git a/doc/development/workhorse/configuration.md b/doc/development/workhorse/configuration.md
index 7f9331e6f1e..ce80a155489 100644
--- a/doc/development/workhorse/configuration.md
+++ b/doc/development/workhorse/configuration.md
@@ -128,6 +128,25 @@ relative URL in the `authBackend` setting:
gitlab-workhorse -authBackend http://localhost:8080/gitlab
```
+## TLS support
+
+You can configure a listener to serve incoming requests over TLS.
+You must provide the paths to the files containing the server's certificate and matching private key:
+
+```toml
+[[listeners]]
+network = "tcp"
+addr = "localhost:3443"
+[listeners.tls]
+ certificate = "/path/to/certificate"
+ key = "/path/to/private/key"
+ min_version = "tls1.2"
+ max_version = "tls1.3"
+```
+
+The `certificate` file should contain the concatenation
+of the server's certificate, any intermediates, and the CA's certificate.
+
## Interaction of authBackend and authSocket
The interaction between `authBackend` and `authSocket` can be confusing.
diff --git a/doc/development/workhorse/index.md b/doc/development/workhorse/index.md
index f7ca16e0f31..3aa7e945f53 100644
--- a/doc/development/workhorse/index.md
+++ b/doc/development/workhorse/index.md
@@ -44,7 +44,7 @@ On some operating systems, such as FreeBSD, you may have to use
### Run time dependencies
-Workhorse uses [Exiftool](https://www.sno.phy.queensu.ca/~phil/exiftool/) for
+Workhorse uses [ExifTool](https://www.sno.phy.queensu.ca/~phil/exiftool/) for
removing EXIF data (which may contain sensitive information) from uploaded
images. If you installed GitLab: