author     GitLab Bot <gitlab-bot@gitlab.com>   2020-08-20 18:42:06 +0000
committer  GitLab Bot <gitlab-bot@gitlab.com>   2020-08-20 18:42:06 +0000
commit     6e4e1050d9dba2b7b2523fdd1768823ab85feef4 (patch)
tree       78be5963ec075d80116a932011d695dd33910b4e /doc/development
parent     1ce776de4ae122aba3f349c02c17cebeaa8ecf07 (diff)
download   gitlab-ce-6e4e1050d9dba2b7b2523fdd1768823ab85feef4.tar.gz
Add latest changes from gitlab-org/gitlab@13-3-stable-ee
Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/README.md | 16
-rw-r--r--  doc/development/api_graphql_styleguide.md | 153
-rw-r--r--  doc/development/api_styleguide.md | 7
-rw-r--r--  doc/development/architecture.md | 164
-rw-r--r--  doc/development/auto_devops.md | 6
-rw-r--r--  doc/development/background_migrations.md | 10
-rw-r--r--  doc/development/build_test_package.md | 6
-rw-r--r--  doc/development/changelog.md | 2
-rw-r--r--  doc/development/chatops_on_gitlabcom.md | 6
-rw-r--r--  doc/development/cicd/img/ci_architecture.png | bin 102944 -> 42677 bytes
-rw-r--r--  doc/development/cicd/img/ci_template_selection_v13_1.png | bin 21284 -> 8676 bytes
-rw-r--r--  doc/development/cicd/index.md | 7
-rw-r--r--  doc/development/cicd/templates.md | 48
-rw-r--r--  doc/development/code_review.md | 12
-rw-r--r--  doc/development/contributing/issue_workflow.md | 20
-rw-r--r--  doc/development/contributing/style_guides.md | 19
-rw-r--r--  doc/development/dangerbot.md | 2
-rw-r--r--  doc/development/database_debugging.md | 2
-rw-r--r--  doc/development/database_review.md | 4
-rw-r--r--  doc/development/deleting_migrations.md | 2
-rw-r--r--  doc/development/deprecation_guidelines/index.md | 23
-rw-r--r--  doc/development/diffs.md | 9
-rw-r--r--  doc/development/distributed_tracing.md | 6
-rw-r--r--  doc/development/doc_styleguide.md | 2
-rw-r--r--  doc/development/documentation/feature_flags.md | 8
-rw-r--r--  doc/development/documentation/index.md | 104
-rw-r--r--  doc/development/documentation/site_architecture/deployment_process.md | 58
-rw-r--r--  doc/development/documentation/site_architecture/global_nav.md | 6
-rw-r--r--  doc/development/documentation/site_architecture/index.md | 2
-rw-r--r--  doc/development/documentation/site_architecture/release_process.md | 98
-rw-r--r--  doc/development/documentation/structure.md | 117
-rw-r--r--  doc/development/documentation/styleguide.md | 382
-rw-r--r--  doc/development/ee_features.md | 23
-rw-r--r--  doc/development/elasticsearch.md | 9
-rw-r--r--  doc/development/fe_guide/frontend_faq.md | 30
-rw-r--r--  doc/development/fe_guide/graphql.md | 37
-rw-r--r--  doc/development/fe_guide/icons.md | 83
-rw-r--r--  doc/development/fe_guide/img/webpack_report_v12_8.png | bin 0 -> 47453 bytes
-rw-r--r--  doc/development/fe_guide/index.md | 2
-rw-r--r--  doc/development/fe_guide/style/javascript.md | 2
-rw-r--r--  doc/development/fe_guide/style/scss.md | 12
-rw-r--r--  doc/development/fe_guide/tooling.md | 3
-rw-r--r--  doc/development/fe_guide/vue.md | 8
-rw-r--r--  doc/development/fe_guide/vue3_migration.md | 3
-rw-r--r--  doc/development/fe_guide/vuex.md | 142
-rw-r--r--  doc/development/feature_categorization/index.md | 2
-rw-r--r--  doc/development/feature_flags.md | 4
-rw-r--r--  doc/development/feature_flags/controls.md | 2
-rw-r--r--  doc/development/feature_flags/development.md | 414
-rw-r--r--  doc/development/feature_flags/process.md | 3
-rw-r--r--  doc/development/filtering_by_label.md | 5
-rw-r--r--  doc/development/geo.md | 115
-rw-r--r--  doc/development/geo/framework.md | 147
-rw-r--r--  doc/development/go_guide/index.md | 4
-rw-r--r--  doc/development/gotchas.md | 31
-rw-r--r--  doc/development/graphql_guide/index.md | 13
-rw-r--r--  doc/development/i18n/externalization.md | 77
-rw-r--r--  doc/development/i18n/merging_translations.md | 9
-rw-r--r--  doc/development/i18n/proofreader.md | 7
-rw-r--r--  doc/development/img/bullet_v13_0.png | bin 1085958 -> 407482 bytes
-rw-r--r--  doc/development/img/deployment_pipeline_v13_3.png | bin 0 -> 7695 bytes
-rw-r--r--  doc/development/img/telemetry_system_overview.png | bin 429082 -> 103618 bytes
-rw-r--r--  doc/development/instrumentation.md | 6
-rw-r--r--  doc/development/integrations/elasticsearch_for_paid_tiers_on_gitlabcom.md | 28
-rw-r--r--  doc/development/integrations/example_vuln.png | bin 102950 -> 36208 bytes
-rw-r--r--  doc/development/integrations/jenkins.md | 2
-rw-r--r--  doc/development/integrations/jira_connect.md | 2
-rw-r--r--  doc/development/integrations/secure.md | 2
-rw-r--r--  doc/development/internal_api.md | 86
-rw-r--r--  doc/development/issuable-like-models.md | 11
-rw-r--r--  doc/development/issue_types.md | 6
-rw-r--r--  doc/development/kubernetes.md | 6
-rw-r--r--  doc/development/licensed_feature_availability.md | 2
-rw-r--r--  doc/development/logging.md | 8
-rw-r--r--  doc/development/merge_request_performance_guidelines.md | 2
-rw-r--r--  doc/development/migration_style_guide.md | 92
-rw-r--r--  doc/development/multi_version_compatibility.md | 164
-rw-r--r--  doc/development/new_fe_guide/development/performance.md | 4
-rw-r--r--  doc/development/new_fe_guide/tips.md | 4
-rw-r--r--  doc/development/packages.md | 21
-rw-r--r--  doc/development/performance.md | 4
-rw-r--r--  doc/development/pipelines.md | 55
-rw-r--r--  doc/development/prometheus.md | 6
-rw-r--r--  doc/development/prometheus_metrics.md | 6
-rw-r--r--  doc/development/query_recorder.md | 3
-rw-r--r--  doc/development/redis.md | 12
-rw-r--r--  doc/development/reference_processing.md | 28
-rw-r--r--  doc/development/service_measurement.md | 2
-rw-r--r--  doc/development/sidekiq_style_guide.md | 53
-rw-r--r--  doc/development/sql.md | 6
-rw-r--r--  doc/development/telemetry/event_dictionary.md | 32
-rw-r--r--  doc/development/telemetry/index.md | 178
-rw-r--r--  doc/development/telemetry/snowplow.md | 18
-rw-r--r--  doc/development/telemetry/usage_ping.md | 422
-rw-r--r--  doc/development/testing_guide/best_practices.md | 110
-rw-r--r--  doc/development/testing_guide/end_to_end/best_practices.md | 27
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md | 7
-rw-r--r--  doc/development/testing_guide/end_to_end/page_objects.md | 26
-rw-r--r--  doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md | 86
-rw-r--r--  doc/development/testing_guide/frontend_testing.md | 217
-rw-r--r--  doc/development/testing_guide/review_apps.md | 12
-rw-r--r--  doc/development/testing_guide/testing_migrations_guide.md | 31
-rw-r--r--  doc/development/uploads.md | 3
-rw-r--r--  doc/development/what_requires_downtime.md | 5
104 files changed, 2802 insertions, 1481 deletions
diff --git a/doc/development/README.md b/doc/development/README.md
index ab86c252948..74068db726c 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -5,6 +5,17 @@ description: 'Learn how to contribute to GitLab.'
# Contributor and Development Docs
+Learn the processes and technical information needed for contributing to GitLab.
+
+This content is intended for members of the GitLab Team as well as community contributors.
+Content specific to the GitLab Team should instead be included in the [Handbook](https://about.gitlab.com/handbook/).
+
+For information on using GitLab to work on your own software projects, see the [GitLab user documentation](../user/index.md).
+
+For information on working with GitLab's API, see the [API documentation](../api/README.md).
+
+For information on how to install, configure, update, and upgrade your own GitLab instance, see the [administration documentation](../administration/index.md).
+
## Get started
- Set up GitLab's development environment with [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/README.md)
@@ -45,6 +56,7 @@ Complementary reads:
- [Danger bot](dangerbot.md)
- [Generate a changelog entry with `bin/changelog`](changelog.md)
- [Requesting access to Chatops on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLab team members)
+- [Patch release process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/patch/process.md#process-for-developers)
## UX and Frontend guides
@@ -134,6 +146,10 @@ See [database guidelines](database/index.md).
- [Refactoring guidelines](refactoring_guide/index.md)
+## Deprecation guides
+
+- [Deprecation guidelines](deprecation_guidelines/index.md)
+
## Documentation guides
- [Writing documentation](documentation/index.md)
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 92e6add9f17..bf2d6400f56 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -36,6 +36,19 @@ can be shared.
It is also possible to add a `private_token` to the querystring, or
add a `HTTP_PRIVATE_TOKEN` header.
+## Global IDs
+
+GitLab's GraphQL API uses Global IDs (for example, `"gid://gitlab/MyObject/123"`)
+and never database primary key IDs.
+
+Global ID is [a standard](https://graphql.org/learn/global-object-identification/)
+used for caching and fetching in client-side libraries.
+
+See also:
+
+- [Exposing Global IDs](#exposing-global-ids).
+- [Mutation arguments](#object-identifier-arguments).
+
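+As a rough illustration (a sketch only, not taken from the codebase), a Global ID
+for a `Project` with database ID `123` can be built with `Gitlab::GlobalId.build`
+and parsed back with the `globalid` gem:
+
+```ruby
+# Sketch: build and parse a Global ID for an imaginary record.
+gid = ::Gitlab::GlobalId.build(model_name: 'Project', id: 123)
+gid.to_s          # => "gid://gitlab/Project/123"
+
+parsed = GlobalID.parse('gid://gitlab/Project/123')
+parsed.model_name # => "Project"
+parsed.model_id   # => "123"
+```
+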
## Types
We use a code-first schema, and we declare what type everything is in Ruby.
@@ -106,18 +119,28 @@ Further reading:
### Exposing Global IDs
-When exposing an `ID` field on a type, we will by default try to
-expose a global ID by calling `to_global_id` on the resource being
-rendered.
+In keeping with GitLab's use of [Global IDs](#global-ids), always convert
+database primary key IDs into Global IDs when you expose them.
+
+All fields named `id` are
+[converted automatically](https://gitlab.com/gitlab-org/gitlab/-/blob/b0f56e7/app/graphql/types/base_object.rb#L11-14)
+into the object's Global ID.
+
+Fields that are not named `id` need to be manually converted. We can do this using
+[`Gitlab::GlobalId.build`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/global_id.rb),
+or by calling `#to_global_id` on an object that has mixed in the
+`GlobalID::Identification` module.
-To override this behavior, you can implement an `id` method on the
-type for which you are exposing an ID. Please make sure that when
-exposing a `GraphQL::ID_TYPE` using a custom method that it is
-globally unique.
+Using an example from
+[`Types::Notes::DiscussionType`](https://gitlab.com/gitlab-org/gitlab/-/blob/3c95bd9/app/graphql/types/notes/discussion_type.rb#L24-26):
-The records that are exposing a `full_path` as an `ID_TYPE` are one of
-these exceptions. Since the full path is a unique identifier for a
-`Project` or `Namespace`.
+```ruby
+field :reply_id, GraphQL::ID_TYPE
+
+def reply_id
+ ::Gitlab::GlobalId.build(object, id: object.reply_id)
+end
+```
### Connection Types
@@ -429,6 +452,52 @@ module Types
end
```
+## JSON
+
+When data to be returned by GraphQL is stored as
+[JSON](migration_style_guide.md#storing-json-in-database), we should continue to use
+GraphQL types whenever possible. Avoid using the `GraphQL::Types::JSON` type unless
+the JSON data returned is _truly_ unstructured.
+
+If the structure of the JSON data varies, but will be one of a set of known possible
+structures, use a
+[union](https://graphql-ruby.org/type_definitions/unions.html).
+An example of the use of a union for this purpose is
+[!30129](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/30129).
+
+Field names can be mapped to hash data keys using the `hash_key:` keyword if needed.
+
+For example, given the following simple JSON data:
+
+```json
+{
+ "title": "My chart",
+ "data": [
+ { "x": 0, "y": 1 },
+ { "x": 1, "y": 1 },
+ { "x": 2, "y": 2 }
+ ]
+}
+```
+
+We can use GraphQL types like this:
+
+```ruby
+module Types
+ class ChartType < BaseObject
+ field :title, GraphQL::STRING_TYPE, null: true, description: 'Title of the chart'
+ field :data, [Types::ChartDatumType], null: true, description: 'Data of the chart'
+ end
+end
+
+module Types
+ class ChartDatumType < BaseObject
+ field :x, GraphQL::INT_TYPE, null: true, description: 'X-axis value of the chart datum'
+ field :y, GraphQL::INT_TYPE, null: true, description: 'Y-axis value of the chart datum'
+ end
+end
+```
+
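+If a key in the hash doesn't match the field name we want to expose, the `hash_key:`
+keyword mentioned above can map it. A minimal sketch, assuming the data used a
+hypothetical `label` key for the title:
+
+```ruby
+# Sketch: expose the hypothetical 'label' hash key as the 'title' field.
+field :title, GraphQL::STRING_TYPE, null: true,
+      hash_key: 'label',
+      description: 'Title of the chart'
+```
+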
## Descriptions
All fields and arguments
@@ -608,15 +677,8 @@ the objects in question.
To find objects to display in a field, we can add resolvers to
`app/graphql/resolvers`.
-Arguments can be defined within the resolver, those arguments will be
-made available to the fields using the resolver. When exposing a model
-that had an internal ID (`iid`), prefer using that in combination with
-the namespace path as arguments in a resolver over a database
-ID. Otherwise use a [globally unique ID](#exposing-global-ids).
-
-We already have a `FullPathLoader` that can be included in other
-resolvers to quickly find Projects and Namespaces which will have a
-lot of dependent objects.
+Arguments can be defined within the resolver in the same way as in a mutation.
+See the [Mutation arguments](#object-identifier-arguments) section.
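+As a sketch (the class, field, and argument names here are illustrative, not an
+existing resolver), a resolver with an argument might look like this:
+
+```ruby
+# Sketch: a resolver that returns the parent object's issues, optionally filtered.
+module Resolvers
+  class ExampleIssuesResolver < BaseResolver
+    type Types::IssueType.connection_type, null: true
+
+    argument :state, GraphQL::STRING_TYPE,
+             required: false,
+             description: 'Filter issues by state'
+
+    def resolve(state: nil)
+      # `object` is the parent the field is resolved on (for example, a project).
+      issues = object.issues
+      issues = issues.where(state: state) if state
+      issues
+    end
+  end
+end
+```
+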
To limit the amount of queries performed, we can use `BatchLoader`.
@@ -705,10 +767,6 @@ actions. In the same way a GET-request should not modify data, we
cannot modify data in a regular GraphQL-query. We can however in a
mutation.
-To find objects for a mutation, arguments need to be specified. As with
-[resolvers](#resolvers), prefer using internal ID or, if needed, a
-global ID rather than the database ID.
-
### Building Mutations
Mutations live in `app/graphql/mutations` ideally grouped per
@@ -763,10 +821,34 @@ If you need advice for mutation naming, canvass the Slack `#graphql` channel for
### Arguments
-Arguments required by the mutation can be defined as arguments
-required for a field. These will be wrapped up in an input type for
-the mutation. For example, the `Mutations::MergeRequests::SetWip`
-with GraphQL-name `MergeRequestSetWip` defines these arguments:
+Arguments for a mutation are defined using `argument`.
+
+Example:
+
+```ruby
+argument :my_arg, GraphQL::STRING_TYPE,
+ required: true,
+ description: "A description of the argument"
+```
+
+Each GraphQL `argument` defined will be passed to the `#resolve` method
+of a mutation as keyword arguments.
+
+Example:
+
+```ruby
+def resolve(my_arg:)
+ # Perform mutation ...
+end
+```
+
+`graphql-ruby` will automatically wrap up arguments into an
+[input type](https://graphql.org/learn/schema/#input-types).
+
+For example, the
+[`mergeRequestSetWip` mutation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/graphql/mutations/merge_requests/set_wip.rb)
+defines these arguments (some
+[through inheritance](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/graphql/mutations/merge_requests/base.rb)):
```ruby
argument :project_path, GraphQL::ID_TYPE,
@@ -786,12 +868,19 @@ argument :wip,
DESC
```
-This would automatically generate an input type called
+These arguments automatically generate an input type called
`MergeRequestSetWipInput` with the 3 arguments we specified and the
`clientMutationId`.
-These arguments are then passed to the `resolve` method of a mutation
-as keyword arguments.
+### Object identifier arguments
+
+In keeping with GitLab's use of [Global IDs](#global-ids), mutation
+arguments should use Global IDs to identify an object and never database
+primary key IDs.
+
+Where an object has an `iid`, prefer to use the `full_path` or `group_path`
+of its parent in combination with its `iid` as arguments to identify an
+object rather than its `id`.
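+
+For an object that has no `iid` (a note, for example), a hypothetical argument
+accepting a Global ID string such as `"gid://gitlab/Note/1"` might look like this:
+
+```ruby
+# Sketch: identify the object to mutate by its Global ID, never its database ID.
+argument :id, GraphQL::ID_TYPE,
+         required: true,
+         description: 'The Global ID of the note to update'
+```
+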
### Fields
@@ -1204,3 +1293,7 @@ See the [schema reference](../api/graphql/reference/index.md) for details.
This generated GraphQL documentation needs to be updated when the schema changes.
For information on generating GraphQL documentation and schema files, see
[updating the schema documentation](rake_tasks.md#update-graphql-documentation-and-schema-definitions).
+
+To help our readers, you should also add a new page to our [GraphQL API](../api/graphql/index.md) documentation.
+For guidance, see the [GraphQL API](documentation/styleguide.md#graphql-api) section
+of our documentation style guide.
diff --git a/doc/development/api_styleguide.md b/doc/development/api_styleguide.md
index 327f919d7f4..06b05f49b12 100644
--- a/doc/development/api_styleguide.md
+++ b/doc/development/api_styleguide.md
@@ -13,7 +13,7 @@ Always use an [Entity](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/api/
## Documentation
-API endpoints must come with [documentation](documentation/styleguide.md#api), unless it is internal or behind a feature flag.
+API endpoints must come with [documentation](documentation/styleguide.md#restful-api), unless it is internal or behind a feature flag.
The docs should be in the same merge request, or, if strictly necessary,
in a follow-up with the same milestone as the original merge request.
@@ -85,7 +85,7 @@ User.create(params) # imagine the user submitted `admin=1`... :)
User.create(declared(params, include_parent_namespaces: false).to_h)
```
->**Note:**
+NOTE: **Note:**
`declared(params)` returns a `Hashie::Mash` object, on which you will have to
call `.to_h`.
@@ -173,7 +173,8 @@ guide on how you can add a new custom validator.
validates the parameter value for different cases. Mainly, it checks whether a
path is relative and does it contain `../../` relative traversal using
`File::Separator` or not, and whether the path is absolute, for example
- `/etc/passwd/`.
+ `/etc/passwd/`. By default, absolute paths are not allowed. However, you can optionally pass an allowlist of allowed absolute paths in the following way:
+ `requires :file_path, type: String, file_path: { allowlist: ['/foo/bar/', '/home/foo/', '/app/home'] }`
- `Git SHA`:
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 8b28dd03017..963e1e618a1 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -46,69 +46,101 @@ https://docs.google.com/drawings/d/1fBzAyklyveF-i-2q-OHUIqDkYfjjxC4mq5shwKSZHLs/
```mermaid
graph TB
- HTTP[HTTP/HTTPS] -- TCP 80, 443 --> NGINX[NGINX]
- SSH -- TCP 22 --> GitLabShell[GitLab Shell]
- SMTP[SMTP Gateway]
- Geo[GitLab Geo Node] -- TCP 22, 80, 443 --> NGINX
-
- GitLabShell --TCP 8080 -->Unicorn["Unicorn (GitLab Rails)"]
- GitLabShell --> Praefect
- GitLabShell --> Redis
- Unicorn --> PgBouncer[PgBouncer]
- Unicorn --> Redis
- Unicorn --> Praefect
- Sidekiq --> Redis
- Sidekiq --> PgBouncer
- Sidekiq --> Praefect
- GitLabWorkhorse[GitLab Workhorse] --> Unicorn
- GitLabWorkhorse --> Redis
- GitLabWorkhorse --> Praefect
- Praefect --> Gitaly
- NGINX --> GitLabWorkhorse
- NGINX -- TCP 8090 --> GitLabPages[GitLab Pages]
- NGINX --> Grafana[Grafana]
- Grafana -- TCP 9090 --> Prometheus[Prometheus]
- Prometheus -- TCP 80, 443 --> Unicorn
- RedisExporter[Redis Exporter] --> Redis
- Prometheus -- TCP 9121 --> RedisExporter
- PostgreSQLExporter[PostgreSQL Exporter] --> PostgreSQL
- PgBouncerExporter[PgBouncer Exporter] --> PgBouncer
- Prometheus -- TCP 9187 --> PostgreSQLExporter
- Prometheus -- TCP 9100 --> NodeExporter[Node Exporter]
- Prometheus -- TCP 9168 --> GitLabExporter[GitLab Exporter]
- Prometheus -- TCP 9127 --> PgBouncerExporter
- GitLabExporter --> PostgreSQL
- GitLabExporter --> GitLabShell
- GitLabExporter --> Sidekiq
- PgBouncer --> Consul
- PostgreSQL --> Consul
- PgBouncer --> PostgreSQL
- NGINX --> Registry
- Unicorn --> Registry
- NGINX --> Mattermost
- Mattermost --- Unicorn
- Prometheus --> Alertmanager
- Migrations --> PostgreSQL
- Runner -- TCP 443 --> NGINX
- Unicorn -- TCP 9200 --> Elasticsearch
- Sidekiq -- TCP 9200 --> Elasticsearch
- Sidekiq -- TCP 80, 443 --> Sentry
- Unicorn -- TCP 80, 443 --> Sentry
- Sidekiq -- UDP 6831 --> Jaeger
- Unicorn -- UDP 6831 --> Jaeger
- Gitaly -- UDP 6831 --> Jaeger
- GitLabShell -- UDP 6831 --> Jaeger
- GitLabWorkhorse -- UDP 6831 --> Jaeger
- Alertmanager -- TCP 25 --> SMTP
- Sidekiq -- TCP 25 --> SMTP
- Unicorn -- TCP 25 --> SMTP
- Unicorn -- TCP 369 --> LDAP
- Sidekiq -- TCP 369 --> LDAP
- Unicorn -- TCP 443 --> ObjectStorage["Object Storage"]
- Sidekiq -- TCP 443 --> ObjectStorage
- GitLabWorkhorse -- TCP 443 --> ObjectStorage
- Registry -- TCP 443 --> ObjectStorage
- Geo -- TCP 5432 --> PostgreSQL
+HTTP[HTTP/HTTPS] -- TCP 80, 443 --> NGINX[NGINX]
+SSH -- TCP 22 --> GitLabShell[GitLab Shell]
+SMTP[SMTP Gateway]
+Geo[GitLab Geo Node] -- TCP 22, 80, 443 --> NGINX
+
+GitLabShell --TCP 8080 -->Unicorn["Unicorn (GitLab Rails)"]
+GitLabShell --> Praefect
+Unicorn --> PgBouncer[PgBouncer]
+Unicorn --> Redis
+Unicorn --> Praefect
+Sidekiq --> Redis
+Sidekiq --> PgBouncer
+Sidekiq --> Praefect
+GitLabWorkhorse[GitLab Workhorse] --> Unicorn
+GitLabWorkhorse --> Redis
+GitLabWorkhorse --> Praefect
+Praefect --> Gitaly
+NGINX --> GitLabWorkhorse
+NGINX -- TCP 8090 --> GitLabPages[GitLab Pages]
+NGINX --> Grafana[Grafana]
+Grafana -- TCP 9090 --> Prometheus[Prometheus]
+Prometheus -- TCP 80, 443 --> Unicorn
+RedisExporter[Redis Exporter] --> Redis
+Prometheus -- TCP 9121 --> RedisExporter
+PostgreSQLExporter[PostgreSQL Exporter] --> PostgreSQL
+PgBouncerExporter[PgBouncer Exporter] --> PgBouncer
+Prometheus -- TCP 9187 --> PostgreSQLExporter
+Prometheus -- TCP 9100 --> NodeExporter[Node Exporter]
+Prometheus -- TCP 9168 --> GitLabExporter[GitLab Exporter]
+Prometheus -- TCP 9127 --> PgBouncerExporter
+GitLabExporter --> PostgreSQL
+GitLabExporter --> GitLabShell
+GitLabExporter --> Sidekiq
+PgBouncer --> Consul
+PostgreSQL --> Consul
+PgBouncer --> PostgreSQL
+NGINX --> Registry
+Unicorn --> Registry
+NGINX --> Mattermost
+Mattermost --- Unicorn
+Prometheus --> Alertmanager
+Migrations --> PostgreSQL
+Runner -- TCP 443 --> NGINX
+Unicorn -- TCP 9200 --> Elasticsearch
+Sidekiq -- TCP 9200 --> Elasticsearch
+Sidekiq -- TCP 80, 443 --> Sentry
+Unicorn -- TCP 80, 443 --> Sentry
+Sidekiq -- UDP 6831 --> Jaeger
+Unicorn -- UDP 6831 --> Jaeger
+Gitaly -- UDP 6831 --> Jaeger
+GitLabShell -- UDP 6831 --> Jaeger
+GitLabWorkhorse -- UDP 6831 --> Jaeger
+Alertmanager -- TCP 25 --> SMTP
+Sidekiq -- TCP 25 --> SMTP
+Unicorn -- TCP 25 --> SMTP
+Unicorn -- TCP 369 --> LDAP
+Sidekiq -- TCP 369 --> LDAP
+Unicorn -- TCP 443 --> ObjectStorage["Object Storage"]
+Sidekiq -- TCP 443 --> ObjectStorage
+GitLabWorkhorse -- TCP 443 --> ObjectStorage
+Registry -- TCP 443 --> ObjectStorage
+Geo -- TCP 5432 --> PostgreSQL
+
+click Alertmanager "./architecture.html#alertmanager"
+click Praefect "./architecture.html#praefect"
+click Geo "./architecture.html#gitlab-geo"
+click NGINX "./architecture.html#nginx"
+click Runner "./architecture.html#gitlab-runner"
+click Registry "./architecture.html#registry"
+click ObjectStorage "./architecture.html#minio"
+click Mattermost "./architecture.html#mattermost"
+click Gitaly "./architecture.html#gitaly"
+click Jaeger "./architecture.html#jaeger"
+click GitLabWorkhorse "./architecture.html#gitlab-workhorse"
+click LDAP "./architecture.html#ldap-authentication"
+click Unicorn "./architecture.html#unicorn"
+click GitLabShell "./architecture.html#gitlab-shell"
+click SSH "./architecture.html#ssh-request-22"
+click Sidekiq "./architecture.html#sidekiq"
+click Sentry "./architecture.html#sentry"
+click GitLabExporter "./architecture.html#gitlab-exporter"
+click Elasticsearch "./architecture.html#elasticsearch"
+click Migrations "./architecture.html#database-migrations"
+click PostgreSQL "./architecture.html#postgresql"
+click Consul "./architecture.html#consul"
+click PgBouncer "./architecture.html#pgbouncer"
+click PgBouncerExporter "./architecture.html#pgbouncer-exporter"
+click RedisExporter "./architecture.html#redis-exporter"
+click Redis "./architecture.html#redis"
+click Prometheus "./architecture.html#prometheus"
+click Grafana "./architecture.html#grafana"
+click GitLabPages "./architecture.html#gitlab-pages"
+click PostgreSQLExporter "./architecture.html#postgresql-exporter"
+click SMTP "./architecture.html#outbound-email"
+click NodeExporter "./architecture.html#node-exporter"
```
### Component legend
@@ -215,7 +247,7 @@ GitLab can be considered to have two layers from a process perspective:
- [Project page](https://github.com/hashicorp/consul/blob/master/README.md)
- Configuration:
- - [Omnibus](../administration/high_availability/consul.md)
+ - [Omnibus](../administration/consul.md)
- [Charts](https://docs.gitlab.com/charts/installation/deployment.html#postgresql)
- Layer: Core Service (Data)
- GitLab.com: [Consul](../user/gitlab_com/index.md#consul)
@@ -435,7 +467,7 @@ NGINX has an Ingress port for all HTTP requests and routes them to the appropria
- [Project page](https://github.com/pgbouncer/pgbouncer/blob/master/README.md)
- Configuration:
- - [Omnibus](../administration/high_availability/pgbouncer.md)
+ - [Omnibus](../administration/postgresql/pgbouncer.md)
- [Charts](https://docs.gitlab.com/charts/installation/deployment.html#postgresql)
- Layer: Core Service (Data)
- GitLab.com: [Database Architecture](https://about.gitlab.com/handbook/engineering/infrastructure/production/architecture/#database-architecture)
@@ -554,7 +586,7 @@ An external registry can also be configured to use GitLab as an auth endpoint.
Sentry fundamentally is a service that helps you monitor and fix crashes in real time.
The server is in Python, but it contains a full API for sending events from any language, in any application.
-For monitoring deployed apps, see the [Sentry integration docs](../user/project/operations/error_tracking.md)
+For monitoring deployed apps, see the [Sentry integration docs](../operations/error_tracking.md)
#### Sidekiq
diff --git a/doc/development/auto_devops.md b/doc/development/auto_devops.md
index 7d0c020ef96..6bdc77fff63 100644
--- a/doc/development/auto_devops.md
+++ b/doc/development/auto_devops.md
@@ -1,3 +1,9 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Auto DevOps development guide
This document provides a development guide for contributors to
diff --git a/doc/development/background_migrations.md b/doc/development/background_migrations.md
index d9e06206961..4b58758b5c7 100644
--- a/doc/development/background_migrations.md
+++ b/doc/development/background_migrations.md
@@ -29,10 +29,10 @@ Some examples where background migrations can be useful:
- Populating one column based on JSON stored in another column.
- Migrating data that depends on the output of external services (e.g. an API).
-> **Note:**
-> If the background migration is part of an important upgrade, make sure it's announced
-> in the release post. Discuss with your Project Manager if you're not sure the migration falls
-> into this category.
+NOTE: **Note:**
+If the background migration is part of an important upgrade, make sure it's announced
+in the release post. Discuss with your Project Manager if you're not sure the migration falls
+into this category.
## Isolation
@@ -123,7 +123,7 @@ once.
## Cleaning Up
->**Note:**
+NOTE: **Note:**
Cleaning up any remaining background migrations _must_ be done in either a major
or minor release, you _must not_ do this in a patch release.
diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md
index 858ff41b685..e99915a24d0 100644
--- a/doc/development/build_test_package.md
+++ b/doc/development/build_test_package.md
@@ -1,3 +1,9 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Building a package for testing
While developing a new feature or modifying an existing one, it is helpful if an
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index 00a0573a8ba..e83ce40ef60 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -37,6 +37,8 @@ the `author` field. GitLab team members **should not**.
- Any user-facing change **should** have a changelog entry. Example: "GitLab now
uses system fonts for all text."
- Performance improvements **should** have a changelog entry.
+- Changes that need to be documented in the Telemetry [Event Dictionary](telemetry/event_dictionary.md)
+ also require a changelog entry.
- _Any_ contribution from a community member, no matter how small, **may** have
a changelog entry regardless of these guidelines if the contributor wants one.
Example: "Fixed a typo on the search results page."
diff --git a/doc/development/chatops_on_gitlabcom.md b/doc/development/chatops_on_gitlabcom.md
index a3a4f1d7adc..0dd916c37fd 100644
--- a/doc/development/chatops_on_gitlabcom.md
+++ b/doc/development/chatops_on_gitlabcom.md
@@ -1,3 +1,9 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Chatops on GitLab.com
ChatOps on GitLab.com allows GitLab team members to run various automation tasks on GitLab.com using Slack.
diff --git a/doc/development/cicd/img/ci_architecture.png b/doc/development/cicd/img/ci_architecture.png
index b888f2f07aa..0dd8ba57f51 100644
--- a/doc/development/cicd/img/ci_architecture.png
+++ b/doc/development/cicd/img/ci_architecture.png
Binary files differ
diff --git a/doc/development/cicd/img/ci_template_selection_v13_1.png b/doc/development/cicd/img/ci_template_selection_v13_1.png
index af9f6dd1a90..32de35f5c1f 100644
--- a/doc/development/cicd/img/ci_template_selection_v13_1.png
+++ b/doc/development/cicd/img/ci_template_selection_v13_1.png
Binary files differ
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index e0cca00fd69..5b598a19a6e 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -1,3 +1,10 @@
+---
+stage: Verify
+group: Continuous Integration
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: index, concepts, howto
+---
+
# CI/CD development documentation
Development guides that are specific to CI/CD are listed here.
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index 904e7adffe2..0169ca42ac6 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -1,3 +1,10 @@
+---
+stage: Release
+group: Progressive Delivery
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+type: index, concepts, howto
+---
+
# Development guide for GitLab CI/CD templates
This document explains how to develop [GitLab CI/CD templates](../../ci/examples/README.md).
@@ -8,6 +15,7 @@ All template files reside in the `lib/gitlab/ci/templates` directory, and are ca
| Sub-directory | Content | [Selectable in UI](#make-sure-the-new-template-can-be-selected-in-ui) |
|---------------|--------------------------------------------------------------|-----------------------------------------------------------------------|
+| `/AWS/*` | Cloud Deployment (AWS) related jobs | No |
| `/Jobs/*` | Auto DevOps related jobs | Yes |
| `/Pages/*` | Static site generators for GitLab Pages (for example Jekyll) | Yes |
| `/Security/*` | Security related jobs | Yes |
@@ -25,9 +33,37 @@ Also, all templates must be named with the `*.gitlab-ci.yml` suffix.
### Backward compatibility
A template might be dynamically included with the `include:template:` keyword. If
-you make a change to an *existing* template, you must make sure that it won't break
+you make a change to an *existing* template, you **must** make sure that it won't break
CI/CD in existing projects.
+For example, changing a job name in a template could break pipelines in an existing project.
+Let's say there is a template named `Performance.gitlab-ci.yml` with the following content:
+
+```yaml
+performance:
+ image: registry.gitlab.com/gitlab-org/verify-tools/performance:v0.1.0
+ script: ./performance-test $TARGET_URL
+```
+
+and users include this template and pass an argument to the `performance` job.
+This can be done by specifying the environment variable `TARGET_URL` in _their_ `.gitlab-ci.yml`:
+
+```yaml
+include:
+ template: Performance.gitlab-ci.yml
+
+performance:
+ variables:
+ TARGET_URL: https://awesome-app.com
+```
+
+If the job name `performance` in the template is renamed to `browser-performance`,
+the user's `.gitlab-ci.yml` immediately causes a lint error because there
+is no longer a job named `performance` in the included template. Therefore,
+users have to fix their `.gitlab-ci.yml`, which could disrupt their workflow.
+
+Please read the [versioning](#versioning) section for how to introduce breaking changes safely.
+
## Testing
Each CI/CD template must be tested in order to make sure that it's safe to be published.
@@ -64,3 +100,13 @@ You should write an RSpec test to make sure that pipeline jobs will be generated
A template could contain malicious code. For example, a template that contains the `export` shell command in a job
might accidentally expose project secret variables in a job log.
If you're unsure if it's secure or not, you need to ask security experts for cross-validation.
+
+## Versioning
+
+Versioning allows you to introduce a new template without modifying the existing
+one. This process is especially useful when we need to introduce a breaking change,
+but don't want to affect the existing projects that depend on the current template.
+
+There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/17716) for
+introducing a versioning concept for GitLab CI/CD templates. Please follow the issue to
+check the progress.
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index fd53ce79534..2159f7a9ed5 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -230,7 +230,7 @@ Instead these should be sent to the [Release Manager](https://about.gitlab.com/c
- Ask for clarification. ("I didn't understand. Can you clarify?")
- Avoid selective ownership of code. ("mine", "not mine", "yours")
- Avoid using terms that could be seen as referring to personal traits. ("dumb",
- "stupid"). Assume everyone is attractive, intelligent, and well-meaning.
+ "stupid"). Assume everyone is intelligent and well-meaning.
- Be explicit. Remember people don't always understand your intentions online.
- Be humble. ("I'm not sure - let's look it up.")
- Don't use hyperbole. ("always", "never", "endlessly", "nothing")
@@ -281,12 +281,16 @@ first time.
### Assigning a merge request for a review
When you are ready to have your merge request reviewed,
-you should default to assigning it to a reviewer from your group or team for the first review,
-however, you can also assign it to any reviewer. The list of reviewers can be found on [Engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page.
+you should request an initial review by assigning it to a reviewer from your group or team.
+However, you can also assign it to any reviewer. The list of reviewers can be found on [Engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page.
You can also use `workflow::ready for review` label. That means that your merge request is ready to be reviewed and any reviewer can pick it. It is recommended to use that label only if there isn't time pressure and make sure the merge request is assigned to a reviewer.
-When your merge request was reviewed and can be passed to a maintainer, you should default to choosing a maintainer with [domain expertise](#domain-experts), and otherwise follow the Reviewer Roulette recommendation or use the label `ready for merge`.
+When your merge request receives an approval from the first reviewer it can be passed to a maintainer. You should default to choosing a maintainer with [domain expertise](#domain-experts), and otherwise follow the Reviewer Roulette recommendation or use the label `ready for merge`.
+
+Sometimes, a maintainer may not be available for review. They could be out of the office or [at capacity](#review-response-slo).
+You can and should check the maintainer’s availability in their profile. If the maintainer recommended by
+the roulette is not available, choose someone else from that list.
It is the responsibility of the author of a merge request that the merge request is reviewed. If it stays in `ready for review` state too long it is recommended to assign it to a specific reviewer.
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index 9feaa485bd2..76175cb7b66 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -49,8 +49,8 @@ Most issues will have labels for at least one of the following:
- Team: `~"Technical Writing"`, `~Delivery`
- Specialization: `~frontend`, `~backend`, `~documentation`
- Release Scoping: `~Deliverable`, `~Stretch`, `~"Next Patch Release"`
-- Priority: `~P1`, `~P2`, `~P3`, `~P4`
-- Severity: ~`S1`, `~S2`, `~S3`, `~S4`
+- Priority: `~P::1`, `~P::2`, `~P::3`, `~P::4`
+- Severity: ~`S::1`, `~S::2`, `~S::3`, `~S::4`
All labels, their meaning and priority are defined on the
[labels page](https://gitlab.com/gitlab-org/gitlab/-/labels).
@@ -275,10 +275,10 @@ or ~"Stretch". Any open issue for a previous milestone should be labeled
We have the following priority labels:
-- ~P1
-- ~P2
-- ~P3
-- ~P4
+- ~P::1
+- ~P::2
+- ~P::3
+- ~P::4
Please refer to the issue triage [priority label](https://about.gitlab.com/handbook/engineering/quality/issue-triage/#priority) section in our handbook to see how it's used.
@@ -286,10 +286,10 @@ Please refer to the issue triage [priority label](https://about.gitlab.com/handb
We have the following severity labels:
-- ~S1
-- ~S2
-- ~S3
-- ~S4
+- ~S::1
+- ~S::2
+- ~S::3
+- ~S::4
Please refer to the issue triage [severity label](https://about.gitlab.com/handbook/engineering/quality/issue-triage/#severity) section in our handbook to see how it's used.
diff --git a/doc/development/contributing/style_guides.md b/doc/development/contributing/style_guides.md
index ed254052180..f6e64c1f1e6 100644
--- a/doc/development/contributing/style_guides.md
+++ b/doc/development/contributing/style_guides.md
@@ -10,26 +10,23 @@ we suggest investigating to see if a plugin exists. For instance here is the
## Pre-commit static analysis
-You're strongly advised to install
-[Overcommit](https://github.com/sds/overcommit) to automatically check for
+You should install [`overcommit`](https://github.com/sds/overcommit) to automatically check for
static analysis offenses before committing locally.
-In your GitLab source directory run:
+After installing `overcommit`, run the following in your GitLab source directory:
```shell
make -C tooling/overcommit
```
-Then before a commit is created, Overcommit will automatically check for
-RuboCop (and other checks) offenses on every modified file.
+Then before a commit is created, `overcommit` automatically checks for RuboCop (and other checks)
+offenses on every modified file.
-This saves you time as you don't have to wait for the same errors to be detected
-by the CI.
+This saves you time as you don't have to wait for the same errors to be detected by CI/CD.
-Overcommit relies on a pre-commit hook to prevent commits that violate its ruleset.
-If you wish to override this behavior, it can be done by passing the ENV variable
-`OVERCOMMIT_DISABLE`; i.e. `OVERCOMMIT_DISABLE=1 git rebase master` to rebase while
-disabling the Git hook.
+`overcommit` relies on a pre-commit hook to prevent commits that violate its ruleset. To override
+this behavior, pass the `OVERCOMMIT_DISABLE` environment variable. For example,
+`OVERCOMMIT_DISABLE=1 git rebase master` to rebase while disabling the Git hook.
## Ruby, Rails, RSpec
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index 93434479846..6fda394c10d 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -136,7 +136,7 @@ at GitLab so far:
- Their availability:
- No "OOO"/"PTO"/"Parental Leave" in their GitLab or Slack status.
- No `:red_circle:`/`:palm_tree:`/`:beach:`/`:beach_umbrella:`/`:beach_with_umbrella:` emojis in GitLab or Slack status.
- - [Experimental] Their timezone: people for which the local hour is between
+ - (Experimental) Their timezone: people for which the local hour is between
6 AM and 2 PM are eligible to be picked. This is to ensure they have a good
chance to get to perform a review during their current work day. The experimentation is tracked in
[this issue](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/563)
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index 6da0455f392..25b62e0d693 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -11,7 +11,7 @@ Available `RAILS_ENV`:
- `development` (this is your main GDK db).
- `test` (used for tests like RSpec).
-## Nuke everything and start over
+## Delete everything and start over
If you just want to delete everything and start over with an empty DB (approximately 1 minute):
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index 967df411db5..f56ffdbad21 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -78,7 +78,8 @@ the following preparations into account.
#### Preparation when adding migrations
-- Ensure `db/structure.sql` is updated as [documented](migration_style_guide.md#schema-changes).
+- Ensure `db/structure.sql` is updated as [documented](migration_style_guide.md#schema-changes), and additionally ensure that the relevant version files under
+`db/schema_migrations` were added or removed.
- Make migrations reversible by using the `change` method or include a `down` method when using `up`.
- Include either a rollback procedure or describe how to rollback changes.
- Add the output of both migrating and rolling back for all migrations into the MR description.
@@ -149,6 +150,7 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
- Ensure it was added in a post-migration.
- Maintainer: After the merge request is merged, notify Release Managers about it on `#f_upcoming_release` Slack channel.
- Check consistency with `db/structure.sql` and that migrations are [reversible](migration_style_guide.md#reversibility)
+ - Check that the relevant version files under `db/schema_migrations` were added or removed.
- Check queries timing (If any): Queries executed in a migration
need to fit comfortably within `15s` - preferably much less than that - on GitLab.com.
- For column removals, make sure the column has been [ignored in a previous release](what_requires_downtime.md#dropping-columns)
diff --git a/doc/development/deleting_migrations.md b/doc/development/deleting_migrations.md
index 8aa16710d55..b8f23019ac2 100644
--- a/doc/development/deleting_migrations.md
+++ b/doc/development/deleting_migrations.md
@@ -23,7 +23,7 @@ Migrations can be disabled if:
In order to disable a migration, the following steps apply to all types of migrations:
1. Turn the migration into a no-op by removing the code inside `#up`, `#down`
- or `#perform` methods, and adding `#no-op` comment instead.
+ or `#perform` methods, and adding a `# no-op` comment instead.
1. Add a comment explaining why the code is gone.
Disabling migrations requires explicit approval of Database Maintainer.
diff --git a/doc/development/deprecation_guidelines/index.md b/doc/development/deprecation_guidelines/index.md
new file mode 100644
index 00000000000..1ee22644bbc
--- /dev/null
+++ b/doc/development/deprecation_guidelines/index.md
@@ -0,0 +1,23 @@
+# Deprecation guidelines
+
+This page includes information about how and when to remove or make breaking
+changes to GitLab features.
+
+## Terminology
+
+It's important to understand the difference between **deprecation** and
+**removal**:
+
+**Deprecation** is the process of flagging/marking/announcing that a feature
+will be removed in a future version of GitLab.
+
+**Removal** is the process of actually removing a feature that was previously
+deprecated.
+
+## When can a feature be deprecated?
+
+A feature can be deprecated at any time, provided there is a viable alternative.
+
+## When can a feature be removed/changed?
+
+See our [Release and Maintenance policy](../../policy/maintenance.md).
diff --git a/doc/development/diffs.md b/doc/development/diffs.md
index e065e0acc6f..eb070cbf4d7 100644
--- a/doc/development/diffs.md
+++ b/doc/development/diffs.md
@@ -93,7 +93,8 @@ Gitlab::Git::DiffCollection.collection_limits[:max_bytes] = Gitlab::Git::DiffCol
No more files will be rendered at all if 5 megabytes have already been rendered.
-*Note:* All collection limit parameters are currently sent and applied on Gitaly. That is, once the limit is surpassed,
+NOTE: **Note:**
+All collection limit parameters are currently sent and applied on Gitaly. That is, once the limit is surpassed,
Gitaly will only return the safe amount of data to be persisted on `merge_request_diff_files`.
### Individual diff file limits
@@ -107,7 +108,8 @@ That is, it's equivalent to 10kb if the maximum allowed value is 100kb.
The diff will still be persisted and expandable if the patch size doesn't
surpass `ApplicationSettings#diff_max_patch_bytes`.
-*Note:* Although this nomenclature (Collapsing) is also used on Gitaly, this limit is only used on GitLab (hardcoded - not sent to Gitaly).
+NOTE: **Note:**
+Although this nomenclature (Collapsing) is also used on Gitaly, this limit is only used on GitLab (hardcoded - not sent to Gitaly).
Gitaly will only return `Diff.Collapsed` (RPC) when surpassing collection limits.
#### Not expandable patches (too large)
@@ -121,7 +123,8 @@ Commit::DIFF_SAFE_LINES = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines
File diff will be suppressed (technically different from collapsed, but behaves the same, and is expandable) if it has more than 5000 lines.
-*Note:* This limit is currently hardcoded and only applied on GitLab.
+NOTE: **Note:**
+This limit is currently hardcoded and only applied on GitLab.
## Viewers
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index 15b3b8ba755..cbbeae47a41 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -1,3 +1,9 @@
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Distributed Tracing - development guidelines
GitLab is instrumented for distributed tracing.
diff --git a/doc/development/doc_styleguide.md b/doc/development/doc_styleguide.md
index 7d84f8ca86a..4a9f2a626f7 100644
--- a/doc/development/doc_styleguide.md
+++ b/doc/development/doc_styleguide.md
@@ -1,3 +1,5 @@
---
redirect_to: 'documentation/styleguide.md'
---
+
+This document was moved to [another location](documentation/styleguide.md).
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index f3ce9ce3a83..e2fbf25eb8a 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -56,7 +56,7 @@ not ready for production use:
> - [Introduced](link-to-issue) in GitLab 12.0.
> - It's deployed behind a feature flag, disabled by default.
> - It's disabled on GitLab.com.
-> - It's able to be enabled or disabled per-project
+> - It's able to be enabled or disabled per-project.
> - It's not recommended for production use.
> - To use it in GitLab self-managed instances, ask a GitLab administrator to [enable it](#anchor-to-section). **(CORE ONLY)**
@@ -67,7 +67,7 @@ not ready for production use:
<Feature Name> is under development and not ready for production use. It is
deployed behind a feature flag that is **disabled by default**.
[GitLab administrators with access to the GitLab Rails console](../path/to/administration/feature_flags.md)
-can enable it for your instance. <Feature Name> can be enabled or disabled per-project
+can enable it for your instance. <Feature Name> can be enabled or disabled per-project.
To enable it:
@@ -109,7 +109,7 @@ For example, for a feature initially deployed disabled by default, that became e
> - It was deployed behind a feature flag, disabled by default.
> - [Became enabled by default](link-to-issue) on GitLab 12.1.
> - It's enabled on GitLab.com.
-> - It's not able to be enabled or disabled per-project
+> - It's not able to be enabled or disabled per-project.
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
@@ -155,7 +155,7 @@ For example, for a feature enabled by default, enabled on GitLab.com, cannot be
> - [Introduced](link-to-issue) in GitLab 12.0.
> - It's deployed behind a feature flag, enabled by default.
> - It's enabled on GitLab.com.
-> - It's not able to be enabled or disabled per-project
+> - It's not able to be enabled or disabled per-project.
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index 2ea26985fcf..283060ba8d4 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -445,8 +445,8 @@ In case the review app URL returns 404, follow these steps to debug:
If you want to know the in-depth details, here's what's really happening:
1. You manually run the `review-docs-deploy` job in a merge request.
-1. The job runs the [`scripts/trigger-build-docs`](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/trigger-build-docs)
- script with the `deploy` flag, which in turn:
+1. The job runs the [`scripts/trigger-build`](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/trigger-build)
+ script with the `docs deploy` flag, which in turn:
1. Takes your branch name and applies the following:
- The `docs-preview-` prefix is added.
- The product slug is used to know the project the review app originated
@@ -481,7 +481,7 @@ We treat documentation as code, and so use tests in our CI pipeline to maintain
standards and quality of the docs. The current tests, which run in CI jobs when a
merge request with new or changed docs is submitted, are:
-- [`docs lint`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/docs.gitlab-ci.yml#L48):
+- [`docs lint`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L41):
Runs several tests on the content of the docs themselves:
- [`lint-doc.sh` script](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/lint-doc.sh)
runs the following checks and linters:
@@ -492,33 +492,20 @@ merge request with new or changed docs is submitted, are:
- [markdownlint](#markdownlint).
- [Vale](#vale).
- Nanoc tests:
- - [`internal_links`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/docs.gitlab-ci.yml#L67)
+ - [`internal_links`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L58)
checks that all internal links (ex: `[link](../index.md)`) are valid.
- - [`internal_anchors`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/docs.gitlab-ci.yml#L69)
+ - [`internal_anchors`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L60)
checks that all internal anchors (ex: `[link](../index.md#internal_anchor)`)
are valid.
+ - [`ui-docs-links lint`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L62)
+ checks that all links to docs from UI elements (`app/views` files, for example)
+ are linking to valid docs and anchors.
-### Running tests
+### Run tests locally
Apart from [previewing your changes locally](#previewing-the-changes-live), you can also run all lint checks
and Nanoc tests locally.
-#### Nanoc tests
-
-To execute Nanoc tests locally:
-
-1. Navigate to the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) directory.
-1. Run:
-
- ```shell
- # Check for broken internal links
- bundle exec nanoc check internal_links
-
- # Check for broken external links (might take a lot of time to complete).
- # This test is set to be allowed to fail and is run only in the gitlab-docs project CI
- bundle exec nanoc check internal_anchors
- ```
-
#### Lint checks
Lint checks are performed by the [`lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/lint-doc.sh)
@@ -550,6 +537,57 @@ The output should be similar to:
Note that this requires you to either have the required lint tools installed on your machine,
or a working Docker installation, in which case an image with these tools pre-installed will be used.
+#### Nanoc tests
+
+To execute Nanoc tests locally:
+
+1. Navigate to the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) directory.
+1. Run:
+
+ ```shell
+ # Check for broken internal links
+ bundle exec nanoc check internal_links
+
+ # Check for broken external links (might take a lot of time to complete).
+ # This test is set to be allowed to fail and is run only in the gitlab-docs project CI
+ bundle exec nanoc check internal_anchors
+ ```
+
+#### `ui-docs-links` test
+
+The `ui-docs-links lint` job uses `haml-lint` to test that all links to docs from
+UI elements (`app/views` files, for example) are linking to valid docs and anchors.
+
+To run the `ui-docs-links` test locally:
+
+1. Open the `gitlab` directory in a terminal window.
+1. Run:
+
+ ```shell
+ bundle exec haml-lint -i DocumentationLinks
+ ```
+
+If you receive an error the first time you run this test, run `bundle install`, which
+installs GitLab's dependencies, and try again.
+
+If you don't want to install all of GitLab's dependencies to test the links, you can:
+
+1. Open the `gitlab` directory in a terminal window.
+1. Install `haml-lint`:
+
+ ```shell
+ gem install haml_lint
+ ```
+
+1. Run:
+
+ ```shell
+ haml-lint -i DocumentationLinks
+ ```
+
+If you manually install `haml-lint` with this process, it will not update automatically
+and you should make sure your version matches the version used by GitLab.
+
### Local linters
To help adhere to the [documentation style guidelines](styleguide.md), and improve the content
@@ -586,6 +624,7 @@ You can use markdownlint:
- [On the command line](https://github.com/igorshubovych/markdownlint-cli#markdownlint-cli--).
- [Within a code editor](#configure-editors).
+- [In a `pre-commit` hook](#configure-pre-commit-hooks).
#### Vale
@@ -612,6 +651,9 @@ You can use Vale:
- [On the command line](https://errata-ai.gitbook.io/vale/getting-started/usage).
- [Within a code editor](#configure-editors).
+- [In a `pre-commit` hook](#configure-pre-commit-hooks). Vale only reports errors in the
+ `pre-commit` hook (the same configuration as the CI/CD pipelines), and does not report suggestions
+ or warnings.
#### Install linters
@@ -655,14 +697,32 @@ To configure markdownlint within your editor, install one of the following as ap
- [Sublime Text](https://packagecontrol.io/packages/SublimeLinter-contrib-markdownlint)
- [Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint)
- [Atom](https://atom.io/packages/linter-node-markdownlint)
+- [Vim](https://github.com/dense-analysis/ale)
To configure Vale within your editor, install one of the following as appropriate:
- The Sublime Text [`SublimeLinter-contrib-vale` plugin](https://packagecontrol.io/packages/SublimeLinter-contrib-vale)
- The Visual Studio Code [`testthedocs.vale` extension](https://marketplace.visualstudio.com/items?itemName=testthedocs.vale)
+- [Vim](https://github.com/dense-analysis/ale)
We don't use [Vale Server](https://errata-ai.github.io/vale/#using-vale-with-a-text-editor-or-another-third-party-application).
+#### Configure pre-commit hooks
+
+Git [pre-commit hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) allow Git users to
+run tests or other processes before committing to a branch, with the ability to block the commit if
+those tests fail.
+
+[`overcommit`](https://github.com/sds/overcommit) is a Git hooks manager, making configuring,
+installing, and removing Git hooks easy.
+
+Sample configuration for `overcommit` is available in the
+[`.overcommit.yml.example`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.overcommit.yml.example)
+file for the [`gitlab`](https://gitlab.com/gitlab-org/gitlab) project.
+
+To set up `overcommit` for documentation linting, see
+[Pre-commit static analysis](../contributing/style_guides.md#pre-commit-static-analysis).
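+
+For example, a minimal local setup might look like the following. This is only a sketch: it
+assumes you want to start from the sample configuration and can install gems locally.
+
+```shell
+# Install the hooks manager
+gem install overcommit
+
+# Start from the sample configuration, then install and sign the hooks
+cp .overcommit.yml.example .overcommit.yml
+overcommit --install
+overcommit --sign
+```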
+
#### Disable Vale tests
You can disable a specific Vale linting rule or all Vale linting rules for any portion of a
diff --git a/doc/development/documentation/site_architecture/deployment_process.md b/doc/development/documentation/site_architecture/deployment_process.md
new file mode 100644
index 00000000000..00cdc69d422
--- /dev/null
+++ b/doc/development/documentation/site_architecture/deployment_process.md
@@ -0,0 +1,58 @@
+# Documentation deployment process
+
+The [`dockerfiles` directory](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/)
+contains all needed Dockerfiles to build and deploy <https://docs.gitlab.com>. It
+is heavily inspired by Docker's
+[Dockerfile](https://github.com/docker/docker.github.io/blob/06ed03db13895bfe867761b6fc2ad40acf6026dd/Dockerfile).
+
+The following Dockerfiles are used.
+
+| Dockerfile | Docker image | Description |
+| ---------- | ------------ | ----------- |
+| [`Dockerfile.bootstrap`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.bootstrap) | `gitlab-docs:bootstrap` | Contains all the dependencies that are needed to build the website. If the gems are updated and `Gemfile{,.lock}` changes, the image must be rebuilt. |
+| [`Dockerfile.builder.onbuild`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.builder.onbuild) | `gitlab-docs:builder-onbuild` | Base image to build the docs website. It uses `ONBUILD` to perform all steps and depends on `gitlab-docs:bootstrap`. |
+| [`Dockerfile.nginx.onbuild`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.nginx.onbuild) | `gitlab-docs:nginx-onbuild` | Base image to use for building documentation archives. It uses `ONBUILD` to perform all required steps to copy the archive, and relies upon its parent `Dockerfile.builder.onbuild` that is invoked when building single documentation archives (see the `Dockerfile` of each branch). |
+| [`Dockerfile.archives`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.archives) | `gitlab-docs:archives` | Contains all the versions of the website in one archive. It copies all generated HTML files from every version in one location. |
+
+## How to build the images
+
+Although build images are built automatically via GitLab CI/CD, you can build
+and tag all tooling images locally:
+
+1. Make sure you have [Docker installed](https://docs.docker.com/install/).
+1. Make sure you're in the `dockerfiles/` directory of the `gitlab-docs` repository.
+1. Build the images:
+
+ ```shell
+ docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:bootstrap -f Dockerfile.bootstrap ../
+ docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:builder-onbuild -f Dockerfile.builder.onbuild ../
+ docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:nginx-onbuild -f Dockerfile.nginx.onbuild ../
+ ```
+
+For each image, there's a manual job under the `images` stage in
+[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/.gitlab-ci.yml) which can be invoked at will.
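+
+If you built the images locally, you can confirm they were built and tagged by listing them,
+for example:
+
+```shell
+docker images registry.gitlab.com/gitlab-org/gitlab-docs
+```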
+
+## Update an old Docker image with new upstream docs content
+
+If changes are made to any of the stable branches of the products, and those changes are not
+yet included in the single Docker image, rerun the pipeline (`https://gitlab.com/gitlab-org/gitlab-docs/pipelines/new`)
+for the version in question.
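+
+If you prefer the API over the web UI, you can, for example, trigger the pipeline with `curl`.
+This sketch assumes a personal access token with API scope and uses `12.0` as the version:
+
+```shell
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
+  "https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-docs/pipeline?ref=12.0"
+```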
+
+## Porting new website changes to old versions
+
+CAUTION: **Warning:**
+Porting changes to older branches can have unintended effects as we're constantly
+changing the backend of the website. Use only when you know what you're doing
+and make sure to test locally.
+
+The website keeps changing and improving. To consolidate those changes into the stable
+branches, we need to cherry-pick certain changes from time to time.
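+
+For example, a single change is usually cherry-picked into the stable branch. This sketch uses
+`12.0` and a placeholder commit SHA, and assumes you can push to the stable branch:
+
+```shell
+git checkout 12.0
+git cherry-pick <commit-sha>
+git push origin 12.0
+```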
+
+If this is not possible or there are many changes, merge master into them:
+
+```shell
+# Switch to the stable branch you want to update
+git checkout 12.0
+
+# Merge the latest master into it
+git fetch origin master
+git merge origin/master
+```
diff --git a/doc/development/documentation/site_architecture/global_nav.md b/doc/development/documentation/site_architecture/global_nav.md
index 2625fbe0415..22d97a9e2cf 100644
--- a/doc/development/documentation/site_architecture/global_nav.md
+++ b/doc/development/documentation/site_architecture/global_nav.md
@@ -304,9 +304,9 @@ Examples:
# does not include index.html at the end
docs:
- - doc_title: Service Desk
- doc_url: 'user/project/service_desk.html'
- ee_only: false
+ - doc_title: Container Scanning
+ doc_url: 'user/application_security/container_scanning/'
+ ee_only: true
# note that the URL above ends in / and, as the
# document is EE-only, the attribute ee_only is set to true.
```
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index 60e6d4bcb13..63cd9959985 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -138,6 +138,8 @@ If you need to build and deploy the site immediately (must have maintainer level
1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI / CD > Schedules**.
1. For the `Build docs.gitlab.com every 4 hours` scheduled pipeline, click the **play** (**{play}**) button.
+Read more about the [deployment process](deployment_process.md).
+
## Using YAML data files
The easiest way to achieve something similar to
diff --git a/doc/development/documentation/site_architecture/release_process.md b/doc/development/documentation/site_architecture/release_process.md
index d100ab8afa8..d04d34ff786 100644
--- a/doc/development/documentation/site_architecture/release_process.md
+++ b/doc/development/documentation/site_architecture/release_process.md
@@ -1,51 +1,21 @@
# GitLab Docs monthly release process
-The [`dockerfiles` directory](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/)
-contains all needed Dockerfiles to build and deploy the versioned website. It
-is heavily inspired by Docker's
-[Dockerfile](https://github.com/docker/docker.github.io/blob/06ed03db13895bfe867761b6fc2ad40acf6026dd/Dockerfile).
-
-The following Dockerfiles are used.
-
-| Dockerfile | Docker image | Description |
-| ---------- | ------------ | ----------- |
-| [`Dockerfile.bootstrap`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.bootstrap) | `gitlab-docs:bootstrap` | Contains all the dependencies that are needed to build the website. If the gems are updated and `Gemfile{,.lock}` changes, the image must be rebuilt. |
-| [`Dockerfile.builder.onbuild`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.builder.onbuild) | `gitlab-docs:builder-onbuild` | Base image to build the docs website. It uses `ONBUILD` to perform all steps and depends on `gitlab-docs:bootstrap`. |
-| [`Dockerfile.nginx.onbuild`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.nginx.onbuild) | `gitlab-docs:nginx-onbuild` | Base image to use for building documentation archives. It uses `ONBUILD` to perform all required steps to copy the archive, and relies upon its parent `Dockerfile.builder.onbuild` that is invoked when building single documentation archives (see the `Dockerfile` of each branch. |
-| [`Dockerfile.archives`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/dockerfiles/Dockerfile.archives) | `gitlab-docs:archives` | Contains all the versions of the website in one archive. It copies all generated HTML files from every version in one location. |
-
-## How to build the images
-
-Although build images are built automatically via GitLab CI/CD, you can build
-and tag all tooling images locally:
-
-1. Make sure you have [Docker installed](https://docs.docker.com/install/).
-1. Make sure you're in the `dockerfiles/` directory of the `gitlab-docs` repository.
-1. Build the images:
-
- ```shell
- docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:bootstrap -f Dockerfile.bootstrap ../
- docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:builder-onbuild -f Dockerfile.builder.onbuild ../
- docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:nginx-onbuild -f Dockerfile.nginx.onbuild ../
- ```
-
-For each image, there's a manual job under the `images` stage in
-[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/.gitlab-ci.yml) which can be invoked at will.
-
-## Monthly release process
-
When a new GitLab version is released on the 22nd, we need to create the respective
single Docker image, and update some files so that the dropdown works correctly.
-### 1. Add the chart version
+## 1. Add the chart version
Since the charts use a different version number than all the other GitLab
products, we need to add a
[version mapping](https://docs.gitlab.com/charts/installation/version_mappings.html):
-1. Check that there is a [stable branch created](https://gitlab.com/gitlab-org/charts/gitlab/-/branches)
- for the new chart version. If you're unsure or can't find it, drop a line in
- the `#g_delivery` channel.
+NOTE: **Note:**
+Unlike the other products, the charts stable branch is not created automatically.
+There's an [issue to track this](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1442).
+It is usually created on the 21st or the 22nd.
+
+To add a new charts version:
+
1. Make sure you're in the root path of the `gitlab-docs` repository.
1. Open `content/_data/chart_versions.yaml` and add the new stable branch version using the
version mapping. Note that only the `major.minor` version is needed.
@@ -56,7 +26,7 @@ It can be handy to create the future mappings since they are pretty much known.
In that case, when a new GitLab version is released, you don't have to repeat
this first step.
-### 2. Create an image for a single version
+## 2. Create an image for a single version
The single docs version must be created before the release merge request, but
this needs to happen when the stable branches for all products have been created.
@@ -89,7 +59,7 @@ docker run -it --rm -p 4000:4000 docs:12.0
Visit `http://localhost:4000/12.0/` to see if everything works correctly.
-### 3. Create the release merge request
+## 3. Create the release merge request
NOTE: **Note:**
To be [automated](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/750).
@@ -135,7 +105,7 @@ version and rotates the old one:
git push origin release-12-0
```
-### 4. Update the dropdown for all online versions
+## 4. Update the dropdown for all online versions
The versions dropdown is in a way "hardcoded". When the site is built, it looks
at the contents of `content/_data/versions.yaml` and based on that, the dropdown
@@ -144,14 +114,18 @@ dropdown will list one or more releases behind. Remember that the new changes of
the dropdown are included in the unmerged `release-X-Y` branch.
The content of `content/_data/versions.yaml` needs to change for all online
-versions:
+versions (stable branches `X.Y` of the `gitlab-docs` project):
1. Run the Rake task that will create all the respective merge requests needed to
update the dropdowns and will be set to automatically be merged when their
- pipelines succeed. The `release-X-Y` branch needs to be present locally,
- and you need to have switched to it, otherwise the Rake task will fail:
+ pipelines succeed:
+
+ NOTE: **Note:**
+ The `release-X-Y` branch needs to be present locally,
+ and you need to have switched to it, otherwise the Rake task will fail.
```shell
+ git checkout release-X-Y
./bin/rake release:dropdowns
```
@@ -162,46 +136,22 @@ versions:
TIP: **Tip:**
In case a pipeline fails, see [troubleshooting](#troubleshooting).
-### 5. Merge the release merge request
+## 5. Merge the release merge request
The dropdown merge requests should have now been merged into their respective
-version (stable branch), which will trigger another pipeline. At this point,
+version (stable `X.Y` branch), which will trigger another pipeline. At this point,
you need to only babysit the pipelines and make sure they don't fail:
-1. Check the pipelines page: `https://gitlab.com/gitlab-org/gitlab-docs/pipelines`
+1. Check the [pipelines page](https://gitlab.com/gitlab-org/gitlab-docs/pipelines)
and make sure all stable branches have green pipelines.
1. After all the pipelines of the online versions succeed, merge the release merge request.
-1. Finally, from `https://gitlab.com/gitlab-org/gitlab-docs/pipeline_schedules` run
- the `Build docker images weekly` pipeline that will build the `:latest` and `:archives` Docker images.
+1. Finally, run the
+ [`Build docker images weekly` pipeline](https://gitlab.com/gitlab-org/gitlab-docs/pipeline_schedules)
+ that will build the `:latest` and `:archives` Docker images.
Once the scheduled pipeline succeeds, the docs site will be deployed with all
new versions online.
-## Update an old Docker image with new upstream docs content
-
-If there are any changes to any of the stable branches of the products that are
-not included in the single Docker image, just rerun the pipeline (`https://gitlab.com/gitlab-org/gitlab-docs/pipelines/new`)
-for the version in question.
-
-## Porting new website changes to old versions
-
-CAUTION: **Warning:**
-Porting changes to older branches can have unintended effects as we're constantly
-changing the backend of the website. Use only when you know what you're doing
-and make sure to test locally.
-
-The website will keep changing and being improved. In order to consolidate
-those changes to the stable branches, we'd need to pick certain changes
-from time to time.
-
-If this is not possible or there are many changes, merge master into them:
-
-```shell
-git branch 12.0
-git fetch origin master
-git merge origin/master
-```
-
## Troubleshooting
Releasing a new version is a long process that involves many moving parts.
diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md
index 4cfc57aa57b..e13b2f4d031 100644
--- a/doc/development/documentation/structure.md
+++ b/doc/development/documentation/structure.md
@@ -27,7 +27,7 @@ be used. There is no need to add a title called "Introduction" or "Overview," be
search for these terms. Just put this information after the title.
- **Use cases**: describes real use case scenarios for that feature/configuration.
- **Requirements**: describes what software, configuration, account, or knowledge is required.
-- **Instructions**: One or more sets of detailed instructions to follow.
+- **Instructions**: one or more sets of detailed instructions to follow.
- **Troubleshooting** guide (recommended but not required).
For additional details on each, see the [template for new docs](#template-for-new-docs),
@@ -42,40 +42,48 @@ To start a new document, respect the file tree and file name guidelines,
as well as the style guidelines. Use the following template:
```markdown
-<!--Follow the Style Guide when working on this document. https://docs.gitlab.com/ee/development/documentation/styleguide.html
-When done, remove all of this commented-out text, except a commented-out Troubleshooting section,
-which, if empty, can be left in place to encourage future use.-->
+<!--Follow the Style Guide when working on this document.
+https://docs.gitlab.com/ee/development/documentation/styleguide.html
+When done, remove all of this commented-out text, except a commented-out
+Troubleshooting section, which, if empty, can be left in place to encourage future use.-->
---
-description: "Short document description." # Up to ~200 chars long. They will be displayed in Google Search snippets. It may help to write the page intro first, and then reuse it here.
+description: "Short document description." # Up to ~200 chars long. They will be displayed
+# in Google Search snippets. It may help to write the page intro first, and then reuse it here.
stage: "Add the stage name here, and remove the quotation marks"
group: "Add the group name here, and remove the quotation marks"
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page,
+  see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
# Feature Name or Use Case Name **[TIER]** (1)
-<!--If writing about a use case, drop the tier, and start with a verb, e.g. "Configure", "Implement", + the goal/scenario-->
+<!--If writing about a use case, drop the tier, and start with a verb,
+for example, "Configure", "Implement", + the goal/scenario-->
-<!--For pages on newly introduced features, add the following line. If only some aspects of the feature have been introduced, specify what parts of the feature.-->
+<!--For pages on newly introduced features, add the following line.
+If only some aspects of the feature have been introduced, specify which parts of the feature.-->
> [Introduced](link_to_issue_or_mr) in GitLab (Tier) X.Y (2).
An introduction -- without its own additional header -- goes here.
Offer a description of the feature or use case, and what to expect on this page.
-(You can reuse this content, or part of it, for the front matter's `description` at the top of this file).
+(You can reuse this content, or part of it, for the front matter's `description` at the top
+of this file).
The introduction should answer the following questions:
- What is this feature or use case?
- Who is it for?
- What is the context in which it is used and are there any prerequisites/requirements?
-- What can the audience do with this? (Be sure to consider all applicable audiences, like GitLab admin and developer-user.)
+- What can the audience do with this? (Be sure to consider all applicable audiences, like
+ GitLab admin and developer-user.)
- What are the benefits to using this over any alternatives?
## Use cases
Describe some use cases, typically in bulleted form. Include real-life examples for each.
-If the page itself is dedicated to a use case, this section can usually include more specific scenarios
-for use (e.g. variations on the main use case), but if that's not applicable, the section can be omitted.
+If the page itself is dedicated to a use case, this section can usually include more specific
+scenarios for use (for example, variations on the main use case), but if that's not applicable,
+the section can be omitted.
Examples of use cases on feature pages:
- CE and EE: [Issues](../../user/project/issues/index.md#use-cases)
@@ -88,27 +96,60 @@ Examples of use cases on feature pages:
State any requirements for using the feature and/or following along with the instructions.
These can include both:
-- technical requirements (e.g. an account on a third party service, an amount of storage space, prior configuration of another feature)
-- prerequisite knowledge (e.g. familiarity with certain GitLab features, cloud technologies)
+- technical requirements (for example, an account on a third-party service, an amount of storage space,
+ prior configuration of another feature)
+- prerequisite knowledge (for example, familiarity with certain GitLab features, cloud technologies)
Link each one to an appropriate place for more information.
## Instructions
-"Instructions" is usually not the name of the heading.
-This is the part of the document where you can include one or more sets of instructions, each to accomplish a specific task.
-Headers should describe the task the reader will achieve by following the instructions within, typically starting with a verb.
+This is the part of the document where you can include one or more sets of instructions.
+Each topic should help users accomplish a specific task.
+
+Headers should describe the task the reader will achieve by following the instructions within,
+typically starting with a verb. For example, `Create a package` or `Configure a pipeline`.
+
Larger instruction sets may have subsections covering specific phases of the process.
-Where appropriate, provide examples of code or configuration files to better clarify intended usage.
+Where appropriate, provide examples of code or configuration files to better clarify
+intended usage.
- Write a step-by-step guide, with no gaps between the steps.
-- Include example code or configurations as part of the relevant step. Use appropriate Markdown to [wrap code blocks with syntax highlighting](../../user/markdown.md#colored-code-and-syntax-highlighting).
+- Include example code or configurations as part of the relevant step.
+ Use appropriate Markdown to wrap code blocks with
+ [syntax highlighting](../../user/markdown.md#colored-code-and-syntax-highlighting).
- Start with an h2 (`##`), break complex steps into small steps using
-subheadings h3 > h4 > h5 > h6. _Never skip a hierarchy level, such
-as h2 > h4_, as it will break the TOC and may affect the breadcrumbs.
+ subheadings h3 > h4 > h5 > h6. _Never skip a hierarchy level, such
+ as h2 > h4_, as it will break the TOC and may affect the breadcrumbs.
- Use short and descriptive headings (up to ~50 chars). You can use one
-single heading like `## Configure X` for instructions when the feature
-is simple and the document is short.
+ single heading like `## Configure X` for instructions when the feature
+ is simple and the document is short.
+
+Example topic:
+
+## Create a teddy bear
+
+Start by writing a sentence or two about _why_ someone would want to perform this task.
+It's not always possible, but it's a good practice. For example:
+
+Create a teddy bear when you need something to hug.
+
+Follow this information with the task steps.
+
+To create a teddy bear:
+
+1. Go to **Settings > CI/CD**.
+1. Expand **This** and click **This**.
+1. Do another step.
+
+After the numbered list, add a sentence with the expected result, if it
+is not obvious, and any next steps. For example:
+
+The teddy bear is now in the kitchen, in the cupboard above the sink.
+
+You can retrieve the teddy bear and put it on the couch with the other animals.
+
+Screenshots are not necessary. They are difficult to keep up to date and can clutter the page.
<!-- ## Troubleshooting
@@ -118,7 +159,7 @@ important to describe those, too. Think of things that may go wrong and include
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
-Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+Each scenario can be a third-level heading, for example, `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->
@@ -127,7 +168,8 @@ but commented out to help encourage others to add to it in the future. -->
Notes:
- (1): Apply the [tier badges](styleguide.md#product-badges) accordingly
-- (2): Apply the correct format for the [GitLab version introducing the feature](styleguide.md#gitlab-versions-and-tiers)
+- (2): Apply the correct format for the
+ [GitLab version that introduces the feature](styleguide.md#gitlab-versions-and-tiers)
```
## Help and feedback section
@@ -167,3 +209,28 @@ Disqus, therefore, don't add both keys to the same document.
The click events in the feedback section are tracked with Google Tag Manager. The
conversions can be viewed on Google Analytics by navigating to **Behavior > Events > Top events > docs**.
+
+## Guidelines for good practices
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36576/) in GitLab 13.2 as GitLab Development documentation.
+
+"Good practice" examples demonstrate encouraged ways of writing code while comparing with examples of practices to avoid.
+These examples are labeled as "Bad" or "Good".
+In GitLab development guidelines, when presenting the cases, it is recommended
+to follow a **first-bad-then-good** strategy. First demonstrate the "Bad" practice (how things _could_ be done, which is often still working code),
+and then how things _should_ be done better, using a "Good" example. This is typically an improved example of the same code.
+
+Consider the following guidelines when offering examples:
+
+- First, offer the "Bad" example, then the "Good" one.
+- When only one bad case and one good case is given, use the same code block.
+- When more than one bad case or one good case is offered, use separated code blocks for each.
+With many examples being presented, a clear separation helps the reader to go directly to the good part.
+Consider offering an explanation (for example, a comment, a link to a resource, etc.) on why something is bad practice.
+- Better and best cases can be considered part of the good case(s) code block.
+In the same code block, precede each with comments: `# Better` and `# Best`.
+
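+For example, a minimal illustrative sketch of the labeling convention (the code itself is only
+a placeholder):
+
+```shell
+# Bad
+cat file.txt | grep "pattern"
+
+# Good
+grep "pattern" file.txt
+```
+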
+NOTE: **Note:**
+While the bad-then-good approach is acceptable for the GitLab development guidelines, do not use it
+for user documentation. For user documentation, use "Do" and "Don't." For example, see the
+[Pajamas Design System](https://design.gitlab.com/content/punctuation/).
diff --git a/doc/development/documentation/styleguide.md b/doc/development/documentation/styleguide.md
index f2c90e71bd5..c252f6425d0 100644
--- a/doc/development/documentation/styleguide.md
+++ b/doc/development/documentation/styleguide.md
@@ -13,6 +13,8 @@ For programmatic help adhering to the guidelines, see [Testing](index.md#testing
See the GitLab handbook for further [writing style guidelines](https://about.gitlab.com/handbook/communication/#writing-style-guidelines)
that apply to all GitLab content, not just documentation.
+View [a list of recent style guide updates](https://gitlab.com/dashboard/merge_requests?scope=all&utf8=%E2%9C%93&state=merged&label_name[]=tw-style&not[label_name][]=docs%3A%3Afix).
+
## Documentation is the single source of truth (SSOT)
### Why a single source of truth
@@ -249,7 +251,7 @@ GitLab documentation should be clear and easy to understand.
- Be clear, concise, and stick to the goal of the documentation.
- Write in US English with US grammar. (Tested in [`British.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/British.yml).)
-- Use inclusive language.
+- Use [inclusive language](#inclusive-language).
### Point of view
@@ -261,37 +263,117 @@ because it’s friendly and easy to understand.
### Capitalization
-- Capitalize "G" and "L" in GitLab.
-- Use sentence case for:
- - Titles.
- - Labels.
- - Menu items.
- - Buttons.
- - Headings. Don't capitalize other words in the title, unless
- it refers to a product feature. For example:
- - Capitalizing "issues" is acceptable in
- `## What you can do with GitLab Issues`, but not in `## Closing multiple issues`.
-- Use title case when referring to:
- - [GitLab Features](https://about.gitlab.com/features/). For example, Issue Board,
- Geo, and Runner.
- - GitLab [product tiers](https://about.gitlab.com/pricing/). For example, GitLab Core
- and GitLab Ultimate. (Tested in [`BadgeCapitalization.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/BadgeCapitalization.yml).)
- - Third-party products. For example, Prometheus, Kubernetes, and Git.
- - Methods or methodologies. For example, Continuous Integration, Continuous
- Deployment, Scrum, and Agile.
- (Tested in [`.markdownlint.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.json).)
-
- NOTE: **Note:**
- Some features are also objects. For example, "GitLab's Merge Requests support X" and
- "Create a new merge request for Z."
+#### Headings
+
+Use sentence case. For example:
+
+- `# Use variables to configure pipelines`
+- `## Use the To-Do List`
+
+#### UI text
+
+When referring to specific user interface text, like a button label or menu item, use the same capitalization that is displayed in the UI.
+Standards for this content are listed in the [Pajamas Design System Content section](https://design.gitlab.com/content/punctuation/) and typically
+match what is called for in this Documentation Style Guide.
+
+If you think there is a mistake in the way the UI text is styled,
+create an issue or an MR to propose a change to the UI text.
+
+#### Feature names
+
+- **Feature names are typically lowercase**, like those describing actions and types of objects in GitLab. For example:
+ - epics
+ - issues
+ - issue weights
+ - merge requests
+ - milestones
+ - reorder issues
+ - runner, runners, shared runners
+- **Some features are capitalized**, typically nouns naming GitLab-specific capabilities or tools. For example:
+ - GitLab CI/CD
+ - Repository Mirroring
+ - Value Stream Analytics
+ - the To-Do List
+ - the Web IDE
+ - Geo
+ - GitLab Runner (see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233529) for details)
+
+Document any exceptions in this style guide. If you're not sure, ask a GitLab Technical Writer so that they can help decide and document the result.
+
+Do not match the capitalization of terms or phrases on the [Features page](https://about.gitlab.com/features/) or [features.yml](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/features.yml) by default.
+
+#### Other terms
+
+Capitalize names of:
+
+- GitLab [product tiers](https://about.gitlab.com/pricing/). For example, GitLab Core
+ and GitLab Ultimate. (Tested in [`BadgeCapitalization.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/BadgeCapitalization.yml).)
+- Third-party organizations, software, and products. For example, Prometheus, Kubernetes, Git, and The Linux Foundation.
+- Methods or methodologies. For example, Continuous Integration, Continuous Deployment, Scrum, and Agile. (Tested in [`.markdownlint.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.json).)
+
+Follow the capitalization style listed at the [authoritative source](#links-to-external-documentation) for the entity, which may use non-standard case styles. For example: GitLab and npm.
+
+### Inclusive language
+
+We strive to create documentation that is inclusive. This section includes guidance and examples in the
+following categories:
+
+- [Gender-specific wording](#avoid-gender-specific-wording).
+- [Ableist language](#avoid-ableist-language).
+- [Cultural sensitivity](#culturally-sensitive-language).
+
+We write our developer documentation with inclusivity and diversity in mind. This page is not an exhaustive reference, but describes some general guidelines and examples that illustrate some best practices to follow.
+
+#### Avoid gender-specific wording
+
+When possible, use gender-neutral pronouns. For example, you can use a singular
+[they](https://developers.google.com/style/pronouns#gender-neutral-pronouns) as a gender-neutral
+pronoun.
+
+Avoid the use of gender-specific pronouns, unless referring to a specific person.
+
+| Use | Avoid |
+|-----------------------------------|-----------------|
+| People, humanity | Mankind |
+| GitLab Team Members | Manpower |
+| You can install; They can install | He can install; She can install |
+
+If you need to set up [Fake user information](#fake-user-information), use diverse or non-gendered
+names with common surnames.
+
+#### Avoid ableist language
+
+Avoid terms that are also used in negative stereotypes for different groups.
+
+| Use | Avoid |
+|------------------------|----------------------|
+| Check for completeness | Sanity check |
+| Uncertain outliers | Crazy outliers |
+| Slows the service | Cripples the service |
+| Placeholder variable | Dummy variable |
+| Active/Inactive | Enabled/Disabled |
+| On/Off | Enabled/Disabled |
+
+Credit: [Avoid ableist language](https://developers.google.com/style/inclusive-documentation#ableist-language) in the Google Developer Style Guide.
+
+#### Culturally sensitive language
+
+Avoid terms that reflect negative cultural stereotypes and history. In most cases, you can replace terms such as `master` and `slave` with terms that are more precise and functional, such as `primary` and `secondary`.
+
+| Use | Avoid |
+|----------------------|-----------------------|
+| Primary / secondary | Master / slave |
+| Allowlist / denylist | Blacklist / whitelist |
+
+For more information, see the following [Internet Draft specification](https://tools.ietf.org/html/draft-knodel-terminology-02).
### Language to avoid
When creating documentation, limit or avoid the use of the following verb
tenses, words, and phrases:
-- Avoid jargon.
-- Avoid uncommon words.
+- Avoid jargon when possible, and when not possible, define the term or [link to a definition](#links-to-external-documentation).
+- Avoid uncommon words when a more-common alternative is possible, ensuring that content is accessible to more readers.
- Don't write in the first person singular.
(Tested in [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml).)
- Instead of "I" or "me," use "we," "you," "us," or "one."
@@ -431,6 +513,7 @@ tenses, words, and phrases:
Check the general punctuation rules for the GitLab documentation on the table below.
Check specific punctuation rules for [lists](#lists) below.
+Additional examples are available in the [Pajamas guide for punctuation](https://design.gitlab.com/content/punctuation/).
| Rule | Example |
| ---- | ------- |
@@ -806,9 +889,40 @@ Using the Markdown extension is necessary for the [`/help`](index.md#gitlab-help
### Links to external documentation
When describing interactions with external software, it's often helpful to include links to external
-documentation. When possible, make sure that you are linking to an **authoritative** source.
+documentation. When possible, make sure that you're linking to an [**authoritative** source](#authoritative-sources).
For example, if you're describing a feature in Microsoft's Active Directory, include a link to official Microsoft documentation.
+### Authoritative sources
+
+When citing external information, use sources that are written by the people who created
+the item or product in question. These sources are the most likely to
+be accurate and remain up to date.
+
+Examples of authoritative sources include:
+
+- Specifications, such as a [Request for Comments](https://www.ietf.org/standards/rfcs/) document
+from the Internet Engineering Task Force.
+- Official documentation for a product. For example, if you're setting up an interface with the
+Google OAuth 2 authorization server, include a link to Google's documentation.
+- Official documentation for a project. For example, if you're citing NodeJS functionality,
+refer directly to [NodeJS documentation](https://nodejs.org/en/docs/).
+- Books from an authoritative publisher.
+
+Examples of sources to avoid include:
+
+- Personal blog posts.
+- Wikipedia.
+- Non-trustworthy articles.
+- Discussions on forums such as Stack Overflow.
+- Documentation from a company that describes another company's product.
+
+While many of these sources to avoid can help you learn skills and/or features, they can become
+obsolete quickly. Nobody is obliged to maintain any of these sites. Therefore, we should avoid using them as reference literature.
+
+NOTE: **Note:**
+Non-authoritative sources are acceptable only if there is no equivalent authoritative source.
+Even then, focus on non-authoritative sources that are extensively cited or peer-reviewed.
+
### Links requiring permissions
Don't link directly to:
@@ -1135,48 +1249,39 @@ Usage examples:
[Bootstrap utility class](https://getbootstrap.com/docs/4.4/utilities/float/):
`**{tanuki, 32, float-right}**` renders as: **{tanuki, 32, float-right}**
-### Use GitLab SVGs to describe UI elements
+### When to use icons
-When using GitLab SVGs to describe screen elements, also include the name or tooltip of the element as text.
+Icons should be used sparingly, and only in ways that aid and do not hinder the readability of the
+text.
-For example, for references to the Admin Area:
+For example, the following adds little to the accompanying text:
-- Correct: `**{admin}** **Admin Area > Settings**` (**{admin}** **Admin Area > Settings**)
-- Incorrect: `**{admin}** **> Settings**` (**{admin}** **> Settings**)
+```markdown
+1. Go to **{home}** **Project overview > Details**
+```
-This will ensure that the source Markdown remains readable and should help with accessibility.
+1. Go to **{home}** **Project overview > Details**
-The following are examples of source Markdown for menu items with their published output:
+However, the following might help the reader connect the text to the user interface:
```markdown
-1. Go to **{home}** **Project overview > Details**
-1. Go to **{doc-text}** **Repository > Branches**
-1. Go to **{issues}** **Issues > List**
-1. Go to **{merge-request}** **Merge Requests**
-1. Go to **{rocket}** **CI/CD > Pipelines**
-1. Go to **{shield}** **Security & Compliance > Configuration**
-1. Go to **{cloud-gear}** **Operations > Metrics**
-1. Go to **{package}** **Packages > Container Registry**
-1. Go to **{chart}** **Project Analytics > Code Review**
-1. Go to **{book}** **Wiki**
-1. Go to **{snippet}** **Snippets**
-1. Go to **{users}** **Members**
-1. Select the **More actions** **{ellipsis_v}** icon > **Hide stage**
+| Section | Description |
+|:-------------------------|:----------------------------------------------------------------------------------------------------------------------------|
+| **{overview}** Overview | View your GitLab Dashboard, and administer projects, users, groups, jobs, Runners, and Gitaly servers. |
+| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit logs. |
+| **{messages}** Messages | Send and manage broadcast messages for your users. |
```
-1. Go to **{home}** **Project overview > Details**
-1. Go to **{doc-text}** **Repository > Branches**
-1. Go to **{issues}** **Issues > List**
-1. Go to **{merge-request}** **Merge Requests**
-1. Go to **{rocket}** **CI/CD > Pipelines**
-1. Go to **{shield}** **Security & Compliance > Configuration**
-1. Go to **{cloud-gear}** **Operations > Metrics**
-1. Go to **{package}** **Packages > Container Registry**
-1. Go to **{chart}** **Project Analytics > Code Review**
-1. Go to **{book}** **Wiki**
-1. Go to **{snippet}** **Snippets**
-1. Go to **{users}** **Members**
-1. Select the **More actions** **{ellipsis_v}** icon > **Hide stage**
+| Section | Description |
+|:-------------------------|:----------------------------------------------------------------------------------------------------------------------------|
+| **{overview}** Overview | View your GitLab Dashboard, and administer projects, users, groups, jobs, Runners, and Gitaly servers. |
+| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit logs. |
+| **{messages}** Messages | Send and manage broadcast messages for your users. |
+
+Use an icon when you find yourself having to describe an interface element. For example:
+
+- Do: Click the Admin Area icon ( **{admin}** ).
+- Don't: Click the Admin Area icon (the wrench icon).
## Alert boxes
@@ -1294,21 +1399,20 @@ Which renders to:
To maintain consistency through GitLab documentation, the following guides documentation authors
on agreed styles and usage of terms.
-### Merge Requests (MRs)
+### Merge requests (MRs)
Merge requests allow you to exchange changes you made to source code and collaborate
with other people on the same project. You'll see this term used in the following ways:
-- If you're referring to the feature, use **Merge Request**.
-- In any other context, use **merge request**.
+- Use lowercase **merge requests** regardless of whether referring to the feature or individual merge requests.
-As noted in our corporate [Writing Style Guidelines](https://about.gitlab.com/handbook/communication/#writing-style-guidelines),
+As noted in the GitLab [Writing Style Guidelines](https://about.gitlab.com/handbook/communication/#writing-style-guidelines),
if you use the **MR** acronym, expand it at least once per document page.
-For example, the first time you specify a MR, specify either _Merge Request (MR)_ or _merge request (MR)_.
+Typically, the first use would be phrased as _merge request (MR)_ with subsequent instances being _MR_.
Examples:
-- "We prefer GitLab Merge Requests".
+- "We prefer GitLab merge requests".
- "Open a merge request to fix a broken link".
- "After you open a merge request (MR), submit your MR for review and approval".
@@ -1476,43 +1580,43 @@ GitLab Community Edition), don't split the product or feature name across lines.
### Product badges
-When a feature is available in EE-only tiers, add the corresponding tier according to the
-feature availability:
+When a feature is available in paid tiers, add the corresponding tier to the
+header or other page element according to the feature's availability:
-- For GitLab Core and GitLab.com Free: `**(CORE)**`.
-- For GitLab Starter and GitLab.com Bronze: `**(STARTER)**`.
-- For GitLab Premium and GitLab.com Silver: `**(PREMIUM)**`.
-- For GitLab Ultimate and GitLab.com Gold: `**(ULTIMATE)**`.
+| Tier in which feature is available | Tier markup |
+|:-----------------------------------------------------------------------|:----------------------|
+| GitLab Core and GitLab.com Free, and their higher tiers | `**(CORE)**` |
+| GitLab Starter and GitLab.com Bronze, and their higher tiers | `**(STARTER)**` |
+| GitLab Premium and GitLab.com Silver, and their higher tiers | `**(PREMIUM)**` |
+| GitLab Ultimate and GitLab.com Gold | `**(ULTIMATE)**` |
+| *Only* GitLab Core and higher tiers (no GitLab.com-based tiers) | `**(CORE ONLY)**` |
+| *Only* GitLab Starter and higher tiers (no GitLab.com-based tiers) | `**(STARTER ONLY)**` |
+| *Only* GitLab Premium and higher tiers (no GitLab.com-based tiers) | `**(PREMIUM ONLY)**` |
+| *Only* GitLab Ultimate (no GitLab.com-based tiers) | `**(ULTIMATE ONLY)**` |
+| *Only* GitLab.com Free and higher tiers (no self-managed instances) | `**(FREE ONLY)**` |
+| *Only* GitLab.com Bronze and higher tiers (no self-managed instances) | `**(BRONZE ONLY)**` |
+| *Only* GitLab.com Silver and higher tiers (no self-managed instances) | `**(SILVER ONLY)**` |
+| *Only* GitLab.com Gold (no self-managed instances) | `**(GOLD ONLY)**` |
-To exclude GitLab.com tiers (when the feature is not available in GitLab.com), add the
-keyword "only":
+For clarity, all page title headers (H1s) must have a tier markup for
+the lowest tier that has information on the documentation page.
-- For GitLab Core: `**(CORE ONLY)**`.
-- For GitLab Starter: `**(STARTER ONLY)**`.
-- For GitLab Premium: `**(PREMIUM ONLY)**`.
-- For GitLab Ultimate: `**(ULTIMATE ONLY)**`.
+If sections of a page apply to higher tier levels, they can be separately
+labeled with their own tier markup.
-For GitLab.com only tiers (when the feature is not available for self-managed instances):
+#### Product badge display behavior
-- For GitLab Free and higher tiers: `**(FREE ONLY)**`.
-- For GitLab Bronze and higher tiers: `**(BRONZE ONLY)**`.
-- For GitLab Silver and higher tiers: `**(SILVER ONLY)**`.
-- For GitLab Gold: `**(GOLD ONLY)**`.
+When using the tier markup with headers, the documentation page will
+display the full tier badge with the header line.
-The tier should be ideally added to headers, so that the full badge will be displayed.
-However, it can be also mentioned from paragraphs, list items, and table cells. For these cases,
-the tier mention will be represented by an orange info icon **(information)** that will show the tiers on hover.
-
-Use the lowest tier at the page level, even if higher-level tiers exist on the page. For example, you might have a page that is marked as Starter but a section badged as Premium.
-
-For example:
+You can also use the tier markup with paragraphs, list items,
+and table cells. For these cases, the tier mention will be represented by an
+orange info icon **{information}** that will display the tiers when visitors
+point to the icon. For example:
-- `**(STARTER)**` renders as **(STARTER)**
-- `**(STARTER ONLY)**` renders as **(STARTER ONLY)**
-- `**(SILVER ONLY)**` renders as **(SILVER ONLY)**
-
-The absence of tiers' mentions mean that the feature is available in GitLab Core,
-GitLab.com Free, and all higher tiers.
+- `**(STARTER)**` displays as **(STARTER)**
+- `**(STARTER ONLY)**` displays as **(STARTER ONLY)**
+- `**(SILVER ONLY)**` displays as **(SILVER ONLY)**
#### How it works
@@ -1622,10 +1726,10 @@ Learn how to [document features deployed behind flags](feature_flags.md).
For guidance on developing GitLab with feature flags, see
[Feature flags in development of GitLab](../feature_flags/index.md).
-## API
+## RESTful API
-Here is a list of must-have items. Use them in the exact order that appears
-on this document. Further explanation is given below.
+Here is a list of must-have items for RESTful API documentation. Use them in the
+exact order in which they appear in this document. Further explanation is given below.
- Every method must have the REST API request. For example:
@@ -1776,7 +1880,8 @@ curl --data "name=foo" --header "PRIVATE-TOKEN: <your_access_token>" "https://gi
#### Post data using JSON content
-> **Note:** In this example we create a new group. Watch carefully the single
+NOTE: **Note:**
+In this example, we create a new group. Pay close attention to the single
and double quotes.
```shell
@@ -1816,3 +1921,80 @@ exclude specific users when requesting a list of users for a project, you would
```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "skip_users[]=<user_id>" --data "skip_users[]=<user_id>" https://gitlab.example.com/api/v4/projects/<project_id>/users
```
+
+## GraphQL API
+
+GraphQL APIs are different from [RESTful APIs](#restful-api). Reference information is
+generated automatically in our [GraphQL reference](../../api/graphql/reference/index.md).
+
+However, it's helpful to include examples of how to use GraphQL for different use cases,
+with samples that readers can use directly in the [GraphiQL explorer](../api_graphql_styleguide.md#graphiql).
+
+This section describes the steps required to add your GraphQL examples to GitLab documentation.
+
+### Add a dedicated GraphQL page
+
+To create a dedicated GraphQL page, create a new `.md` file in the `doc/api/graphql/` directory.
+Give that file a functional name, such as `import_from_specific_location.md`.
+
+### Start the page with an explanation
+
+Include a page title that describes the GraphQL functionality in a few words, such as:
+
+```markdown
+# Search for [substitute kind of data]
+```
+
+Describe the search. One sentence may be all you need. More information may help
+readers learn how to use the example for their GitLab deployments.
+
+### Include a procedure using the GraphiQL explorer
+
+The GraphiQL explorer can help readers test queries with working deployments. Set up the section with the following:
+
+- Use the following title:
+
+ ```markdown
+ ## Set up the GraphiQL explorer
+ ```
+
+- Include a code block with the query that anyone can include in their instance of
+ the GraphiQL explorer:
+
+ ````markdown
+ ```graphql
+ query {
+ <insert queries here>
+ }
+ ```
+ ````
+
+- Tell the user what to do:
+
+ ```markdown
+ 1. Open the GraphiQL explorer tool in the following URL: `https://gitlab.com/-/graphql-explorer`.
+ 1. Paste the `query` listed above into the left window of your GraphiQL explorer tool.
+ 1. Click Play to get the result shown here:
+ ```
+
+- Include a screenshot of the result in the GraphiQL explorer. Follow the naming
+ convention described in the [Save the image](#save-the-image) section.
+- Follow up with an example of what you can do with the output.
+ Make sure the example is something that readers can do on their own deployments.
+- Include a link to the [GraphQL API resources](../../api/graphql/reference/index.md).
+
+### Add the GraphQL example to the Table of Contents
+
+You'll need to open a second MR against the [GitLab Docs repository](https://gitlab.com/gitlab-org/gitlab-docs/).
+
+We store our Table of Contents in the `default-nav.yaml` file, in the `content/_data`
+subdirectory. You can find the GraphQL section under the following line:
+
+```yaml
+ - category_title: GraphQL
+```
+
+Be aware that CI tests for that second MR will fail with a bad link until the main MR
+that adds the new GraphQL page is merged.
+
+And that's all you need!
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index ac544113cbd..e7954fa910b 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -56,6 +56,12 @@ This works because for every path that is present in CE's eager-load/auto-load
paths, we add the same `ee/`-prepended path in [`config/application.rb`](https://gitlab.com/gitlab-org/gitlab/blob/925d3d4ebc7a2c72964ce97623ae41b8af12538d/config/application.rb#L42-52).
This also applies to views.
+#### Testing EE-only features
+
+To test an EE class that doesn't exist in CE, create the spec file as you normally
+would in the `ee/spec` directory, but without the second `ee/` subdirectory.
+For example, a class `ee/app/models/vulnerability.rb` would have its tests in `ee/spec/models/vulnerability_spec.rb`.
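+
+For example, such a spec is run the same way as any other spec from the GitLab checkout:
+
+```shell
+bin/rspec ee/spec/models/vulnerability_spec.rb
+```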
+
### EE features based on CE features
For features that build on existing CE features, write a module in the `EE`
@@ -96,6 +102,21 @@ This is also not just applied to models. Here's a list of other examples:
- `ee/app/validators/ee/foo_attr_validator.rb`
- `ee/app/workers/ee/foo_worker.rb`
+#### Testing EE features based on CE features
+
+To test an `EE` namespaced module that extends a CE class with EE features,
+create the spec file as you normally would in the `ee/spec` directory, including the second `ee/` subdirectory.
+For example, an extension `ee/app/models/ee/user.rb` would have its tests in `ee/spec/models/ee/user_spec.rb`.
+
+In the `RSpec.describe` call, use the CE class name where the EE module would be used.
+For example, in `ee/app/models/ee/user_spec.rb`, the test would start with:
+
+```ruby
+RSpec.describe User do
+ describe 'ee feature added through extension'
+end
+```
+
#### Overriding CE methods
To override a method present in the CE codebase, use `prepend`. It
@@ -904,7 +925,7 @@ export default {
- Please do not use mixins unless ABSOLUTELY NECESSARY. Please try to find an alternative pattern.
-##### Reccomended alternative approach (named/scoped slots)
+##### Recommended alternative approach (named/scoped slots)
- We can use slots and/or scoped slots to achieve the same thing as we did with mixins. If you only need an EE component there is no need to create the CE component.
diff --git a/doc/development/elasticsearch.md b/doc/development/elasticsearch.md
index 90debab3b5c..2f01692e944 100644
--- a/doc/development/elasticsearch.md
+++ b/doc/development/elasticsearch.md
@@ -1,3 +1,9 @@
+---
+stage: Enablement
+group: Global Search
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Elasticsearch knowledge **(STARTER ONLY)**
This area is to maintain a compendium of useful information when working with Elasticsearch.
@@ -121,6 +127,9 @@ Patterns:
## Zero downtime reindexing with multiple indices
+NOTE: **Note:**
+This is not applicable yet as multiple indices functionality is not fully implemented.
+
Currently GitLab can only handle a single version of setting. Any setting/schema changes would require reindexing everything from scratch. Since reindexing can take a long time, this can cause search functionality downtime.
To avoid downtime, GitLab is working to support multiple indices that
diff --git a/doc/development/fe_guide/frontend_faq.md b/doc/development/fe_guide/frontend_faq.md
index 3c0845a9aaa..71436a7c7fb 100644
--- a/doc/development/fe_guide/frontend_faq.md
+++ b/doc/development/fe_guide/frontend_faq.md
@@ -163,3 +163,33 @@ To return to the normal development mode:
1. Run `yarn clean` to remove the production assets and free some space (optional).
1. Start webpack again: `gdk start webpack`.
1. Restart GDK: `gdk-restart rails-web`.
+
+### 8. Babel polyfills
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/28837) in GitLab 12.8.
+
+GitLab has enabled the Babel `preset-env` option
+[`useBuiltIns: 'usage'`](https://babeljs.io/docs/en/babel-preset-env#usebuiltins-usage),
+which adds the appropriate `core-js` polyfills once for each JavaScript feature
+we're using that our target browsers don't support. You don't need to add `core-js`
+polyfills manually.
+
+NOTE: **Note:**
+GitLab still manually adds non-`core-js` polyfills for extending browser features
+(such as GitLab's SVG polyfill) that allow us to reference SVGs by using `<use xlink:href>`.
+These polyfills should be added to `app/assets/javascripts/commons/polyfills.js`.
+
+To see what polyfills are being used:
+
+1. Navigate to your merge request.
+1. In the secondary menu below the title of the merge request, click **Pipelines**, then
+ click the pipeline you want to view, to display the jobs in that pipeline.
+1. Click the [`compile-production-assets`](https://gitlab.com/gitlab-org/gitlab/-/jobs/641770154) job.
+1. In the right-hand sidebar, scroll to **Job Artifacts**, and click **Browse**.
+1. Click the **webpack-report** folder to open it, and click **index.html**.
+1. In the upper left corner of the page, click the right arrow **{angle-right}**
+ to display the explorer.
+1. In the **Search modules** field, enter `gitlab/node_modules/core-js` to see
+ which polyfills are being loaded and where:
+
+ ![Image of webpack report](img/webpack_report_v12_8.png)
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 3d74ec94ae4..f5e16d377f1 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -75,7 +75,7 @@ their execution by clicking **Execute query** button on the top left:
## Apollo Client
To save duplicated clients getting created in different apps, we have a
-[default client](https://gitlab.com/gitlab-org/gitlab/blob/master/app/assets/javascripts/lib/graphql.js) that should be used. This setups the
+[default client](https://gitlab.com/gitlab-org/gitlab/blob/master/app/assets/javascripts/lib/graphql.js) that should be used. This sets up the
Apollo client with the correct URL and also sets the CSRF headers.
Default client accepts two parameters: `resolvers` and `config`.
@@ -85,6 +85,7 @@ Default client accepts two parameters: `resolvers` and `config`.
- `cacheConfig` field accepts an optional object of settings to [customize Apollo cache](https://www.apollographql.com/docs/react/caching/cache-configuration/#configuring-the-cache)
- `baseUrl` allows us to pass a URL for GraphQL endpoint different from our main endpoint (i.e.`${gon.relative_url_root}/api/graphql`)
- `assumeImmutableResults` (set to `false` by default) - this setting, when set to `true`, will assume that every single operation on updating Apollo Cache is immutable. It also sets `freezeResults` to `true`, so any attempt on mutating Apollo Cache will throw a console warning in development environment. Please ensure you're following the immutability pattern on cache update operations before setting this option to `true`.
+ - `fetchPolicy` determines how you want your component to interact with the Apollo cache. Defaults to "cache-first".
## GraphQL Queries
@@ -167,9 +168,7 @@ import VueApollo from 'vue-apollo';
import createDefaultClient from '~/lib/graphql';
Vue.use(VueApollo);
-const defaultClient = createDefaultClient({
- resolvers: {}
-});
+const defaultClient = createDefaultClient();
defaultClient.cache.writeData({
data: {
@@ -257,10 +256,7 @@ We need to pass resolvers object to our existing Apollo Client:
import createDefaultClient from '~/lib/graphql';
import resolvers from './graphql/resolvers';
-const defaultClient = createDefaultClient(
- {},
- resolvers,
-);
+const defaultClient = createDefaultClient(resolvers);
```
Now every single time on attempt to fetch a version, our client will fetch `id` and `sha` from the remote API endpoint and will assign our hardcoded values to `author` and `createdAt` version properties. With this data, frontend developers are able to work on UI part without being blocked by backend. When actual response is added to the API, a custom local resolver can be removed fast and the only change to query/fragment is `@client` directive removal.
@@ -470,6 +466,28 @@ fetchNextPage() {
Please note we don't have to save `pageInfo` one more time; `fetchMore` triggers a query
`result` hook as well.
+### Managing performance
+
+The Apollo client batches queries by default. This means that if you have 3 queries defined,
+Apollo groups them into one request, sends that single request to the server, and
+responds only after all 3 queries have completed.
+
+If you need queries to be sent as individual requests, you can provide additional
+context to tell Apollo to do so.
+
+```javascript
+export default {
+ apollo: {
+ user: {
+ query: QUERY_IMPORT,
+ context: {
+ isSingleRequest: true,
+ }
+ }
+ },
+};
+```
+
### Testing
#### Mocking response as component data
@@ -501,6 +519,7 @@ If we need to test how our component renders when results from the GraphQL API a
designs: {
loading,
},
+ },
};
wrapper = shallowMount(Index, {
@@ -595,7 +614,7 @@ These errors are located at the "top level" of a GraphQL response. These are non
#### Handling top-level errors
-Apollo is aware of top-level errors, so we are able to leverage Apollo's various error-handling mechanisms to handle these errors (e.g. handling Promise rejections after invoking the [`mutate`](https://www.apollographql.com/docs/react/api/apollo-client/#ApolloClient.mutate) method, or handling the `error` event emitted from the [`ApolloMutation`](https://apollo.vuejs.org/api/apollo-mutation.html#events) component).
+Apollo is aware of top-level errors, so we are able to leverage Apollo's various error-handling mechanisms to handle these errors (e.g. handling Promise rejections after invoking the [`mutate`](https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.mutate) method, or handling the `error` event emitted from the [`ApolloMutation`](https://apollo.vuejs.org/api/apollo-mutation.html#events) component).
Because these errors are not intended for users, error messages for top-level errors should be defined client-side.
diff --git a/doc/development/fe_guide/icons.md b/doc/development/fe_guide/icons.md
index 7078b5e9b2f..b539293e9cf 100644
--- a/doc/development/fe_guide/icons.md
+++ b/doc/development/fe_guide/icons.md
@@ -76,6 +76,89 @@ export default {
Please use the following function inside JS to render an icon:
`gl.utils.spriteIcon(iconName)`
+## Loading icon
+
+### Usage in HAML/Rails
+
+DANGER: **Danger:**
+Do not use the `spinner` or `icon('spinner spin')` Rails helpers to insert
+loading icons. These helpers rely on the Font Awesome icon library, which is
+deprecated.
+
+To insert a loading spinner in HAML or Rails, use the `loading_icon` helper:
+
+```haml
+= loading_icon
+```
+
+You can include one or more of the following properties with the `loading_icon` helper, as demonstrated
+by the examples that follow:
+
+- `container` (optional): wraps the loading icon in a container, which centers the loading icon using the `text-center` CSS class.
+- `color` (optional): either `orange` (default), `light`, or `dark`.
+- `size` (optional): either `sm` (default), `md`, `lg`, or `xl`.
+- `css_class` (optional): defaults to an empty string, but can be useful for utility classes to fine-tune alignment or spacing.
+
+**Example 1:**
+
+The following HAML expression generates the markup for a loading icon and
+centers the icon by wrapping it in an element with the `gl-spinner-container` CSS class.
+
+```haml
+= loading_icon(container: true)
+```
+
+**Output from example 1:**
+
+```html
+<div class="gl-spinner-container">
+ <span class="gl-spinner gl-spinner-orange gl-spinner-sm" aria-label="Loading"></span>
+</div>
+```
+
+**Example 2:**
+
+The following HAML expression generates the markup for a loading icon
+with a custom size, and appends a margin utility class.
+
+```haml
+= loading_icon(size: 'lg', css_class: 'gl-mr-2')
+```
+
+**Output from example 2:**
+
+```html
+<span class="gl-spinner gl-spinner-orange gl-spinner-lg gl-mr-2" aria-label="Loading"></span>
+```
+
+### Usage in Vue
+
+The [GitLab UI](https://gitlab-org.gitlab.io/gitlab-ui/) components library provides a
+`GlLoadingIcon` component. See the component’s
+[storybook](https://gitlab-org.gitlab.io/gitlab-ui/?path=/story/base-loading-icon--default)
+for more information about its usage.
+
+**Example:**
+
+The following code snippet demonstrates how to use `GlLoadingIcon` in
+a Vue component.
+
+```html
+<script>
+import { GlLoadingIcon } from "@gitlab/ui";
+
+export default {
+ components: {
+ GlLoadingIcon,
+ },
+};
+</script>
+
+<template>
+ <gl-loading-icon inline />
+</template>
+```
+
## SVG Illustrations
Please use from now on for any SVG based illustrations simple `img` tags to show an illustration by simply using either `image_tag` or `image_path` helpers.
diff --git a/doc/development/fe_guide/img/webpack_report_v12_8.png b/doc/development/fe_guide/img/webpack_report_v12_8.png
new file mode 100644
index 00000000000..99c6f7d0eec
--- /dev/null
+++ b/doc/development/fe_guide/img/webpack_report_v12_8.png
Binary files differ
diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md
index 3a2b3cac9bf..ef23b6c4ed2 100644
--- a/doc/development/fe_guide/index.md
+++ b/doc/development/fe_guide/index.md
@@ -5,7 +5,7 @@ across GitLab's frontend team.
## Overview
-GitLab is built on top of [Ruby on Rails](https://rubyonrails.org) using [Haml](http://haml.info/) and also a JavaScript based Frontend with [Vue.js](https://vuejs.org).
+GitLab is built on top of [Ruby on Rails](https://rubyonrails.org) using [Haml](https://haml.info/) and also a JavaScript based Frontend with [Vue.js](https://vuejs.org).
Be wary of [the limitations that come with using Hamlit](https://github.com/k0kubun/hamlit/blob/master/REFERENCE.md#limitations). We also use [SCSS](https://sass-lang.com) and plain JavaScript with
modern ECMAScript standards supported through [Babel](https://babeljs.io/) and ES module support through [webpack](https://webpack.js.org/).
diff --git a/doc/development/fe_guide/style/javascript.md b/doc/development/fe_guide/style/javascript.md
index b69a6f1941c..2a06a473878 100644
--- a/doc/development/fe_guide/style/javascript.md
+++ b/doc/development/fe_guide/style/javascript.md
@@ -10,7 +10,7 @@ linter to manage most of our JavaScript style guidelines.
In addition to the style guidelines set by Airbnb, we also have a few specific rules
listed below.
-> **Tip:**
+TIP: **Tip:**
You can run eslint locally by running `yarn eslint`
## Avoid forEach
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index 336c9b8ca35..dba39eeb98c 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -23,6 +23,14 @@ Classes in [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/blob/master/a
Avoid [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/).
+NOTE: **Note:**
+While migrating [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/)
+to the [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/master/doc/css.md#utilities)
+utility classes, note that the classes for both margin and padding differ in scale. The size scale used at
+GitLab differs from the scale used in the Bootstrap library, so for a Bootstrap padding or margin
+utility, you may need to double the size of the applied utility to achieve the same visual
+result (such as `ml-1` becoming `gl-ml-2`).
+
#### Where should I put new utility classes?
If a class you need has not been added to GitLab UI, you get to add it! Follow the naming patterns documented in the [utility files](https://gitlab.com/gitlab-org/gitlab-ui/-/tree/master/src/scss/utility-mixins) and refer to [GitLab UI's CSS documentation](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/master/doc/contributing/adding_css.md#adding-utility-mixins) for more details, especially about adding responsive and stateful rules.
@@ -45,8 +53,8 @@ Examples of component classes that were created using "utility-first" include:
Inspiration:
-- <https://tailwindcss.com/docs/utility-first/>
-- <https://tailwindcss.com/docs/extracting-components/>
+- <https://tailwindcss.com/docs/utility-first>
+- <https://tailwindcss.com/docs/extracting-components>
### Naming
diff --git a/doc/development/fe_guide/tooling.md b/doc/development/fe_guide/tooling.md
index 28deb7d95f9..5685ac5abcd 100644
--- a/doc/development/fe_guide/tooling.md
+++ b/doc/development/fe_guide/tooling.md
@@ -40,7 +40,8 @@ yarn eslint-fix
_If manual changes are required, a list of changes will be sent to the console._
-**Caution:** Limit use to global rule updates. Otherwise, the changes can lead to huge Merge Requests.
+CAUTION: **Caution:**
+Limit use to global rule updates. Otherwise, the changes can lead to huge Merge Requests.
### Disabling ESLint in new files
diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md
index 2a0556c6cda..58a8332589d 100644
--- a/doc/development/fe_guide/vue.md
+++ b/doc/development/fe_guide/vue.md
@@ -64,11 +64,11 @@ which will make the tests easier. See the following example:
```javascript
// haml
-.js-vue-app{ data: { endpoint: 'foo' }}
+#js-vue-app{ data: { endpoint: 'foo' }}
// index.js
document.addEventListener('DOMContentLoaded', () => new Vue({
- el: '.js-vue-app',
+ el: '#js-vue-app',
data() {
const dataset = this.$options.el.dataset;
return {
@@ -85,6 +85,8 @@ document.addEventListener('DOMContentLoaded', () => new Vue({
}));
```
+> When adding an `id` attribute to mount a Vue application, please make sure this `id` is unique across the codebase.
+
#### Accessing the `gl` object
When we need to query the `gl` object for data that won't change during the application's life cycle, we should do it in the same place where we query the DOM.
@@ -283,7 +285,7 @@ describe('~/todos/app.vue', () => {
### Test the component's output
The main return value of a Vue component is the rendered output. In order to test the component we
-need to test the rendered output. [Vue](https://vuejs.org/v2/guide/unit-testing.html) guide's to unit test show us exactly that:
+need to test the rendered output. Visit the [Vue testing guide](https://vuejs.org/v2/guide/testing.html#Unit-Testing).
### Events
diff --git a/doc/development/fe_guide/vue3_migration.md b/doc/development/fe_guide/vue3_migration.md
index 7ab48db7f76..afdb2690c02 100644
--- a/doc/development/fe_guide/vue3_migration.md
+++ b/doc/development/fe_guide/vue3_migration.md
@@ -76,6 +76,9 @@ const FunctionalComp = (props, slots) => {
}
```
+NOTE: **Note:**
+It is not recommended to replace stateful components with functional components unless you absolutely need a performance improvement right now. In Vue 3, performance gains for functional components will be negligible.
+
## Old slots syntax with `slot` attribute
**Why?**
diff --git a/doc/development/fe_guide/vuex.md b/doc/development/fe_guide/vuex.md
index 02387c15951..9573dd36e63 100644
--- a/doc/development/fe_guide/vuex.md
+++ b/doc/development/fe_guide/vuex.md
@@ -40,24 +40,25 @@ The following example shows an application that lists and adds users to the stat
This is the entry point for our store. You can use the following as a guide:
```javascript
-import Vue from 'vue';
import Vuex from 'vuex';
import * as actions from './actions';
import * as getters from './getters';
import mutations from './mutations';
import state from './state';
-Vue.use(Vuex);
-
-export const createStore = () => new Vuex.Store({
- actions,
- getters,
- mutations,
- state,
-});
-export default createStore();
+export const createStore = () =>
+ new Vuex.Store({
+ actions,
+ getters,
+ mutations,
+ state,
+ });
```
+NOTE: **Note:**
+Until this [RFC](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/20) is implemented,
+the example above requires disabling the `import/prefer-default-export` ESLint rule.
+
### `state.js`
The first thing you should do before writing any code is to design the state.
@@ -137,44 +138,12 @@ import { mapActions } from 'vuex';
### `mutations.js`
The mutations specify how the application state changes in response to actions sent to the store.
-The only way to change state in a Vuex store should be by committing a mutation.
-
-**It's a good idea to think of the state before writing any code.**
-
-Remember that actions only describe that something happened, they don't describe how the application state changes.
+The only way to change state in a Vuex store is by committing a mutation.
-**Never commit a mutation directly from a component**
-
-Instead, you should create an action that will commit a mutation.
-
-```javascript
- import * as types from './mutation_types';
+Most mutations are committed from an action using `commit`. If you don't have any
+asynchronous operations, you can call mutations from a component using the `mapMutations` helper.
- export default {
- [types.REQUEST_USERS](state) {
- state.isLoading = true;
- },
- [types.RECEIVE_USERS_SUCCESS](state, data) {
- // Do any needed data transformation to the received payload here
- state.users = data;
- state.isLoading = false;
- },
- [types.RECEIVE_USERS_ERROR](state, error) {
- state.isLoading = false;
- },
- [types.REQUEST_ADD_USER](state, user) {
- state.isAddingUser = true;
- },
- [types.RECEIVE_ADD_USER_SUCCESS](state, user) {
- state.isAddingUser = false;
- state.users.push(user);
- },
- [types.REQUEST_ADD_USER_ERROR](state, error) {
- state.isAddingUser = false;
- state.errorAddingUser = error;
- },
- };
-```
+See the Vuex docs for examples of [committing mutations from components](https://vuex.vuejs.org/guide/mutations.html#committing-mutations-in-components).
#### Naming Pattern: `REQUEST` and `RECEIVE` namespaces
@@ -316,9 +285,12 @@ function when mounting your Vue component:
// in the Vue app's initialization script (e.g. mount_show.js)
import Vue from 'vue';
-import createStore from './stores';
+import Vuex from 'vuex';
+import { createStore } from './stores';
import AwesomeVueApp from './components/awesome_vue_app.vue'
+Vue.use(Vuex);
+
export default () => {
const el = document.getElementById('js-awesome-vue-app');
@@ -398,10 +370,8 @@ discussion](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/56#note_3025148
```javascript
<script>
import { mapActions, mapState, mapGetters } from 'vuex';
-import store from './store';
export default {
- store,
computed: {
...mapGetters([
'getUsersWithPets'
@@ -417,12 +387,10 @@ export default {
'fetchUsers',
'addUser',
]),
-
onClickAddUser(data) {
this.addUser(data);
}
},
-
created() {
this.fetchUsers()
}
@@ -448,29 +416,6 @@ export default {
</template>
```
-### Vuex Gotchas
-
-1. Do not call a mutation directly. Always use an action to commit a mutation. Doing so will keep consistency throughout the application. From Vuex docs:
-
- > Why don't we just call store.commit('action') directly? Well, remember that mutations must be synchronous? Actions aren't. We can perform asynchronous operations inside an action.
-
- ```javascript
- // component.vue
-
- // bad
- created() {
- this.$store.commit('mutation');
- }
-
- // good
- created() {
- this.$store.dispatch('action');
- }
- ```
-
-1. Use mutation types instead of hardcoding strings. It will be less error prone.
-1. The State will be accessible in all components descending from the use where the store is instantiated.
-
### Testing Vuex
#### Testing Vuex concerns
@@ -485,55 +430,50 @@ In order to write unit tests for those components, we need to include the store
```javascript
//component_spec.js
import Vue from 'vue';
+import Vuex from 'vuex';
+import { mount, createLocalVue } from '@vue/test-utils';
import { createStore } from './store';
-import component from './component.vue'
+import Component from './component.vue'
+
+const localVue = createLocalVue();
+localVue.use(Vuex);
describe('component', () => {
let store;
- let vm;
- let Component;
+ let wrapper;
+
+ const createComponent = () => {
+ store = createStore();
+
+ wrapper = mount(Component, {
+ localVue,
+ store,
+ });
+ };
beforeEach(() => {
- Component = Vue.extend(issueActions);
+ createComponent();
});
afterEach(() => {
- vm.$destroy();
+ wrapper.destroy();
+ wrapper = null;
});
- it('should show a user', () => {
+ it('should show a user', async () => {
const user = {
name: 'Foo',
age: '30',
};
- store = createStore();
-
// populate the store
- store.dispatch('addUser', user);
+ await store.dispatch('addUser', user);
- vm = new Component({
- store,
- propsData: props,
- }).$mount();
+ expect(wrapper.text()).toContain(user.name);
});
});
```
-#### Testing Vuex actions and getters
-
-Because we're currently using [`babel-plugin-rewire`](https://github.com/speedskater/babel-plugin-rewire), you may encounter the following error when testing your Vuex actions and getters:
-`[vuex] actions should be function or object with "handler" function`
-
-To prevent this error from happening, you need to export an empty function as `default`:
-
-```javascript
-// getters.js or actions.js
-
-// prevent babel-plugin-rewire from generating an invalid default during karma tests
-export default () => {};
-```
-
### Two way data binding
When storing form data in Vuex, it is sometimes necessary to update the value stored. The store should never be mutated directly, and an action should be used instead.
diff --git a/doc/development/feature_categorization/index.md b/doc/development/feature_categorization/index.md
index bcd5e16cc2b..3fd402ebe84 100644
--- a/doc/development/feature_categorization/index.md
+++ b/doc/development/feature_categorization/index.md
@@ -24,7 +24,7 @@ and generate a new version of the file, which needs to be committed to
the repository.
The [Scalability
-team](https://about.gitlab.com/handbook/engineering/infrastructure/team/scalability)
+team](https://about.gitlab.com/handbook/engineering/infrastructure/team/scalability/)
currently maintains the `stages.yml` file. They will automatically be
notified on Slack when the file becomes outdated.
diff --git a/doc/development/feature_flags.md b/doc/development/feature_flags.md
index 6bad91d6287..cff88388ba0 100644
--- a/doc/development/feature_flags.md
+++ b/doc/development/feature_flags.md
@@ -1 +1,5 @@
+---
+redirect_to: 'feature_flags/index.md'
+---
+
This document was moved to [another location](feature_flags/index.md).
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 9e7ce74cc0c..09260be1264 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -140,7 +140,7 @@ run the following in Slack:
This sets a feature flag to `true` based on the following formula:
```ruby
-feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000]
+feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000
# where <Actor>: is a `User`, `Group`, `Project` and actor is an instance
```
diff --git a/doc/development/feature_flags/development.md b/doc/development/feature_flags/development.md
index 0b918478668..9b30187fcd1 100644
--- a/doc/development/feature_flags/development.md
+++ b/doc/development/feature_flags/development.md
@@ -1,19 +1,219 @@
# Developing with feature flags
-In general, it's better to have a group- or user-based gate, and you should prefer
-it over the use of percentage gates. This would make debugging easier, as you
-filter for example logs and errors based on actors too. Furthermore, this allows
-for enabling for the `gitlab-org` or `gitlab-com` group first, while the rest of
+This document provides guidelines on how to use feature flags
+in the GitLab codebase to conditionally enable features
+and test them.
+
+Features that are developed and merged behind a feature flag
+should not include a changelog entry. The entry should be added either in the merge
+request removing the feature flag or the merge request where the default value of
+the feature flag is set to enabled. If the feature contains any database migrations, it
+*should* include a changelog entry for the database changes.
+
+CAUTION: **Caution:**
+All newly-introduced feature flags should be [disabled by default](process.md#feature-flags-in-gitlab-development).
+
+NOTE: **Note:**
+This document is the subject of continued work as part of an epic to [improve internal usage of Feature Flags](https://gitlab.com/groups/gitlab-org/-/epics/3551). Raise any suggestions as new issues and attach them to the epic.
+
+## Types of feature flags
+
+Currently, only a single type of feature flag is available.
+Additional feature flag types will be provided in the future,
+with descriptions for their usage.
+
+### `development` type
+
+`development` feature flags are short-lived feature flags,
+used so that unfinished code can be deployed in production.
+
+A `development` feature flag should have a rollout issue,
+ideally created using the [Feature Flag Roll Out template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20Flag%20Roll%20Out.md).
+
+NOTE: **Note:**
+This is the default type used when calling `Feature.enabled?`.
+
+## Feature flag definition and validation
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/229161) in GitLab 13.3.
+
+During development (`RAILS_ENV=development`) or testing (`RAILS_ENV=test`), all feature flag usage is strictly validated.
+
+This process is meant to ensure consistent feature flag usage in the codebase. All feature flags **must**:
+
+- Be known. Only use feature flags that are explicitly defined.
+- Not be defined twice. They have to be defined either in FOSS or EE, but not both.
+- Use a valid and consistent `type:` across all invocations.
+- Use the same `default_enabled:` across all invocations.
+- Have an owner.
+
+All feature flags known to GitLab are self-documented in YAML files stored in:
+
+- [`config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/feature_flags)
+- [`ee/config/feature_flags`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/feature_flags)
+
+Each feature flag is defined in a separate YAML file consisting of a number of fields:
+
+| Field | Required | Description |
+|---------------------|----------|----------------------------------------------------------------|
+| `name` | yes | Name of the feature flag. |
+| `type` | yes | Type of feature flag. |
+| `default_enabled` | yes | The default state of the feature flag. It is strictly validated against the `default_enabled:` argument passed in code. |
+| `introduced_by_url` | no | The URL to the Merge Request that introduced the feature flag. |
+| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
+| `group` | no | The [group](https://about.gitlab.com/handbook/product/product-categories/#devops-stages) that owns the feature flag. |
+
+TIP: **Tip:**
+All validations are skipped when running in `RAILS_ENV=production`.
+
+## Create a new feature flag
+
+The GitLab codebase provides [`bin/feature-flag`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/bin/feature-flag),
+a dedicated tool to create new feature flag definitions.
+The tool asks various questions about the new feature flag, then creates
+a YAML definition in `config/feature_flags` or `ee/config/feature_flags`.
+
+Only feature flags that have a YAML definition file can be used when running the development or testing environments.
+
+```shell
+$ bin/feature-flag my-feature-flag
+>> Please specify the group introducing feature flag, like `group::apm`:
+?> group::memory
+
+>> If you have MR open, can you paste the URL here? (or enter to skip)
+?> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
+
+>> Open this URL and fill the rest of details:
+https://gitlab.com/gitlab-org/gitlab/-/issues/new?issue%5Btitle%5D=%5BFeature+flag%5D+Rollout+of+%60test-flag%60&issuable_template=Feature+Flag+Roll+Out
+
+>> Paste URL of `rollout issue` here, or enter to skip:
+?> https://gitlab.com/gitlab-org/gitlab/-/issues/232533
+create config/feature_flags/development/test-flag.yml
+---
+name: test-flag
+introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
+rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/232533
+group: group::memory
+type: development
+default_enabled: false
+```
+
+TIP: **Tip:**
+To create a feature flag that is only used in EE, add the `--ee` flag: `bin/feature-flag --ee`
+
+## Develop with a feature flag
+
+There are two main ways of using Feature Flags in the GitLab codebase:
+
+- [Backend code (Rails)](#backend)
+- [Frontend code (VueJS)](#frontend)
+
+### Backend
+
+The feature flag interface is defined in [`lib/feature.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/feature.rb).
+This interface provides a set of methods to check if the feature flag is enabled or disabled:
+
+```ruby
+if Feature.enabled?(:my_feature_flag, project)
+ # execute code if feature flag is enabled
+else
+ # execute code if feature flag is disabled
+end
+
+if Feature.disabled?(:my_feature_flag, project)
+ # execute code if feature flag is disabled
+end
+```
+
+In rare cases you may want to make a feature enabled by default. If so, explain the reasoning
+in the merge request. Use `default_enabled: true` when checking the feature flag state:
+
+```ruby
+if Feature.enabled?(:feature_flag, project, default_enabled: true)
+ # execute code if feature flag is enabled
+else
+ # execute code if feature flag is disabled
+end
+
+if Feature.disabled?(:my_feature_flag, project, default_enabled: true)
+ # execute code if feature flag is disabled
+end
+```
+
+### Frontend
+
+Use the `push_frontend_feature_flag` method for frontend code, which is
+available to all controllers that inherit from `ApplicationController`. You can use
+this method to expose the state of a feature flag, for example:
+
+```ruby
+before_action do
+ # Prefer to scope it per project or user e.g.
+ push_frontend_feature_flag(:vim_bindings, project)
+end
+
+def index
+ # ...
+end
+
+def edit
+ # ...
+end
+```
+
+You can then check the state of the feature flag in JavaScript as follows:
+
+```javascript
+if (gon.features.vimBindings) {
+ // ...
+}
+```
+
+The name of the feature flag in JavaScript is always camelCase,
+so checking for `gon.features.vim_bindings` would not work.
+
+See the [Vue guide](../fe_guide/vue.md#accessing-feature-flags) for details about
+how to access feature flags in a Vue component.
+
+In rare cases you may want to make a feature enabled by default. If so, explain the reasoning
+in the merge request. Use `default_enabled: true` when checking the feature flag state:
+
+```ruby
+before_action do
+ # Prefer to scope it per project or user e.g.
+ push_frontend_feature_flag(:vim_bindings, project, default_enabled: true)
+end
+```
+
+### Feature actors
+
+**It is strongly advised to use actors with feature flags.** Actors provide a simple
+way to enable a feature flag only for a given project, group or user. This makes debugging
+easier, as you can, for example, filter logs and errors based on actors. This also makes it possible
+to enable the feature on the `gitlab-org` or `gitlab-com` groups first, while the rest of
the users aren't impacted.
+Actors also provide an easy way to do a percentage rollout of a feature in a sticky way.
+If a 1% rollout enabled a feature for a specific actor, that actor will continue to have the feature enabled at
+10%, 50%, and 100%.
+
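+For example, a sticky percentage-of-actors rollout can be started from the Rails console
+(a minimal sketch; the flag name is illustrative):
+
+```ruby
+# Enable the flag for 1% of actors. The gate is actor-based, so a project that
+# falls into the 1% stays enabled as the percentage is raised later.
+Feature.enable_percentage_of_actors(:my_feature_flag, 1)
+
+# Widen the rollout; previously enabled actors remain enabled.
+Feature.enable_percentage_of_actors(:my_feature_flag, 10)
+```
+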
+GitLab currently supports the following models as feature flag actors:
+
+- `User`
+- `Project`
+- `Group`
+
+The actor is the second parameter of the `Feature.enabled?` call. The
+same actor type must be used consistently for all invocations of `Feature.enabled?`.
+
```ruby
-# Good
Feature.enabled?(:feature_flag, project)
-
-# Avoid, if possible
-Feature.enabled?(:feature_flag)
+Feature.enabled?(:feature_flag, group)
+Feature.enabled?(:feature_flag, user)
```
+### Enable additional objects as actors
+
To use feature gates based on actors, the model needs to respond to
`flipper_id`. For example, to enable for the Foo model:
@@ -26,29 +226,18 @@ end
Only models that `include FeatureGate` or expose `flipper_id` method can be
used as an actor for `Feature.enabled?`.
-Features that are developed and are intended to be merged behind a feature flag
-should not include a changelog entry. The entry should either be added in the merge
-request removing the feature flag or the merge request where the default value of
-the feature flag is set to true. If the feature contains any DB migration it
-should include a changelog entry for DB changes.
-
-If you need the feature flag to be on automatically, use `default_enabled: true`
-when checking:
+### Feature flags for licensed features
-```ruby
-Feature.enabled?(:feature_flag, project, default_enabled: true)
-```
+If a feature is license-gated, there's no need to add an additional
+explicit feature flag check since the flag will be checked as part of the
+`License.feature_available?` call. Similarly, there's no need to "clean up" a
+feature flag once the feature has reached general availability.
The [`Project#feature_available?`](https://gitlab.com/gitlab-org/gitlab/blob/4cc1c62918aa4c31750cb21dfb1a6c3492d71080/app/models/project_feature.rb#L63-68),
[`Namespace#feature_available?`](https://gitlab.com/gitlab-org/gitlab/blob/4cc1c62918aa4c31750cb21dfb1a6c3492d71080/ee/app/models/ee/namespace.rb#L71-85) (EE), and
[`License.feature_available?`](https://gitlab.com/gitlab-org/gitlab/blob/4cc1c62918aa4c31750cb21dfb1a6c3492d71080/ee/app/models/license.rb#L293-300) (EE) methods all implicitly check for
a by default enabled feature flag with the same name as the provided argument.
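+
+For illustration, a minimal sketch (the feature name is a placeholder):
+
+```ruby
+# `feature_available?` implicitly checks a feature flag named
+# `my_licensed_feature` that is enabled by default, so no separate
+# `Feature.enabled?` call is needed for a licensed feature.
+if project.feature_available?(:my_licensed_feature)
+  # execute code if the licensed feature is available
+end
+```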
-For example if a feature is license-gated, there's no need to add an additional
-explicit feature flag check since the flag will be checked as part of the
-`License.feature_available?` call. Similarly, there's no need to "clean up" a
-feature flag once the feature has reached general availability.
-
You'd still want to use an explicit `Feature.enabled?` check if your new feature
isn't gated by a License or Plan.
@@ -58,7 +247,7 @@ the feature flag check will default to `true`.**
This is relevant when developing the feature using
[several smaller merge requests](https://about.gitlab.com/handbook/values/#make-small-merge-requests), or when the feature is considered to be an
-[alpha or beta](https://about.gitlab.com/handbook/product/#alpha-beta-ga), and
+[alpha or beta](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga), and
should not be available by default.
As an example, if you were to ship the frontend half of a feature without the
@@ -67,13 +256,10 @@ also ready to be shipped. To make sure this feature is disabled for both
GitLab.com and self-managed instances, you should use the
[`Namespace#alpha_feature_available?`](https://gitlab.com/gitlab-org/gitlab/blob/458749872f4a8f27abe8add930dbb958044cb926/ee/app/models/ee/namespace.rb#L113) or
[`Namespace#beta_feature_available?`](https://gitlab.com/gitlab-org/gitlab/blob/458749872f4a8f27abe8add930dbb958044cb926/ee/app/models/ee/namespace.rb#L100-112)
-method, according to our [definitions](https://about.gitlab.com/handbook/product/#alpha-beta-ga). This ensures the feature is disabled unless the feature flag is
+method, according to our [definitions](https://about.gitlab.com/handbook/product/gitlab-the-product/#alpha-beta-ga). This ensures the feature is disabled unless the feature flag is
_explicitly_ enabled.
-## Feature groups
-
-Starting from GitLab 9.4 we support feature groups via
-[Flipper groups](https://github.com/jnunemaker/flipper/blob/v0.10.2/docs/Gates.md#2-group).
+### Feature groups
Feature groups must be defined statically in `lib/feature.rb` (in the
`.register_feature_groups` method), but their implementation can obviously be
@@ -82,50 +268,144 @@ dynamic (querying the DB etc.).
Once defined in `lib/feature.rb`, you will be able to activate a
feature for a given feature group via the [`feature_group` parameter of the features API](../../api/features.md#set-or-create-a-feature)
-### Frontend
+### Enabling a feature flag locally (in development)
-For frontend code you can use the method `push_frontend_feature_flag`, which is
-available to all controllers that inherit from `ApplicationController`. Using
-this method you can expose the state of a feature flag as follows:
+In the rails console (`rails c`), enter the following command to enable a feature flag:
```ruby
-before_action do
- # Prefer to scope it per project or user e.g.
- push_frontend_feature_flag(:vim_bindings, project)
+Feature.enable(:feature_flag_name)
+```
- # Avoid, if possible
- push_frontend_feature_flag(:vim_bindings)
-end
+Similarly, the following command will disable a feature flag:
-def index
- # ...
-end
+```ruby
+Feature.disable(:feature_flag_name)
+```
-def edit
- # ...
-end
+You can also enable a feature flag for a given gate:
+
+```ruby
+Feature.enable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
```
-You can then check for the state of the feature flag in JavaScript as follows:
+## Feature flags in tests
-```javascript
-if ( gon.features.vimBindings ) {
- // ...
-}
+Introducing a feature flag into the codebase creates an additional codepath that should be tested.
+It is strongly advised to test all code affected by a feature flag, both when **enabled** and **disabled**,
+to ensure the feature works properly.
+
+NOTE: **Note:**
+When using the testing environment, all feature flags are enabled by default.
+
+To disable a feature flag in a test, use the `stub_feature_flags`
+helper. For example, to globally disable the `ci_live_trace` feature
+flag in a test:
+
+```ruby
+stub_feature_flags(ci_live_trace: false)
+
+Feature.enabled?(:ci_live_trace) # => false
```
-The name of the feature flag in JavaScript will always be camelCased, meaning
-that checking for `gon.features.vim_bindings` would not work.
+If you wish to set up a test where a feature flag is enabled only
+for some actors and not others, you can specify this in options
+passed to the helper. For example, to enable the `ci_live_trace`
+feature flag for a specific project:
-See the [Vue guide](../fe_guide/vue.md#accessing-feature-flags) for details about
-how to access feature flags in a Vue component.
+```ruby
+project1, project2 = build_list(:project, 2)
-### Specs
+# Feature will only be enabled for project1
+stub_feature_flags(ci_live_trace: project1)
+
+Feature.enabled?(:ci_live_trace) # => false
+Feature.enabled?(:ci_live_trace, project1) # => true
+Feature.enabled?(:ci_live_trace, project2) # => false
+```
+
+The behavior of FlipperGate is as follows:
+
+1. You can enable an override for a specified actor.
+1. You can disable (remove) an override for a specified actor,
+   falling back to the default state.
+1. There's no way to model an explicitly disabled override for a specified actor.
+
+```ruby
+Feature.enable(:my_feature)
+Feature.disable(:my_feature, project1)
+Feature.enabled?(:my_feature) # => true
+Feature.enabled?(:my_feature, project1) # => true
+
+Feature.disable(:my_feature2)
+Feature.enable(:my_feature2, project1)
+Feature.enabled?(:my_feature2) # => false
+Feature.enabled?(:my_feature2, project1) # => true
+```
+
+### `stub_feature_flags` vs `Feature.enable*`
+
+It is preferred to use `stub_feature_flags` to enable feature flags
+in the testing environment. This method provides a simple and well-described
+interface for common use cases.
+
+However, in some cases more complex behavior needs to be tested,
+like percentage rollouts of feature flags. This can be done using
+`.enable_percentage_of_time` or `.enable_percentage_of_actors`:
+
+```ruby
+# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined
+stub_feature_flags(my_feature: false)
+stub_feature_flags(my_feature: true)
+stub_feature_flags(my_feature: project)
+stub_feature_flags(my_feature: [project, project2])
+
+# Bad
+Feature.enable(:my_feature_2)
+
+# Good: enable my_feature for 50% of time
+Feature.enable_percentage_of_time(:my_feature_3, 50)
+
+# Good: enable my_feature for 50% of actors/gates/things
+Feature.enable_percentage_of_actors(:my_feature_4, 50)
+```
+
+Each feature flag that has a defined state will be persisted
+during test execution time:
+
+```ruby
+Feature.persisted_names.include?('my_feature') # => true
+Feature.persisted_names.include?('my_feature_2') # => true
+Feature.persisted_names.include?('my_feature_3') # => true
+Feature.persisted_names.include?('my_feature_4') # => true
+```
+
+### Stubbing actor
+
+When you want to enable a feature flag for a specific actor only,
+you can stub its representation. A gate that is passed
+as an argument to `Feature.enabled?` and `Feature.disabled?` must be an object
+that includes `FeatureGate`.
+
+In specs you can use the `stub_feature_flag_gate` method that allows you to
+quickly create a custom actor:
+
+```ruby
+gate = stub_feature_flag_gate('CustomActor')
+
+stub_feature_flags(ci_live_trace: gate)
+
+Feature.enabled?(:ci_live_trace) # => false
+Feature.enabled?(:ci_live_trace, gate) # => true
+```
+
+### Controlling feature flags engine in tests
Our Flipper engine in the test environment works in a memory mode `Flipper::Adapters::Memory`.
`production` and `development` modes use `Flipper::Adapters::ActiveRecord`.
-### `stub_feature_flags: true` (default and preferred)
+You can control whether the `Flipper::Adapters::Memory` or `ActiveRecord` mode is being used.
+
+#### `stub_feature_flags: true` (default and preferred)
In this mode Flipper is configured to use `Flipper::Adapters::Memory` and mark all feature
flags to be on-by-default and persisted on a first use. This overwrites the `default_enabled:`
@@ -145,23 +425,3 @@ a mode that is used by `production` and `development`.
You should use this mode only when you really want to test aspects of Flipper
and how it interacts with `ActiveRecord`.
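+
+A minimal sketch of opting a spec into this mode, assuming the `stub_feature_flags: false`
+RSpec metadata (the counterpart of the default described above):
+
+```ruby
+# With stubbing disabled, Flipper uses `Flipper::Adapters::ActiveRecord`,
+# so enabling a flag persists its state through ActiveRecord.
+RSpec.describe 'my feature', stub_feature_flags: false do
+  it 'persists the feature flag state' do
+    Feature.enable(:my_feature_flag)
+
+    expect(Feature.enabled?(:my_feature_flag)).to be(true)
+  end
+end
+```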
-
-### Enabling a feature flag (in development)
-
-In the rails console (`rails c`), enter the following command to enable your feature flag
-
-```ruby
-Feature.enable(:feature_flag_name)
-```
-
-Similarly, the following command will disable a feature flag:
-
-```ruby
-Feature.disable(:feature_flag_name)
-```
-
-You can as well enable feature flag for a given gate:
-
-```ruby
-Feature.enable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
-```
diff --git a/doc/development/feature_flags/process.md b/doc/development/feature_flags/process.md
index b053838964b..5dc3cf44a6e 100644
--- a/doc/development/feature_flags/process.md
+++ b/doc/development/feature_flags/process.md
@@ -24,7 +24,8 @@ should be leveraged:
- When development of a feature will be spread across multiple merge
requests, you can use the following workflow:
- 1. Introduce a feature flag which is **off** by default, in the first merge request.
+ 1. [Create a new feature flag](development.md#create-a-new-feature-flag)
+ which is **off** by default, in the first merge request.
1. Submit incremental changes via one or more merge requests, ensuring that any
new code added can only be reached if the feature flag is **on**.
You can keep the feature flag enabled on your local GDK during development.
diff --git a/doc/development/filtering_by_label.md b/doc/development/filtering_by_label.md
index ef92bd35985..9c1993fdf7f 100644
--- a/doc/development/filtering_by_label.md
+++ b/doc/development/filtering_by_label.md
@@ -1,3 +1,8 @@
+---
+stage: Plan
+group: Project Management
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
# Filtering by label
## Introduction
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 5c781a60bac..57959b07e49 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -209,121 +209,12 @@ To migrate the tracking database, run:
bundle exec rake geo:db:migrate
```
-### Foreign Data Wrapper
-
-> Introduced in GitLab 10.1.
-
-Foreign Data Wrapper ([FDW](#fdw)) is used by the [Geo Log Cursor](#geo-log-cursor) and improves
-the performance of many synchronization operations.
-
-FDW is a PostgreSQL extension ([`postgres_fdw`](https://www.postgresql.org/docs/11/postgres-fdw.html)) that is enabled within
-the Geo Tracking Database (on a **secondary** node), which allows it
-to connect to the read-only database replica and perform queries and filter
-data from both instances.
-
-This persistent connection is configured as an FDW server
-named `gitlab_secondary`. This configuration exists within the database's user
-context only. To access the `gitlab_secondary`, GitLab needs to use the
-same database user that had previously been configured.
-
-The Geo Tracking Database accesses the read-only database replica via FDW as a regular user,
-limited by its own restrictions. The credentials are configured as a
-`USER MAPPING` associated with the `SERVER` mapped previously
-(`gitlab_secondary`).
-
-FDW configuration and credentials definition are managed automatically by the
-Omnibus GitLab `gitlab-ctl reconfigure` command.
-
-#### Refreshing the Foreign Tables
-
-Whenever a new Geo node is configured or the database schema changes on the
-**primary** node, you must refresh the foreign tables on the **secondary** node
-by running the following:
-
-```shell
-bundle exec rake geo:db:refresh_foreign_tables
-```
-
-Failure to do this will prevent the **secondary** node from
-functioning properly. The **secondary** node will generate error
-messages, as the following PostgreSQL error:
-
-```sql
-ERROR: relation "gitlab_secondary.ci_job_artifacts" does not exist at character 323
-STATEMENT: SELECT a.attname, format_type(a.atttypid, a.atttypmod),
- pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod
- FROM pg_attribute a LEFT JOIN pg_attrdef d
- ON a.attrelid = d.adrelid AND a.attnum = d.adnum
- WHERE a.attrelid = '"gitlab_secondary"."ci_job_artifacts"'::regclass
- AND a.attnum > 0 AND NOT a.attisdropped
- ORDER BY a.attnum
-```
-
-#### Accessing data from a Foreign Table
-
-At the SQL level, all you have to do is `SELECT` data from `gitlab_secondary.*`.
-
-Here's an example of how to access all projects from the Geo Tracking Database's FDW:
-
-```sql
-SELECT * FROM gitlab_secondary.projects;
-```
-
-As a more real-world example, this is how you filter for unarchived projects
-on the Tracking Database:
-
-```sql
-SELECT project_registry.*
- FROM project_registry
- JOIN gitlab_secondary.projects
- ON (project_registry.project_id = gitlab_secondary.projects.id
- AND gitlab_secondary.projects.archived IS FALSE)
-```
-
-At the ActiveRecord level, we have additional Models that represent the
-foreign tables. They must be mapped in a slightly different way, and they are read-only.
-
-Check the existing FDW models in `ee/app/models/geo/fdw` for reference.
-
-From a developer's perspective, it's no different than creating a model that
-represents a Database View.
-
-With the examples above, you can access the projects with:
-
-```ruby
-Geo::Fdw::Project.all
-```
-
-and to access the `ProjectRegistry` filtering by unarchived projects:
-
-```ruby
-# We have to use Arel here:
-project_registry_table = Geo::ProjectRegistry.arel_table
-fdw_project_table = Geo::Fdw::Project.arel_table
-
-project_registry_table.join(fdw_project_table)
- .on(project_registry_table[:project_id].eq(fdw_project_table[:id]))
- .where((fdw_project_table[:archived]).eq(true)) # if you append `.to_sql` you can check generated query
-```
-
## Finders
Geo uses [Finders](https://gitlab.com/gitlab-org/gitlab/tree/master/app/finders),
which are classes that take care of the heavy lifting of looking up
projects/attachments/etc. in the tracking database and main database.
-### Finders Performance
-
-The Finders need to compare data from the main database with data in
-the tracking database. For example, counting the number of synced
-projects normally involves retrieving the project IDs from one
-database and checking their state in the other database. This is slow
-and requires a lot of memory.
-
-To overcome this, the Finders use [FDW](#fdw), or Foreign Data
-Wrappers. This allows a regular `JOIN` between the main database and
-the tracking database.
-
## Redis
Redis on the **secondary** node works the same as on the **primary**
@@ -397,12 +288,6 @@ migration do not need to run on the secondary nodes.
A database on each Geo **secondary** node that keeps state for the node
on which it resides. Read more in [Using the Tracking database](#using-the-tracking-database).
-### FDW
-
-Foreign Data Wrapper, or FDW, is a feature built-in in PostgreSQL. It
-allows data to be queried from different data sources. In Geo, it's
-used to query data from different PostgreSQL instances.
-
## Geo Event Log
The Geo **primary** stores events in the `geo_event_log` table. Each
diff --git a/doc/development/geo/framework.md b/doc/development/geo/framework.md
index de4d6fb869d..64c9030e3dd 100644
--- a/doc/development/geo/framework.md
+++ b/doc/development/geo/framework.md
@@ -90,6 +90,14 @@ module Geo
def self.model
::Packages::PackageFile
end
+
+ # Change this to `true` to release replication of this model. Then remove
+ # this override in the next release.
+ # The feature flag follows the format `geo_#{replicable_name}_replication`,
+ # so here it would be `geo_package_file_replication`
+ def self.replication_enabled_by_default?
+ false
+ end
end
end
```
@@ -180,6 +188,11 @@ For example, to add support for files referenced by a `Widget` model with a
mount_uploader :file, WidgetUploader
+ def local?
+ # Must be implemented. Check the uploader's storage type.
+ file_store == ObjectStorage::Store::LOCAL
+ end
+
def self.replicables_for_geo_node
# Should be implemented. The idea of the method is to restrict
# the set of synced items depending on synchronization settings
@@ -206,10 +219,30 @@ For example, to add support for files referenced by a `Widget` model with a
def carrierwave_uploader
model_record.file
end
+
+ # Change this to `true` to release replication of this model. Then remove
+ # this override in the next release.
+ # The feature flag follows the format `geo_#{replicable_name}_replication`,
+ # so here it would be `geo_widget_replication`
+ def self.replication_enabled_by_default?
+ false
+ end
end
end
```
+1. Add this replicator class to the method `replicator_classes` in
+`ee/lib/gitlab/geo.rb`:
+
+ ```ruby
+ def self.replicator_classes
+ classes = [::Geo::PackageFileReplicator,
+ ::Geo::WidgetReplicator]
+
+ classes.select(&:enabled?)
+ end
+ ```
+
1. Create `ee/spec/replicators/geo/widget_replicator_spec.rb` and perform
the setup necessary to define the `model_record` variable for the shared
examples.
@@ -226,13 +259,15 @@ For example, to add support for files referenced by a `Widget` model with a
end
```
-1. Create the `widget_registry` table so Geo secondaries can track the sync and
- verification state of each Widget's file:
+1. Create the `widget_registry` table, with columns ordered according to [our guidelines](../ordering_table_columns.md) so Geo secondaries can track the sync and
+ verification state of each Widget's file. This migration belongs in `ee/db/geo/migrate`:
```ruby
# frozen_string_literal: true
class CreateWidgetRegistry < ActiveRecord::Migration[6.0]
+ include Gitlab::Database::MigrationHelpers
+
DOWNTIME = false
disable_ddl_transaction!
@@ -244,12 +279,12 @@ For example, to add support for files referenced by a `Widget` model with a
t.integer :widget_id, null: false
t.integer :state, default: 0, null: false, limit: 2
t.integer :retry_count, default: 0, limit: 2
- t.text :last_sync_failure
t.datetime_with_timezone :retry_at
t.datetime_with_timezone :last_synced_at
t.datetime_with_timezone :created_at, null: false
+ t.text :last_sync_failure
- t.index :widget_id
+ t.index :widget_id, name: :index_widget_registry_on_widget_id
t.index :retry_at
t.index :state
end
@@ -286,6 +321,8 @@ For example, to add support for files referenced by a `Widget` model with a
1. Update `REGISTRY_CLASSES` in `ee/app/workers/geo/secondary/registry_consistency_worker.rb`.
+1. Add `widget_registry` to `ActiveSupport::Inflector.inflections` in `config/initializers_before_autoloader/000_inflections.rb`.
+
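+   For example (a sketch; mirror the existing entries in that file):
+
+   ```ruby
+   # Registry table names are uncountable, so Rails does not try to
+   # singularize `widget_registry` when looking up the model.
+   ActiveSupport::Inflector.inflections(:en) do |inflect|
+     inflect.uncountable %w(widget_registry)
+   end
+   ```
+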
1. Create `ee/spec/factories/geo/widget_registry.rb`:
```ruby
@@ -343,8 +380,13 @@ Widgets should now be replicated by Geo!
#### Verification
-1. Add verification state fields to the `widgets` table so the Geo primary can
- track verification state:
+There are two ways to add verification related fields so that the Geo primary
+can track verification state:
+
+##### Option 1: Add verification state fields to the existing `widgets` table itself
+
+1. Add a migration that adds verification state columns, ordered according to
+   [our guidelines](../ordering_table_columns.md), to the `widgets` table:
```ruby
# frozen_string_literal: true
@@ -353,17 +395,39 @@ Widgets should now be replicated by Geo!
DOWNTIME = false
def change
- add_column :widgets, :verification_retry_at, :datetime_with_timezone
- add_column :widgets, :verified_at, :datetime_with_timezone
- add_column :widgets, :verification_checksum, :binary, using: 'verification_checksum::bytea'
- add_column :widgets, :verification_failure, :string
- add_column :widgets, :verification_retry_count, :integer
+ change_table(:widgets) do |t|
+ t.integer :verification_retry_count, limit: 2
+ t.column :verification_retry_at, :datetime_with_timezone
+ t.column :verified_at, :datetime_with_timezone
+ t.binary :verification_checksum, using: 'verification_checksum::bytea'
+
+ # rubocop:disable Migration/AddLimitToTextColumns
+ t.text :verification_failure
+ # rubocop:enable Migration/AddLimitToTextColumns
+ end
+ end
+ end
+ ```
+
+1. Adding a `text` column also [requires](../database/strings_and_the_text_data_type.md#add-a-text-column-to-an-existing-table)
+ setting a limit:
+
+ ```ruby
+ # frozen_string_literal: true
+
+ class AddVerificationFailureLimitToWidgets < ActiveRecord::Migration[6.0]
+ DOWNTIME = false
+
+ disable_ddl_transaction!
+
+ def change
+ add_text_limit :widgets, :verification_failure, 255
end
end
```
1. Add a partial index on `verification_failure` and `verification_checksum` to ensure
- re-verification can be performed efficiently:
+ re-verification can be performed efficiently. Add a migration in `ee/db/geo/migrate/`:
```ruby
# frozen_string_literal: true
@@ -387,6 +451,65 @@ Widgets should now be replicated by Geo!
end
```
+##### Option 2: Create a separate `widget_states` table with verification state fields
+
+1. Add a migration in `ee/db/geo/migrate/` to create a `widget_states` table and add a
+ partial index on `verification_failure` and `verification_checksum` to ensure
+ re-verification can be performed efficiently. Order the columns according to [our guidelines](../ordering_table_columns.md):
+
+ ```ruby
+ # frozen_string_literal: true
+
+ class CreateWidgetStates < ActiveRecord::Migration[6.0]
+ include Gitlab::Database::MigrationHelpers
+
+ DOWNTIME = false
+
+ disable_ddl_transaction!
+
+ def up
+ unless table_exists?(:widget_states)
+ with_lock_retries do
+ create_table :widget_states, id: false do |t|
+ t.references :widget, primary_key: true, null: false, foreign_key: { on_delete: :cascade }
+ t.datetime_with_timezone :verification_retry_at
+ t.datetime_with_timezone :verified_at
+ t.integer :verification_retry_count, limit: 2
+ t.binary :verification_checksum, using: 'verification_checksum::bytea'
+ t.text :verification_failure
+
+ t.index :verification_failure, where: "(verification_failure IS NOT NULL)", name: "widgets_verification_failure_partial"
+ t.index :verification_checksum, where: "(verification_checksum IS NOT NULL)", name: "widgets_verification_checksum_partial"
+ end
+ end
+ end
+
+ add_text_limit :widget_states, :verification_failure, 255
+ end
+
+ def down
+ drop_table :widget_states
+ end
+ end
+ ```
+
+1. Add the following lines to the `Widget` model:
+
+ ```ruby
+ class Widget < ApplicationRecord
+ ...
+ has_one :widget_state, inverse_of: :widget
+
+ delegate :verification_retry_at, :verification_retry_at=,
+ :verified_at, :verified_at=,
+ :verification_checksum, :verification_checksum=,
+ :verification_failure, :verification_failure=,
+ :verification_retry_count, :verification_retry_count=,
+ to: :widget_state
+ ...
+ end
+ ```
+
To do: Add verification on secondaries. This should be done as part of
[Geo: Self Service Framework - First Implementation for Package File verification](https://gitlab.com/groups/gitlab-org/-/epics/1817)
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index 779dcd9b7a2..6954b2bd6dc 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -108,13 +108,13 @@ lint:
- '[ -e .golangci.yml ] || cp /golangci/.golangci.yml .'
# Write the code coverage report to gl-code-quality-report.json
# and print linting issues to stdout in the format: path/to/file:line description
- - golangci-lint run --out-format code-climate | tee gl-code-quality-report.json | jq -r '.[] | "\(.location.path):\(.location.lines.begin) \(.description)"'
+ # remove `--issues-exit-code 0` or set to non-zero to fail the job if linting issues are detected
+ - golangci-lint run --issues-exit-code 0 --out-format code-climate | tee gl-code-quality-report.json | jq -r '.[] | "\(.location.path):\(.location.lines.begin) \(.description)"'
artifacts:
reports:
codequality: gl-code-quality-report.json
paths:
- gl-code-quality-report.json
- allow_failure: true
```
Including a `.golangci.yml` in the root directory of the project allows for
diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md
index 34fe3f11489..6dff9deb59d 100644
--- a/doc/development/gotchas.md
+++ b/doc/development/gotchas.md
@@ -238,3 +238,34 @@ This problem will disappear as soon as we upgrade to Rails 6 and use the Zeitwer
- Rails Guides: [Autoloading and Reloading Constants (Classic Mode)](https://guides.rubyonrails.org/autoloading_and_reloading_constants_classic_mode.html)
- Ruby Constant lookup: [Everything you ever wanted to know about constant lookup in Ruby](http://cirw.in/blog/constant-lookup)
- Rails 6 and Zeitwerk autoloader: [Understanding Zeitwerk in Rails 6](https://medium.com/cedarcode/understanding-zeitwerk-in-rails-6-f168a9f09a1f)
+
+## Storing assets that do not require pre-compiling
+
+Assets that need to be served to the user are stored under the `app/assets` directory, which is later pre-compiled and placed in the `public/` directory.
+
+However, you cannot access the content of any file in `app/assets` from the application code, because we do not include that folder in production installations as a [space-saving measure](https://gitlab.com/gitlab-org/omnibus-gitlab/-/commit/ca049f990b223f5e1e412830510a7516222810be).
+
+```ruby
+support_bot = User.support_bot
+
+# accessing a file from the `app/assets` folder
+support_bot.avatar = Rails.root.join('app', 'assets', 'images', 'bot_avatars', 'support_bot.png').open
+
+support_bot.save!
+```
+
+While the code above works in local environments, it errors out in production installations as the `app/assets` folder is not included.
+
+### Solution
+
+The alternative is the `lib/assets` folder. Use it if you need to add assets (like images) to the repo that meet the following conditions:
+
+- The assets do not need to be directly served to the user (and hence need not be pre-compiled).
+- The assets do need to be accessed via application code.
+
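+For illustration, the earlier snippet could read the image from `lib/assets` instead
+(a sketch; the exact path is hypothetical):
+
+```ruby
+support_bot = User.support_bot
+
+# `lib/assets` is shipped with production installations, so this read works there too.
+support_bot.avatar = Rails.root.join('lib', 'assets', 'images', 'bot_avatars', 'support_bot.png').open
+
+support_bot.save!
+```
+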
+In short:
+
+- Use `app/assets` for storing any asset that needs to be precompiled and served to the end user.
+- Use `lib/assets` for storing any asset that does not need to be served to the end user directly, but still needs to be accessed by the application code.
+
+MR for reference: [!37671](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37671)
diff --git a/doc/development/graphql_guide/index.md b/doc/development/graphql_guide/index.md
new file mode 100644
index 00000000000..62650f7cd3f
--- /dev/null
+++ b/doc/development/graphql_guide/index.md
@@ -0,0 +1,13 @@
+# GraphQL development guidelines
+
+This guide contains all the information to successfully contribute to GitLab's
+GraphQL API. This is a living document, and we welcome contributions,
+feedback, and suggestions.
+
+## Resources
+
+- [GraphQL API development style guide](../api_graphql_styleguide.md): development style guide for
+ GraphQL.
+- [GraphQL API documentation style guide](../documentation/styleguide.md#graphql-api): documentation
+ style guide for GraphQL.
+- [GraphQL API](../../api/graphql/index.md): user documentation for the GitLab GraphQL API.
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index bce95dfc24e..1374ca92256 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -196,7 +196,7 @@ For example use `%{created_at}` in Ruby but `%{createdAt}` in JavaScript. Make s
// => 'This is &lt;strong&gt;&lt;script&gt;alert(&#x27;evil&#x27;)&lt;/script&gt;&lt;/strong&gt;'
// OK:
- sprintf(__('This is %{value}'), { value: `<strong>${escape(someDynamicValue)}</strong>`, false);
+ sprintf(__('This is %{value}'), { value: `<strong>${escape(someDynamicValue)}</strong>` }, false);
// => 'This is <strong>&lt;script&gt;alert(&#x27;evil&#x27;)&lt;/script&gt;</strong>'
```
@@ -296,6 +296,74 @@ Namespaces should be PascalCase.
Note: The namespace should be removed from the translation. See the [translation
guidelines for more details](translation.md#namespaced-strings).
+### HTML
+
+We no longer include HTML directly in the strings that are submitted for translation. This is for a couple of reasons:
+
+1. It introduces a chance for the translated string to accidentally include invalid HTML.
+1. It introduces a security risk where translated strings become an attack vector for XSS, as noted by the
+ [Open Web Application Security Project (OWASP)](https://owasp.org/www-community/attacks/xss/).
+
+To include formatting in the translated string, we can do the following:
+
+- In Ruby/HAML:
+
+ ```ruby
+ html_escape(_('Some %{strongOpen}bold%{strongClose} text.')) % { strongOpen: '<strong>'.html_safe, strongClose: '</strong>'.html_safe }
+
+ # => 'Some <strong>bold</strong> text.'
+ ```
+
+- In JavaScript:
+
+ ```javascript
+ sprintf(__('Some %{strongOpen}bold%{strongClose} text.'), { strongOpen: '<strong>', strongClose: '</strong>'}, false);
+
+ // => 'Some <strong>bold</strong> text.'
+ ```
+
+- In Vue:
+
+ See the section on [interpolation](#interpolation).
+
+When [this translation helper issue](https://gitlab.com/gitlab-org/gitlab/-/issues/217935) is complete, we'll update the
+process of including formatting in translated strings.
+
+#### Including Angle Brackets
+
+If a string contains angle brackets (`<`/`>`) that are not used for HTML, it will still be flagged by the
+`rake gettext:lint` linter.
+To avoid this error, use the applicable HTML entity code (`&lt;` or `&gt;`) instead:
+
+- In Ruby/HAML:
+
+ ```ruby
+ html_escape_once(_('In &lt; 1 hour')).html_safe
+
+ # => 'In < 1 hour'
+ ```
+
+- In JavaScript:
+
+ ```javascript
+ import sanitize from 'sanitize-html';
+
+ const i18n = { LESS_THAN_ONE_HOUR: sanitize(__('In &lt; 1 hour'), { allowedTags: [] }) };
+
+ // ... using the string
+ element.innerHTML = i18n.LESS_THAN_ONE_HOUR;
+
+ // => 'In < 1 hour'
+ ```
+
+- In Vue:
+
+ ```vue
+ <gl-sprintf :message="s__('In &lt; 1 hour')"/>
+
+ // => 'In < 1 hour'
+ ```
+
### Dates / times
- In JavaScript:
@@ -555,6 +623,7 @@ The linter will take the following into account:
- There should be no variables used in a translation that aren't in the
message ID
- Errors during translation.
+- Presence of angle brackets (`<` or `>`)
The errors are grouped per file, and per message ID:
@@ -567,7 +636,7 @@ Errors in `locale/zh_HK/gitlab.po`:
Syntax error in msgstr
Syntax error in message_line
There should be only whitespace until the end of line after the double quote character of a message text.
- Parsing result before error: '{:msgid=>["", "You are going to remove %{project_name_with_namespace}.\\n", "Removed project CANNOT be restored!\\n", "Are you ABSOLUTELY sure?"]}'
+ Parsing result before error: '{:msgid=>["", "You are going to delete %{project_name_with_namespace}.\\n", "Deleted projects CANNOT be restored!\\n", "Are you ABSOLUTELY sure?"]}'
SimplePoParser filtered backtrace: SimplePoParser::ParserError
Errors in `locale/zh_TW/gitlab.po`:
1 pipeline
@@ -581,6 +650,10 @@ aren't in the message with ID `1 pipeline`.
## Adding a new language
+NOTE: **Note:**
+[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/221012) in GitLab 13.3:
+Languages with less than 2% of translations won't be available in the UI.
+
Suppose you want to add translations for a new language, for example French.
1. The first step is to register the new language in `lib/gitlab/i18n.rb`:
diff --git a/doc/development/i18n/merging_translations.md b/doc/development/i18n/merging_translations.md
index e0fcaddda4c..8b3357c41d3 100644
--- a/doc/development/i18n/merging_translations.md
+++ b/doc/development/i18n/merging_translations.md
@@ -25,14 +25,15 @@ suggesting to automate this process. Disapproving will exclude the
invalid translation, the merge request will be updated within a few
minutes.
+If the translation has failed validation due to angle brackets (`<` or `>`),
+it should be disapproved on CrowdIn, as our strings should use
+[variables](externalization.md#html) for HTML instead.
+
It might be handy to pause the integration on the CrowdIn side for a
little while so translations don't keep coming. This can be done by
clicking `Pause sync` on the [CrowdIn integration settings
page](https://translate.gitlab.com/project/gitlab-ee/settings#integration).
-When all failures are resolved, the translations need to be double
-checked once more as discussed in [confidential issue](../../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab/-/issues/19485`.
-
## Merging translations
When all translations are found good and pipelines pass the
@@ -56,7 +57,7 @@ have been fixed on master.
## Recreate the GitLab integration in CrowdIn
-NOTE: ***Note:**
+NOTE: **Note:**
These instructions work only for GitLab Team Members.
If for some reason the GitLab integration in CrowdIn does not exist, it can be
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index 935a171f34c..9d2997379c1 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -115,9 +115,10 @@ are very appreciative of the work done by translators and proofreaders!
## Become a proofreader
-> **Note:** Before requesting Proofreader permissions in CrowdIn please make
-> sure that you have a history of contributing translations to the GitLab
-> project.
+NOTE: **Note:**
+Before requesting Proofreader permissions in CrowdIn, please make
+sure that you have a history of contributing translations to the GitLab
+project.
1. Contribute translations to GitLab. See instructions for
[translating GitLab](translation.md).
diff --git a/doc/development/img/bullet_v13_0.png b/doc/development/img/bullet_v13_0.png
index 04b476db581..e185bdef76d 100644
--- a/doc/development/img/bullet_v13_0.png
+++ b/doc/development/img/bullet_v13_0.png
Binary files differ
diff --git a/doc/development/img/deployment_pipeline_v13_3.png b/doc/development/img/deployment_pipeline_v13_3.png
new file mode 100644
index 00000000000..e4fd470fc3d
--- /dev/null
+++ b/doc/development/img/deployment_pipeline_v13_3.png
Binary files differ
diff --git a/doc/development/img/telemetry_system_overview.png b/doc/development/img/telemetry_system_overview.png
index 1667039e8cd..f2e6b300e94 100644
--- a/doc/development/img/telemetry_system_overview.png
+++ b/doc/development/img/telemetry_system_overview.png
Binary files differ
diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md
index ee1aab1456e..e420ae0c54f 100644
--- a/doc/development/instrumentation.md
+++ b/doc/development/instrumentation.md
@@ -1,3 +1,9 @@
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Instrumenting Ruby code
[GitLab Performance Monitoring](../administration/monitoring/performance/index.md) allows instrumenting of both methods and custom
diff --git a/doc/development/integrations/elasticsearch_for_paid_tiers_on_gitlabcom.md b/doc/development/integrations/elasticsearch_for_paid_tiers_on_gitlabcom.md
deleted file mode 100644
index 8289be47253..00000000000
--- a/doc/development/integrations/elasticsearch_for_paid_tiers_on_gitlabcom.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Elasticsearch for paid tiers on GitLab.com
-
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/220246) in GitLab 13.2
-> - It's deployed behind a feature flag, disabled by default.
-> - It's disabled on GitLab.com.
-> - It's not recommended for use in GitLab self-managed instances.
-
-This document describes how to enable Elasticsearch with GitLab for all paid tiers on GitLab.com. Once enabled,
-all paid tiers will have access to the [Advanced Global Search feature](../../integration/elasticsearch.md) on GitLab.com.
-
-## Enable or disable Elasticsearch for all paid tiers on GitLab.com
-
-Since we're still in the process of rolling this out and want to control the timing this is behind a feature flag
-which defaults to off.
-
-To enable it:
-
-```ruby
-# Instance-wide
-Feature.enable(:elasticsearch_index_only_paid_groups)
-```
-
-To disable it:
-
-```ruby
-# Instance-wide
-Feature.disable(:elasticsearch_index_only_paid_groups)
-```
diff --git a/doc/development/integrations/example_vuln.png b/doc/development/integrations/example_vuln.png
index f7a3c8b38f2..bbfeb3c805f 100644
--- a/doc/development/integrations/example_vuln.png
+++ b/doc/development/integrations/example_vuln.png
Binary files differ
diff --git a/doc/development/integrations/jenkins.md b/doc/development/integrations/jenkins.md
index f2bc6532dde..5d3c869d922 100644
--- a/doc/development/integrations/jenkins.md
+++ b/doc/development/integrations/jenkins.md
@@ -16,7 +16,7 @@ brew services start jenkins
GitLab does not allow requests to localhost or the local network by default. When running Jenkins on your local machine, you need to enable local access.
1. Log into your GitLab instance as an admin.
-1. Go to **{admin}** **Admin Area > Settings > Network**.
+1. Go to **Admin Area > Settings > Network**.
1. Expand **Outbound requests** and check the following checkboxes:
- **Allow requests to the local network from web hooks and services**
diff --git a/doc/development/integrations/jira_connect.md b/doc/development/integrations/jira_connect.md
index 374cc976caa..1f9b03075f0 100644
--- a/doc/development/integrations/jira_connect.md
+++ b/doc/development/integrations/jira_connect.md
@@ -11,7 +11,7 @@ The following are required to install and test the app:
For the app to work, Jira Cloud should be able to connect to the GitLab instance through the internet.
To easily expose your local development environment, you can use tools like
- [serveo](https://medium.com/testautomator/how-to-forward-my-local-port-to-public-using-serveo-4979f352a3bf)
+ [serveo](https://medium.com/automationmaster/how-to-forward-my-local-port-to-public-using-serveo-4979f352a3bf)
or [ngrok](https://ngrok.com). These also take care of SSL for you because Jira
requires all connections to the app host to be over SSL.
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 22da57400e0..21076fa681f 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -358,7 +358,7 @@ It is recommended to reuse the identifiers the GitLab scanners already define:
| [CVE](https://cve.mitre.org/cve/) | `cve` | CVE-2019-10086 |
| [CWE](https://cwe.mitre.org/data/index.html) | `cwe` | CWE-1026 |
| [OSVD](https://cve.mitre.org/data/refs/refmap/source-OSVDB.html) | `osvdb` | OSVDB-113928 |
-| [USN](https://usn.ubuntu.com/) | `usn` | USN-4234-1 |
+| [USN](https://ubuntu.com/security/notices) | `usn` | USN-4234-1 |
| [WASC](http://projects.webappsec.org/Threat-Classification-Reference-Grid) | `wasc` | WASC-19 |
| [RHSA](https://access.redhat.com/errata/#/) | `rhsa` | RHSA-2020:0111 |
| [ELSA](https://linux.oracle.com/security/) | `elsa` | ELSA-2020-0085 |
diff --git a/doc/development/internal_api.md b/doc/development/internal_api.md
index d220a2d46fb..c51bf66be46 100644
--- a/doc/development/internal_api.md
+++ b/doc/development/internal_api.md
@@ -1,3 +1,10 @@
+---
+stage: Create
+group: Source Code
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+type: reference, api
+---
+
# Internal API
The internal API is used by different GitLab components; it cannot be
@@ -24,10 +31,11 @@ authentication.
## Git Authentication
-This is called by Gitaly and GitLab-shell to check access to a
+This is called by [Gitaly](https://gitlab.com/gitlab-org/gitaly) and
+[GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell) to check access to a
repository.
-When called from GitLab-shell no changes are passed and the internal
+When called from GitLab Shell no changes are passed and the internal
API replies with the information needed to pass the request on to
Gitaly.
@@ -40,13 +48,13 @@ POST /internal/allowed
| Attribute | Type | Required | Description |
|:----------|:-------|:---------|:------------|
-| `key_id` | string | no | ID of the SSH-key used to connect to GitLab-shell |
-| `username` | string | no | Username from the certificate used to connect to GitLab-Shell |
+| `key_id` | string | no | ID of the SSH-key used to connect to GitLab Shell |
+| `username` | string | no | Username from the certificate used to connect to GitLab Shell |
| `project` | string | no (if `gl_repository` is passed) | Path to the project |
| `gl_repository` | string | no (if `project` is passed) | Repository identifier (e.g. `project-7`) |
| `protocol` | string | yes | SSH when called from GitLab-shell, HTTP or SSH when called from Gitaly |
| `action` | string | yes | Git command being run (`git-upload-pack`, `git-receive-pack`, `git-upload-archive`) |
-| `changes` | string | yes | `<oldrev> <newrev> <refname>` when called from Gitaly, The magic string `_any` when called from GitLab Shell |
+| `changes` | string | yes | `<oldrev> <newrev> <refname>` when called from Gitaly, the magic string `_any` when called from GitLab Shell |
| `check_ip` | string | no | IP address from which call to GitLab Shell was made |
Example request:
@@ -84,17 +92,17 @@ Example response:
### Known consumers
- Gitaly
-- GitLab-shell
+- GitLab Shell
## LFS Authentication
-This is the endpoint that gets called from GitLab-shell to provide
+This is the endpoint that gets called from GitLab Shell to provide
information for LFS clients when the repository is accessed over SSH.
| Attribute | Type | Required | Description |
|:----------|:-------|:---------|:------------|
-| `key_id` | string | no | ID of the SSH-key used to connect to GitLab-shell |
-| `username`| string | no | Username from the certificate used to connect to GitLab-Shell |
+| `key_id` | string | no | ID of the SSH-key used to connect to GitLab Shell |
+| `username`| string | no | Username from the certificate used to connect to GitLab Shell |
| `project` | string | no | Path to the project |
Example request:
@@ -114,17 +122,17 @@ curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --da
### Known consumers
-- GitLab-shell
+- GitLab Shell
## Authorized Keys Check
-This endpoint is called by the GitLab-shell authorized keys
+This endpoint is called by the GitLab Shell authorized keys
check, which is called by OpenSSH for [fast SSH key
lookup](../administration/operations/fast_ssh_key_lookup.md).
| Attribute | Type | Required | Description |
|:----------|:-------|:---------|:------------|
-| `key` | string | yes | SSH key as passed by OpenSSH to GitLab-shell |
+| `key` | string | yes | SSH key as passed by OpenSSH to GitLab Shell |
```plaintext
GET /internal/authorized_keys
@@ -149,7 +157,7 @@ Example response:
### Known consumers
-- GitLab-shell
+- GitLab Shell
## Get user for user ID or key
@@ -159,7 +167,7 @@ discovers the user associated with an SSH key.
| Attribute | Type | Required | Description |
|:----------|:-------|:---------|:------------|
| `key_id` | integer | no | The ID of the SSH key used as found in the authorized-keys file or through the `/authorized_keys` check |
-| `username` | string | no | Username of the user being looked up, used by GitLab-shell when authenticating using a certificate |
+| `username` | string | no | Username of the user being looked up, used by GitLab Shell when authenticating using a certificate |
```plaintext
GET /internal/discover
@@ -183,12 +191,12 @@ Example response:
### Known consumers
-- GitLab-shell
+- GitLab Shell
## Instance information
This gets some generic information about the instance. This is used
-by Geo nodes to get information about each other
+by Geo nodes to get information about each other.
```plaintext
GET /internal/check
@@ -214,12 +222,12 @@ Example response:
### Known consumers
- GitLab Geo
-- GitLab-shell's `bin/check`
+- GitLab Shell's `bin/check`
## Get new 2FA recovery codes using an SSH key
-This is called from GitLab-shell and allows users to get new 2FA
-recovery codes based on their SSH key
+This is called from GitLab Shell and allows users to get new 2FA
+recovery codes based on their SSH key.
| Attribute | Type | Required | Description |
|:----------|:-------|:---------|:------------|
@@ -258,7 +266,45 @@ Example response:
### Known consumers
-- GitLab-shell
+- GitLab Shell
+
+## Get new personal access token
+
+This is called from GitLab Shell and allows users to generate a new
+personal access token.
+
+| Attribute | Type | Required | Description |
+|:----------|:-------|:---------|:------------|
+| `name` | string | yes | The name of the new token |
+| `scopes` | string array | yes | The authorization scopes for the new token; these must be valid token scopes |
+| `expires_at` | string | no | The expiry date for the new token |
+| `key_id` | integer | no | The ID of the SSH key used as found in the authorized-keys file or through the `/authorized_keys` check |
+| `user_id` | integer | no | ID of the user for whom to generate the new token |
+
+```plaintext
+POST /internal/personal_access_token
+```
+
+Example request:
+
+```shell
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "user_id=29&name=mytokenname&scopes[]=read_user&scopes[]=read_repository&expires_at=2020-07-24" http://localhost:3001/api/v4/internal/personal_access_token
+```
+
+Example response:
+
+```json
+{
+ "success": true,
+ "token": "Hf_79B288hRv_3-TSD1R",
+ "scopes": ["read_user","read_repository"],
+ "expires_at": "2020-07-24"
+}
+```
+
+### Known consumers
+
+- GitLab Shell
## Incrementing counter on pre-receive
diff --git a/doc/development/issuable-like-models.md b/doc/development/issuable-like-models.md
index d252735dbd8..9029886c334 100644
--- a/doc/development/issuable-like-models.md
+++ b/doc/development/issuable-like-models.md
@@ -1,3 +1,8 @@
+---
+stage: Plan
+group: Project Management
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Issuable-like Rails models utilities
GitLab Rails codebase contains several models that hold common functionality and behave similarly to
@@ -9,9 +14,9 @@ This guide accumulates guidelines on working with such Rails models.
## Important text fields
-There are max length constraints for the most important text fields for `Issuable`s:
+There are maximum length constraints for the most important text fields for issuables:
-- `title`: 255 chars
-- `title_html`: 800 chars
+- `title`: 255 characters
+- `title_html`: 800 characters
- `description`: 1 megabyte
- `description_html`: 5 megabytes
diff --git a/doc/development/issue_types.md b/doc/development/issue_types.md
index 028d42b27fc..416aa65b13f 100644
--- a/doc/development/issue_types.md
+++ b/doc/development/issue_types.md
@@ -1,3 +1,9 @@
+---
+stage: Plan
+group: Project Management
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Issue Types
Sometimes when a new resource type is added it's not clear if it should be only an
diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md
index 9e0e686f447..64089d51c7b 100644
--- a/doc/development/kubernetes.md
+++ b/doc/development/kubernetes.md
@@ -1,3 +1,9 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Kubernetes integration - development guidelines
This document provides various guidelines when developing for GitLab's
diff --git a/doc/development/licensed_feature_availability.md b/doc/development/licensed_feature_availability.md
index 4f962b6f5e2..cf479544eea 100644
--- a/doc/development/licensed_feature_availability.md
+++ b/doc/development/licensed_feature_availability.md
@@ -8,7 +8,7 @@ on-premise or GitLab.com plans and features.
GitLab.com plans are persisted on user groups and namespaces, therefore, if you're adding a
feature such as [Related issues](../user/project/issues/related_issues.md) or
-[Service desk](../user/project/service_desk.md),
+[Service Desk](../user/project/service_desk.md),
it should be restricted on namespace scope.
1. Add the feature symbol on `EES_FEATURES`, `EEP_FEATURES` or `EEU_FEATURES` constants in
diff --git a/doc/development/logging.md b/doc/development/logging.md
index ccd4adf7cef..474a500da61 100644
--- a/doc/development/logging.md
+++ b/doc/development/logging.md
@@ -1,3 +1,9 @@
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# GitLab Developers Guide to Logging
[GitLab Logs](../administration/logs.md) play a critical role for both
@@ -281,7 +287,7 @@ method or variable shouldn't be evaluated right away)
See our [HOWTO: Use Sidekiq metadata logs](https://www.youtube.com/watch?v=_wDllvO_IY0) for further knowledge on
creating visualizations in Kibana.
-**Note:**
+NOTE: **Note:**
The fields of the context are currently only logged for Sidekiq jobs triggered
through web requests. See the
[follow-up work](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/68)
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index ed6d08a9894..b3829a82d59 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -211,7 +211,7 @@ For keeping transaction as minimal as possible, please consider using `AfterComm
module or `after_commit` AR hook.
Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/36154#note_247228859)
-that one request to Gitaly instance during transaction triggered a P1 issue.
+where one request to a Gitaly instance during a transaction triggered a P::1 issue.
## Eager Loading
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index d1f956f957e..ebaceac6d17 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -30,7 +30,7 @@ versions. If needed copy-paste GitLab code into the migration to make it forward
compatible.
For GitLab.com, please take into consideration that regular migrations (under `db/migrate`)
-are run before [Canary is deployed](https://about.gitlab.com/handbook/engineering/infrastructure/library/canary/#configuration-and-deployment),
+are run before [Canary is deployed](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/#configuration-and-deployment),
and post-deployment migrations (`db/post_migrate`) are run after the deployment to production has finished.
## Schema Changes
@@ -380,7 +380,8 @@ Example changes:
- `change_column_default`
- `create_table` / `drop_table`
-**Note:** `with_lock_retries` method **cannot** be used within the `change` method, you must manually define the `up` and `down` methods to make the migration reversible.
+NOTE: **Note:**
+The `with_lock_retries` method **cannot** be used within the `change` method; you must manually define the `up` and `down` methods to make the migration reversible.
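+
+For illustration, a minimal sketch of such a reversible migration (the table and column names are invented):
+
+```ruby
+class AddExampleColumn < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+
+  # `up` and `down` are defined explicitly because `with_lock_retries` cannot be used in `change`.
+  def up
+    with_lock_retries do
+      add_column :examples, :example_column, :integer
+    end
+  end
+
+  def down
+    with_lock_retries do
+      remove_column :examples, :example_column
+    end
+  end
+end
+```
+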
### How the helper method works
@@ -440,7 +441,8 @@ the `with_multiple_threads` block, instead of re-using the global connection
pool. This ensures each thread has its own connection object, and won't time
out when trying to obtain one.
-**NOTE:** PostgreSQL has a maximum amount of connections that it allows. This
+NOTE: **Note:**
+PostgreSQL has a maximum number of connections that it allows. This
limit can vary from installation to installation. As a result, it's recommended
you do not use more than 32 threads in a single migration. Usually, 4-8 threads
should be more than enough.
@@ -873,14 +875,7 @@ end
When doing so be sure to explicitly set the model's table name, so it's not
derived from the class name or namespace.
-Finally, make sure that `reset_column_information` is run in the `up` method of
-the migration for all local Models that update the database.
-
-The reason for that is that all migration classes are loaded at the beginning
-(when `db:migrate` starts), so they can get out of sync with the table schema
-they map to in case another migration updates that schema. That makes the data
-migration fail when trying to insert or make updates to the underlying table,
-as the new columns are reported as `unknown attribute` by `ActiveRecord`.
+Be aware of the limitations [when using models in migrations](#using-models-in-migrations-discouraged).
### Renaming reserved paths
@@ -906,3 +901,78 @@ _namespaces_ that have a `project_id`.
The `path` column for these rows will be renamed to their previous value followed
by an integer. For example: `users` would turn into `users0`
+
+## Using models in migrations (discouraged)
+
+The use of application models in migrations is generally discouraged. Because such models are
+[contraindicated for background migrations](background_migrations.md#isolation),
+any model you need should be declared within the migration itself.
+
+If using a model in a migration, you should first
+[clear the column cache](https://api.rubyonrails.org/classes/ActiveRecord/ModelSchema/ClassMethods.html#method-i-reset_column_information)
+using `reset_column_information`.
+
+This avoids problems where a column that you are using was altered and cached
+in a previous migration.
+
+### Example: Add a column `my_column` to the users table
+
+NOTE: **Note:**
+Do not leave out the `User.reset_column_information` command: it ensures that the old schema is dropped from the cache and that ActiveRecord loads the updated schema information.
+
+```ruby
+class AddAndSeedMyColumn < ActiveRecord::Migration[6.0]
+ class User < ActiveRecord::Base
+ self.table_name = 'users'
+ end
+
+ def up
+ User.count # Any ActiveRecord call on the model caches the column information.
+
+ add_column :users, :my_column, :integer, default: 1
+
+ User.reset_column_information # The old schema is dropped from the cache.
+ User.find_each do |user|
+ user.my_column = 42 if some_condition # ActiveRecord sees the correct schema here.
+ user.save!
+ end
+ end
+end
+```
+
+The underlying table is modified and then accessed via ActiveRecord.
+
+Note that `reset_column_information` is also needed if the table was modified in a previous, different migration,
+when both migrations are run in the same `db:migrate` process.
+
+This results in the following. Note the inclusion of `my_column`:
+
+```shell
+== 20200705232821 AddAndSeedMyColumn: migrating ==============================
+D, [2020-07-06T00:37:12.483876 #130101] DEBUG -- : (0.2ms) BEGIN
+D, [2020-07-06T00:37:12.521660 #130101] DEBUG -- : (0.4ms) SELECT COUNT(*) FROM "user"
+-- add_column(:users, :my_column, :integer, {:default=>1})
+D, [2020-07-06T00:37:12.523309 #130101] DEBUG -- : (0.8ms) ALTER TABLE "users" ADD "my_column" integer DEFAULT 1
+ -> 0.0016s
+D, [2020-07-06T00:37:12.650641 #130101] DEBUG -- : AddAndSeedMyColumn::User Load (0.7ms) SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT $1 [["LIMIT", 1000]]
+D, [2020-07-18T00:41:26.851769 #459802] DEBUG -- : AddAndSeedMyColumn::User Update (1.1ms) UPDATE "users" SET "my_column" = $1, "updated_at" = $2 WHERE "users"."id" = $3 [["my_column", 42], ["updated_at", "2020-07-17 23:41:26.849044"], ["id", 1]]
+D, [2020-07-06T00:37:12.653648 #130101] DEBUG -- : ↳ config/initializers/config_initializers_active_record_locking.rb:13:in `_update_row'
+== 20200705232821 AddAndSeedMyColumn: migrated (0.1706s) =====================
+```
+
+If you skip clearing the schema cache (`User.reset_column_information`), the column is not
+used by ActiveRecord and the intended changes are not made, leading to the result below,
+where `my_column` is missing from the query.
+
+```shell
+== 20200705232821 AddAndSeedMyColumn: migrating ==============================
+D, [2020-07-06T00:37:12.483876 #130101] DEBUG -- : (0.2ms) BEGIN
+D, [2020-07-06T00:37:12.521660 #130101] DEBUG -- : (0.4ms) SELECT COUNT(*) FROM "user"
+-- add_column(:users, :my_column, :integer, {:default=>1})
+D, [2020-07-06T00:37:12.523309 #130101] DEBUG -- : (0.8ms) ALTER TABLE "users" ADD "my_column" integer DEFAULT 1
+ -> 0.0016s
+D, [2020-07-06T00:37:12.650641 #130101] DEBUG -- : AddAndSeedMyColumn::User Load (0.7ms) SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT $1 [["LIMIT", 1000]]
+D, [2020-07-06T00:37:12.653459 #130101] DEBUG -- : AddAndSeedMyColumn::User Update (0.5ms) UPDATE "users" SET "updated_at" = $1 WHERE "users"."id" = $2 [["updated_at", "2020-07-05 23:37:12.652297"], ["id", 1]]
+D, [2020-07-06T00:37:12.653648 #130101] DEBUG -- : ↳ config/initializers/config_initializers_active_record_locking.rb:13:in `_update_row'
+== 20200705232821 AddAndSeedMyColumn: migrated (0.1706s) =====================
+```
diff --git a/doc/development/multi_version_compatibility.md b/doc/development/multi_version_compatibility.md
index aedd5c1ffb7..5ca43b9b818 100644
--- a/doc/development/multi_version_compatibility.md
+++ b/doc/development/multi_version_compatibility.md
@@ -20,6 +20,121 @@ but AJAX requests to URLs (like the GraphQL endpoint) won't match the pattern.
With this canary setup, we'd be in this mixed-versions state for an extended period of time until canary is promoted to
production and post-deployment migrations run.
+Also be aware that during a deployment to production, Web, API, and
+Sidekiq nodes are updated in parallel, but they may finish at
+different times. That means there may be a window of time when the
+application code is not in sync across the whole fleet. Changes that
+cut across Sidekiq, Web, and/or the API may [introduce unexpected
+errors until the deployment is complete](#builds-failing-due-to-varying-deployment-times-across-node-types).
+
+One way to handle this is to use a feature flag that is disabled by
+default. The feature flag can be enabled when the deployment is in a
+consistent state. However, this method of synchronization doesn't
+guarantee that customers with on-premise instances can [upgrade with
+zero downtime](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates)
+since point releases bundle many changes together. Minimizing the time
+between when versions are out of sync across the fleet may help mitigate
+errors caused by upgrades.
+
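+As a minimal sketch (the flag and method names below are invented for illustration), the cross-cutting
+behavior can be gated behind such a default-off flag in both the web and Sidekiq code paths:
+
+```ruby
+# :my_cross_cutting_change is a hypothetical, default-off feature flag.
+if Feature.enabled?(:my_cross_cutting_change)
+  run_new_behavior # placeholder for the new code path
+else
+  run_old_behavior # placeholder for the old code path
+end
+```
+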
+## Requirements for zero downtime upgrades
+
+One way to guarantee zero downtime upgrades for on-premise instances is following the
+[expand and contract pattern](https://martinfowler.com/bliki/ParallelChange.html).
+
+This means that every breaking change is broken down into three phases: expand, migrate, and contract.
+
+1. **expand**: a breaking change is introduced keeping the software backward-compatible.
+1. **migrate**: all consumers are updated to make use of the new implementation.
+1. **contract**: backward compatibility is removed.
+
+Those three phases **must be part of different milestones**, to allow zero downtime upgrades.
+
+Depending on the support level for the feature, the contract phase could be delayed until the next major release.
+
+## Expand and contract examples
+
+Route changes, changing Sidekiq worker parameters, and database migrations are all perfect examples of a breaking change.
+Let's see how we can handle them safely.
+
+### Route changes
+
+When changing routing, we must make sure that a route generated from the new version can be served by the old one, and vice versa.
+As you can see in [an example later on this page](#some-links-to-issues-and-mrs-were-broken), not doing so can lead to an outage.
+This type of change may look like an immediate switch between the two implementations. However,
+especially with the canary stage, there is an extended period of time where both versions of the code
+coexist in production, so we split the change into the phases below (see the sketch after the list).
+
+1. **expand**: a new route is added, pointing to the same controller as the old one. But nothing in the application will generate links for the new routes.
+1. **migrate**: now that every machine in the fleet can understand the new route, we can generate links with the new routing.
+1. **contract**: the old route can be safely removed. (If the old route was likely to be widely shared, like the link to a repository file, we might want to add redirects and keep the old route for a longer period.)
+
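+As a rough sketch of the expand step (the paths and controller below are invented for illustration), the new
+route simply coexists with the old one in `config/routes.rb`:
+
+```ruby
+# config/routes.rb (sketch)
+
+# expand: both paths are recognised, but the application still generates links to the old one
+get 'examples/:id/old_path', to: 'examples#show' # old route, still served
+get 'examples/:id/new_path', to: 'examples#show' # new route, not linked anywhere yet
+```
+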
+### Changing Sidekiq worker's parameters
+
+This topic is explained in detail in [Sidekiq Compatibility across Updates](sidekiq_style_guide.md#sidekiq-compatibility-across-updates).
+
+When we need to add a new parameter to a Sidekiq worker class, we can split this into the following steps:
+
+1. **expand**: the worker class adds a new parameter with a default value.
+1. **migrate**: we add the new parameter to all the invocations of the worker.
+1. **contract**: we remove the default value.
+
+At first glance, it may seem safe to bundle expand and migrate into a single milestone, but this will cause an outage if Puma restarts before Sidekiq:
+Puma enqueues jobs with an extra parameter that the old Sidekiq cannot handle.
+
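+As a minimal sketch of the expand step (the worker and argument names are invented for illustration):
+
+```ruby
+class ExampleWorker
+  include ApplicationWorker
+
+  # expand: the new argument has a default value, so jobs enqueued by the old code still work
+  def perform(example_id, new_option = nil)
+    # ...
+  end
+end
+```
+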
+### Database migrations
+
+The following graph is a simplified visual representation of a deployment; it will guide us in understanding how expand and contract is implemented in our migration strategy.
+
+There's a special consideration here. Using our post-deployment migrations framework allows us to bundle all three phases into one milestone.
+
+```mermaid
+gantt
+ title Deployment
+ dateFormat HH:mm
+
+ section Deploy box
+ Run migrations :done, migr, after schemaA, 2m
+ Run post-deployment migrations :postmigr, after mcvn , 2m
+
+ section Database
+ Schema A :done, schemaA, 00:00 , 1h
+ Schema B :crit, schemaB, after migr, 58m
+ Schema C : schemaC, after postmigr, 1h
+
+ section Machine A
+ Version N :done, mavn, 00:00 , 75m
+ Version N+1 : after mavn, 105m
+
+ section Machine B
+ Version N :done, mbvn, 00:00 , 105m
+ Version N+1 : mbdone, after mbvn, 75m
+
+ section Machine C
+ Version N :done, mcvn, 00:00 , 2h
+ Version N+1 : mbcdone, after mcvn, 1h
+```
+
+If we look at this diagram from a database point of view, we can see two migration runs feed into a single GitLab deployment:
+
+1. from `Schema A` to `Schema B`
+1. from `Schema B` to `Schema C`
+
+And these deployments align perfectly with application changes.
+
+1. At the beginning we have `Version N` on `Schema A`.
+1. Then we have a _long_ transition period with both `Version N` and `Version N+1` on `Schema B`.
+1. When we only have `Version N+1` on `Schema B` the schema changes again.
+1. Finally we have `Version N+1` on `Schema C`.
+
+With all those details in mind, let's imagine we need to replace a query, and this query has an index to support it.
+
+1. **expand**: this is the `Schema A` to `Schema B` deployment. We add the new index, but the application ignores it for now.
+1. **migrate**: this is the `Version N` to `Version N+1` application deployment. The new code is deployed; at this point only the new query runs.
+1. **contract**: from `Schema B` to `Schema C` (post-deployment migration). Nothing uses the old index anymore, so we can safely remove it.
+
+This is only an example. More complex migrations, especially when background migrations are needed, will
+still require more than one milestone. For details, please refer to our [migration style guide](migration_style_guide.md).
+
## Examples of previous incidents
### Some links to issues and MRs were broken
@@ -60,3 +175,52 @@ We added a `NOT NULL` constraint to a column and marked it as a `NOT VALID` cons
But even with that, this was still a problem because the old servers were still inserting new rows with null values.
For more information, see [the relevant issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1944).
+
+### Downtime on release features between canary and production deployment
+
+To address the issue, we added a new column to an existing table with a `NOT NULL` constraint without
+specifying a default value. In other words, this requires the application to set a value to the column.
+
+The older version of the application didn't set the `NOT NULL` constraint since the entity/concept didn't
+exist before.
+
+The problem starts right after the canary deployment is complete. At that moment,
+the database migration (to add the column) has successfully run and the canary instance starts using
+the new application code, so QA passes. Unfortunately, the production
+instance still uses the older code, so it starts failing to insert new release entries.
+
+For more information, see [this issue related to the Releases API](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64151).
+
+### Builds failing due to varying deployment times across node types
+
+In [one production issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2442),
+CI builds that used the `parallel` keyword and depended on the
+variable `CI_NODE_TOTAL` being an integer failed. This happened because, after a user pushed a commit:
+
+1. New code: Sidekiq created a new pipeline and new build. `build.options[:parallel]` is a `Hash`.
+1. Old code: Runners requested a job from an API node that is running the previous version.
+1. As a result, the [new code](https://gitlab.com/gitlab-org/gitlab/blob/42b82a9a3ac5a96f9152aad6cbc583c42b9fb082/app/models/concerns/ci/contextable.rb#L104)
+was not run on the API server. The runner's request failed because the
+older API server tried to return the `CI_NODE_TOTAL` CI variable, but
+instead of sending an integer value (e.g. 9), it sent a serialized
+`Hash` value (`{:number=>9, :total=>9}`).
+
+If you look at the [deployment pipeline](https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/202212),
+you see all nodes were updated in parallel:
+
+![GitLab.com deployment pipeline](img/deployment_pipeline_v13_3.png)
+
+However, even though the updates started around the same time, the completion time varied significantly:
+
+|Node type|Duration (min)|
+|---------|--------------|
+|API |54 |
+|Sidekiq |21 |
+|K8S |8 |
+
+Builds that used the `parallel` keyword and depended on `CI_NODE_TOTAL`
+and `CI_NODE_INDEX` would fail during the time after Sidekiq was
+updated. Since Kubernetes (K8S) also runs Sidekiq pods, the window could
+have been as long as 46 minutes or as short as 33 minutes. Either way,
+having a feature flag to turn on after the deployment finished would
+have prevented this from happening.
diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md
index 7b58a80576a..ad6fdc0b85c 100644
--- a/doc/development/new_fe_guide/development/performance.md
+++ b/doc/development/new_fe_guide/development/performance.md
@@ -10,7 +10,7 @@ Any frontend engineer can contribute to this dashboard. They can contribute by a
There are 3 recommended high impact metrics to review on each page:
- [First visual change](https://web.dev/first-meaningful-paint/)
-- [Speed Index](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index)
-- [Visual Complete 95%](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index)
+- [Speed Index](https://github.com/WPO-Foundation/webpagetest-docs/blob/master/user/Metrics/SpeedIndex.md)
+- [Visual Complete 95%](https://github.com/WPO-Foundation/webpagetest-docs/blob/master/user/Metrics/SpeedIndex.md)
For these metrics, lower numbers are better as it means that the website is more performant.
diff --git a/doc/development/new_fe_guide/tips.md b/doc/development/new_fe_guide/tips.md
index 13222e0cc9b..c65266a3f25 100644
--- a/doc/development/new_fe_guide/tips.md
+++ b/doc/development/new_fe_guide/tips.md
@@ -10,12 +10,12 @@ yarn clean
## Creating feature flags in development
-The process for creating a feature flag is the same as [enabling a feature flag in development](../feature_flags/development.md#enabling-a-feature-flag-in-development).
+The process for creating a feature flag is the same as [enabling a feature flag in development](../feature_flags/development.md#enabling-a-feature-flag-locally-in-development).
Your feature flag can now be:
- [Made available to the frontend](../feature_flags/development.md#frontend) via the `gon`
-- Queried in [tests](../feature_flags/development.md#specs)
+- Queried in [tests](../feature_flags/development.md#feature-flags-in-tests)
- Queried in HAML templates and Ruby files via the `Feature.enabled?(:my_shiny_new_feature_flag)` method
### More on feature flags
diff --git a/doc/development/packages.md b/doc/development/packages.md
index ae67af10d88..e21eff543b4 100644
--- a/doc/development/packages.md
+++ b/doc/development/packages.md
@@ -1,4 +1,10 @@
-# Packages **(PREMIUM)**
+---
+stage: Package
+group: Package
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Packages
This document will guide you through adding another [package management system](../administration/packages/index.md) support to GitLab.
@@ -25,9 +31,9 @@ The existing database model requires the following:
### API endpoints
-Package systems work with GitLab via API. For example `ee/lib/api/npm_packages.rb`
+Package systems work with GitLab via API. For example `lib/api/npm_packages.rb`
implements API endpoints to work with NPM clients. So, the first thing to do is to
-add a new `ee/lib/api/your_name_packages.rb` file with API endpoints that are
+add a new `lib/api/your_name_packages.rb` file with API endpoints that are
necessary to make the package system client work. Usually that means having
endpoints like:
@@ -73,7 +79,8 @@ NPM is currently a hybrid of the instance level and group level.
It is using the top-level group or namespace as the defining portion of the name
(for example, `@my-group-name/my-package-name`).
-**Note:** Composer package naming scope is Instance Level.
+NOTE: **Note:**
+Composer package naming scope is Instance Level.
### Naming conventions
@@ -174,15 +181,15 @@ The implementation of the different Merge Requests will vary between different p
The MVC must support [Personal Access Tokens](../user/profile/personal_access_tokens.md) right from the start. We currently support two options for these tokens: OAuth and Basic Access.
-OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/npm_packages.rb).
+OAuth authentication is already supported. You can see an example in the [npm API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/npm_packages.rb).
[Basic Access authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication)
support is done by overriding a specific function in the API helpers, like
-[this example in the Conan API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/conan_packages.rb).
+[this example in the Conan API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/conan_packages.rb).
For this authentication mechanism, keep in mind that some clients can send an unauthenticated
request first, wait for the 401 Unauthorized response with the [`WWW-Authenticate`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/WWW-Authenticate)
field, then send an updated (authenticated) request. This case is more involved as
-GitLab needs to handle the 401 Unauthorized response. The [Nuget API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/nuget_packages.rb)
+GitLab needs to handle the 401 Unauthorized response. The [Nuget API](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/nuget_packages.rb)
supports this case.
#### Authorization
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 16ea1aa27ff..1dc7538c55b 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -281,6 +281,10 @@ This can be done via `pkill -USR2 puma:`. The `:` disambiguates between `puma
4.3.3.gitlab.2 ...` (the master process) from `puma: cluster worker 0: ...` (the
worker processes), selecting the latter.
+For Sidekiq, the signal can be sent to the `sidekiq-cluster` process via `pkill
+-USR2 bin/sidekiq-cluster`, which will forward the signal to all Sidekiq
+children. Alternatively, you can select a specific PID of interest.
+
Production profiles can be especially noisy. It can be helpful to visualize them
as a [flamegraph](https://github.com/brendangregg/FlameGraph). This can be done
via:
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index d3623529cd4..aef14535a96 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -13,10 +13,38 @@ as much as possible.
## Overview
-### Pipeline types
+Pipelines for the GitLab project are created using the [`workflow:rules` keyword](../ci/yaml/README.md#workflowrules)
+feature of the GitLab CI/CD.
-Since we use the [`rules:`](../ci/yaml/README.md#rules) and [`needs:`](../ci/yaml/README.md#needs) keywords extensively,
-we have four main pipeline types which are described below. Note that an MR that includes multiple types of changes would
+Pipelines are always created for the following scenarios:
+
+- `master` branch, including on schedules, pushes, merges, and so on.
+- Merge requests.
+- Tags.
+- Stable, `auto-deploy`, and security branches.
+
+Pipeline creation is also affected by the following CI variables:
+
+- If `$FORCE_GITLAB_CI` is set, pipelines are created.
+- If `$GITLAB_INTERNAL` is not set, pipelines are not created.
+
+No pipeline is created in any other case (for example, when pushing a branch with no
+MR for it).
+
+The source of truth for these workflow rules is defined in <https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>.
+
+### Pipelines for Merge Requests
+
+In general, pipelines for an MR fall into one or more of the following types,
+depending on the changes made in the MR:
+
+- [Docs-only MR pipeline](#docs-only-mr-pipeline): This is typically created for an MR that only changes documentation.
+- [Code-only MR pipeline](#code-only-mr-pipeline): This is typically created for an MR that only changes code, either backend or frontend.
+- [Frontend-only MR pipeline](#frontend-only-mr-pipeline): This is typically created for an MR that only changes frontend code.
+- [QA-only MR pipeline](#qa-only-mr-pipeline): This is typically created for an MR that only changes code related to end-to-end tests.
+
+We use the [`rules:`](../ci/yaml/README.md#rules) and [`needs:`](../ci/yaml/README.md#needs) keywords extensively
+to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would
have a pipeline that includes jobs from multiple types (e.g. a combination of docs-only and code-only pipelines).
#### Docs-only MR pipeline
@@ -345,22 +373,6 @@ graph RL;
end
```
-### `workflow:rules`
-
-We're using the [`workflow:rules` keyword](../ci/yaml/README.md#workflowrules) to
-define default rules to determine whether or not a pipeline is created.
-
-These rules are defined in <https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>
-and are as follows:
-
-1. If `$FORCE_GITLAB_CI` is set, create a pipeline.
-1. For merge requests, create a pipeline.
-1. For `master` branch, create a pipeline (this includes on schedules, pushes, merges, etc.).
-1. For tags, create a pipeline.
-1. If `$GITLAB_INTERNAL` isn't set, don't create a pipeline.
-1. For stable, auto-deploy, and security branches, create a pipeline.
-1. For any other cases (e.g. when pushing a branch with no MR for it), no pipeline is created.
-
### PostgreSQL versions testing
#### Current versions testing
@@ -535,7 +547,8 @@ The current stages are:
- `pages`: This stage includes a job that deploys the various reports as
GitLab Pages (e.g. [`coverage-ruby`](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/),
[`coverage-javascript`](https://gitlab-org.gitlab.io/gitlab/coverage-javascript/),
- [`webpack-report`](https://gitlab-org.gitlab.io/gitlab/webpack-report/).
+ and `webpack-report` (found at `https://gitlab-org.gitlab.io/gitlab/webpack-report/`, but there is
+ [an issue with the deployment](https://gitlab.com/gitlab-org/gitlab/-/issues/233458))).
### Default image
@@ -572,7 +585,7 @@ that are scoped to a single [configuration parameter](../ci/yaml/README.md#confi
| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
-| `.use-pg11` | Allows a job to use the `postgres:11.6` and `redis:alpine` services. |
+| `.use-pg11` | Allows a job to use the `postgres:11.6` and `redis:4.0-alpine` services. |
| `.use-pg11-ee` | Same as `.use-pg11` but also use the `docker.elastic.co/elasticsearch/elasticsearch:6.4.2` services. |
| `.use-kaniko` | Allows a job to use the `kaniko` tool to build Docker images. |
| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` environment variable. |
diff --git a/doc/development/prometheus.md b/doc/development/prometheus.md
index f64d4a2eda1..902d4e6a1d0 100644
--- a/doc/development/prometheus.md
+++ b/doc/development/prometheus.md
@@ -1,3 +1,9 @@
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Working with Prometheus
For more information on working with [Prometheus metrics](prometheus_metrics.md), see
diff --git a/doc/development/prometheus_metrics.md b/doc/development/prometheus_metrics.md
index 024da5cc943..a39d19d8750 100644
--- a/doc/development/prometheus_metrics.md
+++ b/doc/development/prometheus_metrics.md
@@ -1,3 +1,9 @@
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Working with Prometheus Metrics
## Adding to the library
diff --git a/doc/development/query_recorder.md b/doc/development/query_recorder.md
index f7d2c23e28d..ef9a3c657aa 100644
--- a/doc/development/query_recorder.md
+++ b/doc/development/query_recorder.md
@@ -20,7 +20,8 @@ end
As an example you might create 5 issues in between counts, which would cause the query count to increase by 5 if an N+1 problem exists.
-> **Note:** In some cases the query count might change slightly between runs for unrelated reasons. In this case you might need to test `exceed_query_limit(control_count + acceptable_change)`, but this should be avoided if possible.
+NOTE: **Note:**
+In some cases the query count might change slightly between runs for unrelated reasons. In this case you might need to test `exceed_query_limit(control_count + acceptable_change)`, but this should be avoided if possible.
## Cached queries
diff --git a/doc/development/redis.md b/doc/development/redis.md
index 693b9e1ad0d..d5d42a3869e 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -41,8 +41,16 @@ moment, but may wish to in the future: [#118820](https://gitlab.com/gitlab-org/g
This imposes an additional constraint on naming: where GitLab is performing
operations that require several keys to be held on the same Redis server - for
instance, diffing two sets held in Redis - the keys should ensure that by
-enclosing the changeable parts in curly braces, such as, `project:{1}:set_a` and
-`project:{1}:set_b`.
+enclosing the changeable parts in curly braces.
+For example:
+
+```plaintext
+project:{1}:set_a
+project:{1}:set_b
+project:{2}:set_c
+```
+
+`set_a` and `set_b` are guaranteed to be held on the same Redis server, while `set_c` is not.
Currently, we validate this in the development and test environments
with the [`RedisClusterValidator`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/instrumentation/redis_cluster_validator.rb),
diff --git a/doc/development/reference_processing.md b/doc/development/reference_processing.md
index 79377533966..527fb94f228 100644
--- a/doc/development/reference_processing.md
+++ b/doc/development/reference_processing.md
@@ -18,6 +18,16 @@ and link the same type of objects (as specified by the `data-reference-type`
attribute), then we only need one reference parser for that type of domain
object.
+## Banzai pipeline
+
+The `Banzai` pipeline returns the `result` Hash after the content has been processed by every filter in the pipeline.
+
+The `result` Hash is passed to each filter for modification. This is where filters store information extracted from the content.
+It contains the following keys (see the sketch after this list):
+
+- An `:output` key with the DocumentFragment or String HTML markup based on the output of the last filter in the pipeline.
+- A `:reference_filter_nodes` key with the list of DocumentFragment `nodes` that are ready for processing, updated by each filter in the pipeline.
+
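+A rough sketch of what this looks like (assuming the `FullPipeline`; `markdown` and `project` are placeholders):
+
+```ruby
+result = Banzai::Pipeline::FullPipeline.call(markdown, project: project)
+
+result[:output]                  # HTML/DocumentFragment produced by the last filter
+result[:reference_filter_nodes]  # nodes that are ready for reference processing
+```
+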
## Reference filters
The first way that references are handled is by reference filters. These are
@@ -69,6 +79,8 @@ a minimum implementation of `AbstractReferenceFilter` should define:
### Performance
+#### Find object optimization
+
This default implementation is not very efficient, because we need to call
`#find_object` for each reference, which may require issuing a DB query every
time. For this reason, most reference filter implementations will instead use an
@@ -96,6 +108,22 @@ This makes the number of queries linear in the number of projects. We only need
to implement `parent_records` method when we call `records_per_parent` in our
reference filter.
+#### Filtering nodes optimization
+
+Each `ReferenceFilter` iterates over all `<a>` and `text()` nodes in a document.
+
+Not all nodes are processed; the document is filtered down to only the nodes that we want to process.
+We skip:
+
+- Link tags already processed by some previous filter (if they have a `gfm` class).
+- Nodes whose ancestor node we want to ignore (`ignore_ancestor_query`).
+- Empty lines.
+- Link tags with an empty `href` attribute.
+
+To avoid filtering such nodes for each `ReferenceFilter`, we do it only once and store the result in the result Hash of the pipeline as `result[:reference_filter_nodes]`.
+
+The pipeline `result` is passed to each filter for modification, so every time a `ReferenceFilter` replaces a text or link tag, the filtered list (`reference_filter_nodes`) is updated for the next filter to use.
+
## Reference parsers
In a number of cases, as a performance optimization, we render Markdown to HTML
diff --git a/doc/development/service_measurement.md b/doc/development/service_measurement.md
index e53864c8640..24b16aac95b 100644
--- a/doc/development/service_measurement.md
+++ b/doc/development/service_measurement.md
@@ -71,7 +71,7 @@ In order to actually use it, you need to enable measuring for the desired servic
### Enabling measurement using feature flags
In the following example, the `:gitlab_service_measuring_projects_import_service`
-[feature flag](feature_flags/development.md#enabling-a-feature-flag-in-development) is used to enable the measuring feature
+[feature flag](feature_flags/development.md#enabling-a-feature-flag-locally-in-development) is used to enable the measuring feature
for `Projects::ImportService`.
From ChatOps:
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
index 2793756ff64..c5dfc5731e6 100644
--- a/doc/development/sidekiq_style_guide.md
+++ b/doc/development/sidekiq_style_guide.md
@@ -64,6 +64,36 @@ the extra jobs will take resources away from jobs from workers that were already
there, if the resources available to the Sidekiq process handling the namespace
are not adjusted appropriately.
+## Versioning
+
+A version can be specified on each Sidekiq worker class.
+This is then sent along when the job is created.
+
+```ruby
+class FooWorker
+ include ApplicationWorker
+
+ version 2
+
+ def perform(*args)
+ if job_version == 2
+ foo = args.first['foo']
+ else
+ foo = args.first
+ end
+ end
+end
+```
+
+Under this schema, any worker is expected to be able to handle any job that was
+enqueued by an older version of that worker. This means that when changing the
+arguments a worker takes, you must increment the `version` (or set `version 1`
+if this is the first time a worker's arguments are changing), but also make sure
+that the worker is still able to handle jobs that were queued with any earlier
+version of the arguments. From the worker's `perform` method, you can read
+`self.job_version` if you want to specifically branch on job version, or you
+can read the number or type of provided arguments.
+
## Idempotent Jobs
It's known that a job can fail for multiple reasons. For example, network outages or bugs.
@@ -145,10 +175,27 @@ authorizations for both projects.
GitLab doesn't skip jobs scheduled in the future, as we assume that
the state will have changed by the time the job is scheduled to
-execute.
+execute. If you do want to deduplicate jobs scheduled in the future,
+this can be specified on the worker as follows:
+
+```ruby
+module AuthorizedProjectUpdate
+ class UserRefreshOverUserRangeWorker
+ include ApplicationWorker
+
+ deduplicate :until_executing, including_scheduled: true
+ idempotent!
+
+ # ...
+ end
+end
+```
-More [deduplication strategies have been suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/195). If you are implementing a worker that
-could benefit from a different strategy, please comment in the issue.
+This strategy is called `until_executing`. More [deduplication
+strategies have been
+suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/195). If
+you are implementing a worker that could benefit from a different
+strategy, please comment in the issue.
If the automatic deduplication were to cause issues in certain
queues, this can be temporarily disabled by enabling a feature flag
diff --git a/doc/development/sql.md b/doc/development/sql.md
index d584a26e455..3b969c7d27a 100644
--- a/doc/development/sql.md
+++ b/doc/development/sql.md
@@ -218,9 +218,9 @@ Project.select(:id, :user_id).joins(:merge_requests)
## Plucking IDs
-This can't be stressed enough: **never** use ActiveRecord's `pluck` to pluck a
-set of values into memory only to use them as an argument for another query. For
-example, this will make the database **very** sad:
+Never use ActiveRecord's `pluck` to pluck a set of values into memory only to
+use them as an argument for another query. For example, this will execute an
+extra unnecessary database query and load a lot of unnecessary data into memory:
```ruby
projects = Project.all.pluck(:id)
diff --git a/doc/development/telemetry/event_dictionary.md b/doc/development/telemetry/event_dictionary.md
new file mode 100644
index 00000000000..d8cc32ea8d0
--- /dev/null
+++ b/doc/development/telemetry/event_dictionary.md
@@ -0,0 +1,32 @@
+---
+stage: Growth
+group: Telemetry
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Event Dictionary
+
+**Note: We've temporarily moved the Event Dictionary to a [Google Sheet](https://docs.google.com/spreadsheets/d/1VzE8R72Px_Y_LlE3Z05LxUlG_dumWe3vl-HeUo70TPw/edit?usp=sharing)**. The previous Markdown table exceeded 600 rows, making it difficult to manage. In the future, we intend to move this back into our docs using a [YAML file](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/823).
+
+The event dictionary is a single source of truth for the metrics and events we collect for product usage data. The Event Dictionary lists all the metrics and events we track, why we're tracking them, and where they are tracked.
+
+This is a living document that is updated any time a new event is planned or implemented. It includes the following information.
+
+- Section, stage, or group
+- Description
+- Implementation status
+- Availability by plan type
+- Code path
+
+We're currently focusing our Event Dictionary on [Usage Ping](usage_ping.md). In the future, we will also include [Snowplow](snowplow.md). We currently have an initiative across the entire product organization to complete the [Event Dictionary for Usage Ping](https://gitlab.com/groups/gitlab-org/-/epics/4174).
+
+## Instructions
+
+1. Open the Event Dictionary and fill in all the **PM to edit** columns highlighted in yellow.
+1. Check that all the metrics and events are assigned to the correct section, stage, or group. If a metric is used across many groups, assign it to the stage. If a metric is used across many stages, assign it to the section. If a metric is incorrectly assigned to another section, stage, or group, let the PM know you have reassigned it. If your group has no assigned metrics and events, check that your metrics and events are not incorrectly assigned to another PM.
+1. Add descriptions of what your metrics and events are tracking. Work with your Engineering team or the Telemetry team if you need help understanding this.
+1. Add what plans this metric is available on. Work with your Engineering team or the Telemetry team if you need help understanding this.
+
+## Planned metrics and events
+
+For future metrics and events you plan to track, add them to the Event Dictionary and note the status as `Planned`, `In Progress`, or `Implemented`. Once you have confirmed that the metric has been implemented and that its data is in our data warehouse, change the status to **Data Available**.
diff --git a/doc/development/telemetry/index.md b/doc/development/telemetry/index.md
index 0000e7e9e4f..b5032ce3730 100644
--- a/doc/development/telemetry/index.md
+++ b/doc/development/telemetry/index.md
@@ -6,60 +6,35 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Telemetry Guide
-At GitLab, we collect telemetry for the purpose of helping us build a better GitLab. Data about how GitLab is used is collected to better understand what parts of GitLab needs improvement and what features to build next. Telemetry also helps our team better understand the reasons why people use GitLab and with this knowledge we are able to make better product decisions.
+At GitLab, we collect product usage data for the purpose of helping us build a better product. Data helps GitLab understand which parts of the product need improvement and which features we should build next. Product usage data also helps our team better understand the reasons why people use GitLab. With this knowledge we are able to make better product decisions.
-We also encourage users to enable tracking, and we embrace full transparency with our tracking approach so it can be easily understood and trusted. By enabling tracking, users can:
+We encourage users to enable tracking, and we embrace full transparency with our tracking approach so it can be easily understood and trusted.
+
+By enabling tracking, users can:
- Contribute back to the wider community.
- Help GitLab improve on the product.
-This documentation consists of three guides providing an overview of Telemetry at GitLab.
-
-Telemetry Guide:
-
- 1. [Our tracking tools](#our-tracking-tools)
- 1. [What data can be tracked](#what-data-can-be-tracked)
- 1. [Telemetry systems overview](#telemetry-systems-overview)
- 1. [Snowflake data warehouse](#snowflake-data-warehouse)
+## Our tracking tools
-[Usage Ping Guide](usage_ping.md)
+We use three methods to gather product usage data:
- 1. [What is Usage Ping](usage_ping.md#what-is-usage-ping)
- 1. [Usage Ping payload](usage_ping.md#usage-ping-payload)
- 1. [Disable Usage Ping](usage_ping.md#disable-usage-ping)
- 1. [Usage Ping request flow](usage_ping.md#usage-ping-request-flow)
- 1. [How Usage Ping works](usage_ping.md#how-usage-ping-works)
- 1. [Implementing Usage Ping](usage_ping.md#implementing-usage-ping)
- 1. [Developing and testing Usage Ping](usage_ping.md#developing-and-testing-usage-ping)
+- [Snowplow](#snowplow)
+- [Usage Ping](#usage-ping)
+- [Database import](#database-import)
-[Snowplow Guide](snowplow.md)
+### Snowplow
-1. [What is Snowplow](snowplow.md#what-is-snowplow)
-1. [Snowplow schema](snowplow.md#snowplow-schema)
-1. [Enabling Snowplow](snowplow.md#enabling-snowplow)
-1. [Snowplow request flow](snowplow.md#snowplow-request-flow)
-1. [Implementing Snowplow JS (Frontend) tracking](snowplow.md#implementing-snowplow-js-frontend-tracking)
-1. [Implementing Snowplow Ruby (Backend) tracking](snowplow.md#implementing-snowplow-ruby-backend-tracking)
-1. [Developing and testing Snowplow](snowplow.md#developing-and-testing-snowplow)
+Snowplow is an enterprise-grade marketing and product analytics platform which helps track the way
+users engage with our website and application.
-More useful links:
+Snowplow consists of two components:
-- [Telemetry Direction](https://about.gitlab.com/direction/telemetry/)
-- [Data Analysis Process](https://about.gitlab.com/handbook/business-ops/data-team/#data-analysis-process/)
-- [Data for Product Managers](https://about.gitlab.com/handbook/business-ops/data-team/programs/data-for-product-managers/)
-- [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/platform/infrastructure/)
-
-## Our tracking tools
+- [Snowplow JS](https://github.com/snowplow/snowplow/wiki/javascript-tracker) tracks client-side
+ events.
+- [Snowplow Ruby](https://github.com/snowplow/snowplow/wiki/ruby-tracker) tracks server-side events.
-We use several different technologies to gather product usage data.
-
-### Snowplow JS (Frontend)
-
-Snowplow is an enterprise-grade marketing and product analytics platform which helps track the way users engage with our website and application. [Snowplow JS](https://github.com/snowplow/snowplow/wiki/javascript-tracker) is a frontend tracker for client-side events.
-
-### Snowplow Ruby (Backend)
-
-Snowplow is an enterprise-grade marketing and product analytics platform which helps track the way users engage with our website and application. [Snowplow Ruby](https://github.com/snowplow/snowplow/wiki/ruby-tracker) is a backend tracker for server-side events.
+For more details, read the [Snowplow](snowplow.md) guide.
### Usage Ping
@@ -71,37 +46,39 @@ For more details, read the [Usage Ping](usage_ping.md) guide.
Database imports are full imports of data into GitLab's data warehouse. For GitLab.com, the PostgreSQL database is loaded into Snowflake data warehouse every 6 hours. For more details, see the [data team handbook](https://about.gitlab.com/handbook/business-ops/data-team/platform/#extract-and-load).
-### Log system
-
-System logs are the application logs generated from running the GitLab Rails application. For more details, see the [log system](../../administration/logs.md) and [logging infrastructure](https://gitlab.com/gitlab-com/runbooks/tree/master/logging/doc#logging-infrastructure-overview).
-
## What data can be tracked
Our different tracking tools allow us to track different types of events. The event types and examples of what data can be tracked are outlined below.
-| Event Type | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Log system |
-|---------------------|------------------------|-------------------------|---------------------|---------------------|---------------------|
-| Database counts | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** | **{dotted-circle}** |
-| Pageview events | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-| UI events | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-| CRUD and API events | **{dotted-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-| Event funnels | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-| PostgreSQL Data | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** | **{dotted-circle}** |
-| Logs | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** |
-| External services | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
+The availability of event types and their tracking tools varies by segment. For example, for Self-Managed Users, we only have reporting using database records via Usage Ping.
+
+| Event Types | SaaS Instance | SaaS Plan | SaaS Group | SaaS Session | SaaS User | SM Instance | SM Plan | SM Group | SM Session | SM User |
+|----------------------------------------|---------------|-----------|------------|--------------|-----------|-------------|---------|----------|------------|---------|
+| Snowplow (JS Pageview events) | ✅ | 📅 | 📅 | ✅ | 📅 | 📅 | 📅 | 📅 | 📅 | 📅 |
+| Snowplow (JS UI events) | ✅ | 📅 | 📅 | ✅ | 📅 | 📅 | 📅 | 📅 | 📅 | 📅 |
+| Snowplow (Ruby Pageview events) | ✅ | 📅 | 📅 | ✅ | 📅 | 📅 | 📅 | 📅 | 📅 | 📅 |
+| Snowplow (Ruby CRUD / API events) | ✅ | 📅 | 📅 | ✅ | 📅 | 📅 | 📅 | 📅 | 📅 | 📅 |
+| Usage Ping (Redis UI counters) | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 |
+| Usage Ping (Redis Pageview counters) | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 |
+| Usage Ping (Redis CRUD / API counters) | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 | 🔄 | 🔄 | 🔄 | ✖️ | 🔄 |
+| Usage Ping (Database counters) | ✅ | 🔄 | 📅 | ✖️ | ✅ | ✅ | ✅ | ✅ | ✖️ | ✅ |
+| Usage Ping (Instance settings) | ✅ | 🔄 | 📅 | ✖️ | ✅ | ✅ | ✅ | ✅ | ✖️ | ✅ |
+| Usage Ping (Integration settings) | ✅ | 🔄 | 📅 | ✖️ | ✅ | ✅ | ✅ | ✅ | ✖️ | ✅ |
+| Database import (Database records) | ✅ | ✅ | ✅ | ✖️ | ✅ | ✖️ | ✖️ | ✖️ | ✖️ | ✖️ |
+
+[Source file](https://docs.google.com/spreadsheets/d/1e8Afo41Ar8x3JxAXJF3nL83UxVZ3hPIyXdt243VnNuE/edit?usp=sharing)
-### Database counts
+**Legend**
-- Number of Projects created by unique users
-- Number of users logged in the past 28 day
+✅ Available, 🔄 In Progress, 📅 Planned, ✖️ Not Possible
-Database counts are row counts for different tables in an instance’s database. These are SQL count queries which have been filtered, grouped, or aggregated which provide high level usage data. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql).
+SaaS = GitLab.com. SM = Self-Managed instance
### Pageview events
- Number of sessions that visited the /dashboard/groups page
-### UI Events
+### UI events
- Number of sessions that clicked on a button or link
- Number of sessions that closed a modal
@@ -116,24 +93,55 @@ UI events are any interface-driven actions from the browser including click data
These are backend events that include the creation, read, update, deletion of records, and other events that might be triggered from layers other than those available in the interface.
-### Event funnels
+### Database records
-- Number of sessions that performed action A, B, then C
-- Conversion rate from step A to B
+These are raw database records which can be explored using business intelligence tools like Sisense. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql).
-### PostgreSQL data
+### Instance settings
-These are raw database records which can be explored using business intelligence tools like Sisense. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql).
+These are settings of your instance, such as the instance's Git version and whether certain features such as `container_registry_enabled` are enabled.
-### Logs
+### Integration settings
-These are raw logs such as the [Production logs](../../administration/logs.md#production_jsonlog), [API logs](../../administration/logs.md#api_jsonlog), or [Sidekiq logs](../../administration/logs.md#sidekiqlog). See the [overview of Logging Infrastructure](https://gitlab.com/gitlab-com/runbooks/tree/master/logging/doc#logging-infrastructure-overview) for more details.
+These are integrations your GitLab instance interacts with, such as an [external storage provider](../../administration/static_objects_external_storage.md) or an [external container registry](../../administration/packages/container_registry.md#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). These services must be able to send data back into a GitLab instance for data to be tracked.
-### External services
+## Reporting level
-These are external services a GitLab instance interacts with such as an [external storage provider](../../administration/static_objects_external_storage.md) or an [external container registry](../../administration/packages/container_registry.md#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). These services must be able to send data back into a GitLab instance for data to be tracked.
+Our reporting level, either aggregate or individual, varies by segment. For example, for Self-Managed Users, we can report at an aggregate user level using Usage Ping but not at an individual user level.
-## Telemetry systems overview
+| Aggregated Reporting | SaaS Instance | SaaS Plan | SaaS Group | SaaS Session | SaaS User | SM Instance | SM Plan | SM Group | SM Session | SM User |
+|----------------------|---------------|-----------|------------|--------------|-----------|-------------|---------|----------|------------|---------|
+| Snowplow | ✅ | 📅 | 📅 | ✅ | 📅 | ✅ | 📅 | 📅 | ✅ | 📅 |
+| Usage Ping | ✅ | 🔄 | 📅 | 📅 | ✅ | ✅ | ✅ | ✅ | 📅 | ✅ |
+| Database import | ✅ | ✅ | ✅ | ✖️ | ✅ | ✖️ | ✖️ | ✖️ | ✖️ | ✖️ |
+
+| Identifiable Reporting | SaaS Instance | SaaS Plan | SaaS Group | SaaS Session | SaaS User | SM Instance | SM Plan | SM Group | SM Session | SM User |
+|------------------------|---------------|-----------|------------|--------------|-----------|-------------|---------|----------|------------|---------|
+| Snowplow | ✅ | 📅 | 📅 | ✅ | 📅 | ✖️ | ✖️ | ✖️ | ✖️ | ✖️ |
+| Usage Ping | ✅ | 🔄 | 📅 | ✖️ | ✖️ | ✅ | ✅ | ✖️ | ✖️ | ✖️ |
+| Database import | ✅ | ✅ | ✅ | ✖️ | ✅ | ✖️ | ✖️ | ✖️ | ✖️ | ✖️ |
+
+**Legend**
+
+✅ Available, 🔄 In Progress, 📅 Planned, ✖️ Not Possible
+
+SaaS = GitLab.com. SM = Self-Managed instance
+
+## Reporting time period
+
+Our reporting time periods vary by segment. For example, for Self-Managed Users, we can report all-time counts and 28-day counts in Usage Ping.
+
+| Reporting Time Period | All Time | 28 Days | 7 Days | Daily |
+|-----------------------|----------|---------|--------|-------|
+| Snowplow | ✅ | ✅ | ✅ | ✅ |
+| Usage Ping | ✅ | ✅ | 📅 | ✖️ |
+| Database import | ✅ | ✅ | ✅ | ✅ |
+
+**Legend**
+
+✅ Available, 🔄 In Progress, 📅 Planned, ✖️ Not Possible
+
+## Systems overview
The systems overview is a simplified diagram showing the interactions between GitLab Inc and self-managed instances.
@@ -145,7 +153,7 @@ The systems overview is a simplified diagram showing the interactions between Gi
For Telemetry purposes, GitLab Inc has three major components:
-1. [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/platform/infrastructure/): This contains everything managed by our data team including Sisense Dashboards for visualization, Snowflake for Data Warehousing, incoming data sources such as PostgreSQL Pipeline and S3 Bucket, and lastly our data collectors [GitLab.com's Snowplow Collector](https://about.gitlab.com/handbook/engineering/infrastructure/library/snowplow/) and GitLab's Versions Application.
+1. [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/platform/infrastructure/): This contains everything managed by our data team including Sisense Dashboards for visualization, Snowflake for Data Warehousing, incoming data sources such as PostgreSQL Pipeline and S3 Bucket, and lastly our data collectors [GitLab.com's Snowplow Collector](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/snowplow/) and GitLab's Versions Application.
1. GitLab.com: This is the production GitLab application which is made up of a Client and Server. On the Client or browser side, a Snowplow JS Tracker (Frontend) is used to track client-side events. On the Server or application side, a Snowplow Ruby Tracker (Backend) is used to track server-side events. The server also contains Usage Ping which leverages a PostgreSQL database and a Redis in-memory data store to report on usage data. Lastly, the server also contains System Logs which are generated from running the GitLab application.
1. [Monitoring infrastructure](https://about.gitlab.com/handbook/engineering/monitoring/): This is the infrastructure used to ensure GitLab.com is operating smoothly. System Logs are sent from GitLab.com to our monitoring infrastructure and collected by a FluentD collector. From FluentD, logs are either sent to long term Google Cloud Services cold storage via Stackdriver, or, they are sent to our Elastic Cluster via Cloud Pub/Sub which can be explored in real-time using Kibana.
@@ -162,27 +170,13 @@ As shown by the orange lines, on GitLab.com Snowplow JS, Snowplow Ruby, Usage Pi
As shown by the green lines, on GitLab.com system logs flow into GitLab Inc's monitoring infrastructure. On self-managed, there are no logs sent to GitLab Inc's monitoring infrastructure.
-The differences between GitLab.com and self-managed are summarized below:
-
-| Environment | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Logs system |
-|--------------|------------------------|-------------------------|--------------------|---------------------|---------------------|
-| GitLab.com | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** |
-| Self-Managed | **{dotted-circle}**(1) | **{dotted-circle}**(1) | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-
Note (1): Snowplow JS and Snowplow Ruby are available on self-managed, however, the Snowplow Collector endpoint is set to a self-managed Snowplow Collector which GitLab Inc does not have access to.
-## Snowflake data warehouse
-
-The Snowflake data warehouse is where we keep all of GitLab Inc's data.
-
-### Data sources
-
-There are several data sources available in Snowflake and Sisense each representing a different view of the data along the transformation pipeline.
+## Additional information
-| Source | Description | Access |
-| ------ | ------ | ------ |
-| raw | These tables are the raw data source | Access via Snowflake |
-| analytics_staging | These tables have undergone little to no data transformation, meaning they're basically clones of the raw data source | Access via Snowflake or Sisense |
-| analytics | These tables have typically undergone more data transformation. They will typically end in `_xf` to represent the fact that they are transformed | Access via Snowflake or Sisense |
+More useful links:
-If you are a Product Manager interested in the raw data, you will likely focus on the `analytics` and `analytics_staging` sources. The raw source is limited to the data and infrastructure teams. For more information, please see [Data For Product Managers: What's the difference between analytics_staging and analytics?](https://about.gitlab.com/handbook/business-ops/data-team/programs/data-for-product-managers/#whats-the-difference-between-analytics_staging-and-analytics)
+- [Telemetry Direction](https://about.gitlab.com/direction/telemetry/)
+- [Data Analysis Process](https://about.gitlab.com/handbook/business-ops/data-team/#data-analysis-process/)
+- [Data for Product Managers](https://about.gitlab.com/handbook/business-ops/data-team/programs/data-for-product-managers/)
+- [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/platform/infrastructure/)
diff --git a/doc/development/telemetry/snowplow.md b/doc/development/telemetry/snowplow.md
index f03742afe2d..547ba36464b 100644
--- a/doc/development/telemetry/snowplow.md
+++ b/doc/development/telemetry/snowplow.md
@@ -278,7 +278,7 @@ Custom event tracking and instrumentation can be added by directly calling the `
|:-----------|:-------|:--------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `category` | string | 'application' | Area or aspect of the application. This could be `HealthCheckController` or `Lfs::FileTransformer` for instance. |
| `action` | string | 'generic' | The action being taken, which can be anything from a controller action like `create` to something like an Active Record callback. |
-| `data` | object | {} | Additional data such as `label`, `property`, `value`, and `context` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). These are set as empty strings if you don't provide them. |
+| `data` | object | {} | Additional data such as `label`, `property`, `value`, and `context` as described in [Instrumentation at GitLab](https://about.gitlab.com/handbook/product/product-processes/#taxonomy). These are set as empty strings if you don't provide them. |
Tracking can be viewed as either tracking user behavior, or can be utilized for instrumentation to monitor and visualize performance over time in an area or aspect of code.
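+
+For illustration only, a minimal sketch of such a direct call using the parameters in the table above (the tracking helper name, category, and action values here are assumptions, not taken from the codebase):
+
+```ruby
+# Hedged sketch: assumes a Gitlab::Tracking.event(category, action, **data)
+# style helper as described in this section.
+Gitlab::Tracking.event(
+  'Projects::PipelinesController', # category: area or aspect of the application
+  'create_pipeline',               # action: the action being taken
+  label: 'api',                    # additional data: label, property, value, context
+  value: 1
+)
+```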
@@ -305,12 +305,16 @@ We use the [AsyncEmitter](https://github.com/snowplow/snowplow/wiki/Ruby-Tracker
There are several tools for developing and testing Snowplow events.
-| Testing Tool | Frontend Tracking | Backend Tracking | Local Development Environment | Production Environment |
-|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|
-| Snowplow Analytics Debugger Chrome Extension | **{check-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** |
-| Snowplow Inspector Chrome Extension | **{check-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** |
-| Snowplow Micro | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{dotted-circle}** |
-| Snowplow Mini | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{check-circle}** |
+| Testing Tool                                 | Frontend Tracking  | Backend Tracking    | Local Development Environment | Production Environment |
+|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|
+| Snowplow Analytics Debugger Chrome Extension | **{check-circle}** | **{dotted-circle}** | **{check-circle}**            | **{check-circle}**     |
+| Snowplow Inspector Chrome Extension          | **{check-circle}** | **{dotted-circle}** | **{check-circle}**            | **{check-circle}**     |
+| Snowplow Micro                               | **{check-circle}** | **{check-circle}**  | **{check-circle}**            | **{dotted-circle}**    |
+| Snowplow Mini                                | **{check-circle}** | **{check-circle}**  | **{dotted-circle}**           | **{status_preparing}** |
+
+**Legend**
+
+**{check-circle}** Available, **{status_preparing}** In progress, **{dotted-circle}** Not Planned
### Snowplow Analytics Debugger Chrome Extension
diff --git a/doc/development/telemetry/usage_ping.md b/doc/development/telemetry/usage_ping.md
index 5e78e2c5f25..ea5eb6c389f 100644
--- a/doc/development/telemetry/usage_ping.md
+++ b/doc/development/telemetry/usage_ping.md
@@ -131,7 +131,7 @@ There are four types of counters which are all found in `usage_data.rb`:
- **Ordinary Batch Counters:** Simple count of a given ActiveRecord_Relation
- **Distinct Batch Counters:** Distinct count of a given ActiveRecord_Relation on given column
- **Alternative Counters:** Used for settings and configurations
-- **Redis Counters:** Used for in-memory counts. This method is being deprecated due to data inaccuracies and will be replaced with a persistent method.
+- **Redis Counters:** Used for in-memory counts.
NOTE: **Note:**
Only use the provided counter methods. Each counter method contains a built in fail safe to isolate each counter to avoid breaking the entire Usage Ping.
@@ -155,7 +155,7 @@ There are two batch counting methods provided, `Ordinary Batch Counters` and `Di
Handles `ActiveRecord::StatementInvalid` error
-Simple count of a given ActiveRecord_Relation
+Simple count of a given ActiveRecord_Relation. It performs a non-distinct batch count, smartly reduces `batch_size`, and handles errors.
Method: `count(relation, column = nil, batch: true, start: nil, finish: nil)`
@@ -179,15 +179,16 @@ count(::Clusters::Cluster.aws_installed.enabled, :cluster_id, start: ::Clusters:
Handles `ActiveRecord::StatementInvalid` error
-Distinct count of a given ActiveRecord_Relation on given column
+Distinct count of a given ActiveRecord_Relation on a given column. It performs a distinct batch count, smartly reduces `batch_size`, and handles errors.
-Method: `distinct_count(relation, column = nil, batch: true, start: nil, finish: nil)`
+Method: `distinct_count(relation, column = nil, batch: true, batch_size: nil, start: nil, finish: nil)`
Arguments:
- `relation` the ActiveRecord_Relation to perform the count
- `column` the column to perform the distinct count, by default is the primary key
- `batch`: default `true` in order to use batch counting
+- `batch_size`: if not set, the default value of 10000 from `Gitlab::Database::BatchCounter` is used (see the sketch after this list)
- `start`: custom start of the batch counting in order to avoid complex min calculations
- `end`: custom end of the batch counting in order to avoid complex max calculations
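+
+For illustration, a minimal sketch of a distinct batch count with a custom `batch_size` (the relation and column are only examples):
+
+```ruby
+# Distinct project creators, counted in batches of 1,000 records.
+distinct_count(::Project, :creator_id, batch_size: 1_000)
+```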
@@ -212,14 +213,48 @@ Arguments:
- `counter`: a counter from `Gitlab::UsageDataCounters`, that has `fallback_totals` method implemented
- or a `block`: which is evaluated
+#### Ordinary Redis Counters
+
+Examples of implementation:
+
+- Using the Redis methods [`INCR`](https://redis.io/commands/incr) and [`GET`](https://redis.io/commands/get): [`Gitlab::UsageDataCounters::WikiPageCounter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/wiki_page_counter.rb)
+- Using the Redis methods [`HINCRBY`](https://redis.io/commands/hincrby) and [`HGETALL`](https://redis.io/commands/hgetall): [`Gitlab::UsageCounters::PodLogs`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_counters/pod_logs.rb)
+
+#### Redis HLL Counters
+
+`Gitlab::Redis::HLL` provides data structures that we can use to count unique values.
+
+Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PFCOUNT](https://redis.io/commands/pfcount).
+
+Recommendations:
+
+- The key should expire in 29 days.
+- If possible, data granularity should be a week. For example, a key could be composed from the metric's name and the week of the year, `2020-33-{metric_name}`.
+- Use a [feature flag](../../operations/feature_flags.md) to control the impact when adding new metrics.
+
+Examples of implementation:
+
+- [`Gitlab::UsageDataCounters::TrackUniqueActions`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/track_unique_actions.rb)
+- [`Gitlab::Analytics::UniqueVisits`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/analytics/unique_visits.rb)
+
Example of usage:
```ruby
+# Redis Counters
redis_usage_data(Gitlab::UsageDataCounters::WikiPageCounter)
redis_usage_data { ::Gitlab::UsageCounters::PodLogs.usage_totals[:total] }
-```
-Note that Redis counters are in the [process of being deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/216330) and you should instead try to use Snowplow events instead. We're in the process of building [self-managed event tracking](https://gitlab.com/gitlab-org/telemetry/-/issues/373) and once this is available, we will convert all Redis counters into Snowplow events.
+# Redis HLL counter
+counter = Gitlab::UsageDataCounters::TrackUniqueActions
+redis_usage_data do
+ counter.count_unique_events(
+ event_action: Gitlab::UsageDataCounters::TrackUniqueActions::PUSH_ACTION,
+ date_from: time_period[:created_at].first,
+ date_to: time_period[:created_at].last
+ )
+end
+```
### Alternative Counters
@@ -313,14 +348,16 @@ In order to have an understanding of the query's execution we add in the MR desc
We also use `#database-lab` and [explain.depesz.com](https://explain.depesz.com/). For more details, see the [database review guide](../database_review.md#preparation-when-adding-or-modifying-queries).
-Examples of query optimization work:
+#### Optimization recommendations and examples
-- [Example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26445)
-- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26871)
+- Use specialized indexes [example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26871), [example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26445).
+- Use defined `start` and `finish`, and simple queries; these values can be memoized and reused, as sketched after this list ([example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37155)).
+- Avoid joins and write the queries as simply as possible, [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36316).
+- Set a custom `batch_size` for `distinct_count`, [example](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38000).
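+
+For illustration, a minimal sketch of the memoized `start`/`finish` recommendation above (the relation is only an example):
+
+```ruby
+# Memoize simple min/max ids once and reuse them across related counters.
+issue_minimum_id = ::Issue.minimum(:id)
+issue_maximum_id = ::Issue.maximum(:id)
+
+count(::Issue, start: issue_minimum_id, finish: issue_maximum_id)
+```
+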
### 4. Add the metric definition
-When adding, changing, or updating metrics, please update the [Usage Statistics definition table](#usage-statistics-definitions).
+When adding, changing, or updating metrics, please update the [Event Dictionary's **Usage Ping** table](event_dictionary.md).
### 5. Add new metric to Versions Application
@@ -340,6 +377,10 @@ Ensure you comply with the [Changelog entries guide](../changelog.md).
On GitLab.com, we have DangerBot set up to monitor Telemetry-related files, and DangerBot will recommend a Telemetry review. Mention `@gitlab-org/growth/telemetry/engineers` in your MR for a review.
+### 9. Verify your metric
+
+On GitLab.com, the Product Analytics team regularly monitors Usage Ping. They may alert you that your metrics need further optimization to run more quickly and with greater success. You can also use the [Usage Ping QA dashboard](https://app.periscopedata.com/app/gitlab/632033/Usage-Ping-QA) to check how well your metric performs. The dashboard allows filtering by GitLab version and by "Self-managed" or "SaaS", and shows how many failures have occurred for each metric. Whenever you notice a high failure rate, you can re-optimize your metric.
+
### Optional: Test Prometheus based Usage Ping
If the data submitted includes metrics [queried from Prometheus](#prometheus-queries) that you would like to inspect and verify,
@@ -386,358 +427,6 @@ with any of the other services that are running. That is not how node metrics ar
always runs as a process alongside other GitLab components on any given node. From Usage Ping's perspective none of the node data would therefore
appear to be associated to any of the services running, since they all appear to be running on different hosts. To alleviate this problem, the `node_exporter` in GCK was arbitrarily "assigned" to the `web` service, meaning only for this service `node_*` metrics will appear in Usage Ping.
-## Usage Statistics definitions
-
-| Statistic | Section | Stage | Tier | Edition | Description |
-| --------------------------------------------------------- | ------------------------------------ | ------------- | ---------------- | ------- | -------------------------------------------------------------------------- |
-| `uuid` | | | | | |
-| `hostname` | | | | | |
-| `version` | | | | | |
-| `installation_type` | | | | | |
-| `active_user_count` | | | | | |
-| `recorded_at` | | | | | |
-| `recording_ce_finished_at` | | | | CE+EE | When the core features were computed |
-| `recording_ee_finished_at` | | | | EE | When the EE-specific features were computed |
-| `edition` | | | | | |
-| `license_md5` | | | | | |
-| `license_id` | | | | | |
-| `historical_max_users` | | | | | |
-| `Name` | `licensee` | | | | |
-| `Email` | `licensee` | | | | |
-| `Company` | `licensee` | | | | |
-| `license_user_count` | | | | | |
-| `license_starts_at` | | | | | |
-| `license_expires_at` | | | | | |
-| `license_plan` | | | | | |
-| `license_trial` | | | | | |
-| `assignee_lists` | `counts` | | | | |
-| `boards` | `counts` | | | | |
-| `ci_builds` | `counts` | `verify` | | | Unique builds in project |
-| `ci_internal_pipelines` | `counts` | `verify` | | | Total pipelines in GitLab repositories |
-| `ci_external_pipelines` | `counts` | `verify` | | | Total pipelines in external repositories |
-| `ci_pipeline_config_auto_devops` | `counts` | `verify` | | | Total pipelines from an Auto DevOps template |
-| `ci_pipeline_config_repository` | `counts` | `verify` | | | Total Pipelines from templates in repository |
-| `ci_runners` | `counts` | `verify` | | | Total configured Runners in project |
-| `ci_triggers` | `counts` | `verify` | | | Total configured Triggers in project |
-| `ci_pipeline_schedules` | `counts` | `verify` | | | Pipeline schedules in GitLab |
-| `auto_devops_enabled` | `counts` | `configure` | | | Projects with Auto DevOps template enabled |
-| `auto_devops_disabled` | `counts` | `configure` | | | Projects with Auto DevOps template disabled |
-| `deploy_keys` | `counts` | | | | |
-| `deployments` | `counts` | `release` | | | Total deployments |
-| `deployments` | `counts_monthly` | `release` | | | Total deployments last 28 days |
-| `dast_jobs` | `counts` | | | | |
-| `successful_deployments` | `counts` | `release` | | | Total successful deployments |
-| `successful_deployments` | `counts_monthly` | `release` | | | Total successful deployments last 28 days |
-| `failed_deployments` | `counts` | `release` | | | Total failed deployments |
-| `failed_deployments` | `counts_monthly` | `release` | | | Total failed deployments last 28 days |
-| `environments` | `counts` | `release` | | | Total available and stopped environments |
-| `clusters` | `counts` | `configure` | | | Total GitLab Managed clusters both enabled and disabled |
-| `clusters_enabled` | `counts` | `configure` | | | Total GitLab Managed clusters currently enabled |
-| `project_clusters_enabled` | `counts` | `configure` | | | Total GitLab Managed clusters attached to projects |
-| `group_clusters_enabled` | `counts` | `configure` | | | Total GitLab Managed clusters attached to groups |
-| `instance_clusters_enabled` | `counts` | `configure` | | | Total GitLab Managed clusters attached to the instance |
-| `clusters_disabled` | `counts` | `configure` | | | Total GitLab Managed disabled clusters |
-| `project_clusters_disabled` | `counts` | `configure` | | | Total GitLab Managed disabled clusters previously attached to projects |
-| `group_clusters_disabled` | `counts` | `configure` | | | Total GitLab Managed disabled clusters previously attached to groups |
-| `instance_clusters_disabled` | `counts` | `configure` | | | Total GitLab Managed disabled clusters previously attached to the instance |
-| `clusters_platforms_eks` | `counts` | `configure` | | | Total GitLab Managed clusters provisioned with GitLab on AWS EKS |
-| `clusters_platforms_gke` | `counts` | `configure` | | | Total GitLab Managed clusters provisioned with GitLab on GCE GKE |
-| `clusters_platforms_user` | `counts` | `configure` | | | Total GitLab Managed clusters that are user provisioned |
-| `clusters_applications_helm` | `counts` | `configure` | | | Total GitLab Managed clusters with Helm enabled |
-| `clusters_applications_ingress` | `counts` | `configure` | | | Total GitLab Managed clusters with Ingress enabled |
-| `clusters_applications_cert_managers` | `counts` | `configure` | | | Total GitLab Managed clusters with Cert Manager enabled |
-| `clusters_applications_crossplane` | `counts` | `configure` | | | Total GitLab Managed clusters with Crossplane enabled |
-| `clusters_applications_prometheus` | `counts` | `configure` | | | Total GitLab Managed clusters with Prometheus enabled |
-| `clusters_applications_runner` | `counts` | `configure` | | | Total GitLab Managed clusters with Runner enabled |
-| `clusters_applications_knative` | `counts` | `configure` | | | Total GitLab Managed clusters with Knative enabled |
-| `clusters_applications_elastic_stack` | `counts` | `configure` | | | Total GitLab Managed clusters with Elastic Stack enabled |
-| `clusters_applications_cilium` | `counts` | `configure` | | | Total GitLab Managed clusters with Cilium enabled |
-| `clusters_management_project` | `counts` | `configure` | | | Total GitLab Managed clusters with defined cluster management project |
-| `in_review_folder` | `counts` | | | | |
-| `grafana_integrated_projects` | `counts` | | | | |
-| `groups` | `counts` | | | | |
-| `issues` | `counts` | | | | |
-| `issues_created_from_gitlab_error_tracking_ui` | `counts` | `monitor` | | | |
-| `issues_with_associated_zoom_link` | `counts` | `monitor` | | | |
-| `issues_using_zoom_quick_actions` | `counts` | `monitor` | | | |
-| `issues_with_embedded_grafana_charts_approx` | `counts` | `monitor` | | | |
-| `issues_with_health_status` | `counts` | | | | |
-| `keys` | `counts` | | | | |
-| `label_lists` | `counts` | | | | |
-| `lfs_objects` | `counts` | | | | |
-| `milestone_lists` | `counts` | | | | |
-| `milestones` | `counts` | | | | |
-| `pages_domains` | `counts` | `release` | | | Total GitLab Pages domains |
-| `pool_repositories` | `counts` | | | | |
-| `projects` | `counts` | | | | |
-| `projects_imported_from_github` | `counts` | | | | |
-| `projects_with_repositories_enabled` | `counts` | | | | |
-| `projects_with_error_tracking_enabled` | `counts` | `monitor` | | | |
-| `protected_branches` | `counts` | | | | |
-| `releases` | `counts` | `release` | | | Unique release tags |
-| `remote_mirrors` | `counts` | | | | |
-| `requirements_created` | `counts` | | | | |
-| `snippets` | `counts` | 'create' | | CE+EE | |
-| `snippets` | `counts_monthly` | 'create' | | CE+EE | |
-| `personal_snippets` | `counts` | 'create' | | CE+EE | |
-| `personal_snippets` | `counts_monthly` | 'create' | | CE+EE | |
-| `project_snippets` | `counts` | 'create' | | CE+EE | |
-| `project_snippets` | `counts_monthly` | 'create' | | CE+EE | |
-| `suggestions` | `counts` | | | | |
-| `todos` | `counts` | | | | |
-| `uploads` | `counts` | | | | |
-| `web_hooks` | `counts` | | | | |
-| `projects_alerts_active` | `counts` | | | | |
-| `projects_asana_active` | `counts` | | | | |
-| `projects_assembla_active` | `counts` | | | | |
-| `projects_bamboo_active` | `counts` | | | | |
-| `projects_bugzilla_active` | `counts` | | | | |
-| `projects_buildkite_active` | `counts` | | | | |
-| `projects_campfire_active` | `counts` | | | | |
-| `projects_custom_issue_tracker_active` | `counts` | | | | |
-| `projects_discord_active` | `counts` | | | | |
-| `projects_drone_ci_active` | `counts` | | | | |
-| `projects_emails_on_push_active` | `counts` | | | | |
-| `projects_external_wiki_active` | `counts` | | | | |
-| `projects_flowdock_active` | `counts` | | | | |
-| `projects_github_active` | `counts` | | | | |
-| `projects_hangouts_chat_active` | `counts` | | | | |
-| `projects_hipchat_active` | `counts` | | | | |
-| `projects_irker_active` | `counts` | | | | |
-| `projects_jenkins_active` | `counts` | | | | |
-| `projects_jira_active` | `counts` | | | | |
-| `projects_mattermost_active` | `counts` | | | | |
-| `projects_mattermost_slash_commands_active` | `counts` | | | | |
-| `projects_microsoft_teams_active` | `counts` | | | | |
-| `projects_packagist_active` | `counts` | | | | |
-| `projects_pipelines_email_active` | `counts` | | | | |
-| `projects_pivotaltracker_active` | `counts` | | | | |
-| `projects_prometheus_active` | `counts` | | | | |
-| `projects_pushover_active` | `counts` | | | | |
-| `projects_redmine_active` | `counts` | | | | |
-| `projects_slack_active` | `counts` | | | | |
-| `projects_slack_slash_commands_active` | `counts` | | | | |
-| `projects_teamcity_active` | `counts` | | | | |
-| `projects_unify_circuit_active` | `counts` | | | | |
-| `projects_webex_teams_active` | `counts` | | | | |
-| `projects_youtrack_active` | `counts` | | | | |
-| `projects_jira_server_active` | `counts` | | | | |
-| `projects_jira_cloud_active` | `counts` | | | | |
-| `projects_jira_dvcs_cloud_active` | `counts` | | | | |
-| `projects_jira_dvcs_server_active` | `counts` | | | | |
-| `projects_jira_issuelist_active` | `counts` | `create` | | EE | Total Jira Issue feature enabled |
-| `labels` | `counts` | | | | |
-| `merge_requests` | `counts` | | | | |
-| `merge_requests_users` | `counts` | | | | |
-| `notes` | `counts` | | | | |
-| `wiki_pages_create` | `counts` | | | | |
-| `wiki_pages_update` | `counts` | | | | |
-| `wiki_pages_delete` | `counts` | | | | |
-| `web_ide_commits` | `counts` | | | | |
-| `web_ide_views` | `counts` | | | | |
-| `web_ide_merge_requests` | `counts` | | | | |
-| `web_ide_previews` | `counts` | | | | |
-| `snippet_comment` | `counts` | | | | |
-| `commit_comment` | `counts` | | | | |
-| `merge_request_comment` | `counts` | | | | |
-| `snippet_create` | `counts` | | | | |
-| `snippet_update` | `counts` | | | | |
-| `navbar_searches` | `counts` | | | | |
-| `cycle_analytics_views` | `counts` | | | | |
-| `productivity_analytics_views` | `counts` | | | | |
-| `source_code_pushes` | `counts` | | | | |
-| `merge_request_create` | `counts` | | | | |
-| `design_management_designs_create` | `counts` | | | | |
-| `design_management_designs_update` | `counts` | | | | |
-| `design_management_designs_delete` | `counts` | | | | |
-| `licenses_list_views` | `counts` | | | | |
-| `user_preferences_group_overview_details` | `counts` | | | | |
-| `user_preferences_group_overview_security_dashboard` | `counts` | | | | |
-| `ingress_modsecurity_logging` | `counts` | | | | |
-| `ingress_modsecurity_blocking` | `counts` | | | | |
-| `ingress_modsecurity_disabled` | `counts` | | | | |
-| `ingress_modsecurity_not_installed` | `counts` | | | | |
-| `dependency_list_usages_total` | `counts` | | | | |
-| `epics` | `counts` | | | | |
-| `feature_flags` | `counts` | | | | |
-| `geo_nodes` | `counts` | `geo` | | | Number of sites in a Geo deployment |
-| `geo_event_log_max_id` | `counts` | `geo` | | | Number of replication events on a Geo primary |
-| `incident_issues` | `counts` | `monitor` | | | Issues created by the alert bot |
-| `alert_bot_incident_issues` | `counts` | `monitor` | | | Issues created by the alert bot |
-| `incident_labeled_issues` | `counts` | `monitor` | | | Issues with the incident label |
-| `issues_created_gitlab_alerts` | `counts` | `monitor` | | | Issues created from alerts by non-alert bot users |
-| `issues_created_manually_from_alerts` | `counts` | `monitor` | | | Issues created from alerts by non-alert bot users |
-| `issues_created_from_alerts` | `counts` | `monitor` | | | Issues created from Prometheus and alert management alerts |
-| `ldap_group_links` | `counts` | | | | |
-| `ldap_keys` | `counts` | | | | |
-| `ldap_users` | `counts` | | | | |
-| `pod_logs_usages_total` | `counts` | | | | |
-| `projects_enforcing_code_owner_approval` | `counts` | | | | |
-| `projects_mirrored_with_pipelines_enabled` | `counts` | `release` | | | Projects with repository mirroring enabled |
-| `projects_reporting_ci_cd_back_to_github` | `counts` | `verify` | | | Projects with a GitHub service pipeline enabled |
-| `projects_with_packages` | `counts` | `package` | | | Projects with package registry configured |
-| `projects_with_prometheus_alerts` | `counts` | `monitor` | | | Projects with Prometheus alerting enabled |
-| `projects_with_tracing_enabled` | `counts` | `monitor` | | | Projects with tracing enabled |
-| `projects_with_alerts_service_enabled` | `counts` | `monitor` | | | Projects with alerting service enabled |
-| `template_repositories` | `counts` | | | | |
-| `container_scanning_jobs` | `counts` | | | | |
-| `dependency_scanning_jobs` | `counts` | | | | |
-| `license_management_jobs` | `counts` | | | | |
-| `sast_jobs` | `counts` | | | | |
-| `status_page_projects` | `counts` | `monitor` | | | Projects with status page enabled |
-| `status_page_issues` | `counts` | `monitor` | | | Issues published to a Status Page |
-| `status_page_incident_publishes` | `counts` | `monitor` | | | Cumulative count of usages of publish operation |
-| `status_page_incident_unpublishes` | `counts` | `monitor` | | | Cumulative count of usages of unpublish operation |
-| `epics_deepest_relationship_level` | `counts` | | | | |
-| `operations_dashboard_default_dashboard` | `counts` | `monitor` | | | Active users with enabled operations dashboard |
-| `operations_dashboard_users_with_projects_added` | `counts` | `monitor` | | | Active users with projects on operations dashboard |
-| `container_registry_enabled` | | | | | |
-| `dependency_proxy_enabled` | | | | | |
-| `gitlab_shared_runners_enabled` | | | | | |
-| `gravatar_enabled` | | | | | |
-| `ldap_enabled` | | | | | |
-| `mattermost_enabled` | | | | | |
-| `omniauth_enabled` | | | | | |
-| `prometheus_enabled` | | | | | Whether the bundled Prometheus is enabled |
-| `prometheus_metrics_enabled` | | | | | |
-| `reply_by_email_enabled` | | | | | |
-| `average` | `avg_cycle_analytics - code` | | | | |
-| `sd` | `avg_cycle_analytics - code` | | | | |
-| `missing` | `avg_cycle_analytics - code` | | | | |
-| `average` | `avg_cycle_analytics - test` | | | | |
-| `sd` | `avg_cycle_analytics - test` | | | | |
-| `missing` | `avg_cycle_analytics - test` | | | | |
-| `average` | `avg_cycle_analytics - review` | | | | |
-| `sd` | `avg_cycle_analytics - review` | | | | |
-| `missing` | `avg_cycle_analytics - review` | | | | |
-| `average` | `avg_cycle_analytics - staging` | | | | |
-| `sd` | `avg_cycle_analytics - staging` | | | | |
-| `missing` | `avg_cycle_analytics - staging` | | | | |
-| `average` | `avg_cycle_analytics - production` | | | | |
-| `sd` | `avg_cycle_analytics - production` | | | | |
-| `missing` | `avg_cycle_analytics - production` | | | | |
-| `total` | `avg_cycle_analytics` | | | | |
-| `g_analytics_contribution` | `analytics_unique_visits` | `manage` | | | Visits to /groups/:group/-/contribution_analytics |
-| `g_analytics_insights` | `analytics_unique_visits` | `manage` | | | Visits to /groups/:group/-/insights |
-| `g_analytics_issues` | `analytics_unique_visits` | `manage` | | | Visits to /groups/:group/-/issues_analytics |
-| `g_analytics_productivity` | `analytics_unique_visits` | `manage` | | | Visits to /groups/:group/-/analytics/productivity_analytics |
-| `g_analytics_valuestream` | `analytics_unique_visits` | `manage` | | | Visits to /groups/:group/-/analytics/value_stream_analytics |
-| `p_analytics_pipelines` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/pipelines/charts |
-| `p_analytics_code_reviews` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/-/analytics/code_reviews |
-| `p_analytics_valuestream` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/-/value_stream_analytics |
-| `p_analytics_insights` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/insights |
-| `p_analytics_issues` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/-/analytics/issues_analytics |
-| `p_analytics_repo` | `analytics_unique_visits` | `manage` | | | Visits to /:group/:project/-/graphs/master/charts |
-| `u_analytics_todos` | `analytics_unique_visits` | `manage` | | | Visits to /dashboard/todos |
-| `i_analytics_cohorts` | `analytics_unique_visits` | `manage` | | | Visits to /-/instance_statistics/cohorts |
-| `i_analytics_dev_ops_score` | `analytics_unique_visits` | `manage` | | | Visits to /-/instance_statistics/dev_ops_score |
-| `analytics_unique_visits_for_any_target` | `analytics_unique_visits` | `manage` | | | Visits to any of the pages listed above |
-| `clusters_applications_cert_managers` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with certificate managers enabled |
-| `clusters_applications_helm` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with Helm enabled |
-| `clusters_applications_ingress` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with Ingress enabled |
-| `clusters_applications_knative` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with Knative enabled |
-| `clusters_management_project` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with project management enabled |
-| `clusters_disabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Total non-"GitLab Managed clusters" |
-| `clusters_enabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Total GitLab Managed clusters |
-| `clusters_platforms_gke` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with Google Cloud installed |
-| `clusters_platforms_eks` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters with AWS installed |
-| `clusters_platforms_user` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters that are user provided |
-| `instance_clusters_disabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters disabled on instance |
-| `instance_clusters_enabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters enabled on instance |
-| `group_clusters_disabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters disabled on group |
-| `group_clusters_enabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters enabled on group |
-| `project_clusters_disabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters disabled on project |
-| `project_clusters_enabled` | `usage_activity_by_stage` | `configure` | | CE+EE | Unique clusters enabled on project |
-| `projects_slack_notifications_active` | `usage_activity_by_stage` | `configure` | | EE | Unique projects with Slack service enabled |
-| `projects_slack_slash_active` | `usage_activity_by_stage` | `configure` | | EE | Unique projects with Slack '/' commands enabled |
-| `projects_with_prometheus_alerts` | `usage_activity_by_stage` | `configure` | | EE | Projects with Prometheus enabled and no alerts |
-| `deploy_keys` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `keys` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `merge_requests` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `projects_with_disable_overriding_approvers_per_merge_request` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `projects_without_disable_overriding_approvers_per_merge_request` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `remote_mirrors` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `snippets` | `usage_activity_by_stage` | `create` | | CE+EE | |
-| `merge_requests_users` | `usage_activity_by_stage_monthly` | `create` | | CE+EE | Unique count of users who used a merge request |
-| `action_monthly_active_users_project_repo` | `usage_activity_by_stage_monthly` | `create` | | CE+EE | Unique count of users who pushed to a project repo |
-| `action_monthly_active_users_design_management` | `usage_activity_by_stage_monthly` | `create` | | CE+EE | Unique count of users who interacted with the design system management |
-| `action_monthly_active_users_wiki_repo` | `usage_activity_by_stage_monthly` | `create` | | CE+EE | Unique count of users who created or updated a wiki repo |
-| `projects_enforcing_code_owner_approval` | `usage_activity_by_stage` | `create` | | EE | |
-| `merge_requests_with_optional_codeowners` | `usage_activity_by_stage` | `create` | | EE | |
-| `merge_requests_with_required_codeowners` | `usage_activity_by_stage` | `create` | | EE | |
-| `projects_imported_from_github` | `usage_activity_by_stage` | `create` | | EE | |
-| `projects_with_repositories_enabled` | `usage_activity_by_stage` | `create` | | EE | |
-| `protected_branches` | `usage_activity_by_stage` | `create` | | EE | |
-| `suggestions` | `usage_activity_by_stage` | `create` | | EE | |
-| `approval_project_rules` | `usage_activity_by_stage` | `create` | | EE | Number of project approval rules |
-| `approval_project_rules_with_target_branch` | `usage_activity_by_stage` | `create` | | EE | Number of project approval rules with not default target branch |
-| `merge_requests_with_added_rules` | `usage_activity_by_stage` | `create` | | EE | Merge Requests with added rules |
-| `clusters` | `usage_activity_by_stage` | `monitor` | | CE+EE | |
-| `clusters_applications_prometheus` | `usage_activity_by_stage` | `monitor` | | CE+EE | |
-| `operations_dashboard_default_dashboard` | `usage_activity_by_stage` | `monitor` | | CE+EE | |
-| `operations_dashboard_users_with_projects_added` | `usage_activity_by_stage` | `monitor` | | EE | |
-| `projects_prometheus_active` | `usage_activity_by_stage` | `monitor` | | EE | |
-| `projects_with_error_tracking_enabled` | `usage_activity_by_stage` | `monitor` | | EE | |
-| `projects_with_tracing_enabled` | `usage_activity_by_stage` | `monitor` | | EE | |
-| `events` | `usage_activity_by_stage` | `manage` | | CE+EE | |
-| `groups` | `usage_activity_by_stage` | `manage` | | CE+EE | |
-| `users_created_at` | `usage_activity_by_stage` | `manage` | | CE+EE | |
-| `omniauth_providers` | `usage_activity_by_stage` | `manage` | | CE+EE | |
-| `ldap_keys` | `usage_activity_by_stage` | `manage` | | EE | |
-| `ldap_users` | `usage_activity_by_stage` | `manage` | | EE | |
-| `value_stream_management_customized_group_stages` | `usage_activity_by_stage` | `manage` | | EE | |
-| `projects_with_compliance_framework` | `usage_activity_by_stage` | `manage` | | EE | |
-| `ldap_servers` | `usage_activity_by_stage` | `manage` | | EE | |
-| `ldap_group_sync_enabled` | `usage_activity_by_stage` | `manage` | | EE | |
-| `ldap_admin_sync_enabled` | `usage_activity_by_stage` | `manage` | | EE | |
-| `group_saml_enabled` | `usage_activity_by_stage` | `manage` | | EE | |
-| `issues` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `notes` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `projects` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `todos` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `assignee_lists` | `usage_activity_by_stage` | `plan` | | EE | |
-| `epics` | `usage_activity_by_stage` | `plan` | | EE | |
-| `label_lists` | `usage_activity_by_stage` | `plan` | | EE | |
-| `milestone_lists` | `usage_activity_by_stage` | `plan` | | EE | |
-| `projects_jira_active` | `usage_activity_by_stage` | `plan` | | EE | |
-| `projects_jira_dvcs_server_active` | `usage_activity_by_stage` | `plan` | | EE | |
-| `projects_jira_dvcs_server_active` | `usage_activity_by_stage` | `plan` | | EE | |
-| `service_desk_enabled_projects` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `service_desk_issues` | `usage_activity_by_stage` | `plan` | | CE+EE | |
-| `deployments` | `usage_activity_by_stage` | `release` | | CE+EE | Total deployments |
-| `failed_deployments` | `usage_activity_by_stage` | `release` | | CE+EE | Total failed deployments |
-| `projects_mirrored_with_pipelines_enabled` | `usage_activity_by_stage` | `release` | | EE | Projects with repository mirroring enabled |
-| `releases` | `usage_activity_by_stage` | `release` | | CE+EE | Unique release tags in project |
-| `successful_deployments` | `usage_activity_by_stage` | `release` | | CE+EE | Total successful deployments |
-| `user_preferences_group_overview_security_dashboard` | `usage_activity_by_stage` | `secure` | | | |
-| `ci_builds` | `usage_activity_by_stage` | `verify` | | CE+EE | Unique builds in project |
-| `ci_external_pipelines` | `usage_activity_by_stage` | `verify` | | CE+EE | Total pipelines in external repositories |
-| `ci_internal_pipelines` | `usage_activity_by_stage` | `verify` | | CE+EE | Total pipelines in GitLab repositories |
-| `ci_pipeline_config_auto_devops` | `usage_activity_by_stage` | `verify` | | CE+EE | Total pipelines from an Auto DevOps template |
-| `ci_pipeline_config_repository` | `usage_activity_by_stage` | `verify` | | CE+EE | Pipelines from templates in repository |
-| `ci_pipeline_schedules` | `usage_activity_by_stage` | `verify` | | CE+EE | Pipeline schedules in GitLab |
-| `ci_pipelines` | `usage_activity_by_stage` | `verify` | | CE+EE | Total pipelines |
-| `ci_triggers` | `usage_activity_by_stage` | `verify` | | CE+EE | Triggers enabled |
-| `clusters_applications_runner` | `usage_activity_by_stage` | `verify` | | CE+EE | Unique clusters with Runner enabled |
-| `projects_reporting_ci_cd_back_to_github` | `usage_activity_by_stage` | `verify` | | EE | Unique projects with a GitHub pipeline enabled |
-| `merge_requests_users` | `usage_activity_by_stage_monthly` | `create` | | | Unique count of users who used a merge request |
-| `duration_s` | `topology` | `enablement` | | | Time it took to collect topology data |
-| `application_requests_per_hour` | `topology` | `enablement` | | | Number of requests to the web application per hour |
-| `failures` | `topology` | `enablement` | | | Contains information about failed queries |
-| `nodes` | `topology` | `enablement` | | | The list of server nodes on which GitLab components are running |
-| `node_memory_total_bytes` | `topology > nodes` | `enablement` | | | The total available memory of this node |
-| `node_cpus` | `topology > nodes` | `enablement` | | | The number of CPU cores of this node |
-| `node_uname_info` | `topology > nodes` | `enablement` | | | The basic hardware architecture and OS release information on this node |
-| `node_services` | `topology > nodes` | `enablement` | | | The list of GitLab services running on this node |
-| `name` | `topology > nodes > node_services` | `enablement` | | | The name of the GitLab service running on this node |
-| `process_count` | `topology > nodes > node_services` | `enablement` | | | The number of processes running for this service |
-| `process_memory_rss` | `topology > nodes > node_services` | `enablement` | | | The average Resident Set Size of a service process |
-| `process_memory_uss` | `topology > nodes > node_services` | `enablement` | | | The average Unique Set Size of a service process |
-| `process_memory_pss` | `topology > nodes > node_services` | `enablement` | | | The average Proportional Set Size of a service process |
-| `server` | `topology > nodes > node_services` | `enablement` | | | The type of web server used (Unicorn or Puma) |
-| `network_policy_forwards` | `counts` | `defend` | | EE | Cumulative count of forwarded packets by Container Network |
-| `network_policy_drops` | `counts` | `defend` | | EE | Cumulative count of dropped packets by Container Network |
-
## Example Usage Ping payload
The following is example content of the Usage Ping payload.
@@ -811,13 +500,14 @@ The following is example content of the Usage Ping payload.
"enabled": true,
"version": "1.17.0"
},
+ "container_registry_server": {
+ "vendor": "gitlab",
+ "version": "2.9.1-gitlab"
+ },
"database": {
"adapter": "postgresql",
"version": "9.6.15"
},
- "app_server": {
- "type": "console"
- },
"avg_cycle_analytics": {
"issue": {
"average": 999,
@@ -941,7 +631,9 @@ The following is example content of the Usage Ping payload.
"nodes": [
{
"node_memory_total_bytes": 33269903360,
+ "node_memory_utilization": 0.35,
"node_cpus": 16,
+ "node_cpu_utilization": 0.2,
"node_uname_info": {
"machine": "x86_64",
"sysname": "Linux",
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 4e46e691405..b60a26c29b5 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -57,7 +57,7 @@ bundle exec guard
When using spring and guard together, use `SPRING=1 bundle exec guard` instead to make use of spring.
-Use [Factory Doctor](https://test-prof.evilmartians.io/#/factory_doctor.md) to find cases on un-necessary database manipulation, which can cause slow tests.
+Use [Factory Doctor](https://test-prof.evilmartians.io/#/profilers/factory_doctor) to find cases of unnecessary database manipulation, which can cause slow tests.
```shell
# run test for path
@@ -261,8 +261,8 @@ As much as possible, do not implement this using `before(:all)` or `before(:cont
you would need to manually clean up the data as those hooks run outside a database transaction.
Instead, this can be achieved by using
-[`let_it_be`](https://test-prof.evilmartians.io/#/let_it_be) variables and the
-[`before_all`](https://test-prof.evilmartians.io/#/before_all) hook
+[`let_it_be`](https://test-prof.evilmartians.io/#/recipes/let_it_be) variables and the
+[`before_all`](https://test-prof.evilmartians.io/#/recipes/before_all) hook
from the [`test-prof` gem](https://rubygems.org/gems/test-prof).
```ruby
@@ -315,109 +315,7 @@ end
### Feature flags in tests
-All feature flags are stubbed to be enabled by default in our Ruby-based
-tests.
-
-To disable a feature flag in a test, use the `stub_feature_flags`
-helper. For example, to globally disable the `ci_live_trace` feature
-flag in a test:
-
-```ruby
-stub_feature_flags(ci_live_trace: false)
-
-Feature.enabled?(:ci_live_trace) # => false
-```
-
-If you wish to set up a test where a feature flag is enabled only
-for some actors and not others, you can specify this in options
-passed to the helper. For example, to enable the `ci_live_trace`
-feature flag for a specific project:
-
-```ruby
-project1, project2 = build_list(:project, 2)
-
-# Feature will only be enabled for project1
-stub_feature_flags(ci_live_trace: project1)
-
-Feature.enabled?(:ci_live_trace) # => false
-Feature.enabled?(:ci_live_trace, project1) # => true
-Feature.enabled?(:ci_live_trace, project2) # => false
-```
-
-This represents an actual behavior of FlipperGate:
-
-1. You can enable an override for a specified actor to be enabled
-1. You can disable (remove) an override for a specified actor,
- falling back to default state
-1. There's no way to model that you explicitly disable a specified actor
-
-```ruby
-Feature.enable(:my_feature)
-Feature.disable(:my_feature, project1)
-Feature.enabled?(:my_feature) # => true
-Feature.enabled?(:my_feature, project1) # => true
-```
-
-```ruby
-Feature.disable(:my_feature2)
-Feature.enable(:my_feature2, project1)
-Feature.enabled?(:my_feature2) # => false
-Feature.enabled?(:my_feature2, project1) # => true
-```
-
-#### `stub_feature_flags` vs `Feature.enable*`
-
-It is preferred to use `stub_feature_flags` for enabling feature flags
-in testing environment. This method provides a simple and well described
-interface for a simple use-cases.
-
-However, in some cases a more complex behaviors needs to be tested,
-like a feature flag percentage rollouts. This can be achieved using
-the `.enable_percentage_of_time` and `.enable_percentage_of_actors`
-
-```ruby
-# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined
-stub_feature_flags(my_feature: false)
-stub_feature_flags(my_feature: true)
-stub_feature_flags(my_feature: project)
-stub_feature_flags(my_feature: [project, project2])
-
-# Bad
-Feature.enable(:my_feature_2)
-
-# Good: enable my_feature for 50% of time
-Feature.enable_percentage_of_time(:my_feature_3, 50)
-
-# Good: enable my_feature for 50% of actors/gates/things
-Feature.enable_percentage_of_actors(:my_feature_4, 50)
-```
-
-Each feature flag that has a defined state will be persisted
-for test execution time:
-
-```ruby
-Feature.persisted_names.include?('my_feature') => true
-Feature.persisted_names.include?('my_feature_2') => true
-Feature.persisted_names.include?('my_feature_3') => true
-Feature.persisted_names.include?('my_feature_4') => true
-```
-
-#### Stubbing gate
-
-It is required that a gate that is passed as an argument to `Feature.enabled?`
-and `Feature.disabled?` is an object that includes `FeatureGate`.
-
-In specs you can use a `stub_feature_flag_gate` method that allows you to have
-quickly your custom gate:
-
-```ruby
-gate = stub_feature_flag_gate('CustomActor')
-
-stub_feature_flags(ci_live_trace: gate)
-
-Feature.enabled?(:ci_live_trace) # => false
-Feature.enabled?(:ci_live_trace, gate) # => true
-```
+This section was moved to [developing with feature flags](../feature_flags/development.md).
### Pristine test environments
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 7df3cd614c7..3b193721143 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -55,6 +55,33 @@ Project::Issues::Index.perform do |index|
end
```
+## Prefer `aggregate_failures` when there are back-to-back expectations
+
+In cases where there must be multiple (back-to-back) expectations within a test case, it is preferable to use `aggregate_failures`.
+
+This allows you to group a set of expectations and see all the failures together, rather than having the test abort on the first failure.
+
+For example:
+
+```ruby
+#=> Good
+Page::Search::Results.perform do |search|
+ search.switch_to_code
+
+ aggregate_failures 'testing search results' do
+ expect(search).to have_file_in_project(template[:file_name], project.name)
+ expect(search).to have_file_with_content(template[:file_name], content[0..33])
+ end
+end
+
+#=> Bad
+Page::Search::Results.perform do |search|
+ search.switch_to_code
+ expect(search).to have_file_in_project(template[:file_name], project.name)
+ expect(search).to have_file_with_content(template[:file_name], content[0..33])
+end
+```
+
## Prefer to split tests across multiple files
Our framework includes a couple of parallelization mechanisms that work by executing spec files in parallel.
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index ac051b827d2..f61eab5c8f3 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -178,6 +178,13 @@ Once you decided where to put [test environment orchestration scenarios](https:/
the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md), and [the already existing
instance-level scenarios](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa/qa/specs/features).
+### Consider **not** writing an end-to-end test
+
+We should follow these best practices for end-to-end tests:
+
+- Do not write an end-to-end test if a lower-level feature test exists. End-to-end tests require more work and resources.
+- Troubleshooting end-to-end tests can be more complex because the connections to the application under test are not known.
+
Continued reading:
- [Beginner's Guide](beginners_guide.md)
diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md
index d43d88779c7..6ce44b2d359 100644
--- a/doc/development/testing_guide/end_to_end/page_objects.md
+++ b/doc/development/testing_guide/end_to_end/page_objects.md
@@ -4,22 +4,22 @@ In GitLab QA we are using a known pattern, called _Page Objects_.
This means that we have built an abstraction for all pages in GitLab that we use
to drive GitLab QA scenarios. Whenever we do something on a page, like filling
-in a form, or clicking a button, we do that only through a page object
+in a form or clicking a button, we do that only through a page object
associated with this area of GitLab.
For example, when GitLab QA test harness signs in into GitLab, it needs to fill
-in a user login and user password. In order to do that, we have a class, called
+in a user login and user password. To do that, we have a class called
`Page::Main::Login` and `sign_in_using_credentials` methods, that is the only
-piece of the code, that has knowledge about `user_login` and `user_password`
+piece of code that reads the `user_login` and `user_password`
fields.
## Why do we need that?
-We need page objects, because we need to reduce duplication and avoid problems
+We need page objects to reduce duplication and avoid problems
whenever someone changes some selectors in GitLab's source code.
Imagine that we have a hundred specs in GitLab QA, and we need to sign into
-GitLab each time, before we make assertions. Without a page object one would
+GitLab each time, before we make assertions. Without a page object, one would
need to rely on volatile helpers or invoke Capybara methods directly. Imagine
invoking `fill_in :user_login` in every `*_spec.rb` file / test example.
@@ -28,7 +28,7 @@ this page to `t.text_field :username` it will generate a different field
identifier, what would effectively break all tests.
Because we are using `Page::Main::Login.perform(&:sign_in_using_credentials)`
-everywhere, when we want to sign into GitLab, the page object is the single
+everywhere, when we want to sign in to GitLab, the page object is the single
source of truth, and we will need to update `fill_in :user_login`
to `fill_in :user_username` only in a one place.
@@ -42,23 +42,23 @@ That is why when someone changes `t.text_field :login` to
change until our GitLab QA nightly pipeline fails, or until someone triggers
`package-and-qa` action in their merge request.
-Obviously such a change would break all tests. We call this problem a _fragile
+Such a change would break all tests. We call this problem a _fragile
tests problem_.
-In order to make GitLab QA more reliable and robust, we had to solve this
+To make GitLab QA more reliable and robust, we had to solve this
problem by introducing coupling between GitLab CE / EE views and GitLab QA.
## How did we solve fragile tests problem?
Currently, when you add a new `Page::Base` derived class, you will also need to
-define all selectors that your page objects depends on.
+define all selectors that your page objects depend on.
Whenever you push your code to CE / EE repository, `qa:selectors` sanity test
job is going to be run as a part of a CI pipeline.
This test is going to validate all page objects that we have implemented in
`qa/page` directory. When it fails, you will be notified about missing
-or invalid views / selectors definition.
+or invalid views/selectors definition.
## How to properly implement a page object?
@@ -89,7 +89,7 @@ end
### Defining Elements
-The `view` DSL method will correspond to the rails View, partial, or Vue component that renders the elements.
+The `view` DSL method will correspond to the Rails view, partial, or Vue component that renders the elements.
The `element` DSL method in turn declares an element for which a corresponding
`data-qa-selector=element_name_snaked` data attribute will need to be added to the view file.
@@ -134,7 +134,7 @@ view 'app/views/my/view.html.haml' do
end
```
-To add these elements to the view, you must change the rails View, partial, or Vue component by adding a `data-qa-selector` attribute
+To add these elements to the view, you must change the Rails view, partial, or Vue component by adding a `data-qa-selector` attribute
for each element defined.
In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field"` and `data-qa-selector="sign_in_button"`
@@ -228,7 +228,7 @@ expect(the_page).to have_element(:model, index: 1) #=> select on the first model
### Exceptions
-In some cases it might not be possible or worthwhile to add a selector.
+In some cases, it might not be possible or worthwhile to add a selector.
Some UI components use external libraries, including some maintained by third parties.
Even if a library is maintained by GitLab, the selector sanity test only runs
diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
index 77d820e1686..2cf2bb5b1d0 100644
--- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
+++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
@@ -48,3 +48,89 @@ only to prevent it from running in the pipelines for live environments such as S
If Jenkins Docker container exits without providing any information in the logs, try increasing the memory used by
the Docker Engine.
+
+## Gitaly Cluster tests
+
+The tests tagged `:gitaly_ha` are orchestrated tests that can only be run against a set of Docker containers as configured and started by [the `Test::Integration::GitalyCluster` GitLab QA scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#testintegrationgitalycluster-ceeefull-image-address).
+
+As described in the documentation about the scenario noted above, the following command will run the tests:
+
+```shell
+gitlab-qa Test::Integration::GitalyCluster EE
+```
+
+However, that command removes the containers after the tests finish. If you would like to do further testing, for example, to run a single test via a debugger, you can use [the `--no-tests` option](https://gitlab.com/gitlab-org/gitlab-qa#command-line-options) to make `gitlab-qa` skip running the tests and leave the containers running so that you can continue to use them.
+
+```shell
+gitlab-qa Test::Integration::GitalyCluster EE --no-tests
+```
+
+When all the containers are running, you will see the output of the `docker ps` command, which shows the ports on which the GitLab container can be accessed. For example:
+
+```plaintext
+CONTAINER ID ... PORTS NAMES
+d15d3386a0a8 ... 22/tcp, 443/tcp, 0.0.0.0:32772->80/tcp gitlab-gitaly-ha
+```
+
+That shows that the GitLab instance running in the `gitlab-gitaly-ha` container can be reached via `http://localhost:32772`. However, Git operations like cloning and pushing are performed against the URL shown in the UI as the clone URL. It uses the hostname configured for the GitLab instance, which in this case matches the Docker container name and network, `gitlab-gitaly-ha.test`. Before you can run the tests, you need to configure your computer to access the container via that address. One option is to [use caddyserver as described for running tests against GDK](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/run_qa_against_gdk.md#workarounds).
+
+Another option is to use NGINX.
+
+In both cases, you will need to configure your machine to translate `gitlab-gitaly-ha.test` into an appropriate IP address:
+
+```shell
+echo '127.0.0.1 gitlab-gitaly-ha.test' | sudo tee -a /etc/hosts
+```
+
+Then install NGINX:
+
+```shell
+# on macOS
+brew install nginx
+
+# on Debian/Ubuntu
+apt install nginx
+
+# on Fedora
+yum install nginx
+```
+
+Finally, configure NGINX to pass requests for `gitlab-gitaly-ha.test` to the GitLab instance:
+
+```plaintext
+# On Debian/Ubuntu, in /etc/nginx/sites-enabled/gitlab-cluster
+# On macOS, in /usr/local/etc/nginx/nginx.conf
+
+server {
+ server_name gitlab-gitaly-ha.test;
+ client_max_body_size 500m;
+
+ location / {
+ proxy_pass http://127.0.0.1:32772;
+ proxy_set_header Host gitlab-gitaly-ha.test;
+ }
+}
+```
+
+Restart NGINX for the configuration to take effect. For example:
+
+```shell
+# On Debian/Ubuntu
+sudo systemctl restart nginx
+
+# on macOS
+sudo nginx -s reload
+```
+
+You can then run the tests from the `/qa` directory:
+
+```shell
+CHROME_HEADLESS=false bin/qa Test::Instance::All http://gitlab-gitaly-ha.test -- --tag gitaly_ha
+```
+
+Once you have finished testing, you can stop and remove the Docker containers:
+
+```shell
+docker stop gitlab-gitaly-ha praefect postgres gitaly3 gitaly2 gitaly1
+docker rm gitlab-gitaly-ha praefect postgres gitaly3 gitaly2 gitaly1
+```
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index ef9fd748dbb..42ca65a74f2 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -24,9 +24,8 @@ We have started to migrate frontend tests to the [Jest](https://jestjs.io) testi
Jest tests can be found in `/spec/frontend` and `/ee/spec/frontend` in EE.
-> **Note:**
->
-> Most examples have a Jest and Karma example. See the Karma examples only as explanation to what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the one you should follow.
+NOTE: **Note:**
+Most examples have a Jest and Karma example. See the Karma examples only as an explanation of what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the ones you should follow.
## Karma test suite
@@ -170,22 +169,14 @@ Some more examples can be found in the [Frontend unit tests section](testing_lev
Another common gotcha is that the specs end up verifying the mock is working. If you are using mocks, the mock should support the test, but not be the target of the test.
-**Bad:**
-
```javascript
const spy = jest.spyOn(idGenerator, 'create')
spy.mockImplementation = () = '1234'
+// Bad
expect(idGenerator.create()).toBe('1234')
-```
-
-**Good:**
-
-```javascript
-const spy = jest.spyOn(idGenerator, 'create')
-spy.mockImplementation = () = '1234'
-// Actually focusing on the logic of your component and just leverage the controllable mocks output
+// Good: actually focusing on the logic of your component and just leveraging the controllable mock's output
expect(wrapper.find('div').html()).toBe('<div id="1234">...</div>')
```
@@ -204,29 +195,67 @@ Following you'll find some general common practices you will find as part of our
### How to query DOM elements
-When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target the element. Sometimes this cannot be done feasibly. In these cases, adding test attributes to simplify the selectors might be the best option.
+When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target
+the element.
+
+Preferentially, this is done by targeting text the user actually sees using [DOM Testing Library](https://testing-library.com/docs/dom-testing-library/intro).
+When selecting by text, it is best to use [`getByRole` or `findByRole`](https://testing-library.com/docs/dom-testing-library/api-queries#byrole)
+as these enforce accessibility best practices as well. The examples below demonstrate the order of preference.
-Preferentially, in component testing with `@vue/test-utils`, you should query for child components using the component itself. This helps enforce that specific behavior can be covered by that component's individual unit tests. Otherwise, try to use:
+Sometimes this cannot be done feasibly. In these cases, adding test attributes to simplify the
+selectors might be the best option.
- A semantic attribute like `name` (also verifies that `name` was setup properly)
- A `data-testid` attribute ([recommended by maintainers of `@vue/test-utils`](https://github.com/vuejs/vue-test-utils/issues/1498#issuecomment-610133465))
- a Vue `ref` (if using `@vue/test-utils`)
-Examples:
-
```javascript
+import { mount, shallowMount } from '@vue/test-utils'
+import { getByRole, getByText } from '@testing-library/dom'
+
+let wrapper
+let el
+
+const createComponent = (mountFn = shallowMount) => {
+ wrapper = mountFn(Component)
+ el = wrapper.vm.$el // reference to the container element
+}
+
+beforeEach(() => {
+ createComponent()
+})
+
+
+it('exists', () => {
+ // Best
+
+ // NOTE: both mount and shallowMount work as long as a DOM element is available
+  // Finds a properly formatted link with an accessible name of "Click Me"
+ getByRole(el, 'link', { name: /Click Me/i })
+ getByRole(el, 'link', { name: 'Click Me' })
+ // Finds any element with the text "Click Me"
+ getByText(el, 'Click Me')
+ // Regex is also available
+ getByText(el, /Click Me/i)
+
+ // Good
+ wrapper.find('input[name=foo]');
+ wrapper.find('[data-testid="foo"]');
+ wrapper.find({ ref: 'foo'});
+
+ // Bad
+ wrapper.find('.js-foo');
+ wrapper.find('.btn-primary');
+ wrapper.find('.qa-foo-component');
+ wrapper.find('[data-qa-selector="foo"]');
+});
+
+// Good
it('exists', () => {
- // Good
wrapper.find(FooComponent);
wrapper.find('input[name=foo]');
wrapper.find('[data-testid="foo"]');
wrapper.find({ ref: 'foo'});
-
- // Bad
- wrapper.find('.js-foo');
- wrapper.find('.btn-primary');
- wrapper.find('.qa-foo-component');
- wrapper.find('[data-qa-selector="foo"]');
});
```
@@ -234,28 +263,47 @@ It is not recommended that you add `.js-*` classes just for testing purposes. On
Do not use a `.qa-*` class or `data-qa-selector` attribute for any tests other than QA end-to-end testing.
+### Querying for child components
+
+When testing Vue components with `@vue/test-utils`, another possible approach is querying for child
+components instead of querying for DOM nodes. This assumes that implementation details of the behavior
+under test should be covered by that component's individual unit tests. There is no strong preference
+between DOM and component queries, as long as your tests reliably cover the expected behavior of the
+component under test.
+
+Example:
+
+```javascript
+it('exists', () => {
+ wrapper.find(FooComponent);
+});
+```
+
### Naming unit tests
When writing describe test blocks to test specific functions/methods,
please use the method name as the describe block name.
+**Bad**:
+
```javascript
-// Good
-describe('methodName', () => {
+describe('#methodName', () => {
it('passes', () => {
expect(true).toEqual(true);
});
});
-// Bad
-describe('#methodName', () => {
+describe('.methodName', () => {
it('passes', () => {
expect(true).toEqual(true);
});
});
+```
-// Bad
-describe('.methodName', () => {
+**Good**:
+
+```javascript
+describe('methodName', () => {
it('passes', () => {
expect(true).toEqual(true);
});
@@ -286,61 +334,67 @@ it('tests a promise rejection', async () => {
You can also work with Promise chains. In this case, you can make use of the `done` callback and `done.fail` in case an error occurred. Following are some examples:
+**Bad**:
+
```javascript
-// Good
+// missing done callback
+it('tests a promise', () => {
+ promise.then(data => {
+ expect(data).toBe(asExpected);
+ });
+});
+
+// missing catch
it('tests a promise', done => {
promise
.then(data => {
expect(data).toBe(asExpected);
})
- .then(done)
- .catch(done.fail);
+ .then(done);
});
-// Good
-it('tests a promise rejection', done => {
+// uses fail instead of done.fail in an asynchronous test
+it('tests a promise', done => {
promise
- .then(done.fail)
- .catch(error => {
- expect(error).toBe(expectedError);
+ .then(data => {
+ expect(data).toBe(asExpected);
})
.then(done)
- .catch(done.fail);
-});
-
-// Bad (missing done callback)
-it('tests a promise', () => {
- promise.then(data => {
- expect(data).toBe(asExpected);
- });
+ .catch(fail);
});
-// Bad (missing catch)
-it('tests a promise', done => {
+// missing catch
+it('tests a promise rejection', done => {
promise
- .then(data => {
- expect(data).toBe(asExpected);
+ .catch(error => {
+ expect(error).toBe(expectedError);
})
.then(done);
});
+```
-// Bad (use done.fail in asynchronous tests)
+**Good**:
+
+```javascript
+// handling success
it('tests a promise', done => {
promise
.then(data => {
expect(data).toBe(asExpected);
})
.then(done)
- .catch(fail);
+ .catch(done.fail);
});
-// Bad (missing catch)
+// failure case
it('tests a promise rejection', done => {
promise
+ .then(done.fail)
.catch(error => {
expect(error).toBe(expectedError);
})
- .then(done);
+ .then(done)
+ .catch(done.fail);
});
```
@@ -564,11 +618,11 @@ Examples:
```javascript
const foo = 1;
-// good
-expect(foo).toBe(1);
-
-// bad
+// Bad
expect(foo).toEqual(1);
+
+// Good
+expect(foo).toBe(1);
```
#### Prefer more befitting matchers
@@ -621,12 +675,11 @@ Jest has the tricky `toBeDefined` matcher that can produce false positive test.
the given value for `undefined` only.
```javascript
-// good
-expect(wrapper.find('foo').exists()).toBe(true);
-
-// bad
-// if finder returns null, the test will pass
+// Bad: if finder returns null, the test will pass
expect(wrapper.find('foo')).toBeDefined();
+
+// Good
+expect(wrapper.find('foo').exists()).toBe(true);
```
#### Avoid using `setImmediate`
@@ -771,13 +824,37 @@ yarn karma -f 'spec/javascripts/ide/**/file_spec.js'
## Frontend test fixtures
-Code that is added to HAML templates (in `app/views/`) or makes Ajax requests to the backend has tests that require HTML or JSON from the backend.
-Fixtures for these tests are located at:
+Frontend fixtures are files containing responses from backend controllers. These responses can be either HTML
+generated from HAML templates or JSON payloads. Frontend tests that rely on these responses often
+use fixtures to validate correct integration with the backend code.
+
+### Generate fixtures
+
+You can find code to generate test fixtures in:
- `spec/frontend/fixtures/`, for running tests in CE.
- `ee/spec/frontend/fixtures/`, for running tests in EE.
-Fixture files in:
+You can generate fixtures by running:
+
+- `bin/rake frontend:fixtures` to generate all fixtures
+- `bin/rspec spec/frontend/fixtures/merge_requests.rb` to generate specific fixtures (in this case, for `merge_requests.rb`)
+
+You can find the generated fixtures in `tmp/tests/frontend/fixtures-ee`.
+
+#### Creating new fixtures
+
+For each fixture, you can find the content of the `response` variable in the output file.
+For example, the test named `"merge_requests/diff_discussion.json"` in `spec/frontend/fixtures/merge_requests.rb`
+produces the output file `tmp/tests/frontend/fixtures-ee/merge_requests/diff_discussion.json`.
+The `response` variable gets automatically set if the test is marked as `type: :request` or `type: :controller`.
+
+When creating a new fixture, it often makes sense to take a look at the corresponding tests for the
+endpoint in `(ee/)spec/controllers/` or `(ee/)spec/requests/`.
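+
+As an illustration, the following is a minimal, hypothetical sketch of a fixture-generating spec, assuming the usual `JavaScriptFixturesHelpers` support module. The controller, factories, and request parameters are illustrative and not taken from the real `merge_requests.rb` file; the important parts are the `type: :controller` metadata and the example name, which doubles as the output path of the generated fixture:
+
+```ruby
+# Hypothetical sketch of a fixture-generating spec, for example spec/frontend/fixtures/merge_requests.rb.
+# The controller, factories, and params below are illustrative only.
+require 'spec_helper'
+
+RSpec.describe Projects::MergeRequestsController, '(JavaScript fixtures)', type: :controller do
+  include JavaScriptFixturesHelpers
+
+  let(:project) { create(:project, :repository) }
+  let(:merge_request) { create(:merge_request, source_project: project) }
+
+  before do
+    sign_in(project.owner)
+  end
+
+  # The example name is the path of the generated fixture under
+  # tmp/tests/frontend/fixtures-ee/.
+  it 'merge_requests/merge_request_basic.html' do
+    get :show, params: {
+      namespace_id: project.namespace.to_param,
+      project_id: project,
+      id: merge_request.to_param
+    }
+
+    expect(response).to be_successful
+  end
+end
+```
+
+Running `bin/rspec` on such a file would then write the captured `response` body to `tmp/tests/frontend/fixtures-ee/merge_requests/merge_request_basic.html`.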
+
+### Use fixtures
+
+Jest and Karma test suites import fixtures in different ways:
- Fixtures for the Karma test suite are served by [jasmine-jquery](https://github.com/velesin/jasmine-jquery).
- Jest uses `spec/frontend/helpers/fixtures.js`.
@@ -803,14 +880,6 @@ it('uses some HTML element', () => {
});
```
-HTML and JSON fixtures are generated from backend views and controllers using RSpec (see `spec/frontend/fixtures/*.rb`).
-
-For each fixture, the content of the `response` variable is stored in the output file.
-This variable gets automatically set if the test is marked as `type: :request` or `type: :controller`.
-Fixtures are regenerated using the `bin/rake frontend:fixtures` command but you can also generate them individually,
-for example `bin/rspec spec/frontend/fixtures/merge_requests.rb`.
-When creating a new fixture, it often makes sense to take a look at the corresponding tests for the endpoint in `(ee/)spec/controllers/` or `(ee/)spec/requests/`.
-
## Data-driven tests
Similar to [RSpec's parameterized tests](best_practices.md#table-based--parameterized-tests),
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 54f8ca0d98b..68816ccfe45 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -142,6 +142,9 @@ the following node pools:
- `e2-highcpu-16` (16 vCPU, 16 GB memory) pre-emptible nodes with autoscaling
+The node pool image type must be `Container-Optimized OS (cos)`, not `Container-Optimized OS with Containerd (cos_containerd)`,
+due to this [known issue on the GitLab Runner Kubernetes executor](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4755).
+
### Helm
The Helm version used is defined in the
@@ -153,13 +156,12 @@ used by the `review-deploy` and `review-stop` jobs.
### Get access to the GCP Review Apps cluster
You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new)
-for the `gcp-review-apps-sg` GCP group. In order to join a group, you must specify the desired GCP role in your access request.
-The role is what will grant you specific permissions in order to engage with Review App containers.
+for the `gcp-review-apps-dev` GCP group and role.
-Here are some permissions you may want to have, and the roles that grant them:
+This will grant you the following permissions:
-- `container.pods.getLogs` - Required to [retrieve pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
-- `container.pods.exec` - Required to [run a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.developer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
+- [Retrieving pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
+- [Running a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.developer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
### Log into my Review App
diff --git a/doc/development/testing_guide/testing_migrations_guide.md b/doc/development/testing_guide/testing_migrations_guide.md
index 8ee758177c3..a5bcb651d71 100644
--- a/doc/development/testing_guide/testing_migrations_guide.md
+++ b/doc/development/testing_guide/testing_migrations_guide.md
@@ -37,15 +37,37 @@ ensures proper isolation.
To test an `ActiveRecord::Migration` class (i.e., a
regular migration `db/migrate` or a post-migration `db/post_migrate`), you
-will need to manually `require` the migration file because it is not
-autoloaded with Rails. Example:
+will need to load the migration file by using the `require_migration!` helper
+method because it is not autoloaded by Rails.
+
+Example:
```ruby
-require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb')
+require 'spec_helper'
+
+require_migration!
+
+RSpec.describe ...
```
### Test helpers
+#### `require_migration!`
+
+Since the migration files are not autoloaded by Rails, you will need to manually
+load the migration file. To do so, you can use the `require_migration!` helper method,
+which automatically loads the correct migration file based on the spec file name.
+
+For example, if your spec file is named `populate_foo_column_spec.rb`, then the
+helper method tries to load the `${schema_version}_populate_foo_column.rb` migration file.
+
+If there is no such pattern between your spec file and the actual migration file,
+you can provide the migration file name without the schema version, like so:
+
+```ruby
+require_migration!('populate_foo_column')
+```
+
#### `table`
Use the `table` helper to create a temporary `ActiveRecord::Base`-derived model
@@ -110,7 +132,8 @@ migration. You can find the complete spec in
```ruby
require 'spec_helper'
-require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb')
+
+require_migration!
RSpec.describe MigratePipelineStages do
# Create test data - pipeline and CI/CD jobs.
diff --git a/doc/development/uploads.md b/doc/development/uploads.md
index 4693c93e3d0..0c8b712a001 100644
--- a/doc/development/uploads.md
+++ b/doc/development/uploads.md
@@ -214,7 +214,8 @@ In this setup, an extra Rails route must be implemented in order to handle autho
and [its routes](https://gitlab.com/gitlab-org/gitlab/blob/cc723071ad337573e0360a879cbf99bc4fb7adb9/config/routes/git_http.rb#L31-32).
- [API endpoints for uploading packages](packages.md#file-uploads).
-**note:** this will fallback to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../administration/uploads.md#object-storage-settings).
+NOTE: **Note:**
+This will fall back to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../administration/uploads.md#object-storage-settings).
The answer to the `/authorize` call will only contain a file system path.
```mermaid
diff --git a/doc/development/what_requires_downtime.md b/doc/development/what_requires_downtime.md
index d0b8aa18f5c..e9d4ed8eaf6 100644
--- a/doc/development/what_requires_downtime.md
+++ b/doc/development/what_requires_downtime.md
@@ -103,7 +103,8 @@ end
This will take care of renaming the column, ensuring data stays in sync, copying
over indexes and foreign keys, etc.
-**NOTE:** if a column contains 1 or more indexes that do not contain the name of
+NOTE: **Note:**
+If a column contains one or more indexes that do not contain the name of
the original column, the above procedure will fail. In this case you will first
need to rename these indexes.
@@ -132,7 +133,7 @@ end
NOTE: **Note:**
If you're renaming a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3), please carefully consider the state when the first migration has run but the second cleanup migration hasn't been run yet.
-With [Canary](https://about.gitlab.com/handbook/engineering/infrastructure/library/canary/) it is possible that the system runs in this state for a significant amount of time.
+With [Canary](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/) it is possible that the system runs in this state for a significant amount of time.
## Changing Column Constraints