Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/README.md  47
-rw-r--r--  doc/development/adding_database_indexes.md  4
-rw-r--r--  doc/development/api_graphql_styleguide.md  277
-rw-r--r--  doc/development/api_styleguide.md  67
-rw-r--r--  doc/development/application_limits.md  35
-rw-r--r--  doc/development/architecture.md  42
-rw-r--r--  doc/development/background_migrations.md  4
-rw-r--r--  doc/development/build_test_package.md  6
-rw-r--r--  doc/development/changelog.md  6
-rw-r--r--  doc/development/cicd/index.md  4
-rw-r--r--  doc/development/code_comments.md  2
-rw-r--r--  doc/development/code_review.md  23
-rw-r--r--  doc/development/contributing/index.md  105
-rw-r--r--  doc/development/contributing/issue_workflow.md  14
-rw-r--r--  doc/development/contributing/merge_request_workflow.md  2
-rw-r--r--  doc/development/creating_enums.md  2
-rw-r--r--  doc/development/database/index.md  48
-rw-r--r--  doc/development/database/not_null_constraints.md  217
-rw-r--r--  doc/development/database/strings_and_the_text_data_type.md  288
-rw-r--r--  doc/development/database_debugging.md  6
-rw-r--r--  doc/development/database_review.md  14
-rw-r--r--  doc/development/db_dump.md  4
-rw-r--r--  doc/development/deleting_migrations.md  4
-rw-r--r--  doc/development/distributed_tracing.md  2
-rw-r--r--  doc/development/documentation/feature_flags.md  22
-rw-r--r--  doc/development/documentation/index.md  65
-rw-r--r--  doc/development/documentation/site_architecture/global_nav.md  2
-rw-r--r--  doc/development/documentation/site_architecture/index.md  22
-rw-r--r--  doc/development/documentation/site_architecture/release_process.md  26
-rw-r--r--  doc/development/documentation/structure.md  2
-rw-r--r--  doc/development/documentation/styleguide.md  145
-rw-r--r--  doc/development/documentation/workflow.md  8
-rw-r--r--  doc/development/ee_features.md  4
-rw-r--r--  doc/development/elasticsearch.md  63
-rw-r--r--  doc/development/emails.md  15
-rw-r--r--  doc/development/fe_guide/axios.md  16
-rw-r--r--  doc/development/fe_guide/dependencies.md  2
-rw-r--r--  doc/development/fe_guide/development_process.md  2
-rw-r--r--  doc/development/fe_guide/droplab/plugins/ajax.md  2
-rw-r--r--  doc/development/fe_guide/droplab/plugins/filter.md  2
-rw-r--r--  doc/development/fe_guide/droplab/plugins/input_setter.md  4
-rw-r--r--  doc/development/fe_guide/emojis.md  4
-rw-r--r--  doc/development/fe_guide/frontend_faq.md  4
-rw-r--r--  doc/development/fe_guide/graphql.md  107
-rw-r--r--  doc/development/fe_guide/icons.md  8
-rw-r--r--  doc/development/fe_guide/performance.md  2
-rw-r--r--  doc/development/fe_guide/style/vue.md  2
-rw-r--r--  doc/development/fe_guide/vue.md  10
-rw-r--r--  doc/development/feature_flags/controls.md  63
-rw-r--r--  doc/development/feature_flags/development.md  36
-rw-r--r--  doc/development/filtering_by_label.md  8
-rw-r--r--  doc/development/geo.md  6
-rw-r--r--  doc/development/geo/framework.md  11
-rw-r--r--  doc/development/git_object_deduplication.md  6
-rw-r--r--  doc/development/gitaly.md  10
-rw-r--r--  doc/development/github_importer.md  12
-rw-r--r--  doc/development/go_guide/dependencies.md  175
-rw-r--r--  doc/development/go_guide/index.md  22
-rw-r--r--  doc/development/gotchas.md  2
-rw-r--r--  doc/development/hash_indexes.md  4
-rw-r--r--  doc/development/i18n/externalization.md  20
-rw-r--r--  doc/development/i18n/merging_translations.md  21
-rw-r--r--  doc/development/i18n/proofreader.md  22
-rw-r--r--  doc/development/i18n/translation.md  14
-rw-r--r--  doc/development/img/bullet_v13_0.png  bin 0 -> 1085958 bytes
-rw-r--r--  doc/development/import_export.md  55
-rw-r--r--  doc/development/insert_into_tables_in_batches.md  2
-rw-r--r--  doc/development/instrumentation.md  6
-rw-r--r--  doc/development/integrations/jenkins.md  2
-rw-r--r--  doc/development/integrations/secure.md  38
-rw-r--r--  doc/development/integrations/secure_partner_integration.md  14
-rw-r--r--  doc/development/internal_api.md  4
-rw-r--r--  doc/development/issue_types.md  4
-rw-r--r--  doc/development/licensing.md  8
-rw-r--r--  doc/development/logging.md  6
-rw-r--r--  doc/development/merge_request_performance_guidelines.md  4
-rw-r--r--  doc/development/migration_style_guide.md  27
-rw-r--r--  doc/development/module_with_instance_variables.md  4
-rw-r--r--  doc/development/multi_version_compatibility.md  2
-rw-r--r--  doc/development/namespaces_storage_statistics.md  10
-rw-r--r--  doc/development/new_fe_guide/development/accessibility.md  4
-rw-r--r--  doc/development/new_fe_guide/development/performance.md  4
-rw-r--r--  doc/development/omnibus.md  2
-rw-r--r--  doc/development/packages.md  10
-rw-r--r--  doc/development/performance.md  3
-rw-r--r--  doc/development/permissions.md  25
-rw-r--r--  doc/development/pipelines.md  423
-rw-r--r--  doc/development/polling.md  2
-rw-r--r--  doc/development/profiling.md  10
-rw-r--r--  doc/development/rake_tasks.md  8
-rw-r--r--  doc/development/redis.md  4
-rw-r--r--  doc/development/refactoring_guide/index.md  8
-rw-r--r--  doc/development/reference_processing.md  2
-rw-r--r--  doc/development/renaming_features.md  2
-rw-r--r--  doc/development/routing.md  12
-rw-r--r--  doc/development/scalability.md  22
-rw-r--r--  doc/development/secure_coding_guidelines.md  38
-rw-r--r--  doc/development/service_measurement.md  81
-rw-r--r--  doc/development/shell_scripting_guide/index.md  2
-rw-r--r--  doc/development/sidekiq_style_guide.md  118
-rw-r--r--  doc/development/telemetry/index.md  100
-rw-r--r--  doc/development/telemetry/snowplow.md  16
-rw-r--r--  doc/development/telemetry/usage_ping.md  898
-rw-r--r--  doc/development/testing_guide/best_practices.md  83
-rw-r--r--  doc/development/testing_guide/end_to_end/best_practices.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/feature_flags.md  2
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md  20
-rw-r--r--  doc/development/testing_guide/end_to_end/page_objects.md  8
-rw-r--r--  doc/development/testing_guide/end_to_end/rspec_metadata_tests.md  3
-rw-r--r--  doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md  12
-rw-r--r--  doc/development/testing_guide/end_to_end/style_guide.md  8
-rw-r--r--  doc/development/testing_guide/flaky_tests.md  36
-rw-r--r--  doc/development/testing_guide/frontend_testing.md  42
-rw-r--r--  doc/development/testing_guide/index.md  4
-rw-r--r--  doc/development/testing_guide/review_apps.md  22
-rw-r--r--  doc/development/testing_guide/testing_levels.md  4
-rw-r--r--  doc/development/understanding_explain_plans.md  34
-rw-r--r--  doc/development/uploads.md  6
-rw-r--r--  doc/development/utilities.md  4
-rw-r--r--  doc/development/value_stream_analytics.md  2
-rw-r--r--  doc/development/what_requires_downtime.md  36
-rw-r--r--  doc/development/windows.md  14
122 files changed, 3316 insertions, 1200 deletions
diff --git a/doc/development/README.md b/doc/development/README.md
index 22cc21e12b9..d330d6d466e 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -63,7 +63,7 @@ Complementary reads:
- [Working with Gitaly](gitaly.md)
- [Manage feature flags](feature_flags/index.md)
- [Licensed feature availability](licensed_feature_availability.md)
-- [View sent emails or preview mailers](emails.md)
+- [Dealing with email/mailers](emails.md)
- [Shell commands](shell_commands.md) in the GitLab codebase
- [`Gemfile` guidelines](gemfile.md)
- [Pry debugging](pry_debugging.md)
@@ -113,50 +113,7 @@ Complementary reads:
## Database guides
-### Tooling
-
-- [Understanding EXPLAIN plans](understanding_explain_plans.md)
-- [explain.depesz.com](https://explain.depesz.com/) for visualizing the output
- of `EXPLAIN`
-- [pgFormatter](http://sqlformat.darold.net/) a PostgreSQL SQL syntax beautifier
-
-### Migrations
-
-- [What requires downtime?](what_requires_downtime.md)
-- [SQL guidelines](sql.md) for working with SQL queries
-- [Migrations style guide](migration_style_guide.md) for creating safe SQL migrations
-- [Testing Rails migrations](testing_guide/testing_migrations_guide.md) guide
-- [Post deployment migrations](post_deployment_migrations.md)
-- [Background migrations](background_migrations.md)
-- [Swapping tables](swapping_tables.md)
-- [Deleting migrations](deleting_migrations.md)
-
-### Debugging
-
-- Tracing the source of an SQL query using query comments with [Marginalia](database_query_comments.md)
-- Tracing the source of an SQL query in Rails console using [Verbose Query Logs](https://guides.rubyonrails.org/debugging_rails_applications.html#verbose-query-logs)
-
-### Best practices
-
-- [Adding database indexes](adding_database_indexes.md)
-- [Foreign keys & associations](foreign_keys.md)
-- [Single table inheritance](single_table_inheritance.md)
-- [Polymorphic associations](polymorphic_associations.md)
-- [Serializing data](serializing_data.md)
-- [Hash indexes](hash_indexes.md)
-- [Storing SHA1 hashes as binary](sha1_as_binary.md)
-- [Iterating tables in batches](iterating_tables_in_batches.md)
-- [Insert into tables in batches](insert_into_tables_in_batches.md)
-- [Ordering table columns](ordering_table_columns.md)
-- [Verifying database capabilities](verifying_database_capabilities.md)
-- [Database Debugging and Troubleshooting](database_debugging.md)
-- [Query Count Limits](query_count_limits.md)
-- [Creating enums](creating_enums.md)
-
-### Case studies
-
-- [Database case study: Filtering by label](filtering_by_label.md)
-- [Database case study: Namespaces storage statistics](namespaces_storage_statistics.md)
+See [database guidelines](database/index.md).
## Integration guides
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index 01b621b6631..7fe047b380b 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -20,8 +20,8 @@ existing indexes need to be updated. The more indexes there are the slower this
can potentially become. Indexes can also take up quite some disk space depending
on the amount of data indexed and the index type. For example, PostgreSQL offers
"GIN" indexes which can be used to index certain data types that can not be
-indexed by regular btree indexes. These indexes however generally take up more
-data and are slower to update compared to btree indexes.
+indexed by regular B-tree indexes. These indexes however generally take up more
+data and are slower to update compared to B-tree indexes.
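+
+For illustration only, a hypothetical Rails migration adding a GIN index might
+look like the following (the `products` table and its `tags` array column are
+made-up names, not part of this document):
+
+```ruby
+class AddTagsGinIndexToProducts < ActiveRecord::Migration[6.0]
+  # A GIN index suits columns that B-tree indexes cannot handle, such as arrays.
+  def change
+    add_index :products, :tags, using: :gin
+  end
+end
+```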
Because of all this one should not blindly add a new index for every column used
to filter data by. Instead one should ask themselves the following questions:
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 6d3c0cf0eac..43e96340d10 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -346,7 +346,7 @@ GitLab's GraphQL API is versionless, which means we maintain backwards
compatibility with older versions of the API with every change. Rather
than removing a field, we need to _deprecate_ the field instead. In
future, GitLab
-[may remove deprecated fields](https://gitlab.com/gitlab-org/gitlab/issues/32292).
+[may remove deprecated fields](https://gitlab.com/gitlab-org/gitlab/-/issues/32292).
Fields are deprecated using the `deprecated` property. The value
of the property is a `Hash` of:
@@ -470,6 +470,23 @@ field :confidential, GraphQL::BOOLEAN_TYPE, description: 'Indicates the issue is
field :closed_at, Types::TimeType, description: 'Timestamp of when the issue was closed'
```
+### `copy_field_description` helper
+
+Sometimes we want to ensure that two descriptions will always be identical.
+For example, to keep a type field description the same as a mutation argument
+when they both represent the same property.
+
+Instead of supplying a description, we can use the `copy_field_description` helper,
+passing it the type and the field name whose description we want to copy.
+
+Example:
+
+```ruby
+argument :title, GraphQL::STRING_TYPE,
+         required: false,
+         description: copy_field_description(Types::MergeRequestType, :title)
+```
+
## Authorization
Authorizations can be applied to both types and fields using the same
@@ -602,6 +619,84 @@ lot of dependent objects.
To limit the amount of queries performed, we can use `BatchLoader`.
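+
+As a rough illustration (not taken from this document), batching with
+`BatchLoader` might look like the following, assuming we are loading `Project`
+records by ID inside a field resolver:
+
+```ruby
+# Sketch only: IDs are collected while fields resolve, then fetched in one query.
+BatchLoader::GraphQL.for(project_id).batch do |project_ids, loader|
+  Project.where(id: project_ids).each { |project| loader.call(project.id, project) }
+end
+```
+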
+### Correct use of `Resolver#ready?`
+
+Resolvers have two public API methods as part of the framework: `#ready?(**args)` and `#resolve(**args)`.
+We can use `#ready?` to perform set-up, validation or early-return without invoking `#resolve`.
+
+Good reasons to use `#ready?` include:
+
+- Validating mutually exclusive arguments (see [validating arguments](#validating-arguments))
+- Returning `Relation.none` if we know before-hand that no results are possible
+- Performing setup such as initializing instance variables (although consider lazily initialized methods for this)
+
+Implementations of [`Resolver#ready?(**args)`](https://graphql-ruby.org/api-doc/1.10.9/GraphQL/Schema/Resolver#ready%3F-instance_method) should
+return `(Boolean, early_return_data)` as follows:
+
+```ruby
+def ready?(**args)
+  [false, 'have this instead']
+end
+```
+
+For this reason, whenever you call a resolver (mainly in tests, since resolvers
+are framework abstractions and should not be considered re-usable; prefer
+finders instead), remember to call the `ready?` method and check the boolean
+flag before calling `resolve`! An example can be seen in our [`GraphQLHelpers`](https://gitlab.com/gitlab-org/gitlab/-/blob/2d395f32d2efbb713f7bc861f96147a2a67e92f2/spec/support/helpers/graphql_helpers.rb#L20-27).
+
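+Because of this two-stage API, a sketch of calling a resolver directly (for
+example, in a spec) might look like the following; `resolver` and `args` here
+are placeholders, and this is not the actual `GraphQLHelpers` implementation:
+
+```ruby
+# Sketch only: honor the early return from #ready? before calling #resolve.
+ready, early_return = resolver.ready?(**args)
+
+result = ready ? resolver.resolve(**args) : early_return
+```
+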
+### Look-Ahead
+
+The full query is known in advance during execution, which means we can make use
+of [lookahead](https://graphql-ruby.org/queries/lookahead.html) to optimize our
+queries, and batch load associations we know we will need. Consider adding
+lookahead support in your resolvers to avoid `N+1` performance issues.
+
+To enable support for common lookahead use-cases (pre-loading associations when
+child fields are requested), you can
+include [`LooksAhead`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/graphql/resolvers/concerns/looks_ahead.rb). For example:
+
+```ruby
+# Assuming a model `MyThing` with attributes `[child_attribute, other_attribute, nested]`,
+# where nested has an attribute named `included_attribute`.
+class MyThingResolver < BaseResolver
+  include LooksAhead
+
+  # Rather than defining `resolve(**args)`, we implement: `resolve_with_lookahead(**args)`
+  def resolve_with_lookahead(**args)
+    apply_lookahead(MyThingFinder.new(current_user).execute)
+  end
+
+  # We list things that should always be preloaded:
+  # For example, if child_attribute is always needed (during authorization
+  # perhaps), then we can include it here.
+  def unconditional_includes
+    [:child_attribute]
+  end
+
+  # We list things that should be included if a certain field is selected:
+  def preloads
+    {
+      field_one: [:other_attribute],
+      field_two: [{ nested: [:included_attribute] }]
+    }
+  end
+end
+```
+
+Finally, every field that uses this resolver needs to advertise the need for
+lookahead:
+
+```ruby
+  # in ParentType
+  field :my_things, MyThingType.connection_type, null: true,
+        extras: [:lookahead], # Necessary
+        resolver: MyThingResolver,
+        description: 'My things'
+```
+
+For an example of real world use, please
+see [`ResolvesMergeRequests`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/graphql/resolvers/concerns/resolves_merge_requests.rb).
+
## Mutations
Mutations are used to change any stored values, or to trigger
@@ -613,19 +708,6 @@ To find objects for a mutation, arguments need to be specified. As with
[resolvers](#resolvers), prefer using internal ID or, if needed, a
global ID rather than the database ID.
-### Fields
-
-In the most common situations, a mutation would return 2 fields:
-
-- The resource being modified
-- A list of errors explaining why the action could not be
- performed. If the mutation succeeded, this list would be empty.
-
-By inheriting any new mutations from `Mutations::BaseMutation` the
-`errors` field is automatically added. A `clientMutationId` field is
-also added, this can be used by the client to identify the result of a
-single mutation when multiple are performed within a single request.
-
### Building Mutations
Mutations live in `app/graphql/mutations` ideally grouped per
@@ -633,12 +715,16 @@ resources they are mutating, similar to our services. They should
inherit `Mutations::BaseMutation`. The fields defined on the mutation
will be returned as the result of the mutation.
+### Naming conventions
+
Always provide a consistent GraphQL-name to the mutation, this name is
used to generate the input types and the field the mutation is mounted
on. The name should look like `<Resource being modified><Mutation
class name>`, for example the `Mutations::MergeRequests::SetWip`
mutation has GraphQL name `MergeRequestSetWip`.
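+
+As a sketch (the class body here is illustrative, not the real implementation),
+the GraphQL-name is declared with `graphql_name`:
+
+```ruby
+module Mutations
+  module MergeRequests
+    class SetWip < BaseMutation
+      graphql_name 'MergeRequestSetWip'
+    end
+  end
+end
+```
+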
+### Arguments
+
Arguments required by the mutation can be defined as arguments
required for a field. These will be wrapped up in an input type for
the mutation. For example, the `Mutations::MergeRequests::SetWip`
@@ -667,11 +753,28 @@ This would automatically generate an input type called
`clientMutationId`.
These arguments are then passed to the `resolve` method of a mutation
-as keyword arguments. From here, we can call the service that will
-modify the resource.
+as keyword arguments.
+
+### Fields
+
+In the most common situations, a mutation would return 2 fields:
+
+- The resource being modified
+- A list of errors explaining why the action could not be
+ performed. If the mutation succeeded, this list would be empty.
+
+When new mutations inherit from `Mutations::BaseMutation`, the
+`errors` field is automatically added. A `clientMutationId` field is
+also added; this can be used by the client to identify the result of a
+single mutation when multiple are performed within a single request.
+
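+As an illustrative sketch (the field name and type follow the `SetWip` example
+above, but the exact definition in the codebase may differ), such a field could
+be declared like this:
+
+```ruby
+# Sketch only: the resource field returned alongside the automatic `errors` field.
+field :merge_request,
+      Types::MergeRequestType,
+      null: true,
+      description: 'The merge request after mutation'
+```
+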
+### The `resolve` method
+
+The `resolve` method receives the mutation's arguments as keyword arguments.
+From here, we can call the service that will modify the resource.
The `resolve` method should then return a hash with the same field
-names as defined on the mutation and an `errors` array. For example,
+names as defined on the mutation including an `errors` array. For example,
the `Mutations::MergeRequests::SetWip` defines a `merge_request`
field:
@@ -690,12 +793,15 @@ should look like this:
# The merge request modified, this will be wrapped in the type
# defined on the field
merge_request: merge_request,
- # An array if strings if the mutation failed after authorization
- errors: merge_request.errors.full_messages
+ # An array of strings if the mutation failed after authorization.
+ # The `errors_on_object` helper collects `errors.full_messages`
+ errors: errors_on_object(merge_request)
}
```
-To make the mutation available it should be defined on the mutation
+### Mounting the mutation
+
+To make the mutation available, it must be defined on the mutation
type that lives in `graphql/types/mutation_types`. The
`mount_mutation` helper method will define a field based on the
GraphQL-name of the mutation:
@@ -744,6 +850,131 @@ found, we should raise a
`Gitlab::Graphql::Errors::ResourceNotAvailable` error. Which will be
correctly rendered to the clients.
+### Errors in mutations
+
+We encourage following the practice of [errors as
+data](https://graphql-ruby.org/mutations/mutation_errors) for mutations, which
+distinguishes errors by who they are relevant to, that is, by who can deal with
+them.
+
+Key points:
+
+- All mutation responses have an `errors` field. This should be populated on
+ failure, and may be populated on success.
+- Consider who needs to see the error: the **user** or the **developer**.
+- Clients should always request the `errors` field when performing mutations.
+- Errors may be reported to users either at `$root.errors` (top-level error) or at
+ `$root.data.mutationName.errors` (mutation errors). The location depends on what kind of error
+ this is, and what information it holds.
+
+Consider an example mutation `doTheThing` that returns a response with
+two fields: `errors: [String]`, and `thing: ThingType`. The specific nature of
+the `thing` itself is irrelevant to these examples, as we are considering the
+errors.
+
+There are three states a mutation response can be in:
+
+- [Success](#success)
+- [Failure (relevant to the user)](#failure-relevant-to-the-user)
+- [Failure (irrelevant to the user)](#failure-irrelevant-to-the-user)
+
+#### Success
+
+In the happy path, errors *may* be returned, along with the anticipated payload, but
+if everything was successful, then `errors` should be an empty array, since
+there are no problems we need to inform the user of.
+
+```javascript
+{
+  data: {
+    doTheThing: {
+      errors: [], // if successful, this array will generally be empty.
+      thing: { .. }
+    }
+  }
+}
+```
+
+#### Failure (relevant to the user)
+
+An error that affects the **user** occurred. We refer to these as _mutation errors_. In
+this case there is typically no `thing` to return:
+
+```javascript
+{
+  data: {
+    doTheThing: {
+      errors: ["you cannot touch the thing"],
+      thing: null
+    }
+  }
+}
+```
+
+Examples of this include:
+
+- Model validation errors: the user may need to change the inputs.
+- Permission errors: the user needs to know they cannot do this, they may need to request permission or sign in.
+- Problems with application state that prevent the user's action, for example: merge conflicts, the resource was locked, and so on.
+
+Ideally, we should prevent the user from getting this far, but if they do, they
+need to be told what is wrong, so they understand the reason for the failure and
+what they can do to achieve their intent, even if that is as simple as retrying the
+request.
+
+It is possible to return *recoverable* errors alongside mutation data. For example, if
+a user uploads 10 files and 3 of them fail and the rest succeed, the errors for the
+failures can be made available to the user, alongside the information about
+the successes.
+
+#### Failure (irrelevant to the user)
+
+One or more *non-recoverable* errors can be returned at the _top level_. These
+are things over which the **user** has little to no control, and should mainly
+be system or programming problems, that a **developer** needs to know about.
+In this case there is no `data`:
+
+```javascript
+{
+  errors: [
+    {"message": "argument error: expected an integer, got null"},
+  ]
+}
+```
+
+This is the result of raising an error during the mutation. In our implementation,
+the messages of argument errors and validation errors are returned to the client, and all other
+`StandardError` instances are caught, logged and presented to the client with the message set to `"Internal server error"`.
+See [`GraphqlController`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/controllers/graphql_controller.rb) for details.
+
+These represent programming errors, such as:
+
+- A GraphQL syntax error, where an `Int` was passed instead of a `String`, or a required argument was not present.
+- Errors in our schema, such as being unable to provide a value for a non-nullable field.
+- System errors: for example, a Git storage exception, or database unavailability.
+
+The user should not be able to cause such errors in regular usage. This category
+of errors should be treated as internal, and not shown to the user in specific
+detail.
+
+We need to inform the user when the mutation fails, but we do not need to
+tell them why, since they cannot have caused it, and nothing they can do will
+fix it, although we may offer to retry the mutation.
+
+#### Categorizing errors
+
+When we write mutations, we need to be conscious about which of
+these two categories an error state falls into (and communicate about this with
+frontend developers to verify our assumptions). This means distinguishing the
+needs of the _user_ from the needs of the _client_.
+
+> _Never catch an error unless the user needs to know about it._
+
+If the user does need to know about it, communicate with frontend developers
+to make sure the error information we are passing back is useful.
+
+See also the [frontend GraphQL guide](../development/fe_guide/graphql.md#handling-errors).
+
## Validating arguments
For validations of single arguments, use the
@@ -751,8 +982,8 @@ For validations of single arguments, use the
as normal.
Sometimes a mutation or resolver may accept a number of optional
-arguments, but still want to validate that at least one of the optional
-arguments were given. In this situation, consider using the `#ready?`
+arguments, but we still want to validate that at least one of the optional
+arguments is provided. In this situation, consider using the `#ready?`
method within your mutation or resolver to provide the validation. The
`#ready?` method will be called before any work is done within the
`#resolve` method.
@@ -767,7 +998,7 @@ def ready?(**args)
end
# Always remember to call `#super`
- super(args)
+ super
end
```
diff --git a/doc/development/api_styleguide.md b/doc/development/api_styleguide.md
index e9a41e7ec58..6a044004926 100644
--- a/doc/development/api_styleguide.md
+++ b/doc/development/api_styleguide.md
@@ -116,6 +116,73 @@ For instance:
- endpoint = expose_path(api_v4_projects_issues_related_merge_requests_path(id: @project.id, issue_iid: @issue.iid))
```
+## Custom Validators
+
+In order to validate some parameters in the API request, we validate them
+before passing them on for further processing (to Gitaly, for example). The
+[custom validators](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/api/validations/validators)
+we have added so far are listed below, along with how to use them (a usage
+sketch follows the list). There is also a guide on how to add a new custom validator.
+
+### Using custom validators
+
+- `FilePath`:
+
+ GitLab supports various functionalities where we need to traverse a file path.
+ The [`FilePath` validator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators/file_path.rb)
+ validates the parameter value for different cases. Mainly, it checks whether a
+ path is relative, whether it contains `../../` relative traversal using
+ `File::Separator`, and whether the path is absolute, for example
+ `/etc/passwd/`.
+
+- `Git SHA`:
+
+ The [`Git SHA` validator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators/git_sha.rb)
+ checks whether the Git SHA parameter is a valid SHA.
+ It checks by using the regex mentioned in [`commit.rb`](https://gitlab.com/gitlab-org/gitlab/-/commit/b9857d8b662a2dbbf54f46ecdcecb44702affe55#d1c10892daedb4d4dd3d4b12b6d071091eea83df_30_30) file.
+
+- `Absence`:
+
+ The [`Absence` validator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators/absence.rb)
+ checks whether a particular parameter is absent in a given parameters hash.
+
+- `IntegerNoneAny`:
+
+ The [`IntegerNoneAny` validator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators/integer_none_any.rb)
+ checks if the value of the given parameter is either an `Integer`, `None`, or `Any`.
+ Only these values are allowed to move forward in the request.
+
+- `ArrayNoneAny`:
+
+ The [`ArrayNoneAny` validator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators/array_none_any.rb)
+ checks if the value of the given parameter is either an `Array`, `None`, or `Any`.
+ Only these values are allowed to move forward in the request.
+
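+In a Grape endpoint, a registered validator is applied as an extra option in the
+`params` block. A rough sketch, assuming the validators above are registered
+under names such as `file_path` and `git_sha`:
+
+```ruby
+# Sketch only: the option key is the name the validator was registered with.
+params do
+  requires :file_path, type: String, file_path: true
+  optional :sha, type: String, git_sha: true
+end
+```
+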
+### Adding a new custom validator
+
+Custom validators are a great way to validate parameters before sending
+them on to the platform for further processing. It saves some back-and-forth
+between the server and the platform if we identify invalid parameters early.
+
+If you need to add a custom validator, add it in
+its own file in the [`validators`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/validations/validators) directory.
+Since we use [Grape](https://github.com/ruby-grape/grape) to build our API,
+we inherit from the `Grape::Validations::Base` class in our validator class.
+Now, all you have to do is define the `validate_param!` method, which takes
+two parameters: the `params` hash and the `param` name to validate.
+
+The body of the method does the hard work of validating the parameter value
+and returns appropriate error messages to the caller method.
+
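+Putting that together, a skeleton validator might look roughly like this (the
+class name, check, and message are placeholders):
+
+```ruby
+# Sketch only: Grape calls validate_param! with the attribute name and the params hash.
+class MyCustomValidator < Grape::Validations::Base
+  def validate_param!(attr_name, params)
+    return if params[attr_name].to_s.start_with?('expected-prefix')
+
+    raise Grape::Exceptions::Validation.new(
+      params: [@scope.full_name(attr_name)],
+      message: 'is not in the expected format'
+    )
+  end
+end
+```
+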
+Lastly, we register the validator using the line below:
+
+```ruby
+Grape::Validations.register_validator(<validator name as symbol>, ::API::Helpers::CustomValidators::<YourCustomValidatorClassName>)
+```
+
+Once you add the validator, make sure you add the RSpec tests for it in
+its own file in the [`validators`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/api/validations/validators) directory.
+
## Internal API
The [internal API](./internal_api.md) is documented for internal use. Please keep it up to date so we know what endpoints
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index b13e2994c52..1f7a9ff09b9 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -11,7 +11,7 @@ coordinate with others to [document](../administration/instance_limits.md)
and communicate those limits.
There is a guide about [introducing application
-limits](https://about.gitlab.com/handbook/product/product-management/process/index.html#introducing-application-limits).
+limits](https://about.gitlab.com/handbook/product/product-management/process/#introducing-application-limits).
## Development
@@ -21,7 +21,7 @@ In the `plan_limits` table, you have to create a new column and insert the
limit values. It's recommended to create separate migration script files.
1. Add new column to the `plan_limits` table with non-null default value
- that represents desired limit, eg:
+ that represents desired limit, such as:
```ruby
add_column(:plan_limits, :project_hooks, :integer, default: 100, null: false)
@@ -31,7 +31,7 @@ limit values. It's recommended to create separate migration script files.
enabled. You should use this setting only in special and documented circumstances.
1. (Optionally) Create the database migration that fine-tunes each level with
- a desired limit using `create_or_update_plan_limit` migration helper, eg:
+ a desired limit using `create_or_update_plan_limit` migration helper, such as:
```ruby
class InsertProjectHooksPlanLimits < ActiveRecord::Migration[5.2]
@@ -65,7 +65,7 @@ for plans that do not exist.
#### Get current limit
Access to the current limit can be done through the project or the namespace,
-eg:
+such as:
```ruby
project.actual_limits.project_hooks
@@ -76,13 +76,13 @@ project.actual_limits.project_hooks
There is one method `PlanLimits#exceeded?` to check if the current limit is
being exceeded. You can use either an `ActiveRecord` object or an `Integer`.
-Ensures that the count of the records does not exceed the defined limit, eg:
+Ensures that the count of the records does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, ProjectHook.where(project: project))
```
-Ensures that the number does not exceed the defined limit, eg:
+Ensures that the number does not exceed the defined limit, such as:
```ruby
project.actual_limits.exceeded?(:project_hooks, 10)
@@ -115,6 +115,20 @@ it_behaves_like 'includes Limitable concern' do
end
```
+### Testing instance-wide limits
+
+Instance-wide features always use the `default` plan, as instance-wide features
+do not have a license assigned.
+
+```ruby
+class InstanceVariable
+ include Limitable
+
+ self.limit_name = 'instance_variables' # Optional as InstanceVariable corresponds with instance_variables
+ self.limit_scope = Limitable::GLOBAL_SCOPE
+end
+```
+
### Subscription Plans
Self-managed:
@@ -123,9 +137,10 @@ Self-managed:
GitLab.com:
-- `free` - Everyone
-- `bronze`- Namespaces with a Bronze subscription
-- `silver` - Namespaces with a Silver subscription
-- `gold` - Namespaces with a Gold subscription
+- `default` - Any system-wide feature
+- `free` - Namespaces and projects with a Free subscription
+- `bronze` - Namespaces and projects with a Bronze subscription
+- `silver` - Namespaces and projects with a Silver subscription
+- `gold` - Namespaces and projects with a Gold subscription
NOTE: **Note:** The test environment doesn't have any plans.
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index f52a5d30c1f..f0ce033587d 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -12,7 +12,7 @@ Both EE and CE require some add-on components called GitLab Shell and Gitaly. Th
## Components
-A typical install of GitLab will be on GNU/Linux. It uses NGINX or Apache as a web front end to proxypass the Unicorn web server. By default, communication between Unicorn and the front end is via a Unix domain socket but forwarding requests via TCP is also supported. The web front end accesses `/home/git/gitlab/public` bypassing the Unicorn server to serve static pages, uploads (e.g. avatar images or attachments), and precompiled assets. GitLab serves web pages and the [GitLab API](../api/README.md) using the Unicorn web server. It uses Sidekiq as a job queue which, in turn, uses Redis as a non-persistent database backend for job information, meta data, and incoming jobs.
+A typical install of GitLab will be on GNU/Linux. It uses NGINX or Apache as a web front end to proxy requests to the Unicorn web server. By default, communication between Unicorn and the front end is via a Unix domain socket, but forwarding requests via TCP is also supported. The web front end accesses `/home/git/gitlab/public`, bypassing the Unicorn server to serve static pages, uploads (e.g. avatar images or attachments), and pre-compiled assets. GitLab serves web pages and the [GitLab API](../api/README.md) using the Unicorn web server. It uses Sidekiq as a job queue which, in turn, uses Redis as a non-persistent database backend for job information, metadata, and incoming jobs.
We also support deploying GitLab on Kubernetes using our [GitLab Helm chart](https://docs.gitlab.com/charts/).
@@ -196,7 +196,7 @@ GitLab can be considered to have two layers from a process perspective:
- Process: `alertmanager`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
-[Alert manager](https://prometheus.io/docs/alerting/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue #45740](https://gitlab.com/gitlab-org/gitlab-foss/issues/45740) about what we will be alerting on.
+[Alert manager](https://prometheus.io/docs/alerting/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue #45740](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45740) about what we will be alerting on.
#### Certificate management
@@ -227,7 +227,7 @@ Consul is a tool for service discovery and configuration. Consul is distributed,
- Configuration:
- [Omnibus](https://docs.gitlab.com/omnibus/settings/database.html#disabling-automatic-database-migration)
- [Charts](https://docs.gitlab.com/charts/charts/gitlab/migrations/)
- - [Source](../update/upgrading_from_source.md#13-install-libs-migrations-etc)
+ - [Source](../update/upgrading_from_source.md#13-install-libraries-migrations-etc)
- Layer: Core Service (Data)
#### Elasticsearch
@@ -254,7 +254,7 @@ Elasticsearch is a distributed RESTful search engine built for the cloud.
- Process: `gitaly`
- GitLab.com: [Service Architecture](https://about.gitlab.com/handbook/engineering/infrastructure/production/architecture/#service-architecture)
-Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability Deployments). As of 11.3.0, this service handles all Git level access in GitLab. You can read more about the project [in the project's readme](https://gitlab.com/gitlab-org/gitaly).
+Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability Deployments). As of 11.3.0, this service handles all Git level access in GitLab. You can read more about the project [in the project's README](https://gitlab.com/gitlab-org/gitaly).
#### Praefect
@@ -273,7 +273,7 @@ repository updates to secondary nodes.
- Configuration:
- [Omnibus](../administration/geo/replication/index.md#setup-instructions)
- - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/issues/8)
+ - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/8)
- [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/geo.md)
- Layer: Core Service (Processor)
@@ -287,13 +287,13 @@ repository updates to secondary nodes.
- Process: `gitlab-exporter`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
-GitLab Exporter is a process designed in house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's readme](https://gitlab.com/gitlab-org/gitlab-exporter).
+GitLab Exporter is a process designed in house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's README](https://gitlab.com/gitlab-org/gitlab-exporter).
#### GitLab Pages
- Configuration:
- [Omnibus](../administration/pages/index.md)
- - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/issues/37)
+ - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/37)
- [Source](../install/installation.md#install-gitlab-pages)
- [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/pages.md)
- Layer: Core Service (Processor)
@@ -359,12 +359,12 @@ Grafana is an open source, feature rich metrics dashboard and graph editor for G
- [Project page](https://github.com/jaegertracing/jaeger/blob/master/README.md)
- Configuration:
- - [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4104)
- - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/issues/1320)
+ - [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4104)
+ - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1320)
- [Source](../development/distributed_tracing.md#enabling-distributed-tracing)
- [GDK](../development/distributed_tracing.md#using-jaeger-in-the-gitlab-development-kit)
- Layer: Monitoring
-- GitLab.com: [Configuration to enable Tracing for a GitLab instance](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4104) issue.
+- GitLab.com: [Configuration to enable Tracing for a GitLab instance](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4104) issue.
Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system.
It can be used for monitoring microservices-based distributed systems.
@@ -424,7 +424,7 @@ NGINX has an Ingress port for all HTTP requests and routes them to the appropria
- [Project page](https://github.com/prometheus/node_exporter/blob/master/README.md)
- Configuration:
- [Omnibus](../administration/monitoring/prometheus/node_exporter.md)
- - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/issues/1332)
+ - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1332)
- Layer: Monitoring
- Process: `node-exporter`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
@@ -444,7 +444,7 @@ Lightweight connection pooler for PostgreSQL.
#### PgBouncer Exporter
-- [Project page](https://github.com/stanhu/pgbouncer_exporter/blob/master/README.md)
+- [Project page](https://github.com/prometheus-community/pgbouncer_exporter/blob/master/README.md)
- Configuration:
- [Omnibus](../administration/monitoring/prometheus/pgbouncer_exporter.md)
- [Charts](https://docs.gitlab.com/charts/installation/deployment.html#postgresql)
@@ -523,7 +523,7 @@ Redis is packaged to provide a place to store:
- [Project page](https://github.com/docker/distribution/blob/master/README.md)
- Configuration:
- - [Omnibus](../update/upgrading_from_source.md#13-install-libs-migrations-etc)
+ - [Omnibus](../update/upgrading_from_source.md#13-install-libraries-migrations-etc)
- [Charts](https://docs.gitlab.com/charts/charts/registry/)
- [Source](../administration/packages/container_registry.md#enable-the-container-registry)
- [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/registry.md)
@@ -545,13 +545,13 @@ An external registry can also be configured to use GitLab as an auth endpoint.
- [Project page](https://github.com/getsentry/sentry/)
- Configuration:
- [Omnibus](https://docs.gitlab.com/omnibus/settings/configuration.html#error-reporting-and-logging-with-sentry)
- - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/issues/1319)
+ - [Charts](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1319)
- [Source](https://gitlab.com/gitlab-org/gitlab/blob/master/config/gitlab.yml.example)
- [GDK](https://gitlab.com/gitlab-org/gitlab/blob/master/config/gitlab.yml.example)
- Layer: Monitoring
- GitLab.com: [Searching Sentry](https://about.gitlab.com/handbook/support/workflows/500_errors.html#searching-sentry)
-Sentry fundamentally is a service that helps you monitor and fix crashes in realtime.
+Sentry fundamentally is a service that helps you monitor and fix crashes in real time.
The server is in Python, but it contains a full API for sending events from any language, in any application.
For monitoring deployed apps, see the [Sentry integration docs](../user/project/operations/error_tracking.md)
@@ -588,7 +588,7 @@ Sidekiq is a Ruby background job processor that pulls jobs from the Redis queue
#### LDAP Authentication
- Configuration:
- - [Omnibus](../administration/auth/ldap.md)
+ - [Omnibus](../administration/auth/ldap/index.md)
- [Charts](https://docs.gitlab.com/charts/charts/globals.html#ldap)
- [Source](https://gitlab.com/gitlab-org/gitlab/blob/master/config/gitlab.yml.example)
- [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/ldap.md)
@@ -657,7 +657,7 @@ When making a request to an HTTP Endpoint (think `/users/sign_in`) the request w
### GitLab Git Request Cycle
-Below we describe the different pathing that HTTP vs. SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
+Below we describe the different paths that HTTP vs. SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
### Web Request (80/443)
@@ -790,7 +790,7 @@ ps aux | grep '^git'
```
GitLab has several components to operate. It requires a persistent database
-(PostgreSQL) and Redis database, and uses Apache httpd or NGINX to proxypass
+(PostgreSQL) and Redis database, and uses Apache `httpd` or NGINX to proxy requests to
Unicorn. All these components should run as different system users to GitLab
(e.g., `postgres`, `redis` and `www-data`, instead of `git`).
@@ -866,7 +866,7 @@ NGINX:
- `/var/log/nginx/` contains error and access logs.
-Apache httpd:
+Apache `httpd`:
- [Explanation of Apache logs](https://httpd.apache.org/docs/2.2/logs.html).
- `/var/log/apache2/` contains error and output logs (on Ubuntu).
@@ -880,7 +880,7 @@ PostgreSQL:
- `/var/log/postgresql/*`
-### GitLab specific config files
+### GitLab specific configuration files
GitLab has configuration files located in `/home/git/gitlab/config/*`. Commonly referenced config files include:
@@ -902,7 +902,7 @@ bundle exec rake gitlab:env:info RAILS_ENV=production
bundle exec rake gitlab:check RAILS_ENV=production
```
-Note: It is recommended to log into the `git` user using `sudo -i -u git` or `sudo su - git`. While the sudo commands provided by gitlabhq work in Ubuntu they do not always work in RHEL.
+Note: It is recommended to log in as the `git` user using `sudo -i -u git` or `sudo su - git`. While the sudo commands provided by GitLab work in Ubuntu, they do not always work in RHEL.
## GitLab.com
diff --git a/doc/development/background_migrations.md b/doc/development/background_migrations.md
index 0747224db30..d9e06206961 100644
--- a/doc/development/background_migrations.md
+++ b/doc/development/background_migrations.md
@@ -283,7 +283,7 @@ end
The final step runs for any un-migrated rows after all of the jobs have been
processed. This is in case a Sidekiq process running the background migrations
received SIGKILL, leading to the jobs being lost. (See
-[more reliable Sidekiq queue](https://gitlab.com/gitlab-org/gitlab-foss/issues/36791) for more information.)
+[more reliable Sidekiq queue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/36791) for more information.)
If the application does not depend on the data being 100% migrated (for
instance, the data is advisory, and not mission-critical), then this final step
to migrate your database down and up, which can result in other background
migrations being called. That means that using `spy` test doubles with
`have_received` is encouraged, instead of using regular test doubles, because
your expectations defined in a `it` block can conflict with what is being
-called in RSpec hooks. See [issue #35351](https://gitlab.com/gitlab-org/gitlab/issues/18839)
+called in RSpec hooks. See [issue #35351](https://gitlab.com/gitlab-org/gitlab/-/issues/18839)
for more details.
## Best practices
diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md
index d478d6e1653..858ff41b685 100644
--- a/doc/development/build_test_package.md
+++ b/doc/development/build_test_package.md
@@ -1,13 +1,13 @@
# Building a package for testing
While developing a new feature or modifying an existing one, it is helpful if an
-installable package (or a docker image) containing those changes is available
+installable package (or a Docker image) containing those changes is available
for testing. For this very purpose, a manual job is provided in the GitLab CI/CD
pipeline that can be used to trigger a pipeline in the Omnibus GitLab repository
that will create:
- A deb package for Ubuntu 16.04, available as a build artifact, and
-- A docker image, which is pushed to [Omnibus GitLab's container
+- A Docker image, which is pushed to [Omnibus GitLab's container
registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
(images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
commit which triggered the pipeline).
@@ -24,7 +24,7 @@ trigger.
If you want to create a package from a specific branch, commit or tag of any of
the GitLab components (like GitLab Workhorse, Gitaly, GitLab Pages, etc.), you
-can specify the branch name, commit sha or tag in the component's respective
+can specify the branch name, commit SHA or tag in the component's respective
`*_VERSION` file. For example, if you want to build a package that uses the
branch `0-1-stable`, modify the content of `GITALY_SERVER_VERSION` to
`0-1-stable` and push the commit. This will create a manual job that can be
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index a0e9f9d3be3..5a8e05f888c 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -6,8 +6,8 @@ file, as well as information and history about our changelog process.
## Overview
Each bullet point, or **entry**, in our [`CHANGELOG.md`](https://gitlab.com/gitlab-org/gitlab/blob/master/CHANGELOG.md) file is
-generated from a single data file in the [`changelogs/unreleased/`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/changelogs/)
-(or corresponding EE) folder. The file is expected to be a [YAML](https://en.wikipedia.org/wiki/YAML) file in the
+generated from a single data file in the [`changelogs/unreleased/`](https://gitlab.com/gitlab-org/gitlab/tree/master/changelogs/unreleased/) folder.
+The file is expected to be a [YAML](https://en.wikipedia.org/wiki/YAML) file in the
following format:
```yaml
@@ -282,7 +282,7 @@ multiple times per patch release. This was compounded when we had to release
multiple patches at once due to a security issue.
We needed to automate all of this manual work. So we
-[started brainstorming](https://gitlab.com/gitlab-org/gitlab-foss/issues/17826).
+[started brainstorming](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/17826).
After much discussion we settled on the current solution of one file per entry,
and then compiling the entries into the overall `CHANGELOG.md` file during the
[release process](https://gitlab.com/gitlab-org/release-tools).
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index db3f58d1d22..42d4b494544 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -10,7 +10,7 @@ the main components.
![CI software architecture](img/ci_architecture.png)
<!-- Editable diagram available at https://app.diagrams.net/#G1LFl-KW4fgpBPzz8VIH9rsOlAH4t0xwKj -->
-On the left side we have the events that can trigger a pipeline based on various events (trigged by a user or automation):
+On the left side we have the various events that can trigger a pipeline (triggered by a user or automation):
- A `git push` is the most common event that triggers a pipeline.
- The [Web API](../../api/pipelines.md#create-a-new-pipeline).
@@ -98,7 +98,7 @@ Once all jobs are completed for the current stage, the server "unlocks" all the
### Communication between Runner and GitLab server
-Once the Runner is [registered](../../ci/runners/README.md#registering-a-shared-runner) using the registration token, the server knows what type of jobs it can execute. This depends on:
+Once the Runner is [registered](https://docs.gitlab.com/runner/register/) using the registration token, the server knows what type of jobs it can execute. This depends on:
- The type of runner it is registered as:
- a shared runner
diff --git a/doc/development/code_comments.md b/doc/development/code_comments.md
index a71e2b3c792..b0e83b29cb0 100644
--- a/doc/development/code_comments.md
+++ b/doc/development/code_comments.md
@@ -9,6 +9,6 @@ Examples:
```ruby
# Deprecated scope until code_owner column has been migrated to rule_type.
-# To be removed with https://gitlab.com/gitlab-org/gitlab/issues/11834.
+# To be removed with https://gitlab.com/gitlab-org/gitlab/-/issues/11834.
scope :code_owner, -> { where(code_owner: true).or(where(rule_type: :code_owner)) }
```
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index a5ad7dc0f46..9c6cb1d0237 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -308,12 +308,13 @@ experience, refactors the existing code). Then:
- Seek to understand the author's perspective.
- If you don't understand a piece of code, _say so_. There's a good chance
someone else would be confused by it as well.
-- Do prefix your comment with "Not blocking:" if you have a small,
- non-mandatory improvement you wish to suggest. This lets the author
- know that they can optionally resolve this issue in this merge request
- or follow-up at a later stage.
+- Ensure the author is clear on what is required from them to address/resolve the suggestion.
+ - Consider using the [Conventional Comment format](https://conventionalcomments.org#format) to
+ convey your intent.
+ - For non-mandatory suggestions, decorate with (non-blocking) so the author knows they can
+ optionally resolve within the merge request or follow-up at a later stage.
- After a round of line notes, it can be helpful to post a summary note such as
- "LGTM :thumbsup:", or "Just a couple things to address."
+ "Looks good to me", or "Just a couple things to address."
- Assign the merge request to the author if changes are required following your
review.
@@ -357,8 +358,8 @@ When ready to merge:
- If the **latest [Pipeline for Merged Results](../ci/merge_request_pipelines/pipelines_for_merged_results/#pipelines-for-merged-results-premium)** finished less than 2 hours ago, you
might merge without starting a new pipeline as the merge request is close
enough to `master`.
- - If the merge request is from a fork, check how far behind `master` the
- source branch is. If it's more than 100 commits behind, ask the author to
+ - If the **merge request is from a fork**, we can't use [Pipelines for Merged Results](../ci/merge_request_pipelines/pipelines_for_merged_results/index.md#prerequisites); therefore, they're more prone to breaking `master`.
+ Check how far behind `master` the source branch is. If it's more than 100 commits behind, ask the author to
rebase it before merging.
- If [master is broken](https://about.gitlab.com/handbook/engineering/workflow/#broken-master),
in addition to the two above rules, check that any failure also happens
@@ -380,7 +381,7 @@ Merge Results against the latest `master` at the time of the pipeline creation.
One of the most difficult things during code review is finding the right
balance in how deeply the reviewer can interfere with the code created by the
-reviewee.
+author.
- Learning how to find the right balance takes time; that is why we have
reviewers that become maintainers after some time spent on reviewing merge
@@ -388,7 +389,7 @@ reviewee.
- Finding bugs and improving code style is important, but thinking about good
design is important as well. Building abstractions and good design is what
makes it possible to hide complexity and makes future changes easier.
-- Asking the reviewee to change the design sometimes means the complete rewrite
+- Asking the author to change the design sometimes means the complete rewrite
of the contributed code. It's usually a good idea to ask another maintainer or
reviewer before doing it, but have the courage to do it when you believe it is
important.
@@ -401,7 +402,7 @@ reviewee.
- There is a difference in doing things right and doing things right now.
Ideally, we should do the former, but in the real world we need the latter as
well. A good example is a security fix which should be released as soon as
- possible. Asking the reviewee to do the major refactoring in the merge
+ possible. Asking the author to do the major refactoring in the merge
request that is an urgent fix should be avoided.
- Doing things well today is usually better than doing something perfectly
tomorrow. Shipping a kludge today is usually worse than doing something well
@@ -501,7 +502,7 @@ Properties of customer critical merge requests:
- The DRI will assign the `customer-critical-merge-request` label to the merge request.
- It is required that the reviewer(s) and maintainer(s) involved with a customer critical merge request are engaged as soon as this decision is made.
- It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
-- It is required to adhere to GitLab [values](https://about.gitlab.com/handbook/values.md) and processes when working on customer critical merge requests, taking particular note of family and friends first/work second, definition of done, iteration, and release when it's ready.
+- It is required to adhere to GitLab [values](https://about.gitlab.com/handbook/values/) and processes when working on customer critical merge requests, taking particular note of family and friends first/work second, definition of done, iteration, and release when it's ready.
- Customer critical merge requests are required to not reduce security, introduce data-loss risk, reduce availability, nor break existing functionality per the process for [prioritizing technical decisions](https://about.gitlab.com/handbook/engineering/#prioritizing-technical-decisions.md).
- On customer critical requests, it is _recommended_ that those involved _consider_ coordinating synchronously (Zoom, Slack) in addition to asynchronously (merge requests comments) if they believe this will reduce elapsed time to merge even though this _may_ sacrifice [efficiency](https://about.gitlab.com/company/culture/all-remote/asynchronous/#evaluating-efficiency.md).
- After a customer critical merge request is merged, a retrospective must be completed with the intention of reducing the frequency of future customer critical merge requests.
diff --git a/doc/development/contributing/index.md b/doc/development/contributing/index.md
index aebff633f58..cea9043a333 100644
--- a/doc/development/contributing/index.md
+++ b/doc/development/contributing/index.md
@@ -6,8 +6,8 @@ to contribute to GitLab in a way that is easy for everyone.
For a first-time step-by-step guide to the contribution process, see our
[Contributing to GitLab](https://about.gitlab.com/community/contribute/) page.
-Looking for something to work on? Look for issues with the label
-[`~Accepting merge requests`](#how-to-contribute).
+Looking for something to work on? See the
+[How to contribute](#how-to-contribute) section for more information.
GitLab comes in two flavors:
@@ -76,9 +76,12 @@ Sign up for the mailing list, answer GitLab questions on StackOverflow or respon
## How to contribute
-If you want to contribute to GitLab,
-[issues with the `~Accepting merge requests` label](issue_workflow.md#label-for-community-contributors)
-are a great place to start.
+If you would like to contribute to GitLab:
+
+- Issues with the
+ [`~Accepting merge requests` label](issue_workflow.md#label-for-community-contributors)
+ are a great place to start.
+- Consult the [Contribution Flow](#contribution-flow) section to learn the process.
If you have any questions or need help visit [Getting Help](https://about.gitlab.com/get-help/) to
learn how to communicate with GitLab. We have a [Gitter channel for contributors](https://gitter.im/gitlab/contributors),
@@ -96,25 +99,77 @@ a Merge Request.
For more information, see the [`gitlab-development-kit`](https://gitlab.com/gitlab-org/gitlab-development-kit)
project.
-## Contribution Flow
-
-When contributing to GitLab, your merge request is subject to review by merge request maintainers of a particular specialty.
-
-When you submit code to GitLab, we really want it to get merged, but there will be times when it will not be merged.
-
-When maintainers are reading through a merge request they may request guidance from other maintainers. If merge request maintainers conclude that the code should not be merged, our reasons will be fully disclosed. If it has been decided that the code quality is not up to GitLab’s standards, the merge request maintainer will refer the author to our docs and code style guides, and provide some guidance.
-
-Sometimes style guides will be followed but the code will lack structural integrity, or the maintainer will have reservations about the code’s overall quality. When there is a reservation the maintainer will inform the author and provide some guidance. The author may then choose to update the merge request. Once the merge request has been updated and reassigned to the maintainer, they will review the code again. Once the code has been resubmitted any number of times, the maintainer may choose to close the merge request with a summary of why it will not be merged, as well as some guidance. If the merge request is closed the maintainer will be open to discussion as to how to improve the code so it can be approved in the future.
-
-GitLab will do its best to review community contributions as quickly as possible. Specially appointed developers review community contributions daily. You may take a look at the [team page](https://about.gitlab.com/company/team/) for the merge request coach who specializes in the type of code you have written and mention them in the merge request. For example, if you have written some JavaScript in your code then you should mention the frontend merge request coach. If your code has multiple disciplines you may mention multiple merge request coaches.
-
-GitLab receives a lot of community contributions, so if your code has not been reviewed within two days (excluding weekend and public holidays) of its initial submission feel free to re-mention the appropriate merge request coach.
-
-When submitting code to GitLab, you may feel that your contribution requires the aid of an external library. If your code includes an external library please provide a link to the library, as well as reasons for including it.
-
-When your code contains more than 500 changes, any major breaking changes, or an external library, `@mention` a maintainer in the merge request. If you are not sure who to mention, the reviewer will add one early in the merge request process.
-
-### Issues workflow
+### Contribution flow
+
+The general flow of contributing to GitLab is:
+
+1. [Create a fork](../../user/project/repository/forking_workflow.md#creating-a-fork)
+   of GitLab. In some cases, you will want to set up the
+   [GitLab Development Kit](https://gitlab.com/gitlab-org/gitlab-development-kit) to
+   [develop against your fork](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/doc/index.md#develop-in-your-own-gitlab-fork).
+1. Make your changes in your fork.
+1. When you're ready, [create a new merge request](../../user/project/merge_requests/creating_merge_requests.md).
+1. In the merge request's description:
+   - Ensure you provide complete and accurate information.
+   - Review the provided checklist.
+1. Assign the merge request (if possible) to, or `@mention`, one of the
+   [code owners](../../user/project/code_owners.md) for the relevant project,
+   and explain that you are ready for review.
+
+When you submit code to GitLab, we really want it to get merged! However, we always review
+submissions carefully, and this takes time. Code submissions will usually be reviewed by two
+[domain experts](../code_review.md#domain-experts) before being merged:
+
+- A [reviewer](../code_review.md#the-responsibility-of-the-reviewer).
+- A [maintainer](../code_review.md#the-responsibility-of-the-maintainer).
+
+Keep the following in mind when submitting merge requests:
+
+- When reviewers are reading through a merge request they may request guidance from other
+ reviewers.
+- If the code quality is found not to meet GitLab’s standards, the merge request reviewer will
+  provide guidance and refer the author to our:
+  - [Documentation](../documentation/styleguide.md) style guide.
+  - Code style guides.
+- Sometimes style guides will be followed but the code will lack structural integrity, or the
+ reviewer will have reservations about the code’s overall quality. When there is a reservation,
+ the reviewer will inform the author and provide some guidance.
+- Though GitLab generally allows anyone to indicate
+ [approval](../../user/project/merge_requests/merge_request_approvals.md) of merge requests, the
+ maintainer may require [approvals from certain reviewers](../code_review.md#approval-guidelines)
+ before merging a merge request.
+- After review, the author may be asked to update the merge request. Once the merge request has been
+ updated and reassigned to the reviewer, they will review the code again. This process may repeat
+ any number of times before merge, to help make the contribution the best it can be.
+
+Sometimes a maintainer may choose to close a merge request. They will fully disclose why it will not
+be merged and provide some guidance. The maintainers will be open to discussion about how to change
+the code so it can be approved and merged in the future.
+
+GitLab will do its best to review community contributions as quickly as possible. Specially
+appointed developers review community contributions daily. Look at the
+[team page](https://about.gitlab.com/company/team/) for the merge request coach who specializes in
+the type of code you have written and mention them in the merge request. For example, if you have
+written some front-end code, you should `@mention` the frontend merge request coach. If
+your code has multiple disciplines, you may `@mention` multiple merge request coaches.
+
+GitLab receives a lot of community contributions. If your code has not been reviewed within two
+working days of its initial submission, feel free to `@mention` all merge request coaches with
+`@gitlab-org/coaches` to get their attention.
+
+When submitting code to GitLab, you may feel that your contribution requires the aid of an external
+library. If your code includes an external library, please provide a link to the library, as well as
+reasons for including it.
+
+`@mention` a maintainer in merge requests that contain:
+
+- More than 500 changes.
+- Any major breaking changes.
+- External libraries.
+
+If you are not sure who to mention, the reviewer will do this for you early in the merge request process.
+
+#### Issues workflow
This [documentation](issue_workflow.md) outlines the current issue workflow:
@@ -127,7 +182,7 @@ This [documentation](issue_workflow.md) outlines the current issue workflow:
- [Technical and UX debt](issue_workflow.md#technical-and-ux-debt)
- [Technical debt in follow-up issues](issue_workflow.md#technical-debt-in-follow-up-issues)
-### Merge requests workflow
+#### Merge requests workflow
This [documentation](merge_request_workflow.md) outlines the current merge request process.
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index be416bf636e..f70299cbfc2 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -2,7 +2,7 @@
## Issue tracker guidelines
-**[Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/issues)** for similar entries before
+**[Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues)** for similar entries before
submitting your own, there's a good chance somebody else had the same issue or
feature proposal. Show your support with an award emoji and/or join the
discussion.
@@ -16,7 +16,7 @@ see fit.
Our issue triage policies are [described in our handbook](https://about.gitlab.com/handbook/engineering/quality/issue-triage/).
You are very welcome to help the GitLab team triage issues.
-We also organize [issue bash events](https://gitlab.com/gitlab-org/gitlab-foss/issues/17815)
+We also organize [issue bash events](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/17815)
once every quarter.
The most important thing is making sure valid issues receive feedback from the
@@ -351,15 +351,15 @@ features from GitLab EE to GitLab CE, related issues would be labeled with
~"stewardship".
A recent example of this was the issue for
-[bringing the time tracking API to GitLab CE](https://gitlab.com/gitlab-org/gitlab-foss/issues/25517#note_20019084).
+[bringing the time tracking API to GitLab CE](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/25517#note_20019084).
## Feature proposals
To create a feature proposal, open an issue on the
-[issue tracker](https://gitlab.com/gitlab-org/gitlab/issues).
+[issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
In order to help track the feature proposals, we have created a
-[`feature`](https://gitlab.com/gitlab-org/gitlab/issues?label_name=feature) label. For the time being, users that are not members
+[`feature`](https://gitlab.com/gitlab-org/gitlab/-/issues?label_name=feature) label. For the time being, users that are not members
of the project cannot add labels. You can instead ask one of the [core team](https://about.gitlab.com/community/core-team/)
members to add the label ~feature to the issue or add the following
code snippet right after your description in a new line: `~feature`.
@@ -403,7 +403,7 @@ below will make it easy to manage this, without unnecessary overhead.
Every monthly release has a corresponding issue on the CE issue tracker to keep
track of functionality broken by that release and any fixes that need to be
included in a patch release (see
-[8.3 Regressions](https://gitlab.com/gitlab-org/gitlab-foss/issues/4127) as an example).
+[8.3 Regressions](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/4127) as an example).
As outlined in the issue description, the intended workflow is to post one note
with a reference to an issue describing the regression, and then to update that
@@ -420,7 +420,7 @@ in the regression issue as fixes are addressed.
## Technical and UX debt
In order to track things that can be improved in GitLab's codebase,
-we use the ~"technical debt" label in [GitLab's issue tracker](https://gitlab.com/gitlab-org/gitlab/issues).
+we use the ~"technical debt" label in [GitLab's issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
For missed user experience requirements, we use the ~"UX debt" label.
These labels should be added to issues that describe things that can be improved,
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index a70fadfe8a9..14c6b0b1073 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -221,7 +221,7 @@ requirements.
1. [Changelog entry added](../changelog.md), if necessary.
1. Reviewed by relevant (UX/FE/BE/tech writing) reviewers and all concerns are addressed.
1. Merged by a project maintainer.
-1. Create an issue in the [infrastructure issue tracker](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues) to inform the Infrastructure department when your contribution is changing default settings or introduces a new setting, if relevant.
+1. Create an issue in the [infrastructure issue tracker](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues) to inform the Infrastructure department when your contribution is changing default settings or introduces a new setting, if relevant.
1. Confirmed to be working in the [Canary stage](https://about.gitlab.com/handbook/engineering/#canary-testing) or on GitLab.com once the contribution is deployed.
1. Added to the [release post](https://about.gitlab.com/handbook/marketing/blog/release-posts/),
if relevant.
diff --git a/doc/development/creating_enums.md b/doc/development/creating_enums.md
index e2ebad538d9..3833f771bb5 100644
--- a/doc/development/creating_enums.md
+++ b/doc/development/creating_enums.md
@@ -83,7 +83,7 @@ module EE
end
```
-This looks working as a workaround, however, this approach has some donwside that:
+This works as a workaround; however, this approach has some downsides:
- Features could move from EE to FOSS or vice versa. Therefore, the offset might be mixed between FOSS and EE in the future.
e.g. When you move `activity_limit_exceeded` to FOSS, you'll see `{ unknown_failure: 0, config_error: 1, activity_limit_exceeded: 1_000 }`.
diff --git a/doc/development/database/index.md b/doc/development/database/index.md
new file mode 100644
index 00000000000..665af623059
--- /dev/null
+++ b/doc/development/database/index.md
@@ -0,0 +1,48 @@
+# Database guides
+
+## Tooling
+
+- [Understanding EXPLAIN plans](../understanding_explain_plans.md)
+- [explain.depesz.com](https://explain.depesz.com/) or [explain.dalibo.com](https://explain.dalibo.com/) for visualizing the output of `EXPLAIN`
+- [pgFormatter](https://sqlformat.darold.net/), a PostgreSQL SQL syntax beautifier
+
+## Migrations
+
+- [What requires downtime?](../what_requires_downtime.md)
+- [SQL guidelines](../sql.md) for working with SQL queries
+- [Migrations style guide](../migration_style_guide.md) for creating safe SQL migrations
+- [Testing Rails migrations](../testing_guide/testing_migrations_guide.md) guide
+- [Post deployment migrations](../post_deployment_migrations.md)
+- [Background migrations](../background_migrations.md)
+- [Swapping tables](../swapping_tables.md)
+- [Deleting migrations](../deleting_migrations.md)
+
+## Debugging
+
+- Tracing the source of an SQL query using query comments with [Marginalia](../database_query_comments.md)
+- Tracing the source of an SQL query in Rails console using [Verbose Query Logs](https://guides.rubyonrails.org/debugging_rails_applications.html#verbose-query-logs)
+
+## Best practices
+
+- [Adding database indexes](../adding_database_indexes.md)
+- [Foreign keys & associations](../foreign_keys.md)
+- [Adding a foreign key constraint to an existing column](add_foreign_key_to_existing_column.md)
+- [`NOT NULL` constraints](not_null_constraints.md)
+- [Strings and the Text data type](strings_and_the_text_data_type.md)
+- [Single table inheritance](../single_table_inheritance.md)
+- [Polymorphic associations](../polymorphic_associations.md)
+- [Serializing data](../serializing_data.md)
+- [Hash indexes](../hash_indexes.md)
+- [Storing SHA1 hashes as binary](../sha1_as_binary.md)
+- [Iterating tables in batches](../iterating_tables_in_batches.md)
+- [Insert into tables in batches](../insert_into_tables_in_batches.md)
+- [Ordering table columns](../ordering_table_columns.md)
+- [Verifying database capabilities](../verifying_database_capabilities.md)
+- [Database Debugging and Troubleshooting](../database_debugging.md)
+- [Query Count Limits](../query_count_limits.md)
+- [Creating enums](../creating_enums.md)
+
+## Case studies
+
+- [Database case study: Filtering by label](../filtering_by_label.md)
+- [Database case study: Namespaces storage statistics](../namespaces_storage_statistics.md)
diff --git a/doc/development/database/not_null_constraints.md b/doc/development/database/not_null_constraints.md
new file mode 100644
index 00000000000..e4dec2afa10
--- /dev/null
+++ b/doc/development/database/not_null_constraints.md
@@ -0,0 +1,217 @@
+# `NOT NULL` constraints
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/38358) in GitLab 13.0.
+
+All attributes that should not have `NULL` as a value should be defined as `NOT NULL`
+columns in the database.
+
+Depending on the application logic, `NOT NULL` columns should either have a `presence: true`
+validation defined in their model or have a default value as part of their database definition.
+As an example, the latter can be true for boolean attributes that should always have a non-`NULL`
+value, but have a well defined default value that the application does not need to enforce each
+time (for example, `active=true`).
+
+## Create a new table with `NOT NULL` columns
+
+When adding a new table, all `NOT NULL` columns should be defined as such directly inside `create_table`.
+
+For example, consider a migration that creates a table with two `NOT NULL` columns,
+`db/migrate/20200401000001_create_db_guides.rb`:
+
+```ruby
+class CreateDbGuides < ActiveRecord::Migration[6.0]
+  DOWNTIME = false
+
+  def change
+    create_table :db_guides do |t|
+      t.bigint :stars, default: 0, null: false
+      t.bigint :guide, null: false
+    end
+  end
+end
+```
+
+## Add a `NOT NULL` column to an existing table
+
+With PostgreSQL 11 being the minimum version since GitLab 13.0, adding columns with `NOT NULL` constraints and/or
+default values has become much easier, and the standard `add_column` helper should be used in all cases.
+
+For example, consider a migration that adds a new `NOT NULL` column `active` to table `db_guides`,
+`db/migrate/20200501000001_add_active_to_db_guides.rb`:
+
+```ruby
+class AddActiveToDbGuides < ActiveRecord::Migration[6.0]
+  DOWNTIME = false
+
+  def change
+    add_column :db_guides, :active, :boolean, default: true, null: false
+  end
+end
+```
+
+## Add a `NOT NULL` constraint to an existing column
+
+Adding `NOT NULL` to existing database columns requires multiple steps split into at least two
+different releases:
+
+1. Release `N.M` (current release)
+
+   - Ensure the constraint is enforced at the application level (that is, add a model validation).
+   - Add a post-deployment migration to add the `NOT NULL` constraint with `validate: false`.
+   - Add a post-deployment migration to fix the existing records.
+
+     NOTE: **Note:**
+     Depending on the size of the table, a background migration for cleanup could be required in the next release.
+     See the [`NOT NULL` constraints on large tables](not_null_constraints.md#not-null-constraints-on-large-tables) section for more information.
+
+   - Create an issue for the next milestone to validate the `NOT NULL` constraint.
+
+1. Release `N.M+1` (next release)
+
+   - Validate the `NOT NULL` constraint using a post-deployment migration.
+
+### Example
+
+For a given release milestone, such as 13.0, suppose a model validation has been added to `epic.rb`
+to require a description:
+
+```ruby
+class Epic < ApplicationRecord
+ validates :description, presence: true
+end
+```
+
+The same constraint should be added at the database level for consistency purposes.
+We only want to enforce the `NOT NULL` constraint without setting a default, as we have decided
+that all epics should have a user-generated description.
+
+After checking our production database, we know that there are `epics` with `NULL` descriptions,
+so we cannot add and validate the constraint in one step.
+
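+For example, one simple way to estimate how many records would need fixing (for instance, against a
+production replica) could be:
+
+```sql
+-- Count of existing records that would violate the new constraint
+SELECT COUNT(*) FROM epics WHERE description IS NULL;
+```
+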
+NOTE: **Note:**
+Even if we did not have any epic with a `NULL` description, another instance of GitLab could have
+such records, so we would follow the same process either way.
+
+#### Prevent new invalid records (current release)
+
+We first add the `NOT NULL` constraint with a `NOT VALID` parameter, which enforces consistency
+when new records are inserted or current records are updated.
+
+In the example above, the existing epics with a `NULL` description will not be affected and you'll
+still be able to update records in the `epics` table. However, when you try to update or insert
+an epic without providing a description, the constraint causes a database error.
+
+Adding or removing a `NOT NULL` clause requires that any application changes are deployed _first_.
+Thus, adding a `NOT NULL` constraint to an existing column should happen in a post-deployment migration.
+
+Still in our example, for the 13.0 milestone (current), we add the `NOT NULL` constraint
+with `validate: false` in a post-deployment migration,
+`db/post_migrate/20200501000001_add_not_null_constraint_to_epics_description.rb`:
+
+```ruby
+class AddNotNullConstraintToEpicsDescription < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    # This will add the `NOT NULL` constraint WITHOUT validating it
+    add_not_null_constraint :epics, :description, validate: false
+  end
+
+  def down
+    # Down is required as `add_not_null_constraint` is not reversible
+    remove_not_null_constraint :epics, :description
+  end
+end
+```
+
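+For reference, on PostgreSQL the helper call above roughly corresponds to adding a `NOT VALID` check
+constraint (the generated constraint name here is only illustrative):
+
+```sql
+-- NOT VALID: enforced for new and updated rows, existing rows are not checked yet
+ALTER TABLE epics
+  ADD CONSTRAINT check_epics_description_not_null
+  CHECK (description IS NOT NULL) NOT VALID;
+```
+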
+#### Data migration to fix existing records (current release)
+
+The approach here depends on the data volume and the cleanup strategy. The number of records that
+must be fixed on GitLab.com is a nice indicator that will help us decide whether to use a
+post-deployment migration or a background data migration:
+
+- If the data volume is less than `1000` records, then the data migration can be executed within the post-migration.
+- If the data volume is higher than `1000` records, it's advised to create a background migration.
+
+When unsure about which option to use, please contact the Database team for advice.
+
+Back to our example, the epics table is neither considerably large nor frequently accessed,
+so we are going to add a post-deployment migration for the 13.0 milestone (current),
+`db/post_migrate/20200501000002_cleanup_epics_with_null_description.rb`:
+
+```ruby
+class CleanupEpicsWithNullDescription < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+
+  # With BATCH_SIZE=1000 and epics.count=29500 on GitLab.com
+  # - 30 iterations will be run
+  # - each requires on average ~150ms
+  # Expected total run time: ~5 seconds
+  BATCH_SIZE = 1000
+
+  disable_ddl_transaction!
+
+  class Epic < ActiveRecord::Base
+    include EachBatch
+
+    self.table_name = 'epics'
+  end
+
+  def up
+    Epic.each_batch(of: BATCH_SIZE) do |relation|
+      relation.
+        where('description IS NULL').
+        update_all(description: 'No description')
+    end
+  end
+
+  def down
+    # no-op: can't go back to `NULL` without first dropping the `NOT NULL` constraint
+  end
+end
+```
+
+#### Validate the `NOT NULL` constraint (next release)
+
+Validating the `NOT NULL` constraint will scan the whole table and make sure that each record is correct.
+
+Still in our example, for the 13.1 milestone (next), we run the `validate_not_null_constraint`
+migration helper in a final post-deployment migration,
+`db/post_migrate/20200601000001_validate_not_null_constraint_on_epics_description.rb`:
+
+```ruby
+class ValidateNotNullConstraintOnEpicsDescription < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    validate_not_null_constraint :epics, :description
+  end
+
+  def down
+    # no-op
+  end
+end
+```
+
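+For reference, this validation step roughly corresponds to the following SQL, which scans the whole
+table but only needs a `SHARE UPDATE EXCLUSIVE` lock (again, the constraint name is only illustrative):
+
+```sql
+-- Scans existing rows; conflicts only with other validations and index creation
+ALTER TABLE epics VALIDATE CONSTRAINT check_epics_description_not_null;
+```
+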
+## `NOT NULL` constraints on large tables
+
+If you have to clean up a nullable column for a really [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/migration_helpers.rb#L12)
+(for example, the `artifacts` in `ci_builds`), your background migration will go on for a while and
+it will need an additional [background migration cleaning up](../background_migrations.md#cleaning-up)
+in the release after the one that adds the data migration.
+
+In that rare case you will need 3 releases end-to-end:
+
+1. Release `N.M` - Add the `NOT NULL` constraint and the background migration to fix the existing records.
+1. Release `N.M+1` - Clean up the background migration.
+1. Release `N.M+2` - Validate the `NOT NULL` constraint.
+
+For these cases, please consult the database team early in the update cycle. The `NOT NULL`
+constraint may not be required or other options could exist that do not affect really large
+or frequently accessed tables.
diff --git a/doc/development/database/strings_and_the_text_data_type.md b/doc/development/database/strings_and_the_text_data_type.md
new file mode 100644
index 00000000000..0e77e3972e0
--- /dev/null
+++ b/doc/development/database/strings_and_the_text_data_type.md
@@ -0,0 +1,288 @@
+# Strings and the Text data type
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30453) in GitLab 13.0.
+
+When adding new columns that will be used to store strings or other textual information:
+
+1. We always use the `text` data type instead of the `string` data type.
+1. `text` columns should always have a limit set by using the `add_text_limit` migration helper.
+
+The `text` data type cannot be defined with a limit, so `add_text_limit` enforces one by
+adding a [check constraint](https://www.postgresql.org/docs/11/ddl-constraints.html) on the
+column and then validating it in a follow-up step.
+
+## Background info
+
+The reason we always want to use `text` instead of `string` is that `string` columns have the
+disadvantage that if you want to update their limit, you have to run an `ALTER TABLE ...` command.
+
+While a limit is being added, the `ALTER TABLE ...` command requires an `EXCLUSIVE LOCK` on the table, which
+is held throughout the process of updating the column and validating all existing records, a
+process that can take a while for large tables.
+
+On the other hand, texts are [more or less equivalent to strings](https://www.depesz.com/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/) in PostgreSQL,
+while having the additional advantage that adding a limit on an existing column or updating their
+limit does not require the very costly `EXCLUSIVE LOCK` to be held throughout the validation phase.
+We can start by adding the constraint with the `NOT VALID` option, which requires an `EXCLUSIVE LOCK`
+but only while the column declaration is updated. We can then validate it at a later step using
+`VALIDATE CONSTRAINT`, which requires only a `SHARE UPDATE EXCLUSIVE LOCK` (only conflicts with other
+validations and index creation while it allows reads and writes).
+
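+For illustration, the two steps described above roughly correspond to the following SQL (the table,
+column, limit, and constraint names are only examples):
+
+```sql
+-- Step 1: requires an exclusive lock, but only briefly while the constraint is added
+ALTER TABLE db_guides
+  ADD CONSTRAINT check_db_guides_title_length
+  CHECK (char_length(title) <= 128) NOT VALID;
+
+-- Step 2: scans the table, but only takes a SHARE UPDATE EXCLUSIVE lock
+ALTER TABLE db_guides VALIDATE CONSTRAINT check_db_guides_title_length;
+```
+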
+## Create a new table with text columns
+
+When adding a new table, the limits for all text columns should be added in the same migration as
+the table creation.
+
+For example, consider a migration that creates a table with two text columns,
+`db/migrate/20200401000001_create_db_guides.rb`:
+
+```ruby
+class CreateDbGuides < ActiveRecord::Migration[6.0]
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    unless table_exists?(:db_guides)
+      create_table :db_guides do |t|
+        t.bigint :stars, default: 0, null: false
+        t.text :title
+        t.text :notes
+      end
+    end
+
+    # The following calls add the constraints and validate them immediately (no data in the table)
+    add_text_limit :db_guides, :title, 128
+    add_text_limit :db_guides, :notes, 1024
+  end
+
+  def down
+    # No need to drop the constraints, drop_table takes care of everything
+    drop_table :db_guides
+  end
+end
+```
+
+Adding a check constraint requires an exclusive lock while the `ALTER TABLE` that adds it is running.
+As we don't want the exclusive lock to be held for the duration of a transaction, `add_text_limit`
+must always run in a migration with `disable_ddl_transaction!`.
+
+Also, note that we have to add a check that the table exists so that the migration can be repeated
+in case of a failure.
+
+## Add a text column to an existing table
+
+Adding a column to an existing table requires an exclusive lock for that table. Even though that lock
+is held for a brief amount of time, the time `add_column` needs to complete its execution can vary
+depending on how frequently the table is accessed. For example, acquiring an exclusive lock for a very
+frequently accessed table may take minutes on GitLab.com and requires the use of `with_lock_retries`.
+
+For these reasons, it is advised to add the text limit in a separate migration from the `add_column` one.
+
+For example, consider a migration that adds a new text column `extended_title` to table `sprints`,
+`db/migrate/20200501000001_add_extended_title_to_sprints.rb`:
+
+```ruby
+class AddExtendedTitleToSprints < ActiveRecord::Migration[6.0]
+  DOWNTIME = false
+
+  # rubocop:disable Migration/AddLimitToTextColumns
+  # limit is added in 20200501000002_add_text_limit_to_sprints_extended_title
+  def change
+    add_column :sprints, :extended_title, :text
+  end
+  # rubocop:enable Migration/AddLimitToTextColumns
+end
+```
+
+A second migration should follow the first one with a limit added to `extended_title`,
+`db/migrate/20200501000002_add_text_limit_to_sprints_extended_title.rb`:
+
+```ruby
+class AddTextLimitToSprintsExtendedTitle < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    add_text_limit :sprints, :extended_title, 512
+  end
+
+  def down
+    # Down is required as `add_text_limit` is not reversible
+    remove_text_limit :sprints, :extended_title
+  end
+end
+```
+
+## Add a text limit constraint to an existing column
+
+Adding text limits to existing database columns requires multiple steps split into at least two different releases:
+
+1. Release `N.M` (current release)
+
+   - Add a post-deployment migration to add the limit to the text column with `validate: false`.
+   - Add a post-deployment migration to fix the existing records.
+
+     NOTE: **Note:**
+     Depending on the size of the table, a background migration for cleanup could be required in the next release.
+     See [text limit constraints on large tables](strings_and_the_text_data_type.md#text-limit-constraints-on-large-tables) for more information.
+
+   - Create an issue for the next milestone to validate the text limit.
+
+1. Release `N.M+1` (next release)
+
+   - Validate the text limit using a post-deployment migration.
+
+### Example
+
+Let's assume we want to add a `1024` limit to `issues.title_html` for a given release milestone,
+such as 13.0.
+
+The `issues` table is large and frequently accessed, with more than 25 million rows, so we don't want to lock all
+other processes that try to access it while running the update.
+
+Also, after checking our production database, we know that there are `issues` with more characters in
+their title than the 1024 character limit, so we cannot add and validate the constraint in one step.
+
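+As an illustration, a quick way to estimate how many records are affected (for instance, against a
+production replica) could be:
+
+```sql
+-- Count of existing records that would violate the new limit
+SELECT COUNT(*) FROM issues WHERE char_length(title_html) > 1024;
+```
+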
+NOTE: **Note:**
+Even if we did not have any record with a title larger than the provided limit, another
+instance of GitLab could have such records, so we would follow the same process either way.
+
+#### Prevent new invalid records (current release)
+
+We first add the limit as a `NOT VALID` check constraint to the table, which enforces consistency when
+new records are inserted or current records are updated.
+
+In the example above, the existing issues with more than 1024 characters in their title will not be
+affected and you'll still be able to update records in the `issues` table. However, when you try
+to update the `title_html` with a title that has more than 1024 characters, the constraint causes
+a database error.
+
+Adding or removing a constraint to an existing attribute requires that any application changes are
+deployed _first_, [otherwise servers still in the old version of the application may try to update the
+attribute with invalid values](../multi_version_compatibility.md#ci-artifact-uploads-were-failing).
+For these reasons, `add_text_limit` should run in a post-deployment migration.
+
+Still in our example, for the 13.0 milestone (current), consider that the following validation
+has been added to the `Issue` model:
+
+```ruby
+validates :title_html, length: { maximum: 1024 }
+```
+
+We can also update the database in the same milestone by adding the text limit with `validate: false`
+in a post-deployment migration,
+`db/post_migrate/20200501000001_add_text_limit_migration.rb`:
+
+```ruby
+class AddTextLimitMigration < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    # This will add the constraint WITHOUT validating it
+    add_text_limit :issues, :title_html, 1024, validate: false
+  end
+
+  def down
+    # Down is required as `add_text_limit` is not reversible
+    remove_text_limit :issues, :title_html
+  end
+end
+```
+
+#### Data migration to fix existing records (current release)
+
+The approach here depends on the data volume and the cleanup strategy. The number of records that must
+be fixed on GitLab.com is a nice indicator that will help us decide whether to use a post-deployment
+migration or a background data migration:
+
+- If the data volume is less than `1,000` records, then the data migration can be executed within the post-migration.
+- If the data volume is higher than `1,000` records, it's advised to create a background migration.
+
+When unsure about which option to use, please contact the Database team for advice.
+
+Back to our example, the issues table is considerably large and frequently accessed, so we are going
+to add a background migration for the 13.0 milestone (current),
+`db/post_migrate/20200501000002_schedule_cap_title_length_on_issues.rb`:
+
+```ruby
+class ScheduleCapTitleLengthOnIssues < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+
+  # Include info on how many records will be affected on GitLab.com,
+  # the time each batch needs to run on average, etc ...
+  BATCH_SIZE = 5000
+  DELAY_INTERVAL = 2.minutes.to_i
+
+  # The background migration will update issues whose title is longer than the 1024 character limit
+  ISSUES_BACKGROUND_MIGRATION = 'CapTitleLengthOnIssues'.freeze
+
+  disable_ddl_transaction!
+
+  class Issue < ActiveRecord::Base
+    include EachBatch
+
+    self.table_name = 'issues'
+  end
+
+  def up
+    queue_background_migration_jobs_by_range_at_intervals(
+      Issue.where('char_length(title_html) > 1024'),
+      ISSUES_BACKGROUND_MIGRATION,
+      DELAY_INTERVAL,
+      batch_size: BATCH_SIZE
+    )
+  end
+
+  def down
+    # no-op: the part of the title_html after the limit is lost forever
+  end
+end
+```
+
+To keep this guide short, we skipped the definition of the background migration and only
+provided a high-level example of the post-deployment migration that is used to schedule the batches.
+You can find more information in the [background migrations](../background_migrations.md) guide.
+
+#### Validate the text limit (next release)
+
+Validating the text limit will scan the whole table and make sure that each record is correct.
+
+Still in our example, for the 13.1 milestone (next), we run the `validate_text_limit` migration
+helper in a final post-deployment migration,
+`db/post_migrate/20200601000001_validate_text_limit_migration.rb`:
+
+```ruby
+class ValidateTextLimitMigration < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+  DOWNTIME = false
+
+  disable_ddl_transaction!
+
+  def up
+    validate_text_limit :issues, :title_html
+  end
+
+  def down
+    # no-op
+  end
+end
+```
+
+## Text limit constraints on large tables
+
+If you have to clean up a text column for a really [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
+(for example, the `artifacts` in `ci_builds`), your background migration will go on for a while and
+it will need an additional [background migration cleaning up](../background_migrations.md#cleaning-up)
+in the release after the one that adds the data migration.
+
+In that rare case you will need 3 releases end-to-end:
+
+1. Release `N.M` - Add the text limit and the background migration to fix the existing records.
+1. Release `N.M+1` - Clean up the background migration.
+1. Release `N.M+2` - Validate the text limit.
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index 3753baa3c63..629aea5b121 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -70,7 +70,9 @@ Use these instructions for exploring the GitLab database while developing with t
1. **PostgreSQL user to authenticate as**: usually your local username, unless otherwise specified during PostgreSQL installation.
1. **Password of the PostgreSQL user**: the password you set when installing PostgreSQL.
1. **Port number to connect to**: `5432` (default).
- 1. **Use an ssl connection?** This depends on your installation. Options are:
+ 1. <!-- vale gitlab.Spelling = NO -->
+ **Use an ssl connection?**
+    <!-- vale gitlab.Spelling = YES --> This depends on your installation. Options are:
- **Use Secure Connection**
- **Standard Connection** (default)
1. **(Optional) The database to connect to**: `gitlabhq_development`.
@@ -86,7 +88,7 @@ of the extension documentation.
### `ActiveRecord::PendingMigrationError` with Spring
-When running specs with the [Spring preloader](rake_tasks.md#speed-up-tests-rake-tasks-and-migrations),
+When running specs with the [Spring pre-loader](rake_tasks.md#speed-up-tests-rake-tasks-and-migrations),
the test database can get into a corrupted state. Trying to run the migration or
dropping/resetting the test database has no effect.
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index aa7ebb3756f..5405a8808f0 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -81,13 +81,20 @@ the following preparations into account.
- Ensure the down method reverts the changes in `db/structure.sql`.
- Update the migration output whenever you modify the migrations during the review process.
- Add tests for the migration in `spec/migrations` if necessary. See [Testing Rails migrations at GitLab](testing_guide/testing_migrations_guide.md) for more details.
-- When [high-traffic](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/migration_helpers.rb#L12) tables are involved in the migration, use the [`with_lock_retries`](migration_style_guide.md#retry-mechanism-when-acquiring-database-locks) helper method. Review the relevant [examples in our documentation](migration_style_guide.md#examples) for use cases and solutions.
+- When [high-traffic](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3) tables are involved in the migration, use the [`with_lock_retries`](migration_style_guide.md#retry-mechanism-when-acquiring-database-locks) helper method. Review the relevant [examples in our documentation](migration_style_guide.md#examples) for use cases and solutions.
- Ensure RuboCop checks are not disabled unless there's a valid reason to.
+- When adding an index to a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3),
+  test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slack channel and add the execution time to the MR description (a sample statement is shown below):
+  - Execution time varies largely between `#database-lab` and GitLab.com, but an elevated execution time on `#database-lab`
+    can give a hint that the execution on GitLab.com will also be considerably long.
+  - If the execution on `#database-lab` takes longer than `1h`, the index should be moved to a [post-migration](post_deployment_migrations.md).
+    Keep in mind that in this case you may need to split the migration and the application changes into separate releases to ensure the index
+    is in place when the code that needs it is deployed.
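+
+  For example, the statement you test would look something like the following (the index and column
+  here are purely illustrative):
+
+  ```sql
+  -- Hypothetical index; substitute the definition from your migration
+  CREATE INDEX CONCURRENTLY index_issues_on_closed_at ON issues (closed_at);
+  ```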
#### Preparation when adding or modifying queries
- Write the raw SQL in the MR description. Preferably formatted
- nicely with [sqlformat.darold.net](http://sqlformat.darold.net) or
+ nicely with [pgFormatter](https://sqlformat.darold.net) or
[paste.depesz.com](https://paste.depesz.com).
- Include the output of `EXPLAIN (ANALYZE, BUFFERS)` of the relevant
queries in the description. If the output is too long, wrap it in
@@ -134,6 +141,9 @@ the following preparations into account.
- [Check indexes are present for foreign keys](migration_style_guide.md#adding-foreign-key-constraints)
- Ensure that migrations execute in a transaction or only contain
concurrent index/foreign key helpers (with transactions disabled)
+  - If an index is added to a large table and its execution time was elevated (more than 1h) on `#database-lab`:
+    - Ensure it was added in a post-migration.
+    - Maintainer: After the merge request is merged, notify Release Managers about it in the `#f_upcoming_release` Slack channel.
- Check consistency with `db/structure.sql` and that migrations are [reversible](migration_style_guide.md#reversibility)
- Check queries timing (If any): Queries executed in a migration
need to fit comfortably within `15s` - preferably much less than that - on GitLab.com.
diff --git a/doc/development/db_dump.md b/doc/development/db_dump.md
index bb740d12f7b..4095932e44c 100644
--- a/doc/development/db_dump.md
+++ b/doc/development/db_dump.md
@@ -2,7 +2,7 @@
Sometimes it is useful to import the database from a production environment
into a staging environment for testing. The procedure below assumes you have
-SSH+sudo access to both the production environment and the staging VM.
+SSH and `sudo` access to both the production environment and the staging VM.
**Destroy your staging VM** when you are done with it. It is important to avoid
data leaks.
@@ -20,7 +20,7 @@ sudo gitlab-ctl stop sidekiq
```
Next, we let the production environment stream a compressed SQL dump to our
-local machine via SSH, and redirect this stream to a psql client on the staging
+local machine via SSH, and redirect this stream to a `psql` client on the staging
VM.
```shell
diff --git a/doc/development/deleting_migrations.md b/doc/development/deleting_migrations.md
index 3ac039a1692..8aa16710d55 100644
--- a/doc/development/deleting_migrations.md
+++ b/doc/development/deleting_migrations.md
@@ -5,7 +5,7 @@ the possibility of the migration already been included in past releases or in th
Because of it, it's not possible to delete existing migrations, as that could lead to:
-- Schema inconsistency, as changes introduced into the database were not rollbacked properly.
+- Schema inconsistency, as changes introduced into the database were not rolled back properly.
- Leaving a record on the `schema_versions` table, that points out to migration that no longer exists on the codebase.
Instead of deleting we can opt for disabling the migration.
@@ -22,7 +22,7 @@ Migrations can be disabled if:
In order to disable a migration, the following steps apply to all types of migrations:
-1. Turn the migration into a noop by removing the code inside `#up`, `#down`
+1. Turn the migration into a no-op by removing the code inside `#up`, `#down`
or `#perform` methods, and adding `#no-op` comment instead.
1. Add a comment explaining why the code is gone.
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index ae84e38e324..7fc33380aba 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -27,7 +27,7 @@ process boundaries, the correlation ID is injected into the outgoing request. Th
the propagation of the correlation ID to each downstream subsystem.
Correlation IDs are normally generated in the Rails application in response to
-certain webrequests. Some user facing systems don't generate correlation IDs in
+certain web requests. Some user-facing systems don't generate correlation IDs in
response to user requests (for example, Git pushes over SSH).
### Developer guidelines for working with correlation IDs
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index 373c5a4ee7d..f3ce9ce3a83 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -43,10 +43,11 @@ For feature flags disabled by default, if they can be used by end users:
- Say that it's disabled by default.
- Say whether it's enabled on GitLab.com.
+- Say whether it can be enabled or disabled per-project.
- Say whether it's recommended for production use.
- Document how to enable and disable it.
-For example, for a feature disabled by default, disabled on GitLab.com, and
+For example, for a feature disabled by default, disabled on GitLab.com, that can be enabled or disabled per-project, and
not ready for production use:
````markdown
@@ -55,6 +56,7 @@ not ready for production use:
> - [Introduced](link-to-issue) in GitLab 12.0.
> - It's deployed behind a feature flag, disabled by default.
> - It's disabled on GitLab.com.
+> - It can be enabled or disabled per-project.
> - It's not recommended for production use.
> - To use it in GitLab self-managed instances, ask a GitLab administrator to [enable it](#anchor-to-section). **(CORE ONLY)**
@@ -65,18 +67,24 @@ not ready for production use:
<Feature Name> is under development and not ready for production use. It is
deployed behind a feature flag that is **disabled by default**.
[GitLab administrators with access to the GitLab Rails console](../path/to/administration/feature_flags.md)
-can enable it for your instance.
+can enable it for your instance. <Feature Name> can be enabled or disabled per-project.
To enable it:
```ruby
+# Instance-wide
Feature.enable(:<feature flag>)
+# or by project
+Feature.enable(:<feature flag>, Project.find(<project id>))
```
To disable it:
```ruby
+# Instance-wide
Feature.disable(:<feature flag>)
+# or by project
+Feature.disable(:<feature flag>, Project.find(<project id>))
```
````
@@ -88,10 +96,11 @@ For features that became enabled by default:
- Say that it became enabled by default.
- Say whether it's enabled on GitLab.com.
+- Say whether it can be enabled or disabled per-project.
- Say whether it's recommended for production use.
- Document how to disable and enable it.
-For example, for a feature initially deployed disabled by default, that became enabled by default, that is enabled on GitLab.com, and ready for production use:
+For example, for a feature initially deployed disabled by default, that became enabled by default, that is enabled on GitLab.com, that cannot be enabled or disabled per-project, and ready for production use:
````markdown
# Feature Name
@@ -100,6 +109,7 @@ For example, for a feature initially deployed disabled by default, that became e
> - It was deployed behind a feature flag, disabled by default.
> - [Became enabled by default](link-to-issue) on GitLab 12.1.
> - It's enabled on GitLab.com.
+> - It cannot be enabled or disabled per-project.
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
@@ -110,7 +120,7 @@ For example, for a feature initially deployed disabled by default, that became e
<Feature Name> is under development but ready for production use.
It is deployed behind a feature flag that is **enabled by default**.
[GitLab administrators with access to the GitLab Rails console](..path/to/administration/feature_flags.md)
-can opt to disable it for your instance.
+can opt to disable it for your instance. It cannot be enabled or disabled per-project.
To disable it:
@@ -133,10 +143,11 @@ For features enabled by default:
- Say it's enabled by default.
- Say whether it's enabled on GitLab.com.
+- Say whether it can be enabled or disabled per-project.
- Say whether it's recommended for production use.
- Document how to disable and enable it.
-For example, for a feature enabled by default, enabled on GitLab.com, and ready for production use:
+For example, for a feature enabled by default, enabled on GitLab.com, that cannot be enabled or disabled per-project, and ready for production use:
````markdown
# Feature Name
@@ -144,6 +155,7 @@ For example, for a feature enabled by default, enabled on GitLab.com, and ready
> - [Introduced](link-to-issue) in GitLab 12.0.
> - It's deployed behind a feature flag, enabled by default.
> - It's enabled on GitLab.com.
+> - It cannot be enabled or disabled per-project.
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index 256d3f5d86c..f845a6ec592 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -59,7 +59,7 @@ However, anyone can contribute [documentation improvements](improvement-workflow
## Markdown and styles
[GitLab docs](https://gitlab.com/gitlab-org/gitlab-docs) uses [GitLab Kramdown](https://gitlab.com/gitlab-org/gitlab_kramdown)
-as its Markdown rendering engine. See the [GitLab Markdown Guide](https://about.gitlab.com/handbook/engineering/ux/technical-writing/markdown-guide/) for a complete Kramdown reference.
+as its Markdown rendering engine. See the [GitLab Markdown Guide](https://about.gitlab.com/handbook/markdown-guide/) for a complete Kramdown reference.
Adhere to the [Documentation Style Guide](styleguide.md). If a style standard is missing, you are welcome to suggest one via a merge request.
@@ -67,6 +67,41 @@ Adhere to the [Documentation Style Guide](styleguide.md). If a style standard is
See the [Structure](styleguide.md#structure) section of the [Documentation Style Guide](styleguide.md).
+## Metadata
+
+To provide additional directives or useful information, we add metadata in YAML
+format to the beginning of each product documentation page.
+
+For example, the following metadata would be at the beginning of a product
+documentation page whose content is primarily associated with the Audit Events
+feature:
+
+```yaml
+---
+stage: Monitor
+group: APM
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+```
+
+The following list describes the YAML parameters in use:
+
+- `redirect_to`: The relative path and filename (with an `.md` extension) of the
+ location to which visitors should be redirected for a moved page.
+ [Learn more](#changing-document-location).
+- `stage`: The [Stage](https://about.gitlab.com/handbook/product/categories/#devops-stages)
+ to which the majority of the page's content belongs.
+- `group`: The [Group](https://about.gitlab.com/company/team/structure/#product-groups)
+ to which the majority of the page's content belongs.
+- `info`: The following line, which provides direction to contributors regarding
+ how to contact the Technical Writer associated with the page's Stage and
+ Group: `To determine the technical writer assigned to the Stage/Group
+ associated with this page, see
+ https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers`
+- `disqus_identifier`: Identifier for Disqus commenting system. Used to keep
+ comments with a page that's been moved to a new URL.
+ [Learn more](#redirections-for-pages-with-disqus-comments).
+
## Changing document location
Changing a document's location requires specific steps to ensure that
@@ -133,10 +168,9 @@ Things to note:
it in for `workflow/lfs/lfs_administration` and `lfs/lfs_administration`
and will print the file and the line where this file is mentioned.
You may ask why the two greps. Since [we use relative paths to link to
- documentation](styleguide.md#links)
- , sometimes it might be useful to search a path deeper.
+ documentation](styleguide.md#links), sometimes it might be useful to search a path deeper.
- The `*.md` extension is not used when a document is linked to GitLab's
- built-in help page, that's why we omit it in `git grep`.
+ built-in help page, which is why we omit it in `git grep`.
- Use the checklist on the "Change documentation location" MR description template.
### Redirections for pages with Disqus comments
@@ -380,7 +414,7 @@ The following GitLab features are used among others:
- [Multi project pipelines](../../ci/multi_project_pipeline_graphs.md)
- [Review Apps](../../ci/review_apps/index.md)
- [Artifacts](../../ci/yaml/README.md#artifacts)
-- [Specific Runner](../../ci/runners/README.md#locking-a-specific-runner-from-being-enabled-for-other-projects)
+- [Specific Runner](../../ci/runners/README.md#prevent-a-specific-runner-from-being-enabled-for-other-projects)
- [Pipelines for merge requests](../../ci/merge_request_pipelines/index.md)
## Testing
@@ -477,7 +511,7 @@ This list does not limit what other linters you can add to your local documentat
[markdownlint](https://github.com/DavidAnson/markdownlint) checks that Markdown
syntax follows [certain rules](https://github.com/DavidAnson/markdownlint/blob/master/doc/Rules.md#rules),
and is used by the [`docs-lint` test](#testing) with a [configuration file](#markdownlint-configuration).
-Our [Documentation Style Guide](styleguide.md#markdown) and [Markdown Guide](https://about.gitlab.com/handbook/engineering/ux/technical-writing/markdown-guide/)
+Our [Documentation Style Guide](styleguide.md#markdown) and [Markdown Guide](https://about.gitlab.com/handbook/markdown-guide/)
elaborate on which choices must be made when selecting Markdown syntax for GitLab
documentation. This tool helps catch deviations from those guidelines.
@@ -498,7 +532,9 @@ If you wish to use a different configuration file, use the `-c` flag:
markdownlint -c <config-file-name> 'doc/**/*.md'
```
-markdownlint can also be run from within text editors using [plugins/extensions](https://github.com/DavidAnson/markdownlint#related),
+##### Run markdownlint in an editor
+
+markdownlint can also be run as a linter within text editors using [plugins/extensions](https://github.com/DavidAnson/markdownlint#related),
such as:
- [Sublime Text](https://packagecontrol.io/packages/SublimeLinter-contrib-markdownlint)
@@ -524,7 +560,7 @@ four repositories that are the sources for <https://docs.gitlab.com>:
By default all rules are enabled, so the configuration file is used to disable unwanted
rules, and also to configure optional parameters for enabled rules as needed. You can
-also check [the issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/64352) that
+also check [the issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64352) that
tracked the changes required to implement these rules, and details which rules were
on or off when markdownlint was enabled on the docs.
@@ -546,11 +582,16 @@ and from GitLab's root directory (where `.vale.ini` is located), run:
vale --glob='*.{md}' doc
```
-You can also
-[configure the text editor of your choice](https://errata-ai.github.io/vale/#local-use-by-a-single-writer)
-to display the results.
+Vale's error-level test results [are displayed](#testing) in CI pipelines.
+
+##### Run Vale in an editor
+
+You can run Vale as a linter within your text editor of choice, with:
+
+- The Sublime Text [`SublimeLinter-contrib-vale` plugin](https://packagecontrol.io/packages/SublimeLinter-contrib-vale)
+- The Visual Studio Code [`testthedocs.vale` extension](https://marketplace.visualstudio.com/items?itemName=testthedocs.vale)
-Vale's test results [are displayed](#testing) in CI pipelines.
+We don't use [Vale Server](https://errata-ai.github.io/vale/#using-vale-with-a-text-editor-or-another-third-party-application).
##### Disable a Vale test
diff --git a/doc/development/documentation/site_architecture/global_nav.md b/doc/development/documentation/site_architecture/global_nav.md
index 12190e2cb9e..71020e6054e 100644
--- a/doc/development/documentation/site_architecture/global_nav.md
+++ b/doc/development/documentation/site_architecture/global_nav.md
@@ -356,7 +356,7 @@ files.
```
This also allows the nav to be displayed on other
-highest-level dirs (`/omnibus/`, `/runner/`, etc),
+highest-level directories (`/omnibus/`, `/runner/`, etc),
linking them back to `/ee/`.
The same logic is applied to all sections (`sec[:section_url]`),
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index 56dd3821b1c..942b202a3ec 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -53,12 +53,12 @@ product, and all together are pulled to generate the docs website:
- [GitLab Chart](https://gitlab.com/charts/gitlab/tree/master/doc)
NOTE: **Note:**
-In September 2019, we [moved towards a single codebase](https://gitlab.com/gitlab-org/gitlab/issues/2952),
+In September 2019, we [moved towards a single codebase](https://gitlab.com/gitlab-org/gitlab/-/issues/2952),
as such the docs for CE and EE are now identical. For historical reasons and
in order not to break any existing links throughout the internet, we still
maintain the CE docs (`https://docs.gitlab.com/ce/`), although it is hidden
from the website, and is now a symlink to the EE docs. When
-[Pages supports redirects](https://gitlab.com/gitlab-org/gitlab-pages/issues/24),
+[Pages supports redirects](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/24),
we will be able to remove this completely.
## Assets
@@ -107,13 +107,13 @@ The pipeline in the `gitlab-docs` project:
Once a week on Mondays, a scheduled pipeline runs and rebuilds the Docker images
used in various pipeline jobs, like `docs-lint`. The Docker image configuration files are
-located at <https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/dockerfiles>.
+located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/dockerfiles).
If you need to rebuild the Docker images immediately (must have maintainer level permissions):
CAUTION: **Caution**
If you change the dockerfile configuration and rebuild the images, you can break the master
-pipeline in the main `gitlab` repo as well as in `gitlab-docs`. Create an image with
+pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with
a different name first and test it to ensure you do not break the pipelines.
1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI / CD > Pipelines**.
@@ -173,7 +173,7 @@ we reference the array with a symbol (`:versions`).
## Bumping versions of CSS and JavaScript
Whenever the custom CSS and JavaScript files under `content/assets/` change,
-make sure to bump their version in the frontmatter. This method guarantees that
+make sure to bump their version in the front matter. This method guarantees that
your changes will take effect by clearing the cache of previous files.
Always use Nanoc's way of including those files, do not hardcode them in the
@@ -207,22 +207,22 @@ If you don't specify `editor:`, the simple one is used by default.
## Algolia search engine
-The docs site uses [Algolia docsearch](https://community.algolia.com/docsearch/)
+The docs site uses [Algolia DocSearch](https://community.algolia.com/docsearch/)
for its search function. This is how it works:
-1. GitLab is a member of the [docsearch program](https://community.algolia.com/docsearch/#join-docsearch-program),
+1. GitLab is a member of the [DocSearch program](https://community.algolia.com/docsearch/#join-docsearch-program),
which is the free tier of [Algolia](https://www.algolia.com/).
1. Algolia hosts a [DocSearch configuration](https://github.com/algolia/docsearch-configs/blob/master/configs/gitlab.json)
for the GitLab docs site, and we've worked together to refine it.
-1. That [config](https://community.algolia.com/docsearch/config-file.html) is
+1. That [configuration](https://community.algolia.com/docsearch/config-file.html) is
parsed by their [crawler](https://community.algolia.com/docsearch/crawler-overview.html)
every 24h and [stores](https://community.algolia.com/docsearch/inside-the-engine.html)
- the [docsearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html)
+ the [DocSearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html)
on [Algolia's servers](https://community.algolia.com/docsearch/faq.html#where-is-my-data-hosted%3F).
-1. On the docs side, we use a [docsearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/docsearch.html) which
+1. On the docs side, we use a [DocSearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/docsearch.html) which
is present on pretty much every page except <https://docs.gitlab.com/search/>,
which uses its [own layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/instantsearch.html). In those layouts,
- there's a JavaScript snippet which initiates docsearch by using an API key
+ there's a JavaScript snippet which initiates DocSearch by using an API key
and an index name (`gitlab`) that are needed for Algolia to show the results.
NOTE: **For GitLab employees:**
diff --git a/doc/development/documentation/site_architecture/release_process.md b/doc/development/documentation/site_architecture/release_process.md
index 13d6540fa35..d100ab8afa8 100644
--- a/doc/development/documentation/site_architecture/release_process.md
+++ b/doc/development/documentation/site_architecture/release_process.md
@@ -62,30 +62,15 @@ The single docs version must be created before the release merge request, but
this needs to happen when the stable branches for all products have been created.
1. Make sure you're in the root path of the `gitlab-docs` repository.
-1. Make sure your `master` is updated:
-
- ```shell
- git pull origin master
- ```
-
1. Run the Rake task to create the single version:
```shell
./bin/rake "release:single[12.0]"
```
- A new `Dockerfile.12.0` should have been created and committed to a new branch.
-
-1. Edit `.gitlab-ci.yml` and replace the `BRANCH_` variables with their respective
- upstream stable branch. For example, 12.6 would look like:
-
- ```yaml
- variables:
- BRANCH_EE: '12-6-stable-ee'
- BRANCH_OMNIBUS: '12-6-stable'
- BRANCH_RUNNER: '12-6-stable'
- BRANCH_CHARTS: '2-6-stable'
- ```
+   A new `Dockerfile.12.0` should have been created, and the `BRANCH_` variables
+   in `.gitlab-ci.yml` should have been updated, all in a new branch. These changes
+   are committed automatically.
1. Push the newly created branch, but **don't create a merge request**.
Once you push, the `image:docker-singe` job will create a new Docker image
@@ -106,6 +91,9 @@ Visit `http://localhost:4000/12.0/` to see if everything works correctly.
### 3. Create the release merge request
+NOTE: **Note:**
+To be [automated](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/750).
+
Now it's time to create the monthly release merge request that adds the new
version and rotates the old one:
@@ -161,7 +149,7 @@ versions:
1. Run the Rake task that will create all the respective merge requests needed to
update the dropdowns and will be set to automatically be merged when their
pipelines succeed. The `release-X-Y` branch needs to be present locally,
- otherwise the Rake task will fail:
+ and you need to have switched to it, otherwise the Rake task will fail:
```shell
./bin/rake release:dropdowns
diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md
index d19383bee27..eadcedfaac0 100644
--- a/doc/development/documentation/structure.md
+++ b/doc/development/documentation/structure.md
@@ -34,7 +34,7 @@ For additional details on each, see the [template for new docs](#template-for-ne
below.
Note that you can include additional subsections, as appropriate, such as 'How it Works', 'Architecture',
-and other logical divisions such as pre- and post-deployment steps.
+and other logical divisions such as pre-deployment and post-deployment steps.
## Template for new docs
diff --git a/doc/development/documentation/styleguide.md b/doc/development/documentation/styleguide.md
index 6d146051e13..9a93dbf4f94 100644
--- a/doc/development/documentation/styleguide.md
+++ b/doc/development/documentation/styleguide.md
@@ -96,7 +96,7 @@ Having a knowledge base in any form that is separate from the documentation woul
All GitLab documentation is written using [Markdown](https://en.wikipedia.org/wiki/Markdown).
-The [documentation website](https://docs.gitlab.com) uses GitLab Kramdown as its Markdown rendering engine. For a complete Kramdown reference, see the [GitLab Markdown Kramdown Guide](https://about.gitlab.com/handbook/engineering/ux/technical-writing/markdown-guide/).
+The [documentation website](https://docs.gitlab.com) uses GitLab Kramdown as its Markdown rendering engine. For a complete Kramdown reference, see the [GitLab Markdown Kramdown Guide](https://about.gitlab.com/handbook/markdown-guide/).
The [`gitlab-kramdown`](https://gitlab.com/gitlab-org/gitlab_kramdown)
Ruby gem will support all [GFM markup](../../user/markdown.md) in the future. That is,
@@ -134,6 +134,8 @@ be ignored by markdownlint.
In general, product names should follow the exact capitalization of the official names
of the products, protocols, and so on.
+See [`.markdownlint.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.json)
+for the words tested for proper capitalization in GitLab documentation.
Some examples fail if incorrect capitalization is used:
@@ -246,13 +248,14 @@ Do not include the same information in multiple places. [Link to a SSOT instead.
GitLab documentation should be clear and easy to understand.
- Be clear, concise, and stick to the goal of the documentation.
-- Write in US English with US grammar.
+- Write in US English with US grammar. (Tested in [`British.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/British.yml).)
- Use inclusive language.
### Point of view
In most cases, it’s appropriate to use the second-person (you, yours) point of view,
because it’s friendly and easy to understand.
+(Tested in [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml).)
<!-- How do we harmonize the second person in Pajamas with our first person plural in our doc guide? -->
@@ -272,10 +275,11 @@ because it’s friendly and easy to understand.
- [GitLab Features](https://about.gitlab.com/features/). For example, Issue Board,
Geo, and Runner.
- GitLab [product tiers](https://about.gitlab.com/pricing/). For example, GitLab Core
- and GitLab Ultimate.
+ and GitLab Ultimate. (Tested in [`BadgeCapitalization.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/BadgeCapitalization.yml).)
- Third-party products. For example, Prometheus, Kubernetes, and Git.
- Methods or methodologies. For example, Continuous Integration, Continuous
Deployment, Scrum, and Agile.
+ (Tested in [`.markdownlint.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.json).)
NOTE: **Note:**
Some features are also objects. For example, "GitLab's Merge Requests support X" and
@@ -289,6 +293,7 @@ tenses, words, and phrases:
- Avoid jargon.
- Avoid uncommon words.
- Don't write in the first person singular.
+ (Tested in [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml).)
- Instead of "I" or "me," use "we," "you," "us," or "one."
- When possible, stay user focused by writing in the second person ("you" or
the imperative).
@@ -311,6 +316,7 @@ tenses, words, and phrases:
- <!-- vale gitlab.LatinTerms = NO -->
We discourage use of Latin abbreviations, such as "e.g.," "i.e.," or "etc.,"
as even native users of English might misunderstand them.
+ (Tested in [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml).)
- Instead of "i.e.," use "that is."
- Instead of "e.g.," use "for example," "such as," "for instance," or "like."
- Instead of "etc.," either use "and so on" or consider editing it out, since
@@ -319,6 +325,9 @@ tenses, words, and phrases:
- Avoid using the word *currently* when talking about the product or its
features. The documentation describes the product as it is, and not as it
will be at some indeterminate point in the future.
+- Don't use profanity or obscenities. Doing so may negatively affect other
+ users and contributors, which is contrary to GitLab's value of
+ [diversity and inclusion](https://about.gitlab.com/handbook/values/#diversity-inclusion).
### Word usage clarifications
@@ -331,55 +340,55 @@ tenses, words, and phrases:
### Contractions
-- Use common contractions when it helps create a friendly and informal tone, especially in tutorials, instructional documentation, and [UIs](https://design.gitlab.com/content/punctuation/#contractions).
-
-| Do | Don't |
-|----------|-----------|
-| it's | it is |
-| can't | cannot |
-| wouldn't | would not |
-| you're | you are |
-| you've | you have |
-| haven't | have not |
-| don't | do not |
-| we're | we are |
-| that's | that is |
-| won't | will not |
+- Use common contractions when it helps create a friendly and informal tone, especially in tutorials, instructional documentation, and [UIs](https://design.gitlab.com/content/punctuation/#contractions). (Tested in [`Contractions.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/Contractions.yml).)
+
+ | Do | Don't |
+ |----------|-----------|
+ | it's | it is |
+ | can't | cannot |
+ | wouldn't | would not |
+ | you're | you are |
+ | you've | you have |
+ | haven't | have not |
+ | don't | do not |
+ | we're | we are |
+ | that's | that is |
+ | won't | will not |
- Avoid less common contractions:
-| Do | Don't |
-|--------------|-------------|
-| he would | he'd |
-| it will | it'll |
-| should have | should've |
-| there would | there'd |
+ | Do | Don't |
+ |--------------|-------------|
+ | he would | he'd |
+ | it will | it'll |
+ | should have | should've |
+ | there would | there'd |
- Do not use contractions with a proper noun and a verb. For example:
-| Do | Don't |
-|----------------------|---------------------|
-| GitLab is creating X | GitLab's creating X |
+ | Do | Don't |
+ |----------------------|---------------------|
+ | GitLab is creating X | GitLab's creating X |
- Do not use contractions when you need to emphasize a negative. For example:
-| Do | Don't |
-|-----------------------------|----------------------------|
-| Do **not** install X with Y | **Don't** install X with Y |
+ | Do | Don't |
+ |-----------------------------|----------------------------|
+ | Do **not** install X with Y | **Don't** install X with Y |
- Do not use contractions in reference documentation. For example:
-| Do | Don't |
-|------------------------------------------|----------------------------|
-| Do **not** set a limit greater than 1000 | **Don't** set a limit greater than 1000 |
-| For `parameter1`, the default is 10 | For `parameter1`, the default's 10 |
+ | Do | Don't |
+ |------------------------------------------|----------------------------|
+ | Do **not** set a limit greater than 1000 | **Don't** set a limit greater than 1000 |
+ | For `parameter1`, the default is 10 | For `parameter1`, the default's 10 |
- Avoid contractions in error messages. Examples:
-| Do | Don't |
-|------------------------------------------|----------------------------|
-| Requests to localhost are not allowed | Requests to localhost aren't allowed |
-| Specified URL cannot be used | Specified URL can't be used |
+ | Do | Don't |
+ |------------------------------------------|----------------------------|
+ | Requests to localhost are not allowed | Requests to localhost aren't allowed |
+ | Specified URL cannot be used | Specified URL can't be used |
<!-- vale on -->
@@ -414,9 +423,9 @@ Check specific punctuation rules for [lists](#lists) below.
| ---- | ------- |
| Always end full sentences with a period. | _For a complete overview, read through this document._|
| Always add a space after a period when beginning a new sentence. | _For a complete overview, check this doc. For other references, check out this guide._ |
-| Do not use double spaces. | --- |
+| Do not use double spaces. (Tested in [`SentenceSpacing.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/SentenceSpacing.yml).) | --- |
| Do not use tabs for indentation. Use spaces instead. You can configure your code editor to output spaces instead of tabs when pressing the tab key. | --- |
-| Use serial commas ("Oxford commas") before the final 'and/or' in a list. | _You can create new issues, merge requests, and milestones._ |
+| Use serial commas ("Oxford commas") before the final 'and/or' in a list. (Tested in [`OxfordComma.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/OxfordComma.yml).) | _You can create new issues, merge requests, and milestones._ |
| Always add a space before and after dashes when using it in a sentence (for replacing a comma, for example). | _You should try this - or not._ |
| Always use lowercase after a colon. | _Related Issues: a way to create a relationship between issues._ |
@@ -439,7 +448,7 @@ cp <your_source_directory> <your_destination_directory>
- Always start list items with a capital letter, unless they are parameters or commands
that are in backticks, or similar.
- Always leave a blank line before and after a list.
-- Begin a line with spaces (not tabs) to denote a [nested subitem](#nesting-inside-a-list-item).
+- Begin a line with spaces (not tabs) to denote a [nested sub-item](#nesting-inside-a-list-item).
### Ordered vs. unordered lists
@@ -599,7 +608,7 @@ that is best described by a matrix, tables are the best choice for use.
### Creation guidelines
-Due to accessibility and scanability requirements, tables should not have any
+Due to accessibility and scannability requirements, tables should not have any
empty cells. If there is no otherwise meaningful value for a cell, consider entering
*N/A* (for 'not applicable') or *none*.
@@ -648,7 +657,7 @@ For other punctuation rules, please refer to the
links shift too, which eventually leads to dead links. If you think it is
compelling to add numbers in headings, make sure to at least discuss it with
someone in the Merge Request.
-- [Avoid using symbols and special chars](https://gitlab.com/gitlab-org/gitlab-docs/issues/84)
+- [Avoid using symbols and special characters](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/84)
in headers. Whenever possible, they should be plain and short text.
- Avoid adding things that show ephemeral statuses. For example, if a feature is
considered beta or experimental, put this information in a note, not in the heading.
@@ -719,6 +728,7 @@ We include guidance for links in the following categories:
- How to set up [criteria](#basic-link-criteria) for configuring a link.
- What to set up when [linking to a `help`](../documentation/index.md#linking-to-help) page.
- How to set up [links to internal documentation](#links-to-internal-documentation) for cross-references.
+- How to set up [links to external documentation](#links-to-external-documentation) for authoritative sources.
- When to use [links requiring permissions](#links-requiring-permissions).
- How to set up a [link to a video](#link-to-video).
- How to [include links with version text](#text-for-documentation-requiring-version-text).
@@ -771,6 +781,12 @@ To link to internal documentation:
NOTE: **Note**:
Using the Markdown extension is necessary for the [`/help`](index.md#gitlab-help) section of GitLab.
+### Links to external documentation
+
+When describing interactions with external software, it's often helpful to include links to external
+documentation. When possible, make sure that you are linking to an **authoritative** source.
+For example, if you're describing a feature in Microsoft's Active Directory, include a link to official Microsoft documentation.
+
### Links requiring permissions
Don't link directly to:
@@ -793,7 +809,7 @@ Instead:
Example:
```markdown
-For more information, see the [confidential issue](../../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab-foss/issues/<issue_number>`.
+For more information, see the [confidential issue](../../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab-foss/-/issues/<issue_number>`.
```
### Link to specific lines of code
@@ -1040,11 +1056,11 @@ of language classes available.
| `xml` | |
| `yaml` | Alias: `yml`. |
-For a complete reference on code blocks, check the [Kramdown guide](https://about.gitlab.com/handbook/engineering/ux/technical-writing/markdown-guide/#code-blocks).
+For a complete reference on code blocks, check the [Kramdown guide](https://about.gitlab.com/handbook/markdown-guide/#code-blocks).
## GitLab SVG icons
-> [Introduced](https://gitlab.com/gitlab-org/gitlab-docs/issues/384) in GitLab 12.7.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/384) in GitLab 12.7.
You can use icons from the [GitLab SVG library](https://gitlab-org.gitlab.io/gitlab-svgs/) directly
in the documentation.
@@ -1359,7 +1375,35 @@ versions back, you can consider removing the text if it's irrelevant or confusin
For example, if the current major version is 12.x, version text referencing versions of GitLab 8.x
and older are candidates for removal if necessary for clearer or cleaner docs.
-## Product badges
+## Products and features
+
+Refer to the information in this section when describing products and features
+within the GitLab product documentation.
+
+### Avoid line breaks in names
+
+When entering a product or feature name that includes a space (such as
+GitLab Community Edition) or even other companies' products (such as
+Amazon Web Services), be sure to not split the product or feature name across
+lines with an inserted line break. Splitting product or feature names across
+lines makes searching for these items more difficult, and can cause problems if
+names change.
+
+For example, the following Markdown content is *not* formatted correctly:
+
+```markdown
+When entering a product or feature name that includes a space (such as GitLab
+Community Edition), don't split the product or feature name across lines.
+```
+
+Instead, it should appear similar to the following:
+
+```markdown
+When entering a product or feature name that includes a space (such as
+GitLab Community Edition), don't split the product or feature name across lines.
+```
+
+### Product badges
When a feature is available in EE-only tiers, add the corresponding tier according to the
feature availability:
@@ -1399,7 +1443,7 @@ For example:
The absence of tiers' mentions mean that the feature is available in GitLab Core,
GitLab.com Free, and all higher tiers.
-### How it works
+#### How it works
Introduced by [!244](https://gitlab.com/gitlab-org/gitlab-docs/-/merge_requests/244),
the special markup `**(STARTER)**` will generate a `span` element to trigger the
@@ -1608,7 +1652,7 @@ Rendered example:
- Wherever needed use this personal access token: `<your_access_token>`.
- Always put the request first. `GET` is the default so you don't have to
include it.
-- Use double quotes to the URL when it includes additional parameters.
+- Wrap the URL in double quotes (`"`).
- Prefer to use examples using the personal access token and don't pass data of
username and password.
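For example, a request that follows these conventions looks like this (the token and URL are placeholders):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects?search=my-project"
```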
@@ -1686,9 +1730,8 @@ Use `%2F` for slashes (`/`).
#### Pass arrays to API calls
The GitLab API sometimes accepts arrays of strings or integers. For example, to
-restrict the sign-up e-mail domains of a GitLab instance to `*.example.com` and
-`example.net`, you would do something like this:
+exclude specific users when requesting a list of users for a project, you would do something like this:
```shell
-curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "domain_whitelist[]=*.example.com" --data "domain_whitelist[]=example.net" https://gitlab.example.com/api/v4/application/settings
+curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "skip_users[]=<user_id>" --data "skip_users[]=<user_id>" https://gitlab.example.com/api/v4/projects/<project_id>/users
```
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index ab6200155bf..c3e15cb1b2b 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -54,7 +54,7 @@ To update GitLab documentation:
1. Follow the described standards and processes listed on the page, including:
- The [Structure and template](structure.md) page.
- The [Style Guide](styleguide.md).
- - The [Markdown Guide](https://about.gitlab.com/handbook/engineering/ux/technical-writing/markdown-guide/).
+ - The [Markdown Guide](https://about.gitlab.com/handbook/markdown-guide/).
1. Follow GitLab's [Merge Request Guidelines](../contributing/merge_request_workflow.md#merge-request-guidelines).
TIP: **Tip:**
@@ -104,7 +104,7 @@ The process involves the following:
- Ensure the appropriate labels are applied, including any required to pick a merge request into
a release.
- Ensure that, if there has not been a Technical Writer review completed or scheduled, they
- [create the required issue](https://gitlab.com/gitlab-org/gitlab/issues/new?issuable_template=Doc%20Review), assign to the Technical Writer of the given stage group,
+ [create the required issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review), assign to the Technical Writer of the given stage group,
and link it from the merge request.
The process is reflected in the **Documentation**
@@ -113,14 +113,14 @@ The process is reflected in the **Documentation**
## Other ways to help
If you have ideas for further documentation resources please
-[create an issue](https://gitlab.com/gitlab-org/gitlab/issues/new?issuable_template=Documentation)
+[create an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Documentation)
using the Documentation template.
## Post-merge reviews
If not assigned to a Technical Writer for review prior to merging, a review must be scheduled
immediately after merge by the developer or maintainer. For this,
-create an issue using the [Doc Review description template](https://gitlab.com/gitlab-org/gitlab/issues/new?issuable_template=Doc%20Review)
+create an issue using the [Doc Review description template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Doc%20Review)
and link to it from the merged merge request that introduced the documentation change.
Circumstances where a regular pre-merge Technical Writer review might be skipped include:
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 3951b0516e8..e22e96b6f06 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -11,7 +11,7 @@
## Act as CE when unlicensed
Since the implementation of
-[GitLab CE features to work with unlicensed EE instance](https://gitlab.com/gitlab-org/gitlab/issues/2500)
+[GitLab CE features to work with unlicensed EE instance](https://gitlab.com/gitlab-org/gitlab/-/issues/2500)
GitLab Enterprise Edition should work like GitLab Community Edition
when no license is active. So EE features always should be guarded by
`project.feature_available?` or `group.feature_available?` (or
@@ -165,8 +165,6 @@ There are a few gotchas with it:
end
```
- This would require updating CE first, or make sure this is back ported to CE.
-
When prepending, place them in the `ee/` specific sub-directory, and
wrap class or module in `module EE` to avoid naming conflicts.
diff --git a/doc/development/elasticsearch.md b/doc/development/elasticsearch.md
index 185f536fc01..9f54386f1af 100644
--- a/doc/development/elasticsearch.md
+++ b/doc/development/elasticsearch.md
@@ -24,19 +24,19 @@ See the [Elasticsearch GDK setup instructions](https://gitlab.com/gitlab-org/git
- `gitlab:elastic:test:index_size`: Tells you how much space the current index is using, as well as how many documents are in the index.
- `gitlab:elastic:test:index_size_change`: Outputs index size, reindexes, and outputs index size again. Useful when testing improvements to indexing size.
-Additionally, if you need large repos or multiple forks for testing, please consider [following these instructions](rake_tasks.md#extra-project-seed-options)
+Additionally, if you need large repositories or multiple forks for testing, please consider [following these instructions](rake_tasks.md#extra-project-seed-options)
## How does it work?
-The Elasticsearch integration depends on an external indexer. We ship an [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). The user must trigger the initial indexing via a Rake task but, after this is done, GitLab itself will trigger reindexing when required via `after_` callbacks on create, update, and destroy that are inherited from [/ee/app/models/concerns/elastic/application_versioned_search.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/models/concerns/elastic/application_versioned_search.rb).
+The Elasticsearch integration depends on an external indexer. We ship an [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). The user must trigger the initial indexing via a Rake task but, after this is done, GitLab itself will trigger reindexing when required via `after_` callbacks on create, update, and destroy that are inherited from [`/ee/app/models/concerns/elastic/application_versioned_search.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/models/concerns/elastic/application_versioned_search.rb).
-After initial indexing is complete, create, update, and delete operations for all models except projects (see [#207494](https://gitlab.com/gitlab-org/gitlab/issues/207494)) are tracked in a Redis [`ZSET`](https://redis.io/topics/data-types#sorted-sets). A regular `sidekiq-cron` `ElasticIndexBulkCronWorker` processes this queue, updating many Elasticsearch documents at a time with the [Bulk Request API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html).
+After initial indexing is complete, create, update, and delete operations for all models except projects (see [#207494](https://gitlab.com/gitlab-org/gitlab/-/issues/207494)) are tracked in a Redis [`ZSET`](https://redis.io/topics/data-types#sorted-sets). A regular `sidekiq-cron` `ElasticIndexBulkCronWorker` processes this queue, updating many Elasticsearch documents at a time with the [Bulk Request API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html).
-Search queries are generated by the concerns found in [ee/app/models/concerns/elastic](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
+Search queries are generated by the concerns found in [`ee/app/models/concerns/elastic`](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
## Existing Analyzers/Tokenizers/Filters
-These are all defined in [ee/lib/elastic/latest/config.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/elastic/latest/config.rb)
+These are all defined in [`ee/lib/elastic/latest/config.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/elastic/latest/config.rb)
### Analyzers
@@ -54,7 +54,7 @@ Please see the `sha_tokenizer` explanation later below for an example.
#### `code_analyzer`
-Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer and the filters: [`code`](#code), [`edgeNGram_filter`](#edgengram_filter), `lowercase`, and `asciifolding`
+Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer and the filters: [`code`](#code), `lowercase`, and `asciifolding`
The `whitespace` tokenizer was selected in order to have more control over how tokens are split. For example the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` in order to be properly searched.
@@ -71,7 +71,7 @@ Not directly used for indexing, but rather used to transform a search input. Use
#### `sha_tokenizer`
-This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searcheable by any sub-set of it (minimum of 5 chars).
+This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searchable by any subset of them (minimum of 5 characters).
Example:
@@ -149,7 +149,7 @@ These proxy objects would talk to Elasticsearch server directly (see top half of
![Elasticsearch Architecture](img/elasticsearch_architecture.svg)
-In the planned new design, each model would have a pair of corresponding subclassed proxy objects, in which model-specific logic is located. For example, `Snippet` would have `SnippetClassProxy` and `SnippetInstanceProxy` (being subclass of `Elasticsearch::Model::Proxy::ClassMethodsProxy` and `Elasticsearch::Model::Proxy::InstanceMethodsProxy`, respectively).
+In the planned new design, each model would have a pair of corresponding sub-classed proxy objects, in which model-specific logic is located. For example, `Snippet` would have `SnippetClassProxy` and `SnippetInstanceProxy` (being subclasses of `Elasticsearch::Model::Proxy::ClassMethodsProxy` and `Elasticsearch::Model::Proxy::InstanceMethodsProxy`, respectively).
`__elasticsearch__` would represent another layer of proxy object, keeping track of multiple actual proxy objects. It would forward method calls to the appropriate index. For example:
@@ -175,6 +175,53 @@ If the current version is `v12p1`, and we need to create a new version for `v12p
1. Change the namespace for files under `v12p1` folder from `Latest` to `V12p1`
1. Make changes to files under the `latest` folder as needed
+## Performance Monitoring
+
+### Prometheus
+
+GitLab exports [Prometheus
+metrics](../administration/monitoring/prometheus/gitlab_metrics.md) relating to
+the number of requests and timing for all web/API requests and Sidekiq jobs,
+which can help diagnose performance trends and compare how Elasticsearch timing
+is impacting overall performance relative to the time spent doing other things.
+
+#### Indexing queues
+
+GitLab also exports [Prometheus
+metrics](../administration/monitoring/prometheus/gitlab_metrics.md) for
+indexing queues, which can help diagnose performance bottlenecks and determine
+whether or not your GitLab instance or Elasticsearch server can keep up with
+the volume of updates.
+
+### Logs
+
+All of the indexing happens in Sidekiq, so many of the relevant logs for the
+Elasticsearch integration can be found in
+[`sidekiq.log`](../administration/logs.md#sidekiqlog). In particular, all
+Sidekiq workers that make requests to Elasticsearch in any way will log the
+number of requests and time taken querying/writing to Elasticsearch. This can
+be useful to understand whether or not your cluster is keeping up with
+indexing.
+
+Searching Elasticsearch is done via ordinary web workers handling requests. Any
+requests to load a page or make an API request, which then make requests to
+Elasticsearch, will log the number of requests and the time taken to
+[`production_json.log`](../administration/logs.md#production_jsonlog). These
+logs will also include the time spent on Database and Gitaly requests, which
+may help to diagnose which part of the search is performing poorly.
+
+There are additional logs specific to Elasticsearch that are sent to
+[`elasticsearch.log`](../administration/logs.md#elasticsearchlog-starter-only)
+that may contain information to help diagnose performance issues.
+
+### Performance Bar
+
+Elasticsearch requests will be displayed in the
+[Performance Bar](../administration/monitoring/performance/performance_bar.md), which can
+be used both locally in development and on any deployed GitLab instance to
+diagnose poor search performance. This will show the exact queries being made,
+which is useful to diagnose why a search might be slow.
+
## Troubleshooting
### Getting `flood stage disk watermark [95%] exceeded`
diff --git a/doc/development/emails.md b/doc/development/emails.md
index e6e2ea8aae7..2477a51f78f 100644
--- a/doc/development/emails.md
+++ b/doc/development/emails.md
@@ -1,5 +1,20 @@
# Dealing with email in development
+## Ensuring compatibility with mailer Sidekiq jobs
+
+A Sidekiq job is enqueued whenever `deliver_later` is called on an `ActionMailer`.
+If a mailer argument needs to be added or removed, it is important to ensure
+both backward and forward compatibility. Adhere to the Sidekiq Style Guide steps for
+[changing the arguments for a worker](sidekiq_style_guide.md#changing-the-arguments-for-a-worker).
+
+In the following example from [`NotificationService`](https://gitlab.com/gitlab-org/gitlab/-/blob/33ccb22e4fc271dbaac94b003a7a1a2915a13441/app/services/notification_service.rb#L74),
+adding or removing an argument in this mailer's definition may cause problems
+during deployment before all Rails and Sidekiq nodes have the updated code.
+
+```ruby
+mailer.unknown_sign_in_email(user, ip, time).deliver_later
+```
+
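+For example, one backward-compatible approach is to add a new argument with a
+default value, deploy that change to all nodes, and only then start passing the
+new value. The following is a hypothetical sketch; the mailer class and arguments
+are illustrative, not the actual GitLab code:
+
+```ruby
+# Hypothetical sketch: `country` is added with a default value so that jobs
+# already enqueued with the old three-argument signature still deserialize
+# and run once the new code is deployed.
+class HypotheticalSignInMailer < ApplicationMailer
+  def unknown_sign_in_email(user, ip, time, country = nil)
+    @user = user
+    @ip = ip
+    @time = time
+    # Older jobs call this method with three arguments, so `country` falls
+    # back to its default here.
+    @country = country
+
+    mail(to: user.email, subject: "Unknown sign-in from #{ip}")
+  end
+end
+```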
## Sent emails
To view rendered emails "sent" in your development instance, visit
diff --git a/doc/development/fe_guide/axios.md b/doc/development/fe_guide/axios.md
index f8d301dac5e..38a8c8f1086 100644
--- a/doc/development/fe_guide/axios.md
+++ b/doc/development/fe_guide/axios.md
@@ -1,15 +1,15 @@
# Axios
-We use [axios](https://github.com/axios/axios) to communicate with the server in Vue applications and most new code.
+We use [Axios](https://github.com/axios/axios) to communicate with the server in Vue applications and most new code.
-In order to guarantee all defaults are set you *should not use `axios` directly*, you should import `axios` from `axios_utils`.
+In order to guarantee all defaults are set, you *should not use Axios directly*; instead, import Axios from `axios_utils`.
## CSRF token
-All our request require a CSRF token.
-To guarantee this token is set, we are importing [axios](https://github.com/axios/axios), setting the token, and exporting `axios` .
+All our requests require a CSRF token.
+To guarantee this token is set, we are importing [Axios](https://github.com/axios/axios), setting the token, and exporting `axios`.
-This exported module should be used instead of directly using `axios` to ensure the token is set.
+This exported module should be used instead of directly using Axios to ensure the token is set.
## Usage
@@ -30,7 +30,7 @@ This exported module should be used instead of directly using `axios` to ensure
});
```
-## Mock axios response in tests
+## Mock Axios response in tests
To help us mock the responses we are using [axios-mock-adapter](https://github.com/ctimmerm/axios-mock-adapter).
@@ -41,7 +41,7 @@ Advantages over [`spyOn()`](https://jasmine.github.io/api/edge/global.html#spyOn
- simple API to test error cases
- provides `replyOnce()` to allow for different responses
-We have also decided against using [axios interceptors](https://github.com/axios/axios#interceptors) because they are not suitable for mocking.
+We have also decided against using [Axios interceptors](https://github.com/axios/axios#interceptors) because they are not suitable for mocking.
### Example
@@ -67,7 +67,7 @@ We have also decided against using [axios interceptors](https://github.com/axios
});
```
-### Mock poll requests in tests with axios
+### Mock poll requests in tests with Axios
Because polling function requires a header object, we need to always include an object as the third argument:
diff --git a/doc/development/fe_guide/dependencies.md b/doc/development/fe_guide/dependencies.md
index 0f5825992e9..7f078df887d 100644
--- a/doc/development/fe_guide/dependencies.md
+++ b/doc/development/fe_guide/dependencies.md
@@ -32,7 +32,7 @@ because they can create conflicts in the dependency tree. Blocked dependencies a
### BootstrapVue
-[BootstrapVue](https://bootstrap-vue.js.org/) is a component library built with Vue.js and Bootstrap.
+[BootstrapVue](https://bootstrap-vue.org/) is a component library built with Vue.js and Bootstrap.
We wrap BootstrapVue components in [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui/) with the
purpose of applying visual styles and usage guidelines specified in the
[Pajamas Design System](https://design.gitlab.com/). For this reason, we recommend not installing
diff --git a/doc/development/fe_guide/development_process.md b/doc/development/fe_guide/development_process.md
index 6e078f097cd..892e931bf5b 100644
--- a/doc/development/fe_guide/development_process.md
+++ b/doc/development/fe_guide/development_process.md
@@ -23,7 +23,7 @@ Please use your best judgment when to use it and please contribute new points th
- [ ] Are all necessary UX specifications available that you will need in order to implement? Are there new UX components/patterns in the designs? Then contact the UI component team early on. How should error messages or validation be handled?
- [ ] **Library usage** Use Vuex as soon as you have even a medium state to manage, use Vue router if you need to have different views internally and want to link from the outside. Check what libraries we already have for which occasions.
- [ ] **Plan your implementation:**
- - [ ] **Architecture plan:** Create a plan aligned with GitLab's architecture, how you are going to do the implementation, for example Vue application setup and its components (through [onion skinning](https://gitlab.com/gitlab-org/gitlab-foss/issues/35873#note_39994091)), Store structure and data flow, which existing Vue components can you reuse. It's a good idea to go through your plan with another engineer to refine it.
+ - [ ] **Architecture plan:** Create a plan aligned with GitLab's architecture, how you are going to do the implementation, for example Vue application setup and its components (through [onion skinning](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/35873#note_39994091)), Store structure and data flow, which existing Vue components can you reuse. It's a good idea to go through your plan with another engineer to refine it.
- [ ] **Backend:** The best way is to kickoff the implementation in a call and discuss with the assigned Backend engineer what you will need from the backend and also when. Can you reuse existing API's? How is the performance with the planned architecture? Maybe create together a JSON mock object to already start with development.
- [ ] **Communication:** It also makes sense to have for bigger features an own slack channel (normally called #f_{feature_name}) and even weekly demo calls with all people involved.
- [ ] **Dependency Plan:** Are there big dependencies in the plan between you and others, then maybe create an execution diagram to show what is blocking which part and the order of the different parts.
diff --git a/doc/development/fe_guide/droplab/plugins/ajax.md b/doc/development/fe_guide/droplab/plugins/ajax.md
index abc208e7568..f22d95064dd 100644
--- a/doc/development/fe_guide/droplab/plugins/ajax.md
+++ b/doc/development/fe_guide/droplab/plugins/ajax.md
@@ -6,7 +6,7 @@
Add the `Ajax` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
-`Ajax` requires 2 config values, the `endpoint` and `method`.
+`Ajax` requires 2 configuration values, the `endpoint` and `method`.
- `endpoint` should be a URL to the request endpoint.
- `method` should be `setData` or `addData`.
diff --git a/doc/development/fe_guide/droplab/plugins/filter.md b/doc/development/fe_guide/droplab/plugins/filter.md
index 876149e4872..e8194e45a41 100644
--- a/doc/development/fe_guide/droplab/plugins/filter.md
+++ b/doc/development/fe_guide/droplab/plugins/filter.md
@@ -7,7 +7,7 @@ to the dropdown using a simple fuzzy string search of an input value.
Add the `Filter` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
-- `Filter` requires a config value for `template`.
+- `Filter` requires a configuration value for `template`.
- `template` should be the key of the objects within your data array that you want to compare
to the user input string, for filtering.
diff --git a/doc/development/fe_guide/droplab/plugins/input_setter.md b/doc/development/fe_guide/droplab/plugins/input_setter.md
index 9b2e1e8faab..b873b7a14ee 100644
--- a/doc/development/fe_guide/droplab/plugins/input_setter.md
+++ b/doc/development/fe_guide/droplab/plugins/input_setter.md
@@ -6,12 +6,12 @@
Add the `InputSetter` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
-- `InputSetter` requires a config value for `input` and `valueAttribute`.
+- `InputSetter` requires a configuration value for `input` and `valueAttribute`.
- `input` should be the DOM element that you want to manipulate.
- `valueAttribute` should be a string that is the name of an attribute on your list items that is used to get the value
to update the `input` element with.
-You can also set the `InputSetter` config to an array of objects, which will allow you to update multiple elements.
+You can also set the `InputSetter` configuration to an array of objects, which will allow you to update multiple elements.
```html
<input id="input" value="">
diff --git a/doc/development/fe_guide/emojis.md b/doc/development/fe_guide/emojis.md
index 6d324d4c4a0..3cd14c0dfd3 100644
--- a/doc/development/fe_guide/emojis.md
+++ b/doc/development/fe_guide/emojis.md
@@ -1,6 +1,6 @@
# Emojis
-GitLab supports native unicode emojis and fallsback to image-based emojis selectively
+GitLab supports native Unicode emojis and falls back to image-based emojis selectively
when your platform does not support it.
## How to update Emojis
@@ -21,7 +21,7 @@ when your platform does not support it.
1. Ensure you see new individual images copied into `app/assets/images/emoji/`
1. Ensure you can see the new emojis and their aliases in the GFM Autocomplete
1. Ensure you can see the new emojis and their aliases in the award emoji menu
- 1. You might need to add new emoji unicode support checks and rules for platforms
+ 1. You might need to add new emoji Unicode support checks and rules for platforms
that do not support a certain emoji and we need to fallback to an image.
See `app/assets/javascripts/emoji/support/is_emoji_unicode_supported.js`
and `app/assets/javascripts/emoji/support/unicode_support_map.js`
diff --git a/doc/development/fe_guide/frontend_faq.md b/doc/development/fe_guide/frontend_faq.md
index 8f8f162609a..4f814f3cdde 100644
--- a/doc/development/fe_guide/frontend_faq.md
+++ b/doc/development/fe_guide/frontend_faq.md
@@ -82,7 +82,7 @@ To avoid this behavior, add the class `js-no-auto-disable` to the button.
### 5. Should I use a full URL (i.e. `gon.gitlab_url`) or a full path (i.e. `gon.relative_url_root`) when referencing backend endpoints?
It's preferred to use a **full path** over a **full URL** because the URL will use the hostname configured with
-GitLab which may not match the request. This will cause [CORS issues like this Web IDE one](https://gitlab.com/gitlab-org/gitlab/issues/36810).
+GitLab which may not match the request. This will cause [CORS issues like this Web IDE one](https://gitlab.com/gitlab-org/gitlab/-/issues/36810).
Example:
@@ -141,7 +141,7 @@ function initFoo() {
});
}
-// Vuex action can now reference the path from it's state :)
+// Vuex action can now reference the path from its state :)
export const fetchFoos = ({ state }) => {
return axios.get(state.settings.fooPath);
};
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index caf84d04490..191ebd2ff58 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -1,7 +1,69 @@
# GraphQL
+## Getting Started
+
+### Helpful Resources
+
+**General resources**:
+
+- [📚 Official Introduction to GraphQL](https://graphql.org/learn/)
+- [📚 Official Introduction to Apollo](https://www.apollographql.com/docs/tutorial/introduction/)
+
+**GraphQL at GitLab**:
+
+- [🎬 GitLab Unfiltered GraphQL playlist](https://www.youtube.com/watch?v=wHPKZBDMfxE&list=PL05JrBw4t0KpcjeHjaRMB7IGB2oDWyJzv)
+- [🎬 GraphQL at GitLab: Deep Dive](../api_graphql_styleguide.md#deep-dive) (video) by Nick Thomas
+ - An overview of the history of GraphQL at GitLab (not frontend-specific)
+- [🎬 GitLab Feature Walkthrough with GraphQL and Vue Apollo](https://www.youtube.com/watch?v=6yYp2zB7FrM) (video) by Natalia Tepluhina
+ - A real-life example of implementing a frontend feature in GitLab using GraphQL
+- [🎬 History of client-side GraphQL at GitLab](https://www.youtube.com/watch?v=mCKRJxvMnf0) (video) by Illya Klymov and Natalia Tepluhina
+- [🎬 From Vuex to Apollo](https://www.youtube.com/watch?v=9knwu87IfU8) (video) by Natalia Tepluhina
+ - A useful overview of when Apollo might be a better choice than Vuex, and how one could go about the transition
+- [🛠 Vuex -> Apollo Migration: a proof-of-concept project](https://gitlab.com/ntepluhina/vuex-to-apollo/blob/master/README.md)
+ - A collection of examples that show the possible approaches for state management with Vue+GraphQL+(Vuex or Apollo) apps
+
+### Libraries
+
+We use [Apollo](https://www.apollographql.com/) (specifically [Apollo Client](https://www.apollographql.com/docs/react/)) and [Vue Apollo](https://github.com/vuejs/vue-apollo)
+when using GraphQL for frontend development.
+
+If you are using GraphQL within a Vue application, the [Usage in Vue](#usage-in-vue) section
+can help you learn how to integrate Vue Apollo.
+
+For other use cases, check out the [Usage outside of Vue](#usage-outside-of-vue) section.
+
+### Tooling
+
+- [Apollo Client Devtools](https://github.com/apollographql/apollo-client-devtools)
+
+#### [Apollo GraphQL VS Code extension](https://marketplace.visualstudio.com/items?itemName=apollographql.vscode-apollo)
+
+If you use VS Code, the Apollo GraphQL extension supports autocompletion in `.graphql` files. To set up
+the GraphQL extension, follow these steps:
+
+1. Add an `apollo.config.js` file to the root of your `gitlab` local directory.
+1. Populate the file with the following content:
+
+ ```javascript
+ module.exports = {
+ client: {
+ includes: ['./app/assets/javascripts/**/*.graphql', './ee/app/assets/javascripts/**/*.graphql'],
+ service: {
+ name: 'GitLab',
+ localSchemaFile: './doc/api/graphql/reference/gitlab_schema.graphql',
+ },
+ },
+ };
+ ```
+
+1. Restart VS Code.
+
+### Exploring the GraphQL API
+
Our GraphQL API can be explored via GraphiQL at your instance's
-`/-/graphql-explorer` or at [GitLab.com](https://gitlab.com/-/graphql-explorer).
+`/-/graphql-explorer` or at [GitLab.com](https://gitlab.com/-/graphql-explorer). Consult the
+[GitLab GraphQL API Reference documentation](../../api/graphql/reference)
+where needed.
You can check all existing queries and mutations on the right side
of GraphiQL in its **Documentation explorer**. It's also possible to
@@ -10,9 +72,6 @@ their execution by clicking **Execute query** button on the top left:
![GraphiQL interface](img/graphiql_explorer_v12_4.png)
-We use [Apollo](https://www.apollographql.com/) and [Vue Apollo](https://github.com/vuejs/vue-apollo) for working with GraphQL
-on the frontend.
-
## Apollo Client
To save duplicated clients getting created in different apps, we have a
@@ -30,7 +89,7 @@ Default client accepts two parameters: `resolvers` and `config`.
## GraphQL Queries
To save query compilation at runtime, webpack can directly import `.graphql`
-files. This allows webpack to preprocess the query at compile time instead
+files. This allows webpack to pre-process the query at compile time instead
of the client doing compilation of queries.
To distinguish queries from mutations and fragments, the following naming convention is recommended:
@@ -41,7 +100,7 @@ To distinguish queries from mutations and fragments, the following naming conven
### Fragments
-Fragments are a way to make your complex GraphQL queries more readable and re-usable. Here is an example of GraphQL fragment:
+[Fragments](https://graphql.org/learn/queries/#fragments) are a way to make your complex GraphQL queries more readable and re-usable. Here is an example of a GraphQL fragment:
```javascript
fragment DesignListItem on Design {
@@ -94,7 +153,7 @@ new Vue({
});
```
-Read more about [Vue Apollo](https://github.com/vuejs/vue-apollo) in the [Vue Apollo documentation](https://vue-apollo.netlify.com/guide/).
+Read more about [Vue Apollo](https://github.com/vuejs/vue-apollo) in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/).
### Local state with Apollo
@@ -206,11 +265,11 @@ const defaultClient = createDefaultClient(
Now every single time on attempt to fetch a version, our client will fetch `id` and `sha` from the remote API endpoint and will assign our hardcoded values to `author` and `createdAt` version properties. With this data, frontend developers are able to work on UI part without being blocked by backend. When actual response is added to the API, a custom local resolver can be removed fast and the only change to query/fragment is `@client` directive removal.
-Read more about local state management with Apollo in the [Vue Apollo documentation](https://vue-apollo.netlify.com/guide/local-state.html#local-state).
+Read more about local state management with Apollo in the [Vue Apollo documentation](https://vue-apollo.netlify.app/guide/local-state.html#local-state).
### Using with Vuex
-When Apollo Client is used within Vuex and fetched data is stored in the Vuex store, there is no need in keeping Apollo Client cache enabled. Otherwise we would have data from the API stored in two places - Vuex store and Apollo Client cache. More to say, with Apollo default settings, a subsequent fetch from the GraphQL API could result in fetching data from Apollo cache (in the case where we have the same query and variables). To prevent this behavior, we need to disable Apollo Client cache passing a valid `fetchPolicy` option to its constructor:
+When Apollo Client is used within Vuex and fetched data is stored in the Vuex store, there is no need to keep the Apollo Client cache enabled. Otherwise we would have data from the API stored in two places: the Vuex store and the Apollo Client cache. Moreover, with Apollo's default settings, a subsequent fetch from the GraphQL API could result in fetching data from the Apollo cache (in the case where we have the same query and variables). To prevent this behavior, we need to disable the Apollo Client cache by passing a valid `fetchPolicy` option to its constructor:
```javascript
import fetchPolicies from '~/graphql_shared/fetch_policy_constants';
@@ -411,18 +470,6 @@ fetchNextPage() {
Please note we don't have to save `pageInfo` one more time; `fetchMore` triggers a query
`result` hook as well.
-#### Limitations
-
-Currently, bidirectional pagination doesn't work:
-
-- `hasNextPage` returns a correct value only when we paginate forward using `endCursor`
- and `first` parameters.
-- `hasPreviousPage` returns a correct value only when we paginate backward using
- `startCursor` and `last` parameters.
-
-This should be resolved in the scope of the issue
-[Bi-directional Pagination in GraphQL doesn't work as expected](https://gitlab.com/gitlab-org/gitlab/-/issues/208301).
-
### Testing
#### Mocking response as component data
@@ -599,4 +646,20 @@ defaultClient.query({ query })
.then(result => console.log(result));
```
-Read more about the [Apollo](https://www.apollographql.com/) client in the [Apollo documentation](https://www.apollographql.com/docs/tutorial/client/).
+When [using Vuex](#using-with-vuex), disable the cache when:
+
+- The data is being cached elsewhere.
+- The use case does not need caching.
+
+```javascript
+import createDefaultClient from '~/lib/graphql';
+import fetchPolicies from '~/graphql_shared/fetch_policy_constants';
+
+const defaultClient = createDefaultClient(
+ {},
+ {
+ fetchPolicy: fetchPolicies.NO_CACHE,
+ },
+);
+```
diff --git a/doc/development/fe_guide/icons.md b/doc/development/fe_guide/icons.md
index 4fb738f5466..131324e6479 100644
--- a/doc/development/fe_guide/icons.md
+++ b/doc/development/fe_guide/icons.md
@@ -24,7 +24,7 @@ sprite_icon(icon_name, size: nil, css_class: '')
- **icon_name** Use the icon_name that you can find in the SVG Sprite
([Overview is available here](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)** Use one of the following sizes : 16, 24, 32, 48, 72 (this will be translated into a `s16` class)
-- **css_class (optional)** If you want to add additional css classes
+- **css_class (optional)** If you want to add additional CSS classes
**Example**
@@ -67,8 +67,8 @@ export default {
- **name** Name of the Icon in the SVG Sprite ([Overview is available here](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)** Number value for the size which is then mapped to a specific CSS class
- (Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` css classes)
-- **css-classes (optional)** Additional CSS Classes to add to the svg tag.
+ (Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` CSS classes)
+- **css-classes (optional)** Additional CSS Classes to add to the SVG tag.
### Usage in HTML/JS
@@ -91,7 +91,7 @@ Please use the class `svg-content` around it to ensure nice rendering.
### Usage in Vue
-To use an SVG illustrations in a template provide the path as a property and display it through a standard img tag.
+To use an SVG illustration in a template, provide the path as a property and display it through a standard `img` tag.
Component:
diff --git a/doc/development/fe_guide/performance.md b/doc/development/fe_guide/performance.md
index aaa6bb16fab..5d2b699c40d 100644
--- a/doc/development/fe_guide/performance.md
+++ b/doc/development/fe_guide/performance.md
@@ -65,7 +65,7 @@ within the `pages` directory correspond to Rails controllers and actions. These
auto-generated bundles will be automatically included on the corresponding
pages.
-For example, if you were to visit <https://gitlab.com/gitlab-org/gitlab/issues>,
+For example, if you were to visit <https://gitlab.com/gitlab-org/gitlab/-/issues>,
you would be accessing the `app/controllers/projects/issues_controller.rb`
controller with the `index` action. If a corresponding file exists at
`pages/projects/issues/index/index.js`, it will be compiled into a webpack
diff --git a/doc/development/fe_guide/style/vue.md b/doc/development/fe_guide/style/vue.md
index 46305cc7217..1a64db443bc 100644
--- a/doc/development/fe_guide/style/vue.md
+++ b/doc/development/fe_guide/style/vue.md
@@ -53,7 +53,7 @@ Please check this [rules](https://github.com/vuejs/eslint-plugin-vue#bulb-rules)
## Naming
-1. **Extensions**: Use `.vue` extension for Vue components. Do not use `.js` as file extension ([#34371](https://gitlab.com/gitlab-org/gitlab-foss/issues/34371)).
+1. **Extensions**: Use `.vue` extension for Vue components. Do not use `.js` as file extension ([#34371](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34371)).
1. **Reference Naming**: Use PascalCase for their instances:
```javascript
diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md
index 972c2ded9c9..0d77e4d129b 100644
--- a/doc/development/fe_guide/vue.md
+++ b/doc/development/fe_guide/vue.md
@@ -6,9 +6,9 @@ To get started with Vue, read through [their documentation](https://vuejs.org/v2
What is described in the following sections can be found in these examples:
-- web ide: <https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/ide/stores>
-- security products: <https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/assets/javascripts/vue_shared/security_reports>
-- registry: <https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores>
+- [Web IDE](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/ide/stores)
+- [Security products](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/assets/javascripts/vue_shared/security_reports)
+- [Registry](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores)
## Vue architecture
@@ -16,7 +16,7 @@ All new features built with Vue.js must follow a [Flux architecture](https://fac
The main goal we are trying to achieve is to have only one data flow and only one data entry.
In order to achieve this goal we use [vuex](#vuex).
-You can also read about this architecture in vue docs about [state management](https://vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
+You can also read about this architecture in Vue docs about [state management](https://vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
and about [one way data flow](https://vuejs.org/v2/guide/components.html#One-Way-Data-Flow).
### Components and Store
@@ -59,7 +59,7 @@ To do that, provide the data through `data` attributes in the HTML element and q
_Note:_ You should only do this while initializing the application, because the mounted element will be replaced with Vue-generated DOM.
The advantage of providing data from the DOM to the Vue instance through `props` in the `render` function
-instead of querying the DOM inside the main vue component is that makes tests easier by avoiding the need to
+instead of querying the DOM inside the main Vue component is that it makes tests easier by avoiding the need to
create a fixture or an HTML element in the unit test. See the following example:
```javascript
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 309aecd7978..d4d1ec74591 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -96,7 +96,7 @@ As a guideline:
Before toggling any feature flag, check that there are no ongoing
significant incidents on GitLab.com. You can do this by checking the
`#production` and `#incident-management` Slack channels, or looking for
-[open incident issues](https://gitlab.com/gitlab-com/gl-infra/production/issues/?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=incident)
+[open incident issues](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=incident)
(although check the dates and times).
We do not want to introduce changes during an incident, as it can make
@@ -113,17 +113,58 @@ When you begin to enable the feature, please link to the relevant
Feature Flag Rollout Issue within a Slack thread of the first `/chatops`
command you make so people can understand the change if they need to.
-To enable a feature for 25% of all users, run the following in Slack:
+To enable a feature for 25% of the time, run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25
```
+This sets a feature flag to `true` based on the following formula:
+
+```ruby
+feature_flag_state = rand < (25 / 100.0)
+```
+
This will enable the feature for GitLab.com, with `new_navigation_bar` being the
name of the feature.
This command does *not* enable the feature for 25% of the total users.
Instead, when the feature is checked with `enabled?`, it will return `true` 25% of the time.
+To enable a feature for 25% of actors such as users, projects, or groups,
+run the following in Slack:
+
+```shell
+/chatops run feature set some_feature 25 --actors
+```
+
+This sets a feature flag to `true` based on the following formula:
+
+```ruby
+feature_flag_state = Zlib.crc32("some_feature<Actor>:#{actor.id}") % (100 * 1_000) < 25 * 1_000
+# where <Actor> is `User`, `Group`, or `Project`, and `actor` is an instance of that class
+```
+
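+For example, at a 25% rollout, the check for a hypothetical `User` with ID `42` plugs into
+the formula as follows (a sketch for illustration only; the hash input shown follows the
+formula above, not necessarily Flipper's exact internals):
+
+```ruby
+require 'zlib'
+
+# Hypothetical values: flag `some_feature`, User with ID 42, 25% of actors.
+Zlib.crc32("some_featureUser:42") % (100 * 1_000) < 25 * 1_000
+# The result is deterministic: the same actor gets the same answer on every check.
+```
+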
+During development, choose an actor based on the nature of the feature.
+
+For user-focused features:
+
+```ruby
+Feature.enabled?(:feature_cool_avatars, current_user)
+```
+
+For group or namespace level features:
+
+```ruby
+Feature.enabled?(:feature_cooler_groups, group)
+```
+
+For project level features:
+
+```ruby
+Feature.enabled?(:feature_ice_cold_projects, project)
+```
+
If you are not certain what percentages to use, simply use the following steps:
1. 25%
@@ -158,15 +199,21 @@ you run these 2 commands:
```shell
/chatops run feature set --project=gitlab-org/gitlab some_feature true
-/chatops run feature set some_feature 25
+/chatops run feature set some_feature 25 --actors
+```
+
+Then `some_feature` will be enabled for 25% of actors and will always be enabled when
+interacting with `gitlab-org/gitlab`. This is a good approach if the feature flag development
+makes use of group actors.
+
+```ruby
+Feature.enabled?(:some_feature, group)
```
-Then `some_feature` will be enabled for both 25% of users and all users interacting with
-`gitlab-org/gitlab`.
+NOTE:
-NOTE: **Note:**
**Percentage of time** rollout is not a good idea if what you want is to make sure a feature
-is always on or off to the users.
+is always on or off for the users. In that case, a **Percentage of actors** rollout is a better method.
### Feature flag change logging
@@ -174,7 +221,7 @@ Any feature flag change that affects GitLab.com (production) will
automatically be logged in an issue.
The issue is created in the
-[gl-infra/feature-flag-log](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/issues?scope=all&utf8=%E2%9C%93&state=closed)
+[gl-infra/feature-flag-log](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues?scope=all&utf8=%E2%9C%93&state=closed)
project, and it will at minimum log the Slack handle of person enabling
a feature flag, the time, and the name of the flag being changed.
diff --git a/doc/development/feature_flags/development.md b/doc/development/feature_flags/development.md
index b20db295919..a44bc70439e 100644
--- a/doc/development/feature_flags/development.md
+++ b/doc/development/feature_flags/development.md
@@ -23,6 +23,9 @@ class Foo < ActiveRecord::Base
end
```
+Only models that `include FeatureGate` or expose a `flipper_id` method can be
+used as an actor for `Feature.enabled?`.
+
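+For example (a sketch; `Widget` and `LegacyWidget` are hypothetical classes):
+
+```ruby
+class Widget < ApplicationRecord
+  include FeatureGate # provides `flipper_id` for feature flag checks
+end
+
+# Or expose `flipper_id` directly on a plain Ruby object:
+class LegacyWidget
+  attr_reader :id
+
+  def initialize(id)
+    @id = id
+  end
+
+  def flipper_id
+    "LegacyWidget:#{id}"
+  end
+end
+
+Feature.enabled?(:some_widget_feature, Widget.first)
+```
+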
Features that are developed and are intended to be merged behind a feature flag
should not include a changelog entry. The entry should either be added in the merge
request removing the feature flag or the merge request where the default value of
@@ -87,6 +90,10 @@ this method you can expose the state of a feature flag as follows:
```ruby
before_action do
+ # Prefer to scope it per project or user e.g.
+ push_frontend_feature_flag(:vim_bindings, project)
+
+ # Avoid, if possible
push_frontend_feature_flag(:vim_bindings)
end
@@ -115,17 +122,30 @@ how to access feature flags in a Vue component.
### Specs
-In the test environment `Feature.enabled?` is stubbed to always respond to `true`,
-so we make sure behavior under feature flag doesn't go untested in some non-specific
-contexts.
+In the test environment, our Flipper engine uses the in-memory adapter `Flipper::Adapters::Memory`.
+The `production` and `development` modes use `Flipper::Adapters::ActiveRecord`.
+
+### `stub_feature_flags: true` (default and preferred)
-Whenever a feature flag is present, make sure to test _both_ states of the
-feature flag.
+In this mode Flipper is configured to use `Flipper::Adapters::Memory` and to mark all feature
+flags as on-by-default and persisted on first use. This overrides the `default_enabled:`
+of `Feature.enabled?` and `Feature.disabled?`, so they always return `true` unless the feature flag
+is explicitly persisted.
+
+Make sure the behavior under a feature flag doesn't go untested in some non-specific contexts.
See the
[testing guide](../testing_guide/best_practices.md#feature-flags-in-tests)
for information and examples on how to stub feature flags in tests.
+### `stub_feature_flags: false`
+
+This disables the memory-stubbed Flipper and uses `Flipper::Adapters::ActiveRecord`,
+the adapter used by `production` and `development`.
+
+You should use this mode only when you really want to test how Flipper
+interacts with `ActiveRecord`.
+
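+A hypothetical spec, assuming the `stub_feature_flags: false` metadata is applied to the
+example group as described above:
+
+```ruby
+RSpec.describe 'Flipper with the ActiveRecord adapter', stub_feature_flags: false do
+  it 'persists the enabled state' do
+    Feature.enable(:some_feature)
+
+    expect(Feature.enabled?(:some_feature)).to be(true)
+  end
+end
+```
+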
### Enabling a feature flag (in development)
In the rails console (`rails c`), enter the following command to enable your feature flag
@@ -139,3 +159,9 @@ Similarly, the following command will disable a feature flag:
```ruby
Feature.disable(:feature_flag_name)
```
+
+You can also enable a feature flag for a given gate:
+
+```ruby
+Feature.enable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
+```
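+
+Similarly, a feature flag can be disabled for a given gate (a sketch, assuming the gate
+argument is accepted in the same way as for `Feature.enable`):
+
+```ruby
+Feature.disable(:feature_flag_name, Project.find_by_full_path("root/my-project"))
+```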
diff --git a/doc/development/filtering_by_label.md b/doc/development/filtering_by_label.md
index 19dece0d5c9..ef92bd35985 100644
--- a/doc/development/filtering_by_label.md
+++ b/doc/development/filtering_by_label.md
@@ -40,13 +40,13 @@ In particular, note that:
This is more complicated than is ideal. It makes the query construction more
prone to errors (such as
-[issue #15557](https://gitlab.com/gitlab-org/gitlab-foss/issues/15557)).
+[issue #15557](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/15557)).
## Attempt A: WHERE EXISTS
### Attempt A1: use multiple subqueries with WHERE EXISTS
-In [issue #37137](https://gitlab.com/gitlab-org/gitlab-foss/issues/37137)
+In [issue #37137](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/37137)
and its associated [merge request](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/14022),
we tried to replace the `GROUP BY` with multiple uses of `WHERE EXISTS`. For the
example above, this would give:
@@ -83,7 +83,7 @@ Having [removed MySQL support in GitLab 12.1](https://about.gitlab.com/blog/2019
using [PostgreSQL's arrays](https://www.postgresql.org/docs/11/arrays.html) became more
tractable as we didn't have to support two databases. We discussed denormalizing
the `label_links` table for querying in
-[issue #49651](https://gitlab.com/gitlab-org/gitlab-foss/issues/49651),
+[issue #49651](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/49651),
with two options: label IDs and titles.
We can think of both of those as array columns on `issues`, `merge_requests`,
@@ -147,7 +147,7 @@ WHERE
label_titles @> ARRAY['Plan', 'backend']
```
-And our [tests in issue #49651](https://gitlab.com/gitlab-org/gitlab-foss/issues/49651#note_188777346)
+And our [tests in issue #49651](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/49651#note_188777346)
showed that this could be fast.
However, at present, the disadvantages outweigh the advantages.
diff --git a/doc/development/geo.md b/doc/development/geo.md
index bf56340f8ec..5c781a60bac 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -94,7 +94,7 @@ projects that need updating. Those projects can be:
[Geo admin panel](../user/admin_area/geo_nodes.md).
When we fail to fetch a repository on the secondary `RETRIES_BEFORE_REDOWNLOAD`
-times, Geo does a so-called _redownload_. It will do a clean clone
+times, Geo does a so-called _re-download_. It will do a clean clone
into the `@geo-temporary` directory in the root of the storage. When
it's successful, we replace the main repo with the newly cloned one.
@@ -218,7 +218,7 @@ the performance of many synchronization operations.
FDW is a PostgreSQL extension ([`postgres_fdw`](https://www.postgresql.org/docs/11/postgres-fdw.html)) that is enabled within
the Geo Tracking Database (on a **secondary** node), which allows it
-to connect to the readonly database replica and perform queries and filter
+to connect to the read-only database replica and perform queries and filter
data from both instances.
This persistent connection is configured as an FDW server
@@ -226,7 +226,7 @@ named `gitlab_secondary`. This configuration exists within the database's user
context only. To access the `gitlab_secondary`, GitLab needs to use the
same database user that had previously been configured.
-The Geo Tracking Database accesses the readonly database replica via FDW as a regular user,
+The Geo Tracking Database accesses the read-only database replica via FDW as a regular user,
limited by its own restrictions. The credentials are configured as a
`USER MAPPING` associated with the `SERVER` mapped previously
(`gitlab_secondary`).
diff --git a/doc/development/geo/framework.md b/doc/development/geo/framework.md
index d72f7cc4cc1..85fecc41cfb 100644
--- a/doc/development/geo/framework.md
+++ b/doc/development/geo/framework.md
@@ -257,6 +257,8 @@ For example, to add support for files referenced by a `Widget` model with a
class Geo::WidgetRegistry < Geo::BaseRegistry
include Geo::StateMachineRegistry
+ MODEL_FOREIGN_KEY = :widget_id
+
belongs_to :widget, class_name: 'Widget'
end
```
@@ -305,6 +307,8 @@ For example, to add support for files referenced by a `Widget` model with a
specify 'factory is valid' do
expect(registry).to be_valid
end
+
+ include_examples 'a Geo framework registry'
end
```
@@ -356,7 +360,8 @@ Widgets should now be replicated by Geo!
end
```
-1. Add fields `widget_count`, `widget_checksummed_count`, and `widget_checksum_failed_count`
+1. Add fields `widget_count`, `widget_checksummed_count`, `widget_checksum_failed_count`,
+ `widget_synced_count` and `widget_failed_count`
to `GeoNodeStatus#RESOURCE_STATUS_FIELDS` array in `ee/app/models/geo_node_status.rb`.
1. Add the same fields to `GeoNodeStatus#PROMETHEUS_METRICS` hash in
`ee/app/models/geo_node_status.rb`.
@@ -370,6 +375,8 @@ Widgets should now be replicated by Geo!
self.widget_count = Geo::WidgetReplicator.model.count
self.widget_checksummed_count = Geo::WidgetReplicator.checksummed.count
self.widget_checksum_failed_count = Geo::WidgetReplicator.checksum_failed.count
+ self.widget_synced_count = Geo::WidgetReplicator.synced_count
+ self.widget_failed_count = Geo::WidgetReplicator.failed_count
```
1. Make sure `Widget` model has `checksummed` and `checksum_failed` scopes.
@@ -450,7 +457,7 @@ Widgets should now be verified by Geo!
end
```
-1. Create `ee/app/graphql/types/geo/package_file_registry_type.rb`:
+1. Create `ee/app/graphql/types/geo/widget_registry_type.rb`:
```ruby
# frozen_string_literal: true
diff --git a/doc/development/git_object_deduplication.md b/doc/development/git_object_deduplication.md
index 938882ba5a2..06af34d9232 100644
--- a/doc/development/git_object_deduplication.md
+++ b/doc/development/git_object_deduplication.md
@@ -89,7 +89,7 @@ are as follows:
(`pool.source_project`)
> TODO Fix invalid SQL data for pools created prior to GitLab 11.11
-> <https://gitlab.com/gitlab-org/gitaly/issues/1653>.
+> <https://gitlab.com/gitlab-org/gitaly/-/issues/1653>.
### Assumptions
@@ -133,7 +133,7 @@ are as follows:
repository.
> TODO should forks of forks be deduplicated?
-> <https://gitlab.com/gitlab-org/gitaly/issues/1532>
+> <https://gitlab.com/gitlab-org/gitaly/-/issues/1532>
### Consequences
@@ -191,4 +191,4 @@ the secondary, at which stage Git objects will get deduplicated.
> TODO How do we handle the edge case where at the time the Geo
> secondary tries to create the pool repository, the source project does
-> not exist? <https://gitlab.com/gitlab-org/gitaly/issues/1533>
+> not exist? <https://gitlab.com/gitlab-org/gitaly/-/issues/1533>
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 5e5cae7228b..417c96a44dd 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -54,7 +54,7 @@ The process for adding new Gitaly features is:
These steps often overlap. It is possible to use an unreleased version
of Gitaly and `gitaly-proto` during testing and development.
-- See the [Gitaly repo](https://gitlab.com/gitlab-org/gitaly/blob/master/CONTRIBUTING.md#development-and-testing-with-a-custom-gitaly-proto) for instructions on writing server side code with an unreleased protocol.
+- See the [Gitaly repository](https://gitlab.com/gitlab-org/gitaly/blob/master/CONTRIBUTING.md#development-and-testing-with-a-custom-gitaly-proto) for instructions on writing server side code with an unreleased protocol.
- See [below](#running-tests-with-a-locally-modified-version-of-gitaly) for instructions on running GitLab CE tests with a modified version of Gitaly.
- In GDK run `gdk install` and restart `gdk run` (or `gdk run app`) to use a locally modified Gitaly version for development
@@ -67,7 +67,7 @@ This should make it easier to contribute for developers who are less
comfortable writing Go code.
There is documentation for this approach in [the Gitaly
-repo](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/ruby_endpoint.md).
+repository](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/ruby_endpoint.md).
## Gitaly-Related Test Failures
@@ -85,7 +85,7 @@ While Gitaly can handle all Git access, many of GitLab customers still
run Gitaly atop NFS. The legacy Rugged implementation for Git calls may
be faster than the Gitaly RPC due to N+1 Gitaly calls and other
reasons. See [the
-issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/57317) for more
+issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317) for more
details.
Until GitLab has eliminated most of these inefficiencies or the use of
@@ -323,8 +323,8 @@ the integration by using GDK:
1. Navigate to GDK's root directory.
1. Make sure you have the proper branch checked out for Gitaly.
1. Recompile it with `make gitaly-setup` and restart the service with `gdk restart gitaly`.
- 1. Make sure your setup is runnig: `gdk status | grep praefect`.
- 1. Check what config file is used: `cat ./services/praefect/run | grep praefect` value of the `-config` flag
+ 1. Make sure your setup is running: `gdk status | grep praefect`.
+  1. Check which configuration file is used: `cat ./services/praefect/run | grep praefect` shows the value of the `-config` flag.
1. Uncomment `prometheus_listen_addr` in the configuration file and run `gdk restart gitaly`.
1. Make sure that the flag is not enabled yet:
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index 5d37d2f119f..5a490737f37 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -121,12 +121,12 @@ also reduce pressure on the system as a whole.
## Refreshing import JIDs
-GitLab includes a worker called `StuckImportJobsWorker` that will periodically
-run and mark project imports as failed if they have been running for more than
-15 hours. For GitHub projects, this poses a bit of a problem: importing large
-projects could take several hours depending on how often we hit the GitHub rate
-limit (more on this below), but we don't want `StuckImportJobsWorker` to mark
-our import as failed because of this.
+GitLab includes a worker called `Gitlab::Import::StuckProjectImportJobsWorker`
+that will periodically run and mark project imports as failed if they have been
+running for more than 15 hours. For GitHub projects, this poses a bit of a
+problem: importing large projects could take several hours depending on how
+often we hit the GitHub rate limit (more on this below), but we don't want
+`Gitlab::Import::StuckProjectImportJobsWorker` to mark our import as failed because of this.
To prevent this from happening we periodically refresh the expiration time of
the import process. This works by storing the JID of the import job in the
diff --git a/doc/development/go_guide/dependencies.md b/doc/development/go_guide/dependencies.md
new file mode 100644
index 00000000000..b85344635c6
--- /dev/null
+++ b/doc/development/go_guide/dependencies.md
@@ -0,0 +1,175 @@
+# Dependency Management in Go
+
+Go takes an unusual approach to dependency management, in that it is
+source-based instead of artifact-based. In an artifact-based dependency
+management system, packages consist of artifacts generated from source code and
+are stored in a separate repository system from source code. For example, many
+NodeJS packages use `npmjs.org` as a package repository and `github.com` as a
+source repository. On the other hand, packages in Go *are* source code and
+releasing a package does not involve artifact generation or a separate
+repository. Go packages must be stored in a version control repository on a VCS
+server. Dependencies are fetched directly from their VCS server or via an
+intermediary proxy which itself fetches them from their VCS server.
+
+## Versioning
+
+Go 1.11 introduced modules and first-class package versioning to the Go ecosystem.
+Prior to this, Go did not have any well-defined mechanism for version management.
+While 3rd party version management tools existed, the default Go experience had
+no support for versioning.
+
+Go modules use [semantic versioning](https://semver.org). The versions of a
+module are defined as VCS (version control system) tags that are valid semantic
+versions prefixed with `v`. For example, to release version `1.0.0` of
+`gitlab.com/my/project`, the developer must create the Git tag `v1.0.0`.
+
+For major versions other than 0 and 1, the module name must be suffixed with
+`/vX` where X is the major version. For example, version `v2.0.0` of
+`gitlab.com/my/project` must be named and imported as
+`gitlab.com/my/project/v2`.
+
+Go uses 'pseudo-versions', which are special semantic versions that reference a
+specific VCS commit. The prerelease component of the semantic version must be or
+end with a timestamp and the first 12 characters of the commit identifier:
+
+- `vX.0.0-yyyymmddhhmmss-abcdefabcdef`, when no earlier tagged commit exists for X.
+- `vX.Y.Z-pre.0.yyyymmddhhmmss-abcdefabcdef`, when the most recent prior tag is `vX.Y.Z-pre`.
+- `vX.Y.(Z+1)-0.yyyymmddhhmmss-abcdefabcdef`, when the most recent prior tag is `vX.Y.Z`.
+
+If a VCS tag matches one of these patterns, it is ignored.
+
+For a complete understanding of Go modules and versioning, see [this series of
+blog posts](https://blog.golang.org/using-go-modules) on the official Go
+website.
+
+## 'Module' vs 'Package'
+
+- A package is a folder containing `*.go` files.
+- A module is a folder containing a `go.mod` file.
+- A module is *usually* also a package, that is, a folder containing a `go.mod`
+  file and `*.go` files.
+- A module may have subdirectories, which may be packages.
+- Modules usually come in the form of a VCS repository (Git, SVN, Hg, and so on).
+- Any subdirectories of a module that themselves are modules are distinct,
+ separate modules and are excluded from the containing module.
+ - Given a module `repo`, if `repo/sub` contains a `go.mod` file then
+ `repo/sub` and any files contained therein are a separate module and not a
+ part of `repo`.
+
+## Naming
+
+The name of a module or package, excluding the standard library, must be of the
+form `(sub.)*domain.tld(/path)*`. This is similar to a URL, but is not a URL.
+The package name does not have a scheme (such as `https://`) and cannot have a
+port number. `example.com:8443/my/package` is not a valid name.
+
+## Fetching Packages
+
+Prior to Go 1.12, the process for fetching a package was as follows:
+
+1. Query `https://{package name}?go-get=1`.
+1. Scan the response for the `go-import` meta tag.
+1. Fetch the repository indicated by the meta tag using the indicated VCS.
+
+The meta tag should have the form `<meta name="go-import" content="{prefix}
+{vcs} {url}">`. For example, `gitlab.com/my/project git
+https://gitlab.com/my/project.git` indicates that packages beginning with
+`gitlab.com/my/project` should be fetched from
+`https://gitlab.com/my/project.git` using Git.
+
+## Fetching Modules
+
+Go 1.12 introduced checksum databases and module proxies.
+
+### Checksums
+
+In addition to `go.mod`, a module will have a `go.sum` file. This file records a
+SHA-256 checksum of the code and the `go.mod` file of every version of every
+dependency that is referenced by the module or one of the module's dependencies.
+Go continually updates `go.sum` as new dependencies are referenced.
+
+When Go fetches the dependencies of a module, if those dependencies already have
+an entry in `go.sum`, Go will verify the checksum of these dependencies. If the
+checksum does not match what is in `go.sum`, the build will fail. This ensures
+that a given version of a module cannot be changed by its developers or by a
+malicious party without causing build failures.
+
+Go 1.12+ can be configured to use a checksum database. If configured to do so,
+when Go fetches a dependency and there is no corresponding entry in `go.sum`, Go
+will query the configured checksum database(s) for the checksum of the
+dependency instead of calculating it from the downloaded dependency. If the
+dependency cannot be found in the checksum database, the build will fail. If the
+downloaded dependency's checksum does not match the result from the checksum
+database, the build will fail. The following environment variables control this:
+
+- `GOSUMDB` identifies the name, and optionally the public key and server URL,
+ of the checksum database to query.
+ - A value of `off` will entirely disable checksum database queries.
+ - Go 1.13+ uses `sum.golang.org` if `GOSUMDB` is not defined.
+- `GONOSUMDB` is a comma-separated list of module path prefixes for which checksum
+  database queries are disabled. Wildcards are supported.
+- `GOPRIVATE` is a comma-separated list of module names that has the same
+ function as `GONOSUMDB` in addition to disabling other features.
+
+### Proxies
+
+Go 1.12+ can be configured to fetch modules from a Go proxy instead of directly
+from the module's VCS. If configured to do so, when Go fetches a dependency, it
+attempts to fetch the dependency from the configured proxies, in order. The
+following environment variables control this:
+
+- `GOPROXY` is a comma-separated list of module proxies to query.
+ - A value of `direct` will entirely disable module proxy queries.
+ - If the last entry in the list is `direct`, Go will fall back to the process
+ described [above](#fetching-packages) if none of the proxies can provide the
+ dependency.
+ - Go 1.13+ uses `proxy.golang.org,direct` if `GOPROXY` is not defined.
+- `GONOPROXY` is a comma-separated list of module path prefixes that should be
+  fetched directly and not from a proxy. Wildcards are supported.
+- `GOPRIVATE` is a comma-separated list of module names that has the same
+ function as `GONOPROXY` in addition to disabling other features.
+
+### Fetching
+
+From Go 1.12 onward, the process for fetching a module or package is as follows:
+
+1. If `GOPROXY` is a list of proxies and the module is not excluded by
+ `GONOPROXY` or `GOPRIVATE`, query them in order, and stop at the first valid
+ response.
+1. If `GOPROXY` is `direct`, or the module is excluded, or `GOPROXY` ends with
+ `,direct` and no proxy provided the module, fall back.
+ 1. Query `https://{module or package name}?go-get=1`.
+ 1. Scan the response for the `go-import` meta tag.
+ 1. Fetch the repository indicated by the meta tag using the indicated VCS.
+ 1. If the `{vcs}` field is `mod`, the URL should be treated as a module proxy instead of a VCS.
+1. If the module is being fetched directly and not as a dependency, stop.
+1. If `go.sum` contains an entry corresponding to the module, validate the checksum and stop.
+1. If `GOSUMDB` identifies a checksum database and the module is not excluded by
+ `GONOSUMDB` or `GOPRIVATE`, retrieve the module's checksum, add it to
+ `go.sum`, and validate the downloaded source against it.
+1. If `GOSUMDB` is `off` or the module is excluded, calculate a checksum from
+ the downloaded source and add it to `go.sum`.
+
+The downloaded source must contain a `go.mod` file. The `go.mod` file must
+contain a `module` directive that specifies the name of the module. If the
+module name as specified by `go.mod` does not match the name that was used to
+fetch the module, the module will fail to compile.
+
+If the module is being fetched directly and no version was specified, or if the
+module is being added as a dependency and no version was specified, Go uses the
+most recent version of the module. If the module is fetched from a proxy, Go
+queries the proxy for a list of versions and chooses the latest. If the module is
+fetched directly, Go queries the repository for a list of tags and chooses the
+latest that is also a valid semantic version.
+
+## Authenticating
+
+In versions prior to Go 1.13, support for authenticating requests made by Go was
+somewhat inconsistent. Go 1.13 improved support for `.netrc` authentication. If
+a request is made over HTTPS and a matching `.netrc` entry can be found, Go will
+add HTTP Basic authentication credentials to the request. Go will not
+authenticate requests made over HTTP. Go will reject HTTP-only entries in
+`GOPROXY` that have embedded credentials.
+
+In a future version, Go may add support for arbitrary authentication headers.
+Follow [golang/go#26232](https://github.com/golang/go/issues/26232) for details.
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index fe69a4205f8..5a5e163e142 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -17,6 +17,22 @@ experiences. Several projects were started with different standards and they
can still have specifics. They will be described in their respective
`README.md` or `PROCESS.md` files.
+## Dependency Management
+
+Go uses a source-based strategy for dependency management. Dependencies are
+downloaded as source from their source repository. This differs from the more
+common artifact-based strategy where dependencies are downloaded as artifacts
+from a package repository that is separate from the dependency's source
+repository.
+
+Go did not have first-class support for version management prior to 1.11. That
+version introduced Go modules and the use of semantic versioning. Go 1.12
+introduced module proxies, which can serve as an intermediate between clients
+and source version control systems, and checksum databases, which can be used to
+verify the integrity of dependency downloads.
+
+See [Dependency Management in Go](dependencies.md) for more details.
+
## Code Review
We follow the common principles of
@@ -105,7 +121,7 @@ Including a `.golangci.yml` in the root directory of the project allows for
configuration of `golangci-lint`. All options for `golangci-lint` are listed in
this [example](https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml).
-Once [recursive includes](https://gitlab.com/gitlab-org/gitlab-foss/issues/56836)
+Once [recursive includes](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/56836)
become available, you will be able to share job templates like this
[analyzer](https://gitlab.com/gitlab-org/security-products/ci-templates/raw/master/includes-dev/analyzer.yml).
@@ -260,7 +276,7 @@ easier to debug.
For example:
-```go
+```golang
// Wrap the error
return nil, fmt.Errorf("get cache %s: %w", f.Name, err)
@@ -390,7 +406,7 @@ builds](https://docs.docker.com/develop/develop-images/multistage-build/):
dependencies.
- They generate a small, self-contained image, derived from `Scratch`.
-Generated docker images should have the program at their `Entrypoint` to create
+Generated Docker images should have the program at their `Entrypoint` to create
portable commands. That way, anyone can run the image, and without parameters
it will display its help message (if `cli` has been used).
diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md
index 8f077e613b7..7e08da8162b 100644
--- a/doc/development/gotchas.md
+++ b/doc/development/gotchas.md
@@ -155,7 +155,7 @@ refresh_service.execute(oldrev, newrev, ref)
See ["Why is it bad style to `rescue Exception => e` in Ruby?"](https://stackoverflow.com/questions/10048173/why-is-it-bad-style-to-rescue-exception-e-in-ruby).
_**Note:** This rule is [enforced automatically by
-Rubocop](https://gitlab.com/gitlab-org/gitlab-foss/blob/8-4-stable/.rubocop.yml#L911-914)._
+RuboCop](https://gitlab.com/gitlab-org/gitlab-foss/blob/8-4-stable/.rubocop.yml#L911-914)._
## Do not use inline JavaScript in views
diff --git a/doc/development/hash_indexes.md b/doc/development/hash_indexes.md
index bc962ac0cd6..1ed76e35f69 100644
--- a/doc/development/hash_indexes.md
+++ b/doc/development/hash_indexes.md
@@ -1,6 +1,6 @@
# Hash Indexes
-PostgreSQL supports hash indexes besides the regular btree
+PostgreSQL supports hash indexes besides the regular B-tree
indexes. Hash indexes however are to be avoided at all costs. While they may
_sometimes_ provide better performance the cost of rehashing can be very high.
More importantly: at least until PostgreSQL 10.0 hash indexes are not
@@ -17,4 +17,4 @@ documentation:
RuboCop is configured to register an offense when it detects the use of a hash
index.
-Instead of using hash indexes you should use regular btree indexes.
+Instead of using hash indexes you should use regular B-tree indexes.
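+
+For example, a regular migration can simply rely on PostgreSQL's default index type
+(a sketch; the table and column names are hypothetical):
+
+```ruby
+class AddIndexToWidgetsName < ActiveRecord::Migration[6.0]
+  def change
+    # B-tree is the default index type, so no `using:` option is needed.
+    # Avoid `using: :hash`.
+    add_index :widgets, :name
+  end
+end
+```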
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index a81e656fc27..bdd372e90ed 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -260,8 +260,22 @@ n_("%{project_name}", "%d projects selected", count) % { project_name: 'GitLab'
### Namespaces
-Sometimes you need to add some context to the text that you want to translate
-(if the word occurs in a sentence and/or the word is ambiguous).
+A namespace is a way to group translations that belong together. Namespaces provide context to our translators by adding a prefix followed by the bar symbol (`|`). For example:
+
+```ruby
+'Namespace|Translated string'
+```
+
+A namespace provides the following benefits:
+
+- It addresses ambiguity in words, for example: `Promotions|Promote` vs `Epic|Promote`.
+- It allows translators to focus on translating externalized strings that belong to the same product area rather than arbitrary ones.
+- It gives a linguistic context to help the translator.
+
+In some cases, namespaces don't make sense. For example, for ubiquitous UI words and phrases
+such as "Cancel" or "Save changes", a namespace could
+be counterproductive.
+
Namespaces should be PascalCase.
- In Ruby/HAML:
@@ -292,7 +306,7 @@ const dateFormat = createDateTimeFormat({ year: 'numeric', month: 'long', day: '
console.log(dateFormat.format(new Date('2063-04-05'))) // April 5, 2063
```
-This makes use of [`Intl.DateTimeFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DateTimeFormat).
+This makes use of [`Intl.DateTimeFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/DateTimeFormat).
- In Ruby/HAML, we have two ways of adding format to dates and times:
diff --git a/doc/development/i18n/merging_translations.md b/doc/development/i18n/merging_translations.md
index 5d9dbd23efa..e0fcaddda4c 100644
--- a/doc/development/i18n/merging_translations.md
+++ b/doc/development/i18n/merging_translations.md
@@ -20,7 +20,7 @@ doesn't do. Create a new pipeline at `https://gitlab.com/gitlab-org/gitlab/pipel
If there are validation errors, the easiest solution is to disapprove
the offending string in CrowdIn, leaving a comment with what is
required to fix the offense. There is an
-[issue](https://gitlab.com/gitlab-org/gitlab/issues/23256)
+[issue](https://gitlab.com/gitlab-org/gitlab/-/issues/23256)
suggesting to automate this process. Disapproving will exclude the
invalid translation, the merge request will be updated within a few
minutes.
@@ -31,7 +31,7 @@ clicking `Pause sync` on the [CrowdIn integration settings
page](https://translate.gitlab.com/project/gitlab-ee/settings#integration).
When all failures are resolved, the translations need to be double
-checked once more as discussed in [confidential issue](../../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab/issues/19485`.
+checked once more as discussed in [confidential issue](../../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab/-/issues/19485`.
## Merging translations
@@ -40,7 +40,7 @@ translations can be merged into the master branch. When merging the translations
make sure to check the **Remove source branch** checkbox, so CrowdIn recreates the
`master-i18n` from master after the new translation was merged.
-We are discussing [automating this entire process](https://gitlab.com/gitlab-org/gitlab/issues/19896).
+We are discussing [automating this entire process](https://gitlab.com/gitlab-org/gitlab/-/issues/19896).
## Recreate the merge request
@@ -53,3 +53,18 @@ and delete the
This might be needed when the merge request contains failures that
have been fixed on master.
+
+## Recreate the GitLab integration in CrowdIn
+
+NOTE: **Note:**
+These instructions work only for GitLab Team Members.
+
+If for some reason the GitLab integration in CrowdIn does not exist, it can be
+recreated by the following steps:
+
+1. Sign in to GitLab as `gitlab-crowdin-bot` (if you're a GitLab Team Member, find the credentials in the GitLab shared [1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams))
+1. Sign in to CrowdIn with the GitLab integration
+1. Navigate to Settings > Integrations > GitLab > Set Up Integration
+1. Select `gitlab-org/gitlab` repository
+1. On `Select Branches for Translation`, select `master`
+1. Ensure the `Service Branch Name` is `master-i18n`
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index 837da349f7e..8c1f62330f4 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -5,12 +5,15 @@ are very appreciative of the work done by translators and proofreaders!
## Proofreaders
+<!-- vale gitlab.Spelling = NO -->
- Albanian
- Proofreaders needed.
- Amharic
- Tsegaselassie Tadesse - [GitLab](https://gitlab.com/tsega), [CrowdIn](https://crowdin.com/profile/tsegaselassi/activity)
- Arabic
- Proofreaders needed.
+- Bosnian
+ - Proofreaders needed.
- Bulgarian
- Lyubomir Vasilev - [CrowdIn](https://crowdin.com/profile/lyubomirv)
- Catalan
@@ -26,6 +29,8 @@ are very appreciative of the work done by translators and proofreaders!
- Chinese Traditional, Hong Kong 繁體中文 (香港)
- Victor Wu - [GitLab](https://gitlab.com/victorwuky), [CrowdIn](https://crowdin.com/profile/victorwu)
- Ivan Ip - [GitLab](https://gitlab.com/lifehome), [CrowdIn](https://crowdin.com/profile/lifehome)
+- Croatian
+ - Proofreaders needed.
- Czech
- Jan Urbanec - [GitLab](https://gitlab.com/TatranskyMedved), [CrowdIn](https://crowdin.com/profile/Tatranskymedved)
- Danish
@@ -33,11 +38,11 @@ are very appreciative of the work done by translators and proofreaders!
- Dutch
- Emily Hendle - [GitLab](https://gitlab.com/pundachan), [CrowdIn](https://crowdin.com/profile/pandachan)
- Esperanto
-- Lyubomir Vasilev - [CrowdIn](https://crowdin.com/profile/lyubomirv)
+ - Lyubomir Vasilev - [CrowdIn](https://crowdin.com/profile/lyubomirv)
- Estonian
- Proofreaders needed.
- Filipino
- - Proofreaders needed.
+ - Andrei Jiroh Halili - [GitLab](https://gitlab.com/AJHalili2006DevPH), [Crowdin](https://crowdin.com/profile/AndreiJirohHaliliDev2006)
- French
- Davy Defaud - [GitLab](https://gitlab.com/DevDef), [CrowdIn](https://crowdin.com/profile/DevDef)
- Galician
@@ -50,6 +55,8 @@ are very appreciative of the work done by translators and proofreaders!
- Proofreaders needed.
- Hebrew
- Yaron Shahrabani - [GitLab](https://gitlab.com/yarons), [CrowdIn](https://crowdin.com/profile/YaronSh)
+- Hindi
+ - Proofreaders needed.
- Hungarian
- Proofreaders needed.
- Indonesian
@@ -62,6 +69,7 @@ are very appreciative of the work done by translators and proofreaders!
- Hiroyuki Sato - [GitLab](https://gitlab.com/hiroponz), [CrowdIn](https://crowdin.com/profile/hiroponz)
- Tomo Dote - [GitLab](https://gitlab.com/fu7mu4), [CrowdIn](https://crowdin.com/profile/fu7mu4)
- Hiromi Nozawa - [GitLab](https://gitlab.com/hir0mi), [CrowdIn](https://crowdin.com/profile/hir0mi)
+ - Takuya Noguchi - [GitLab](https://gitlab.com/tnir), [CrowdIn](https://crowdin.com/profile/tnir)
- Korean
- Chang-Ho Cha - [GitLab](https://gitlab.com/changho-cha), [CrowdIn](https://crowdin.com/profile/zzazang)
- Ji Hun Oh - [GitLab](https://gitlab.com/Baw-Appie), [CrowdIn](https://crowdin.com/profile/BawAppie)
@@ -74,7 +82,6 @@ are very appreciative of the work done by translators and proofreaders!
- Filip Mech - [GitLab](https://gitlab.com/mehenz), [CrowdIn](https://crowdin.com/profile/mehenz)
- Maksymilian Roman - [GitLab](https://gitlab.com/villaincandle), [CrowdIn](https://crowdin.com/profile/villaincandle)
- Portuguese
- - Proofreaders needed.
- Diogo Trindade - [GitLab](https://gitlab.com/luisdiogo2071317), [CrowdIn](https://crowdin.com/profile/ldiogotrindade)
- Portuguese, Brazilian
- Paulo George Gomes Bezerra - [GitLab](https://gitlab.com/paulobezerra), [CrowdIn](https://crowdin.com/profile/paulogomes.rep)
@@ -88,21 +95,22 @@ are very appreciative of the work done by translators and proofreaders!
- NickVolynkin - [Crowdin](https://crowdin.com/profile/NickVolynkin)
- Andrey Komarov - [GitLab](https://gitlab.com/elkamarado), [Crowdin](https://crowdin.com/profile/kamarado)
- Iaroslav Postovalov - [GitLab](https://gitlab.com/CMDR_Tvis), [Crowdin](https://crowdin.com/profile/CMDR_Tvis)
-- Serbian (Cyrillic)
- - Proofreaders needed.
-- Serbian (Latin)
+- Serbian (Latin and Cyrillic)
- Proofreaders needed.
- Slovak
- Proofreaders needed.
- Spanish
- Pedro Garcia - [GitLab](https://gitlab.com/pedgarrod), [CrowdIn](https://crowdin.com/profile/breaking_pitt)
+- Swedish
+ - Proofreaders needed.
- Turkish
- Ali Demirtaş - [GitLab](https://gitlab.com/alidemirtas), [CrowdIn](https://crowdin.com/profile/alidemirtas)
- Ukrainian
- Volodymyr Sobotovych - [GitLab](https://gitlab.com/wheleph), [CrowdIn](https://crowdin.com/profile/wheleph)
- Andrew Vityuk - [GitLab](https://gitlab.com/3_1_3_u), [CrowdIn](https://crowdin.com/profile/andruwa13)
- Welsh
- - Proofreaders needed.
+ - Delyth Prys - [GitLab](https://gitlab.com/Delyth), [CrowdIn](https://crowdin.com/profile/DelythPrys)
+<!-- vale gitlab.Spelling = YES -->
## Become a proofreader
diff --git a/doc/development/i18n/translation.md b/doc/development/i18n/translation.md
index e1c02c2c9c2..ed205c20c0d 100644
--- a/doc/development/i18n/translation.md
+++ b/doc/development/i18n/translation.md
@@ -61,8 +61,12 @@ This helps maintain a logical connection and consistency between tools (e.g.
### Formality
-The level of formality used in software varies by language.
-For example, in French we translate `you` as the formal `vous`.
+The level of formality used in software varies by language:
+
+| Language | Formality | Example |
+| -------- | --------- | ------- |
+| French | formal | `vous` for `you` |
+| German | informal | `du` for `you` |
You can refer to other translated strings and notes in the glossary to assist
determining a suitable level of formality.
@@ -75,18 +79,22 @@ ethnicity.
In languages which distinguish between a male and female form, use both or
choose a neutral formulation.
+<!-- vale gitlab.Spelling = NO -->
For example in German, the word "user" can be translated into "Benutzer" (male) or "Benutzerin" (female).
Therefore "create a new user" would translate into "Benutzer(in) anlegen".
+<!-- vale gitlab.Spelling = YES -->
### Updating the glossary
To propose additions to the glossary please
-[open an issue](https://gitlab.com/gitlab-org/gitlab/issues?scope=all&utf8=✓&state=all&label_name[]=Category%3AInternationalization).
+[open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=✓&state=all&label_name[]=Category%3AInternationalization).
## French Translation Guidelines
### Inclusive language in French
+<!-- vale gitlab.Spelling = NO -->
In French, the "écriture inclusive" is now over (see on [Legifrance](https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000036068906&categorieLien=id)).
So, to include both genders, write “Utilisateurs et utilisatrices” instead of “Utilisateur·rice·s”.
When space is missing, the male gender should be used alone.
+<!-- vale gitlab.Spelling = YES -->
diff --git a/doc/development/img/bullet_v13_0.png b/doc/development/img/bullet_v13_0.png
new file mode 100644
index 00000000000..04b476db581
--- /dev/null
+++ b/doc/development/img/bullet_v13_0.png
Binary files differ
diff --git a/doc/development/import_export.md b/doc/development/import_export.md
index 6d1b6929667..ecdfd8a853e 100644
--- a/doc/development/import_export.md
+++ b/doc/development/import_export.md
@@ -44,19 +44,34 @@ WARN: Work still in progress <struct with JID>
### Timeouts
-Timeout errors occur due to the `StuckImportJobsWorker` marking the process as failed:
+Timeout errors occur due to the `Gitlab::Import::StuckProjectImportJobsWorker` marking the process as failed:
```ruby
-class StuckImportJobsWorker
- include ApplicationWorker
- include CronjobQueue
-
- IMPORT_JOBS_EXPIRATION = 15.hours.to_i
+module Gitlab
+ module Import
+ class StuckProjectImportJobsWorker
+ include Gitlab::Import::StuckImportJob
+ # ...
+ end
+ end
+end
- def perform
- import_state_without_jid_count = mark_import_states_without_jid_as_failed!
- import_state_with_jid_count = mark_import_states_with_jid_as_failed!
- ...
+module Gitlab
+ module Import
+ module StuckImportJob
+ # ...
+ IMPORT_JOBS_EXPIRATION = 15.hours.to_i
+ # ...
+ def perform
+ stuck_imports_without_jid_count = mark_imports_without_jid_as_failed!
+ stuck_imports_with_jid_count = mark_imports_with_jid_as_failed!
+
+ track_metrics(stuck_imports_with_jid_count, stuck_imports_without_jid_count)
+ end
+ # ...
+ end
+ end
+end
```
```shell
@@ -92,14 +107,14 @@ Marked stuck import jobs as failed. JIDs: xyz
While the performance problems are not tackled, there is a process to workaround
importing big projects, using a foreground import:
-[Foreground import](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5384) of big projects for customers.
+[Foreground import](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/5384) of big projects for customers.
(Using the import template in the [infrastructure tracker](https://gitlab.com/gitlab-com/gl-infra/infrastructure/))
## Security
The Import/Export feature is constantly updated (adding new things to export), however
the code hasn't been refactored in a long time. We should perform a code audit (see
-[confidential issue](../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab/issues/20720`).
+[confidential issue](../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-org/gitlab/-/issues/20720`)
to make sure its dynamic nature does not increase the number of security concerns.
### Security in the code
@@ -192,20 +207,8 @@ module Gitlab
## Version history
-The [current version history](../user/project/settings/import_export.md) also displays the equivalent GitLab version
-and it is useful for knowing which versions won't be compatible between them.
-
-| Exporting GitLab version | Importing GitLab version |
-| -------------------------- | -------------------------- |
-| 11.7 to current | 11.7 to current |
-| 11.1 to 11.6 | 11.1 to 11.6 |
-| 10.8 to 11.0 | 10.8 to 11.0 |
-| 10.4 to 10.7 | 10.4 to 10.7 |
-| ... | ... |
-| 8.10.3 to 8.11 | 8.10.3 to 8.11 |
-| 8.10.0 to 8.10.2 | 8.10.0 to 8.10.2 |
-| 8.9.5 to 8.9.11 | 8.9.5 to 8.9.11 |
-| 8.9.0 to 8.9.4 | 8.9.0 to 8.9.4 |
+Check the [version history](../user/project/settings/import_export.md#version-history)
+for compatibility when importing and exporting projects.
### When to bump the version up
diff --git a/doc/development/insert_into_tables_in_batches.md b/doc/development/insert_into_tables_in_batches.md
index e5c4dc6ee56..d8919789808 100644
--- a/doc/development/insert_into_tables_in_batches.md
+++ b/doc/development/insert_into_tables_in_batches.md
@@ -187,8 +187,6 @@ There are a few restrictions to how these APIs can be used:
- `BulkInsertableAssociations`:
- It is currently only compatible with `has_many` relations.
- It does not yet support `has_many through: ...` relations.
-- Writing [`jsonb`](https://www.postgresql.org/docs/current/datatype-json.html) content is
-[not currently supported](https://gitlab.com/gitlab-org/gitlab/-/issues/210560).
Moreover, input data should either be limited to around 1000 records at most,
or already batched prior to calling bulk insert. The `INSERT` statement will run in a single
diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md
index d72e1c6635e..ee1aab1456e 100644
--- a/doc/development/instrumentation.md
+++ b/doc/development/instrumentation.md
@@ -119,9 +119,9 @@ without measuring anything.
Three values are measured for a block:
-- The real time elapsed, stored in NAME_real_time.
-- The CPU time elapsed, stored in NAME_cpu_time.
-- The call count, stored in NAME_call_count.
+- The real time elapsed, stored in `NAME_real_time`.
+- The CPU time elapsed, stored in `NAME_cpu_time`.
+- The call count, stored in `NAME_call_count`.
Both the real and CPU timings are measured in milliseconds.
diff --git a/doc/development/integrations/jenkins.md b/doc/development/integrations/jenkins.md
index 001d1c21fd3..f2bc6532dde 100644
--- a/doc/development/integrations/jenkins.md
+++ b/doc/development/integrations/jenkins.md
@@ -1,6 +1,6 @@
# How to run Jenkins in development environment (on macOS) **(STARTER)**
-This is a step by step guide on how to set up [Jenkins](https://jenkins.io/) on your local machine and connect to it from your GitLab instance. GitLab triggers webhooks on Jenkins, and Jenkins connects to GitLab using the API. By running both applications on the same machine, we can make sure they are able to access each other.
+This is a step by step guide on how to set up [Jenkins](https://www.jenkins.io/) on your local machine and connect to it from your GitLab instance. GitLab triggers webhooks on Jenkins, and Jenkins connects to GitLab using the API. By running both applications on the same machine, we can make sure they are able to access each other.
## Install Jenkins
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index b0e1e28ba8b..1737daae0e0 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -15,7 +15,7 @@ scanner, as well as requirements and guidelines for the Docker image.
## Job definition
-This section desribes several important fields to add to the security scanner's job
+This section describes several important fields to add to the security scanner's job
definition file. Full documentation on these and other available fields can be viewed
in the [CI documentation](../../ci/yaml/README.md#image).
@@ -69,7 +69,7 @@ For example, here is the definition of a SAST job that generates a file named `g
and uploads it as a SAST report:
```yaml
-mysec_sast_scanning:
+mysec_sast:
image: registry.gitlab.com/secure/mysec
artifacts:
reports:
@@ -89,9 +89,9 @@ for variables such as `DEPENDENCY_SCANNING_DISABLED`, `CONTAINER_SCANNING_DISABL
disable running the custom scanner.
GitLab also defines a `CI_PROJECT_REPOSITORY_LANGUAGES` variable, which provides the list of
-languages in the repo. Depending on this value, your scanner may or may not do something different.
+languages in the repository. Depending on this value, your scanner may or may not do something different.
Language detection currently relies on the [`linguist`](https://github.com/github/linguist) Ruby gem.
-See [GitLab CI/CD prefined variables](../../ci/variables/predefined_variables.md#variables-reference).
+See [GitLab CI/CD predefined variables](../../ci/variables/predefined_variables.md).
#### Policy checking example
@@ -124,9 +124,9 @@ regardless of the individual machine the scanner runs on.
Depending on the CI infrastructure,
the CI may have to fetch the Docker image every time the job runs.
-To make the scanning job run fast, and to avoid wasting bandwidth,
-it is important to make Docker images as small as possible,
-ideally smaller than 50 MB.
+For the scanning job to run fast and avoid wasting bandwidth, Docker images should be as small as
+possible. You should aim for 50 MB or smaller. If that isn't possible, try to keep it below 1.46 GB,
+which is the size of a CD-ROM.
If the scanner requires a fully functional Linux environment,
it is recommended to use a [Debian](https://www.debian.org/intro/about) "slim" distribution or [Alpine Linux](https://www.alpinelinux.org/).
@@ -135,11 +135,27 @@ and to compile the scanner with all the libraries it needs.
[Multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)
might also help with keeping the image small.
+To keep the image size small, consider using [dive](https://github.com/wagoodman/dive#dive) to analyze layers in a Docker image to
+identify where additional bloat might be originating from.
+
+In some cases, it might be difficult to remove files from an image. When this occurs, consider using
+[Zstandard](https://github.com/facebook/zstd)
+to compress files or large directories. Zstandard offers many different compression levels that can
+decrease the size of your image with very little impact on decompression speed. It may be helpful to
+automatically decompress any compressed directories as soon as an image launches. You can accomplish
+this by adding a step to the Docker image's `/etc/bashrc` or to a specific user's `$HOME/.bashrc`.
+Remember to change the entry point to launch a bash login shell if you choose the latter option.
+
+Here are some examples to get you started:
+
+- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/install.sh#L168-170>
+- <https://gitlab.com/gitlab-org/security-products/license-management/-/blob/0b976fcffe0a9b8e80587adb076bcdf279c9331c/config/.bashrc#L49>
+
### Image tag
As documented in the [Docker Official Images](https://github.com/docker-library/official-images#tags-and-aliases) project,
it is strongly encouraged that version number tags be given aliases which allow users to easily refer to the "most recent" release of a particular series.
-See also [Docker Tagging: Best practices for tagging and versioning docker images](https://docs.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
+See also [Docker Tagging: Best practices for tagging and versioning Docker images](https://docs.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
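+
+For example, a release script could push the full version tag together with its aliases (a sketch;
+the image name reuses the example scanner image above and the version number is made up):
+
+```shell
+# Build and push 2.4.2, then update the moving "2.4", "2", and "latest" aliases.
+docker build -t registry.gitlab.com/secure/mysec:2.4.2 .
+docker push registry.gitlab.com/secure/mysec:2.4.2
+for alias in 2.4 2 latest; do
+  docker tag registry.gitlab.com/secure/mysec:2.4.2 "registry.gitlab.com/secure/mysec:${alias}"
+  docker push "registry.gitlab.com/secure/mysec:${alias}"
+done
+```
+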
## Command line
@@ -448,7 +464,7 @@ Right now, GitLab cannot track a vulnerability if its location changes
as new Git commits are pushed, and this results in user feedback being lost.
For instance, user feedback on a SAST vulnerability is lost
if the affected file is renamed or the affected line moves down.
-This is addressed in [issue #7586](https://gitlab.com/gitlab-org/gitlab/issues/7586).
+This is addressed in [issue #7586](https://gitlab.com/gitlab-org/gitlab/-/issues/7586).
In some cases, the multiple scans executed in the same CI pipeline result in duplicates
that are automatically merged using the vulnerability location and identifiers.
@@ -470,6 +486,10 @@ The confidence ranges from `Low` to `Confirmed`, but it can also be `Unknown`,
`Experimental` or even `Ignore` if the vulnerability is to be ignored.
Valid values are: `Ignore`, `Unknown`, `Experimental`, `Low`, `Medium`, `High`, or `Confirmed`
+An `Unknown` value means that data is unavailable to determine the actual confidence. Therefore, it may be `High`, `Medium`, or `Low`,
+and needs to be investigated. We have [provided a chart](../../user/application_security/sast/analyzers.md#analyzers-data)
+of the available SAST Analyzers and what data is currently available.
+
### Remediations
The `remediations` field of the report is an array of remediation objects.
diff --git a/doc/development/integrations/secure_partner_integration.md b/doc/development/integrations/secure_partner_integration.md
index 59336b0e6a1..22e1f8bf769 100644
--- a/doc/development/integrations/secure_partner_integration.md
+++ b/doc/development/integrations/secure_partner_integration.md
@@ -28,7 +28,7 @@ best place to integrate your own product and its results into GitLab.
implications for app security, corporate policy, or compliance. When complete,
the job reports back on its status and creates a
[job artifact](../../user/project/pipelines/job_artifacts.md) as a result.
-- The [Merge Request Security Widget](../../user/project/merge_requests/index.md#security-reports-ultimate)
+- The [Merge Request Security Widget](../../user/project/merge_requests/testing_and_reports_in_merge_requests.md#security-reports-ultimate)
displays the results of the pipeline's security checks and the developer can
review them. The developer can review both a summary and a detailed version
of the results.
@@ -54,10 +54,10 @@ best place to integrate your own product and its results into GitLab.
## How to onboard
This section describes the steps you need to complete to onboard as a partner
-and complete an intgration with the Secure stage.
+and complete an integration with the Secure stage.
1. Read about our [partnerships](https://about.gitlab.com/partners/integrate/).
-1. [Create an issue](https://gitlab.com/gitlab-com/alliances/alliances/issues/new?issuable_template=new_partner)
+1. [Create an issue](https://gitlab.com/gitlab-com/alliances/alliances/-/issues/new?issuable_template=new_partner)
using our new partner issue template to begin the discussion.
1. Get a test account to begin developing your integration. You can
request a [GitLab.com Gold Subscription Sandbox](https://about.gitlab.com/partners/integrate/#gitlabcom-gold-subscription-sandbox-request)
@@ -76,10 +76,10 @@ and complete an intgration with the Secure stage.
- Documentation for [Dependency Scanning reports](../../user/application_security/dependency_scanning/index.md#reports-json-format).
- Documentation for [Container Scanning reports](../../user/application_security/container_scanning/index.md#reports-json-format).
- See this [example secure job definition that also defines the artifact created](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/Container-Scanning.gitlab-ci.yml).
- - If you need a new kind of scan or report, [create an issue](https://gitlab.com/gitlab-org/gitlab/issues/new#)
+ - If you need a new kind of scan or report, [create an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new#)
and add the label `devops::secure`.
- Once the job is completed, the data can be seen:
- - In the [Merge Request Security Report](../../user/project/merge_requests/index.md#security-reports-ultimate) ([MR Security Report data flow](https://gitlab.com/snippets/1910005#merge-request-view)).
+ - In the [Merge Request Security Report](../../user/project/merge_requests/testing_and_reports_in_merge_requests.md#security-reports-ultimate) ([MR Security Report data flow](https://gitlab.com/snippets/1910005#merge-request-view)).
- While [browsing a Job Artifact](../../user/project/pipelines/job_artifacts.md).
- In the [Security Dashboard](../../user/application_security/security_dashboard/index.md) ([Dashboard data flow](https://gitlab.com/snippets/1910005#project-and-group-dashboards)).
1. Optional: Provide a way to interact with results as Vulnerabilities:
@@ -99,5 +99,9 @@ and complete an intgration with the Secure stage.
doing an [Unfiltered blog post](https://about.gitlab.com/handbook/marketing/blog/unfiltered/),
doing a co-branded webinar, or producing a co-branded whitepaper.
+We have a [video playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KpMqYxJiOLz-uBIr5w-yP4A)
+that may be helpful as part of this process. This covers various topics related to integrating your
+tool.
+
If you have any issues while working through your integration or the steps
above, please create an issue to discuss with us further.
diff --git a/doc/development/internal_api.md b/doc/development/internal_api.md
index 5b53f223eb0..d220a2d46fb 100644
--- a/doc/development/internal_api.md
+++ b/doc/development/internal_api.md
@@ -43,11 +43,11 @@ POST /internal/allowed
| `key_id` | string | no | ID of the SSH-key used to connect to GitLab-shell |
| `username` | string | no | Username from the certificate used to connect to GitLab-Shell |
| `project` | string | no (if `gl_repository` is passed) | Path to the project |
-| `gl_repository` | string | no (if `project` is passed) | Path to the project |
+| `gl_repository` | string | no (if `project` is passed) | Repository identifier (e.g. `project-7`) |
| `protocol` | string | yes | SSH when called from GitLab-shell, HTTP or SSH when called from Gitaly |
| `action` | string | yes | Git command being run (`git-upload-pack`, `git-receive-pack`, `git-upload-archive`) |
| `changes` | string | yes | `<oldrev> <newrev> <refname>` when called from Gitaly, The magic string `_any` when called from GitLab Shell |
-| `check_ip` | string | no | Ip address from which call to GitLab Shell was made |
+| `check_ip` | string | no | IP address from which call to GitLab Shell was made |
Example request:
diff --git a/doc/development/issue_types.md b/doc/development/issue_types.md
index bcd3980c298..028d42b27fc 100644
--- a/doc/development/issue_types.md
+++ b/doc/development/issue_types.md
@@ -5,9 +5,9 @@ Sometimes when a new resource type is added it's not clear if it should be only
(similar to Issue, Epic, Merge Request, Snippet).
The idea of Issue Types was first proposed in [this
-issue](https://gitlab.com/gitlab-org/gitlab/issues/8767) and its usage was
+issue](https://gitlab.com/gitlab-org/gitlab/-/issues/8767) and its usage was
discussed a few times since then, for example in [incident
-management](https://gitlab.com/gitlab-org/gitlab-foss/issues/55532).
+management](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/55532).
## What is an Issue Type
diff --git a/doc/development/licensing.md b/doc/development/licensing.md
index e1be1faa61b..f92151e7a37 100644
--- a/doc/development/licensing.md
+++ b/doc/development/licensing.md
@@ -12,6 +12,8 @@ Some gems may not include their license information in their `gemspec` file, and
### License Finder commands
+> Note: License Finder currently uses the terms `whitelist` and `blacklist`, which GitLab considers misused terms. As a result, the commands below reference them. We've created an [issue on their project](https://github.com/pivotal/LicenseFinder/issues/745) to propose that they rename their commands.
+
There are a few basic commands License Finder provides that you'll need in order to manage license detection.
To verify that the checks are passing, and/or to see what dependencies are causing the checks to fail:
@@ -20,16 +22,16 @@ To verify that the checks are passing, and/or to see what dependencies are causi
bundle exec license_finder
```
-To whitelist a new license:
+To allowlist a new license:
```shell
license_finder whitelist add MIT
```
-To blacklist a new license:
+To denylist a new license:
```shell
-license_finder blacklist add GPLv2
+license_finder blacklist add Unlicense
```
To tell License Finder about a dependency's license if it isn't auto-detected:
diff --git a/doc/development/logging.md b/doc/development/logging.md
index e7d48e4d278..ccd4adf7cef 100644
--- a/doc/development/logging.md
+++ b/doc/development/logging.md
@@ -284,7 +284,7 @@ creating visualizations in Kibana.
**Note:**
The fields of the context are currently only logged for Sidekiq jobs triggered
through web requests. See the
-[follow-up work](https://gitlab.com/gitlab-com/gl-infra/scalability/issues/68)
+[follow-up work](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/68)
for more information.
## Exception Handling
@@ -358,8 +358,8 @@ end
for most users, but you may need to tweak them in [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab).
1. If you add a new file, submit an issue to the [production
- tracker](https://gitlab.com/gitlab-com/gl-infra/production/issues) or
- a merge request to the [gitlab_fluentd](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd)
+ tracker](https://gitlab.com/gitlab-com/gl-infra/production/-/issues) or
+ a merge request to the [`gitlab_fluentd`](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd)
project. See [this example](https://gitlab.com/gitlab-cookbooks/gitlab_fluentd/-/merge_requests/51/diffs).
1. Be sure to update the [GitLab CE/EE documentation](../administration/logs.md) and the [GitLab.com
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index b51fc681e27..ed6d08a9894 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -193,7 +193,7 @@ in Puma/Unicorn threads.
Often, GitLab needs to communicate with an external service such as Kubernetes
clusters. In this case, it's hard to estimate when the external service finishes
the requested process, for example, if it's a user-owned cluster that's inactive for some reason,
-GitLab might wait for the response forever ([Example](https://gitlab.com/gitlab-org/gitlab/issues/31475)).
+GitLab might wait for the response forever ([Example](https://gitlab.com/gitlab-org/gitlab/-/issues/31475)).
This could result in Puma/Unicorn timeout and should be avoided at all cost.
You should set a reasonable timeout, gracefully handle exceptions and surface the
@@ -210,7 +210,7 @@ as an open transaction basically blocks the release of a PostgreSQL backend conn
For keeping transaction as minimal as possible, please consider using `AfterCommitQueue`
module or `after_commit` AR hook.
-Here is [an example](https://gitlab.com/gitlab-org/gitlab/issues/36154#note_247228859)
+Here is [an example](https://gitlab.com/gitlab-org/gitlab/-/issues/36154#note_247228859)
that one request to Gitaly instance during transaction triggered a P1 issue.
## Eager Loading
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 4cf546173de..7d3d9dac174 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -35,7 +35,7 @@ and post-deployment migrations (`db/post_migrate`) are run after the deployment
## Schema Changes
-Changes to the schema should be commited to `db/structure.sql`. This
+Changes to the schema should be committed to `db/structure.sql`. This
file is automatically generated by Rails, so you normally should not
edit this file by hand. If your migration is adding a column to a
table, that column will be added at the bottom. Please do not reorder
@@ -49,7 +49,7 @@ regenerate a clean `db/structure.sql` for the migrations you're
adding. This script will apply all migrations found in `db/migrate`
or `db/post_migrate`, so if there are any migrations you don't want to
commit to the schema, rename or remove them. If your branch is not
-targetting `master` you can set the `TARGET` environment variable.
+targeting `master` you can set the `TARGET` environment variable.
```shell
# Regenerate schema against `master`
@@ -343,7 +343,7 @@ def up
end
```
-The RuboCop rule generally allows standard Rails migration methods, listed below. This example will cause a rubocop offense:
+The RuboCop rule generally allows standard Rails migration methods, listed below. This example will cause a RuboCop offense:
```ruby
disabled_ddl_transaction!
@@ -367,6 +367,8 @@ migration involves one of the high-traffic tables:
- `users`
- `projects`
- `namespaces`
+- `issues`
+- `merge_requests`
- `ci_pipelines`
- `ci_builds`
- `notes`
@@ -550,6 +552,12 @@ operations that don't require `disable_ddl_transaction!`.
You can read more about adding [foreign key constraints to an existing column](database/add_foreign_key_to_existing_column.md).
+## `NOT NULL` constraints
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/38358) in GitLab 13.0.
+
+See the style guide on [`NOT NULL` constraints](database/not_null_constraints.md) for more information.
+
## Adding Columns With Default Values
With PostgreSQL 11 being the minimum version since GitLab 13.0, adding columns with default values has become much easier and
@@ -559,6 +567,11 @@ Before PostgreSQL 11, adding a column with a default was problematic as it would
have caused a full table rewrite. The corresponding helper `add_column_with_default`
has been deprecated and will be removed in a later release.
+NOTE: **Note:**
+If a backport adding a column with a default value is needed for %12.9 or earlier versions,
+it should use the `add_column_with_default` helper. If a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
+is involved, backporting to %12.9 is contraindicated.
+
## Changing the column default
One might think that changing a default column with `change_column_default` is an
@@ -611,7 +624,7 @@ end
```
If a computed update is needed, the value can be wrapped in `Arel.sql`, so Arel
-treats it as an SQL literal. It's also a required deprecation for [Rails 6](https://gitlab.com/gitlab-org/gitlab/issues/28497).
+treats it as an SQL literal. It's also a required deprecation for [Rails 6](https://gitlab.com/gitlab-org/gitlab/-/issues/28497).
The below example is the same as the one above, but
the value is set to the product of the `bar` and `baz` columns:
@@ -727,6 +740,12 @@ Rails migration example:
add_column(:projects, :foo, :integer, default: 10, limit: 8)
```
+## Strings and the Text data type
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30453) in GitLab 13.0.
+
+See the [text data type](database/strings_and_the_text_data_type.md) style guide for more information.
+
## Timestamp column type
By default, Rails uses the `timestamp` data type that stores timestamp data
diff --git a/doc/development/module_with_instance_variables.md b/doc/development/module_with_instance_variables.md
index b0eab95190b..2229c99c99b 100644
--- a/doc/development/module_with_instance_variables.md
+++ b/doc/development/module_with_instance_variables.md
@@ -30,11 +30,11 @@ People are saying multiple inheritance is bad. Mixing multiple modules with
multiple instance variables scattering everywhere suffer from the same issue.
The same applies to `ActiveSupport::Concern`. See:
[Consider replacing concerns with dedicated classes & composition](
-https://gitlab.com/gitlab-org/gitlab/issues/16270)
+https://gitlab.com/gitlab-org/gitlab/-/issues/16270)
There's also a similar idea:
[Use decorators and interface segregation to solve overgrowing models problem](
-https://gitlab.com/gitlab-org/gitlab/issues/14235)
+https://gitlab.com/gitlab-org/gitlab/-/issues/14235)
Note that `included` doesn't solve the whole issue. They define the
dependencies, but they still allow each modules to talk implicitly via the
diff --git a/doc/development/multi_version_compatibility.md b/doc/development/multi_version_compatibility.md
index aedd5c1ffb7..001517d44ea 100644
--- a/doc/development/multi_version_compatibility.md
+++ b/doc/development/multi_version_compatibility.md
@@ -45,7 +45,7 @@ and set this column to `false`. The old servers were still updating the old colu
that updated the new column from the old one. For the new servers though, they were only updating the new column and that same trigger
was now working against us and setting it back to the wrong value.
-For more information, see [the relevant issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9176).
+For more information, see this [confidential issue](../user/project/issues/confidential_issues.md) `https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9176`.
### Sidebar wasn't loading for some users
diff --git a/doc/development/namespaces_storage_statistics.md b/doc/development/namespaces_storage_statistics.md
index 3065d4f84a2..5207276ba73 100644
--- a/doc/development/namespaces_storage_statistics.md
+++ b/doc/development/namespaces_storage_statistics.md
@@ -25,7 +25,7 @@ by [`Namespaces#with_statistics`](https://gitlab.com/gitlab-org/gitlab/blob/4ab5
Additionally, the pattern that is currently used to update the project statistics
(the callback) doesn't scale adequately. It is currently one of the largest
-[database queries transactions on production](https://gitlab.com/gitlab-org/gitlab/issues/29070)
+[database queries transactions on production](https://gitlab.com/gitlab-org/gitlab/-/issues/29070)
that takes the most time overall. We can't add one more query to it as
it will increase the transaction's length.
@@ -142,7 +142,7 @@ but we refresh them through Sidekiq jobs and in different transactions:
1. Create a second table (`namespace_aggregation_schedules`) with two columns `id` and `namespace_id`.
1. Whenever the statistics of a project changes, insert a row into `namespace_aggregation_schedules`
- We don't insert a new row if there's already one related to the root namespace.
- - Keeping in mind the length of the transaction that involves updating `project_statistics`(<https://gitlab.com/gitlab-org/gitlab/issues/29070>), the insertion should be done in a different transaction and through a Sidekiq Job.
+ - Keeping in mind the length of the transaction that involves updating `project_statistics`(<https://gitlab.com/gitlab-org/gitlab/-/issues/29070>), the insertion should be done in a different transaction and through a Sidekiq Job.
1. After inserting the row, we schedule another worker to be executed async at two different moments:
- One enqueued for immediate execution and another one scheduled in `1.5h` hours.
- We only schedule the jobs, if we can obtain a `1.5h` lease on Redis on a key based on the root namespace ID.
@@ -162,7 +162,7 @@ This implementation has the following benefits:
The only downside of this approach is that namespaces' statistics are updated up to `1.5` hours after the change is done,
which means there's a time window in which the statistics are inaccurate. Because we're still not
-[enforcing storage limits](https://gitlab.com/gitlab-org/gitlab/issues/17664), this is not a major problem.
+[enforcing storage limits](https://gitlab.com/gitlab-org/gitlab/-/issues/17664), this is not a major problem.
## Conclusion
@@ -171,8 +171,8 @@ performant approach of aggregating the root namespaces.
All the details regarding this use case can be found on:
-- <https://gitlab.com/gitlab-org/gitlab-foss/issues/62214>
+- <https://gitlab.com/gitlab-org/gitlab-foss/-/issues/62214>
- Merge Request with the implementation: <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/28996>
Performance of the namespace storage statistics were measured in staging and production (GitLab.com). All results were posted
-on <https://gitlab.com/gitlab-org/gitlab-foss/issues/64092>: No problem has been reported so far.
+on <https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64092>: No problem has been reported so far.
diff --git a/doc/development/new_fe_guide/development/accessibility.md b/doc/development/new_fe_guide/development/accessibility.md
index aa76a9fec07..f76fd72d4dc 100644
--- a/doc/development/new_fe_guide/development/accessibility.md
+++ b/doc/development/new_fe_guide/development/accessibility.md
@@ -1,4 +1,4 @@
-# Accessiblity
+# Accessibility
Using semantic HTML plays a key role when it comes to accessibility.
@@ -37,7 +37,7 @@ In forms we should use the `for` attribute in the label statement:
## Testing
1. On MacOS you can use [VoiceOver](https://www.apple.com/accessibility/mac/vision/) by pressing `cmd+F5`.
-1. On Windows you can use [Narrator](https://www.microsoft.com/en-us/accessibility/windows) by pressing Windows logo key + Ctrl + Enter.
+1. On Windows you can use [Narrator](https://www.microsoft.com/en-us/accessibility/windows) by pressing Windows logo key + Control + Enter.
## Online resources
diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md
index 4a85c04d8cf..7b58a80576a 100644
--- a/doc/development/new_fe_guide/development/performance.md
+++ b/doc/development/new_fe_guide/development/performance.md
@@ -5,11 +5,11 @@
We have a performance dashboard available in one of our [Grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 6 hours. These changes are displayed after a set number of pages are aggregated.
These pages can be found inside a text file in the [`gitlab-build-images` repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [`gitlab.txt`](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt)
-Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing urls of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after the changes are merged into `master`.
+Any frontend engineer can contribute to this dashboard by adding or removing URLs of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning them to a maintainer of the `gitlab-build-images` project. The changes go live on the next scheduled run after they are merged into `master`.
There are 3 recommended high impact metrics to review on each page:
-- [First visual change](https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint)
+- [First visual change](https://web.dev/first-meaningful-paint/)
- [Speed Index](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index)
- [Visual Complete 95%](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index)
diff --git a/doc/development/omnibus.md b/doc/development/omnibus.md
index 28ca500f21a..deaf72d2ecf 100644
--- a/doc/development/omnibus.md
+++ b/doc/development/omnibus.md
@@ -24,7 +24,7 @@ and write it to the Rails root. In the Omnibus packages, reconfigure writes the
The Omnibus design separates code (read-only, under `/opt/gitlab`) from data
(read/write, under `/var/opt/gitlab`) and logs (read/write, under
`/var/log/gitlab`). To make this happen the reconfigure script sets custom
-paths where it can in GitLab config files, and where there are no path
+paths where it can in GitLab configuration files, and where there are no path
settings, it uses symlinks.
For example, `config/gitlab.yml` is treated as data so that file is a symlink.
diff --git a/doc/development/packages.md b/doc/development/packages.md
index 11fc3faf4ab..edf01255e84 100644
--- a/doc/development/packages.md
+++ b/doc/development/packages.md
@@ -10,7 +10,7 @@ not cover the way the code should be written. However, you can find a good examp
by looking at merge requests with Maven and NPM support:
- [NPM registry support](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8673).
-- [Conan repository](https://gitlab.com/gitlab-org/gitlab/issues/8248).
+- [Conan repository](https://gitlab.com/gitlab-org/gitlab/-/issues/8248).
- [Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6607).
- [Instance level endpoint for Maven repository](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/8757)
@@ -61,15 +61,19 @@ The current state of existing package registries availability is:
| Repository Type | Project Level | Group Level | Instance Level |
|-----------------|---------------|-------------|----------------|
| Maven | Yes | Yes | Yes |
-| Conan | No - [open issue](https://gitlab.com/gitlab-org/gitlab/issues/11679) | No - [open issue](https://gitlab.com/gitlab-org/gitlab/issues/11679) | Yes |
-| NPM | No - [open issue](https://gitlab.com/gitlab-org/gitlab/issues/36853) | Yes | No - [open issue](https://gitlab.com/gitlab-org/gitlab/issues/36853) |
+| Conan | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/11679) | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/11679) | Yes |
+| NPM | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/36853) | Yes | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/36853) |
| NuGet | Yes | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/36423) | No |
| PyPI | Yes | No | No |
+| Go | Yes | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/213900) | No - [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/213902) |
+| Composer | Yes | Yes | No |
NOTE: **Note:** NPM is currently a hybrid of the instance level and group level.
It is using the top-level group or namespace as the defining portion of the name
(for example, `@my-group-name/my-package-name`).
+NOTE: **Note:** The Composer package naming scope is instance level.
+
### Naming conventions
To avoid name conflict for instance-level endpoints you will need to define a package naming convention
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 2e73161a11f..69ad524675d 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -8,7 +8,7 @@ consistent performance of GitLab.
The process of solving performance problems is roughly as follows:
1. Make sure there's an issue open somewhere (for example, on the GitLab CE issue
- tracker), and create one if there is not. See [#15607](https://gitlab.com/gitlab-org/gitlab-foss/issues/15607) for an example.
+ tracker), and create one if there is not. See [#15607](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/15607) for an example.
1. Measure the performance of the code in a production environment such as
GitLab.com (see the [Tooling](#tooling) section below). Performance should be
measured over a period of _at least_ 24 hours.
@@ -42,6 +42,7 @@ GitLab provides built-in tools to help improve performance and availability:
- [Request Profiling](../administration/monitoring/performance/request_profiling.md).
- [QueryRecoder](query_recorder.md) for preventing `N+1` regressions.
- [Chaos endpoints](chaos_endpoints.md) for testing failure scenarios. Intended mainly for testing availability.
+- [Service measurement](service_measurement.md) for measuring and logging service execution.
GitLab team members can use [GitLab.com's performance monitoring systems](https://about.gitlab.com/handbook/engineering/monitoring/) located at
<https://dashboards.gitlab.net>, this requires you to log in using your
diff --git a/doc/development/permissions.md b/doc/development/permissions.md
index 0772389bf9e..06a4a03de38 100644
--- a/doc/development/permissions.md
+++ b/doc/development/permissions.md
@@ -14,7 +14,7 @@ Groups and projects can have the following visibility levels:
- private (`0`) - an entity is visible only to the approved members of the entity
The visibility level of a group can be changed only if all subgroups and
-subprojects have the same or lower visibility level. (e.g., a group can be set
+sub-projects have the same or lower visibility level. (e.g., a group can be set
to internal only if all subgroups and projects are internal or private).
Visibility levels can be found in the `Gitlab::VisibilityLevel` module.
@@ -41,11 +41,12 @@ can be accessed only by project members by default.
Users can be members of multiple groups and projects. The following access
levels are available (defined in the `Gitlab::Access` module):
-- Guest
-- Reporter
-- Developer
-- Maintainer
-- Owner
+- No access (`0`)
+- Guest (`10`)
+- Reporter (`20`)
+- Developer (`30`)
+- Maintainer (`40`)
+- Owner (`50`)
If a user is the member of both a project and the project parent group, the
higher permission is taken into account for the project.
@@ -56,6 +57,12 @@ can still view the groups and their entities (like epics).
Project membership (where the group membership is already taken into account)
is stored in the `project_authorizations` table.
+CAUTION: **Caution:**
+Due to [an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/219299),
+projects in a personal namespace do not show the Owner (`50`) permission in the
+`project_authorizations` table. Note, however, that [`user.owned_projects`](https://gitlab.com/gitlab-org/gitlab/blob/0d63823b122b11abd2492bca47cc26858eee713d/app/models/user.rb#L906-916)
+is calculated properly.
+
### Confidential issues
Confidential issues can be accessed only by project members who are at least
@@ -92,10 +99,10 @@ into different features like Merge Requests and CI flow.
| Activity level | Resource | Locations |Permission dependency|
|----------------|----------|-----------|-----|
-| View | License information | Dependency list, License Compliance | Can view repo |
-| View | Dependency information | Dependency list, License Compliance | Can view repo |
+| View | License information | Dependency list, License Compliance | Can view repository |
+| View | Dependency information | Dependency list, License Compliance | Can view repository |
| View | Vulnerabilities information | Dependency list | Can view security findings |
-| View | Black/Whitelisted licenses for the project | License Compliance, Merge request | Can view repo |
+| View | Black/Whitelisted licenses for the project | License Compliance, Merge request | Can view repository |
| View | Security findings | Merge Request, CI job page, Pipeline security tab | Can read the project and CI jobs |
| View | Vulnerability feedback | Merge Request | Can read security findings |
| View | Dependency List page | Project | Can access Dependency information |
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index 39ca846c1cc..05b80cdb4a6 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -11,173 +11,15 @@ We're striving to [dogfood](https://about.gitlab.com/handbook/engineering/#dogfo
GitLab [CI/CD features and best-practices](../ci/yaml/README.md)
as much as possible.
-## Stages
+## Overview
-The current stages are:
-
-- `sync`: This stage is used to synchronize changes from <https://gitlab.com/gitlab-org/gitlab> to
- <https://gitlab.com/gitlab-org/gitlab-foss>.
-- `prepare`: This stage includes jobs that prepare artifacts that are needed by
- jobs in subsequent stages.
-- `build-images`: This stage includes jobs that prepare docker images
- that are needed by jobs in subsequent stages or downstream pipelines.
-- `fixtures`: This stage includes jobs that prepare fixtures needed by frontend tests.
-- `test`: This stage includes most of the tests, DB/migration jobs, and static analysis jobs.
-- `post-test`: This stage includes jobs that build reports or gather data from
- the `test` stage's jobs (e.g. coverage, Knapsack metadata etc.).
-- `review-prepare`: This stage includes a job that build the CNG images that are
- later used by the (Helm) Review App deployment (see
- [Review Apps](testing_guide/review_apps.md) for details).
-- `review`: This stage includes jobs that deploy the GitLab and Docs Review Apps.
-- `qa`: This stage includes jobs that perform QA tasks against the Review App
- that is deployed in the previous stage.
-- `post-qa`: This stage includes jobs that build reports or gather data from
- the `qa` stage's jobs (e.g. Review App performance report).
-- `pages`: This stage includes a job that deploys the various reports as
- GitLab Pages (e.g. <https://gitlab-org.gitlab.io/gitlab/coverage-ruby/>,
- <https://gitlab-org.gitlab.io/gitlab/coverage-javascript/>,
- <https://gitlab-org.gitlab.io/gitlab/webpack-report/>).
-
-## Default image
-
-The default image is defined in <https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>.
-
-It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and Graphics Magick.
-
-The images used in our pipelines are configured in the
-[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
-project, which is push-mirrored to <https://dev.gitlab.org/gitlab/gitlab-build-images>
-for redundancy.
-
-The current version of the build images can be found in the
-["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
-
-## Default variables
-
-In addition to the [predefined variables](../ci/variables/predefined_variables.md),
-each pipeline includes default variables defined in
-<https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>.
-
-## Common job definitions
-
-Most of the jobs [extend from a few CI definitions](../ci/yaml/README.md#extends)
-defined in [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/global.gitlab-ci.yml)
-that are scoped to a single [configuration parameter](../ci/yaml/README.md#configuration-parameters).
-
-| Job definitions | Description |
-|------------------|-------------|
-| `.default-tags` | Ensures a job has the `gitlab-org` tag to ensure it's using our dedicated runners. |
-| `.default-retry` | Allows a job to [retry](../ci/yaml/README.md#retry) upon `unknown_failure`, `api_failure`, `runner_system_failure`, `job_execution_timeout`, or `stuck_or_timeout_failure`. |
-| `.default-before_script` | Allows a job to use a default `before_script` definition suitable for Ruby/Rails tasks that may need a database running (e.g. tests). |
-| `.default-cache` | Allows a job to use a default `cache` definition suitable for Ruby/Rails and frontend tasks. |
-| `.use-pg11` | Allows a job to use the `postgres:11.6` and `redis:alpine` services. |
-| `.use-pg11-ee` | Same as `.use-pg11` but also use the `docker.elastic.co/elasticsearch/elasticsearch:6.4.2` services. |
-| `.use-kaniko` | Allows a job to use the `kaniko` tool to build Docker images. |
-| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` environment variable. |
-
-## `workflow:rules`
-
-We're using the [`workflow:rules` keyword](../ci/yaml/README.md#workflowrules) to
-define default rules to determine whether or not a pipeline is created.
-
-These rules are defined in <https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>
-and are as follows:
-
-1. If `$FORCE_GITLAB_CI` is set, create a pipeline.
-1. For merge requests, create a pipeline.
-1. For `master` branch, create a pipeline (this includes on schedules, pushes, merges, etc.).
-1. For tags, create a pipeline.
-1. If `$GITLAB_INTERNAL` isn't set, don't create a pipeline.
-1. For stable, auto-deploy, and security branches, create a pipeline.
-1. For any other cases (e.g. when pushing a branch with no MR for it), no pipeline is created.
-
-## `rules`, `if:` conditions and `changes:` patterns
-
-We're using the [`rules` keyword](../ci/yaml/README.md#rules) extensively.
-
-All `rules` definitions are defined in
-<https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml>,
-then included in individual jobs via [`extends`](../ci/yaml/README.md#extends).
-
-The `rules` definitions are composed of `if:` conditions and `changes:` patterns,
-which are also defined in
-<https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml>
-and included in `rules` definitions via [YAML anchors](../ci/yaml/README.md#anchors)
-
-### `if:` conditions
-
-| `if:` conditions | Description | Notes |
-|------------------|-------------|-------|
-| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success\|manual`), or **not** create a job for forks (by using `when: never`). |
-| `if-not-ee` | Matches if the project isn't EE (i.e. project name isn't `gitlab` or `gitlab-ee`). | Use to create a job only in the FOSS project (by using `when: on_success|manual`), or **not** create a job if the project is EE (by using `when: never`). |
-| `if-not-foss` | Matches if the project isn't FOSS (i.e. project name isn't `gitlab-foss`, `gitlab-ce`, or `gitlabhq`). | Use to create a job only in the EE project (by using `when: on_success|manual`), or **not** create a job if the project is FOSS (by using `when: never`). |
-| `if-default-refs` | Matches if the pipeline is for `master`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs won't be created for branches with this default configuration. |
-| `if-master-refs` | Matches if the current branch is `master`. | |
-| `if-master-or-tag` | Matches if the pipeline is for the `master` branch or for a tag. | |
-| `if-merge-request` | Matches if the pipeline is for a merge request. | |
-| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
-| `if-dot-com-gitlab-org-schedule` | Limits jobs creation to scheduled pipelines for the `gitlab-org` group on GitLab.com. | |
-| `if-dot-com-gitlab-org-master` | Limits jobs creation to the `master` branch for the `gitlab-org` group on GitLab.com. | |
-| `if-dot-com-gitlab-org-merge-request` | Limits jobs creation to merge requests for the `gitlab-org` group on GitLab.com. | |
-| `if-dot-com-gitlab-org-and-security-tag` | Limits job creation to tags for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
-| `if-dot-com-gitlab-org-and-security-merge-request` | Limit jobs creation to merge requests for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
-| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
-| `if-cache-credentials-schedule` | Limits jobs to scheduled pipelines with the `$CI_REPO_CACHE_CREDENTIALS` variable set. | |
-
-### `changes:` patterns
-
-| `changes:` patterns | Description |
-|------------------------------|--------------------------------------------------------------------------|
-| `ci-patterns` | Only create job for CI config-related changes. |
-| `yaml-patterns` | Only create job for YAML-related changes. |
-| `docs-patterns` | Only create job for docs-related changes. |
-| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (i.e. `package.json`, and `yarn.lock`). changes. |
-| `frontend-patterns` | Only create job for frontend-related changes. |
-| `backstage-patterns` | Only create job for backstage-related changes (i.e. Danger, fixtures, RuboCop, specs). |
-| `code-patterns` | Only create job for code-related changes. |
-| `qa-patterns` | Only create job for QA-related changes. |
-| `code-backstage-patterns` | Combination of `code-patterns` and `backstage-patterns`. |
-| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
-| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
-
-## Interruptible jobs pipelines
-
-By default, all jobs are [interruptible](../ci/yaml/README.md#interruptible), except the
-`dont-interrupt-me` job which runs automatically on `master`, and is `manual`
-otherwise.
-
-If you want a running pipeline to finish even if you push new commits to a merge
-request, be sure to start the `dont-interrupt-me` job before pushing.
-
-## PostgreSQL versions testing
-
-### Current versions testing
-
-| Where? | PostgreSQL version |
-| ------ | ------ |
-| MRs | 11 |
-| `master` (non-scheduled pipelines) | 11 |
-| 2-hourly scheduled pipelines | 11 |
-
-### Long-term plan
-
-We follow the [PostgreSQL versions shipped with Omnibus GitLab](https://docs.gitlab.com/omnibus/package-information/postgresql_versions.html):
-
-| PostgreSQL version | 12.10 (April 2020) | 13.0 (May 2020) | 13.1 (June 2020) | 13.2 (July 2020) | 13.3 (August 2020) | 13.4, 13.5 | 13.6 (November 2020) | 14.0 (May 2021?) |
-| ------ | ------------------ | --------------- | ---------------- | ---------------- | ------------------ | ------------ | -------------------- | ---------------- |
-| PG9.6 | MRs/`master`/`2-hour`/`nightly` | - | - | - | - | - | - | - |
-| PG10 | `nightly` | - | - | - | - | - | - | - |
-| PG11 | `master`/`2-hour` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | `nightly` | - |
-| PG12 | - | - | - | - | `master`/`2-hour` | `master`/`2-hour` | MRs/`master`/`2-hour`/`nightly` | `master`/`2-hour` |
-| PG13 | - | - | - | - | - | - | - | MRs/`master`/`2-hour`/`nightly` |
-
-## Pipeline types
+### Pipeline types
Since we use the [`rules:`](../ci/yaml/README.md#rules) and [`needs:`](../ci/yaml/README.md#needs) keywords extensively,
we have four main pipeline types which are described below. Note that an MR that includes multiple types of changes would
have a pipeline that includes jobs from multiple types (e.g. a combination of docs-only and code-only pipelines).
-### Docs-only MR pipeline
+#### Docs-only MR pipeline
Reference pipeline: <https://gitlab.com/gitlab-org/gitlab/pipelines/135236627>
@@ -191,7 +33,7 @@ graph LR
end
```
-### Code-only MR pipeline
+#### Code-only MR pipeline
Reference pipeline: <https://gitlab.com/gitlab-org/gitlab/pipelines/136295694>
@@ -204,11 +46,11 @@ graph RL;
click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
1-2["build-qa-image (3.4 minutes)"];
click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-assets pull-cache (9.06 minutes)"];
+ 1-3["compile-test-assets (9.06 minutes)"];
click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-assets pull-cache as-if-foss (8.35 minutes)"];
+ 1-4["compile-test-assets as-if-foss (8.35 minutes)"];
click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["gitlab:assets:compile pull-cache (22 minutes)"];
+ 1-5["compile-production-assets (22 minutes)"];
click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
1-6["setup-test-env (8.22 minutes)"];
click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
@@ -224,6 +66,8 @@ graph RL;
1-18["kubesec-sast"];
1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
+ 1-21["static-analysis (17 minutes)"];
+ click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
class 1-3 criticalPath;
class 1-6 criticalPath;
@@ -241,8 +85,6 @@ graph RL;
2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
end
- 2_2-1["static-analysis (17 minutes)"];
- click 2_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
2_2-2["frontend-fixtures (17.2 minutes)"];
class 2_2-2 criticalPath;
click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
@@ -252,13 +94,13 @@ graph RL;
click 2_2-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356727&udv=0"
2_2-5["webpack-dev-server (6.1 minutes)"];
click 2_2-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8404303&udv=0"
- subgraph "Needs `setup-test-env` & `compile-assets`";
- 2_2-1 & 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
+ subgraph "Needs `setup-test-env` & `compile-test-assets`";
+ 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
2_2-3 --> 1-6 & 1-4;
end
2_3-1["build-assets-image (2.5 minutes)"];
- subgraph "Needs `gitlab:assets:compile`";
+ subgraph "Needs `compile-production-assets`";
2_3-1 --> 1-5
end
@@ -269,7 +111,7 @@ graph RL;
end
2_5-1["rspec & db jobs (12-22 minutes)"];
- subgraph "Needs `compile-assets`, `setup-test-env`, & `retrieve-tests-metadata`";
+ subgraph "Needs `compile-test-assets`, `setup-test-env`, & `retrieve-tests-metadata`";
2_5-1 --> 1-3 & 1-6 & 1-14;
class 2_5-1 criticalPath;
click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
@@ -304,7 +146,7 @@ graph RL;
end
```
-### Frontend-only MR pipeline
+#### Frontend-only MR pipeline
Reference pipeline: <https://gitlab.com/gitlab-org/gitlab/pipelines/134661039>
@@ -317,11 +159,11 @@ graph RL;
click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
1-2["build-qa-image (3.4 minutes)"];
click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-assets pull-cache (9.06 minutes)"];
+ 1-3["compile-test-assets (9.06 minutes)"];
click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-assets pull-cache as-if-foss (8.35 minutes)"];
+ 1-4["compile-test-assets as-if-foss (8.35 minutes)"];
click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["gitlab:assets:compile pull-cache (22 minutes)"];
+ 1-5["compile-production-assets (22 minutes)"];
click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
1-6["setup-test-env (8.22 minutes)"];
click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
@@ -337,6 +179,8 @@ graph RL;
1-18["kubesec-sast"];
1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
+ 1-21["static-analysis (17 minutes)"];
+ click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
class 1-3 criticalPath;
class 1-5 criticalPath;
@@ -355,8 +199,6 @@ graph RL;
2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
end
- 2_2-1["static-analysis (17 minutes)"];
- click 2_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
2_2-2["frontend-fixtures (17.2 minutes)"];
class 2_2-2 criticalPath;
click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
@@ -366,14 +208,14 @@ graph RL;
click 2_2-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356727&udv=0"
2_2-5["webpack-dev-server (6.1 minutes)"];
click 2_2-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8404303&udv=0"
- subgraph "Needs `setup-test-env` & `compile-assets`";
- 2_2-1 & 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
+ subgraph "Needs `setup-test-env` & `compile-test-assets`";
+ 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
2_2-3 --> 1-6 & 1-4;
end
2_3-1["build-assets-image (2.5 minutes)"];
class 2_3-1 criticalPath;
- subgraph "Needs `gitlab:assets:compile`";
+ subgraph "Needs `compile-production-assets`";
2_3-1 --> 1-5
end
@@ -384,7 +226,7 @@ graph RL;
end
2_5-1["rspec & db jobs (12-22 minutes)"];
- subgraph "Needs `compile-assets`, `setup-test-env, & `retrieve-tests-metadata`";
+  subgraph "Needs `compile-test-assets`, `setup-test-env`, & `retrieve-tests-metadata`";
2_5-1 --> 1-3 & 1-6 & 1-14;
class 2_5-1 criticalPath;
click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
@@ -444,7 +286,7 @@ graph RL;
end
```
-### QA-only MR pipeline
+#### QA-only MR pipeline
Reference pipeline: <https://gitlab.com/gitlab-org/gitlab/pipelines/134645109>
@@ -457,11 +299,11 @@ graph RL;
click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
1-2["build-qa-image (3.4 minutes)"];
click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-assets pull-cache (9.06 minutes)"];
+ 1-3["compile-test-assets (9.06 minutes)"];
click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-assets pull-cache as-if-foss (8.35 minutes)"];
+ 1-4["compile-test-assets as-if-foss (8.35 minutes)"];
click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["gitlab:assets:compile pull-cache (22 minutes)"];
+ 1-5["compile-production-assets (22 minutes)"];
click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
1-6["setup-test-env (8.22 minutes)"];
click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
@@ -477,6 +319,8 @@ graph RL;
1-18["kubesec-sast"];
1-19["nodejs-scan-sast"];
1-20["secrets-sast"];
+ 1-21["static-analysis (17 minutes)"];
+ click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
class 1-5 criticalPath;
end
@@ -487,14 +331,8 @@ graph RL;
2_1-1 --> 1-6;
end
- 2_2-1["static-analysis (17 minutes)"];
- click 2_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
- subgraph "Needs `setup-test-env` & `compile-assets`";
- 2_2-1 --> 1-6 & 1-3;
- end
-
2_3-1["build-assets-image (2.5 minutes)"];
- subgraph "Needs `gitlab:assets:compile`";
+ subgraph "Needs `compile-production-assets`";
2_3-1 --> 1-5
class 2_3-1 criticalPath;
end
@@ -507,24 +345,61 @@ graph RL;
end
```
-## Test jobs
+### `workflow:rules`
+
+We're using the [`workflow:rules` keyword](../ci/yaml/README.md#workflowrules) to
+define default rules to determine whether or not a pipeline is created.
+
+These rules are defined in <https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>
+and are as follows:
+
+1. If `$FORCE_GITLAB_CI` is set, create a pipeline.
+1. For merge requests, create a pipeline.
+1. For `master` branch, create a pipeline (this includes on schedules, pushes, merges, etc.).
+1. For tags, create a pipeline.
+1. If `$GITLAB_INTERNAL` isn't set, don't create a pipeline.
+1. For stable, auto-deploy, and security branches, create a pipeline.
+1. For any other cases (e.g. when pushing a branch with no MR for it), no pipeline is created.
+
+### PostgreSQL versions testing
+
+#### Current versions testing
+
+| Where? | PostgreSQL version |
+| ------ | ------ |
+| MRs | 11 |
+| `master` (non-scheduled pipelines) | 11 |
+| 2-hourly scheduled pipelines | 11 |
+
+#### Long-term plan
+
+We follow the [PostgreSQL versions shipped with Omnibus GitLab](https://docs.gitlab.com/omnibus/package-information/postgresql_versions.html):
+
+| PostgreSQL version / GitLab version | 12.10 (April 2020) | 13.0 (May 2020) | 13.1 (June 2020) | 13.2 (July 2020) | 13.3 (August 2020) | 13.4, 13.5 | 13.6 (November 2020) | 14.0 (May 2021?) |
+| ------ | ------------------ | --------------- | ---------------- | ---------------- | ------------------ | ------------ | -------------------- | ---------------- |
+| PG9.6 | MRs/`master`/`2-hour`/`nightly` | - | - | - | - | - | - | - |
+| PG10 | `nightly` | - | - | - | - | - | - | - |
+| PG11 | `master`/`2-hour` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | `nightly` | - |
+| PG12 | - | - | - | - | `master`/`2-hour` | `master`/`2-hour` | MRs/`master`/`2-hour`/`nightly` | `master`/`2-hour` |
+| PG13 | - | - | - | - | - | - | - | MRs/`master`/`2-hour`/`nightly` |
+
+### Test jobs
Consult [GitLab tests in the Continuous Integration (CI) context](testing_guide/ci.md)
for more information.
-## Review app jobs
+### Review app jobs
Consult the [Review Apps](testing_guide/review_apps.md) dedicated page for more information.
-## As-if-FOSS jobs
+### As-if-FOSS jobs
The `* as-if-foss` jobs allow running GitLab's test suite "as-if-FOSS", meaning as if the jobs would run in the context
of the `gitlab-org/gitlab-foss` project. These jobs are only created in the following cases:
-- `master` commits (pushes and scheduled pipelines).
- `gitlab-org/security/gitlab` merge requests.
- Merge requests which include `RUN AS-IF-FOSS` in their title.
-- Merge requests that changes the CI config.
+- Merge requests that change the CI configuration.
The `* as-if-foss` jobs have the `FOSS_ONLY='1'` variable set and get their EE-specific
folders removed before the tests start running.
@@ -532,9 +407,41 @@ folders removed before the tests start running.
The intent is to ensure that a change won't introduce a failure once the `gitlab-org/gitlab` project is synced to
the `gitlab-org/gitlab-foss` project.
-## Pre-clone step
+## Performance
-The `gitlab-org/gitlab` project on GitLab.com uses a [pre-clone step](https://gitlab.com/gitlab-org/gitlab/issues/39134)
+### Interruptible pipelines
+
+By default, all jobs are [interruptible](../ci/yaml/README.md#interruptible), except the
+`dont-interrupt-me` job which runs automatically on `master`, and is `manual`
+otherwise.
+
+If you want a running pipeline to finish even if you push new commits to a merge
+request, be sure to start the `dont-interrupt-me` job before pushing.
+
+### Caching strategy
+
+1. All jobs must only pull caches by default.
+1. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
+1. We currently have 6 different caches defined in
+ [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/global.gitlab-ci.yml),
+ with fixed keys:
+ - `.rails-cache`.
+ - `.static-analysis-cache`.
+    - `.qa-cache`.
+ - `.yarn-cache`.
+ - `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
+1. Only 6 specific jobs, running in 2-hourly scheduled pipelines, are pushing (i.e. updating) to the caches:
+ - `update-rails-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-static-analysis-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-qa-cache`, defined in [`.gitlab/ci/qa.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/qa.gitlab-ci.yml).
+ - `update-assets-compile-production-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+ - `update-assets-compile-test-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+ - `update-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
+1. These jobs will run in merge requests whose title includes `UPDATE CACHE`.
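+
+The following sketch illustrates the pull/push split described above, using `.rails-cache` and `update-rails-cache` as examples. The cache key, paths, stage, and script are assumptions for illustration; the actual definitions live in the CI configuration files linked above:
+
+```yaml
+# Hidden cache template with a fixed key; jobs extending it only pull the cache.
+.rails-cache:
+  cache:
+    key: "rails-v1"          # assumed key
+    paths:
+      - vendor/ruby/         # assumed path
+    policy: pull
+
+# Scheduled job that is allowed to update (push) the cache.
+update-rails-cache:
+  extends: .rails-cache
+  stage: prepare             # assumed stage
+  script:
+    - bundle install         # assumed script
+  cache:
+    policy: pull-push
+```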
+
+### Pre-clone step
+
+The `gitlab-org/gitlab` project on GitLab.com uses a [pre-clone step](https://gitlab.com/gitlab-org/gitlab/-/issues/39134)
to seed the project with a recent archive of the repository. This is done for
several reasons:
@@ -597,6 +504,124 @@ credentials are stored in the 1Password GitLab.com Production vault.
Note that this bucket should be located in the same continent as the
runner, or [network egress charges will apply](https://cloud.google.com/storage/pricing).
+## CI configuration internals
+
+### Stages
+
+The current stages are:
+
+- `sync`: This stage is used to synchronize changes from <https://gitlab.com/gitlab-org/gitlab> to
+ <https://gitlab.com/gitlab-org/gitlab-foss>.
+- `prepare`: This stage includes jobs that prepare artifacts that are needed by
+ jobs in subsequent stages.
+- `build-images`: This stage includes jobs that prepare Docker images
+ that are needed by jobs in subsequent stages or downstream pipelines.
+- `fixtures`: This stage includes jobs that prepare fixtures needed by frontend tests.
+- `test`: This stage includes most of the tests, DB/migration jobs, and static analysis jobs.
+- `post-test`: This stage includes jobs that build reports or gather data from
+ the `test` stage's jobs (e.g. coverage, Knapsack metadata etc.).
+- `review-prepare`: This stage includes a job that builds the CNG images that are
+ later used by the (Helm) Review App deployment (see
+ [Review Apps](testing_guide/review_apps.md) for details).
+- `review`: This stage includes jobs that deploy the GitLab and Docs Review Apps.
+- `qa`: This stage includes jobs that perform QA tasks against the Review App
+ that is deployed in the previous stage.
+- `post-qa`: This stage includes jobs that build reports or gather data from
+ the `qa` stage's jobs (e.g. Review App performance report).
+- `pages`: This stage includes a job that deploys the various reports as
+ GitLab Pages (e.g. [`coverage-ruby`](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/),
+ [`coverage-javascript`](https://gitlab-org.gitlab.io/gitlab/coverage-javascript/),
+  and [`webpack-report`](https://gitlab-org.gitlab.io/gitlab/webpack-report/)).
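+
+Expressed as CI configuration, this ordering corresponds to a `stages:` list along the following lines (a sketch based on the list above; the canonical declaration lives in the project's CI configuration):
+
+```yaml
+stages:
+  - sync
+  - prepare
+  - build-images
+  - fixtures
+  - test
+  - post-test
+  - review-prepare
+  - review
+  - qa
+  - post-qa
+  - pages
+```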
+
+### Default image
+
+The default image is defined in [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml).
+
+It includes Ruby, Go, Git, Git LFS, Chrome, Node, Yarn, PostgreSQL, and GraphicsMagick.
+
+The images used in our pipelines are configured in the
+[`gitlab-org/gitlab-build-images`](https://gitlab.com/gitlab-org/gitlab-build-images)
+project, which is push-mirrored to [`gitlab/gitlab-build-images`](https://dev.gitlab.org/gitlab/gitlab-build-images)
+for redundancy.
+
+The current version of the build images can be found in the
+["Used by GitLab section"](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/.gitlab-ci.yml).
+
+### Default variables
+
+In addition to the [predefined variables](../ci/variables/predefined_variables.md),
+each pipeline includes default variables defined in
+<https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab-ci.yml>.
+
+### Common job definitions
+
+Most of the jobs [extend from a few CI definitions](../ci/yaml/README.md#extends)
+defined in [`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/global.gitlab-ci.yml)
+that are scoped to a single [configuration parameter](../ci/yaml/README.md#configuration-parameters).
+
+| Job definitions | Description |
+|------------------|-------------|
+| `.default-tags` | Ensures a job has the `gitlab-org` tag so that it uses our dedicated runners. |
+| `.default-retry` | Allows a job to [retry](../ci/yaml/README.md#retry) upon `unknown_failure`, `api_failure`, `runner_system_failure`, `job_execution_timeout`, or `stuck_or_timeout_failure`. |
+| `.default-before_script` | Allows a job to use a default `before_script` definition suitable for Ruby/Rails tasks that may need a database running (e.g. tests). |
+| `.rails-cache` | Allows a job to use a default `cache` definition suitable for Ruby/Rails tasks. |
+| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
+| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
+| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
+| `.use-pg11` | Allows a job to use the `postgres:11.6` and `redis:alpine` services. |
+| `.use-pg11-ee` | Same as `.use-pg11` but also uses the `docker.elastic.co/elasticsearch/elasticsearch:6.4.2` service. |
+| `.use-kaniko` | Allows a job to use the `kaniko` tool to build Docker images. |
+| `.as-if-foss` | Simulates the FOSS project by setting the `FOSS_ONLY='1'` environment variable. |
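+
+As an illustration, a job can combine several of these definitions through [`extends`](../ci/yaml/README.md#extends). The job name and script below are made up for the example; only the template names come from the table above:
+
+```yaml
+# Hypothetical job combining the common definitions via `extends`.
+rspec-example:
+  extends:
+    - .default-tags
+    - .default-retry
+    - .default-before_script
+    - .rails-cache
+    - .use-pg11
+  stage: test
+  script:
+    - bundle exec rspec
+```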
+
+### `rules`, `if:` conditions and `changes:` patterns
+
+We're using the [`rules` keyword](../ci/yaml/README.md#rules) extensively.
+
+All `rules` definitions are defined in
+<https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml>,
+then included in individual jobs via [`extends`](../ci/yaml/README.md#extends).
+
+The `rules` definitions are composed of `if:` conditions and `changes:` patterns,
+which are also defined in
+[`rules.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/rules.gitlab-ci.yml)
+and included in `rules` definitions via [YAML anchors](../ci/yaml/README.md#anchors).
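+
+The sketch below shows how an `if:` condition and a `changes:` pattern can be combined into a `rules` definition through YAML anchors. The anchor names match the tables below, but the condition value, the file patterns, and the consuming definition are simplified assumptions:
+
+```yaml
+# Simplified sketch; see rules.gitlab-ci.yml for the real definitions.
+.if-merge-request: &if-merge-request
+  if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
+
+.code-patterns: &code-patterns
+  - "{app,lib,spec}/**/*"    # assumed patterns
+
+.rails:rules:example:
+  rules:
+    - <<: *if-merge-request
+      changes: *code-patterns
+```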
+
+#### `if:` conditions
+
+| `if:` conditions | Description | Notes |
+|------------------|-------------|-------|
+| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success\|manual`), or **not** create a job for forks (by using `when: never`). |
+| `if-not-ee` | Matches if the project isn't EE (i.e. project name isn't `gitlab` or `gitlab-ee`). | Use to create a job only in the FOSS project (by using `when: on_success\|manual`), or **not** create a job if the project is EE (by using `when: never`). |
+| `if-not-foss` | Matches if the project isn't FOSS (i.e. project name isn't `gitlab-foss`, `gitlab-ce`, or `gitlabhq`). | Use to create a job only in the EE project (by using `when: on_success\|manual`), or **not** create a job if the project is FOSS (by using `when: never`). |
+| `if-default-refs` | Matches if the pipeline is for `master`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs won't be created for branches with this default configuration. |
+| `if-master-refs` | Matches if the current branch is `master`. | |
+| `if-master-or-tag` | Matches if the pipeline is for the `master` branch or for a tag. | |
+| `if-merge-request` | Matches if the pipeline is for a merge request. | |
+| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
+| `if-dot-com-gitlab-org-schedule` | Limits job creation to scheduled pipelines for the `gitlab-org` group on GitLab.com. | |
+| `if-dot-com-gitlab-org-master` | Limits job creation to the `master` branch for the `gitlab-org` group on GitLab.com. | |
+| `if-dot-com-gitlab-org-merge-request` | Limits job creation to merge requests for the `gitlab-org` group on GitLab.com. | |
+| `if-dot-com-gitlab-org-and-security-tag` | Limits job creation to tags for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
+| `if-dot-com-gitlab-org-and-security-merge-request` | Limits job creation to merge requests for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
+| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
+| `if-cache-credentials-schedule` | Limits jobs to scheduled pipelines with the `$CI_REPO_CACHE_CREDENTIALS` variable set. | |
+
+#### `changes:` patterns
+
+| `changes:` patterns | Description |
+|------------------------------|--------------------------------------------------------------------------|
+| `ci-patterns` | Only create job for CI config-related changes. |
+| `yaml-patterns` | Only create job for YAML-related changes. |
+| `docs-patterns` | Only create job for docs-related changes. |
+| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (i.e. `package.json` and `yarn.lock`). |
+| `frontend-patterns` | Only create job for frontend-related changes. |
+| `backstage-patterns` | Only create job for backstage-related changes (i.e. Danger, fixtures, RuboCop, specs). |
+| `code-patterns` | Only create job for code-related changes. |
+| `qa-patterns` | Only create job for QA-related changes. |
+| `code-backstage-patterns` | Combination of `code-patterns` and `backstage-patterns`. |
+| `code-qa-patterns` | Combination of `code-patterns` and `qa-patterns`. |
+| `code-backstage-qa-patterns` | Combination of `code-patterns`, `backstage-patterns`, and `qa-patterns`. |
+
---
[Return to Development documentation](README.md)
diff --git a/doc/development/polling.md b/doc/development/polling.md
index bc178f8cb21..47cfc32d934 100644
--- a/doc/development/polling.md
+++ b/doc/development/polling.md
@@ -56,4 +56,4 @@ For more information see:
- [`Poll-Interval` header](fe_guide/performance.md#realtime-components)
- [RFC 7232](https://tools.ietf.org/html/rfc7232)
-- [ETag proposal](https://gitlab.com/gitlab-org/gitlab-foss/issues/26926)
+- [ETag proposal](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/26926)
diff --git a/doc/development/profiling.md b/doc/development/profiling.md
index 2589329fc83..2cab6750b9b 100644
--- a/doc/development/profiling.md
+++ b/doc/development/profiling.md
@@ -107,9 +107,13 @@ Recorded transactions can be found by navigating to `/sherlock/transactions`.
## Bullet
-Bullet is a Gem that can be used to track down N+1 query problems. Because
-Bullet adds quite a bit of logging noise it's disabled by default. To enable
-Bullet, set the environment variable `ENABLE_BULLET` to a non-empty value before
+Bullet is a Gem that can be used to track down N+1 query problems. The Bullet section is
+displayed in the [performance bar](../administration/monitoring/performance/performance_bar.md).
+
+![Bullet](img/bullet_v13_0.png)
+
+Because Bullet adds quite a bit of logging noise, the logging is disabled by default.
+To enable the logging, set the environment variable `ENABLE_BULLET` to a non-empty value before
starting GitLab. For example:
```shell
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index 96acce5e4df..3fa6ba40e6c 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -147,7 +147,7 @@ To run several tests inside one directory:
### Speed up tests, Rake tasks, and migrations
-[Spring](https://github.com/rails/spring) is a Rails application preloader. It
+[Spring](https://github.com/rails/spring) is a Rails application pre-loader. It
speeds up development by keeping your application running in the background so
you don't need to boot it every time you run a test, Rake task or migration.
@@ -203,9 +203,9 @@ To generate a sprite file containing all the Emoji, run:
bundle exec rake gemojione:sprite
```
-If new emoji are added, the spritesheet may change size. To compensate for
-such changes, first generate the `emoji.png` spritesheet with the above Rake
-task, then check the dimensions of the new spritesheet and update the
+If new emoji are added, the sprite sheet may change size. To compensate for
+such changes, first generate the `emoji.png` sprite sheet with the above Rake
+task, then check the dimensions of the new sprite sheet and update the
`SPRITESHEET_WIDTH` and `SPRITESHEET_HEIGHT` constants accordingly.
## Update project templates
diff --git a/doc/development/redis.md b/doc/development/redis.md
index a8b7b84bb65..6782ea96448 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -18,7 +18,7 @@ database.
Redis is a flat namespace with no hierarchy, which means we must pay attention
to key names to avoid collisions. Typically we use colon-separated elements to
-provide a semblence of structure at application level. An example might be
+provide a semblance of structure at application level. An example might be
`projects:1:somekey`.
Although we split our Redis usage into three separate purposes, and those may
@@ -34,7 +34,7 @@ invalidated by a name change, it is better to include a hook that will expire
the entry, instead of relying on the key changing.
We don't use [Redis Cluster](https://redis.io/topics/cluster-tutorial) at the
-moment, but may wish to in the future: [#118820](https://gitlab.com/gitlab-org/gitlab/issues/118820).
+moment, but may wish to in the future: [#118820](https://gitlab.com/gitlab-org/gitlab/-/issues/118820).
This imposes an additional constraint on naming: where GitLab is performing
operations that require several keys to be held on the same Redis server - for
diff --git a/doc/development/refactoring_guide/index.md b/doc/development/refactoring_guide/index.md
index 4bd9d0e9c11..a9ff9556aed 100644
--- a/doc/development/refactoring_guide/index.md
+++ b/doc/development/refactoring_guide/index.md
@@ -21,7 +21,7 @@ Pinning tests help you ensure that you don't unintentionally change the output o
Leaving in the commits for adding and removing pins helps others checkout and verify the result of the test.
-```bash
+```shell
AAAAAA Add pinning tests to funky_foo
BBBBBB Refactor funky_foo into nice_foo
CCCCCC Remove pinning tests for funky_foo
@@ -31,13 +31,13 @@ Then you can leave a reviewer instructions on how to run the pinning test in you
> First revert the commit which removes the pin.
>
-> ```bash
+> ```shell
> git revert --no-commit $(git log -1 --grep="Remove pinning test for funky_foo" --pretty=format:"%H")
> ```
>
> Then run the test
>
-> ```bash
+> ```shell
> yarn run jest path/to/funky_foo_pin_spec.js
> ```
@@ -69,7 +69,7 @@ expect(cleanForSnapshot(wrapper.element)).toMatchSnapshot();
### Examples
-- [Pinning test in a haml to vue refactor](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/27691#pinning-tests)
+- [Pinning test in a Haml to Vue refactor](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/27691#pinning-tests)
- [Pinning test in isolating a bug](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/32198#note_212736225)
- [Pinning test in refactoring dropdown](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28173)
- [Pinning test in refactoring vulnerability_details.vue](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25830/commits)
diff --git a/doc/development/reference_processing.md b/doc/development/reference_processing.md
index ef1f2f5269c..79377533966 100644
--- a/doc/development/reference_processing.md
+++ b/doc/development/reference_processing.md
@@ -27,7 +27,7 @@ transform them into structured links to the resources they represent.
For example, the class
[`Banzai::Filter::IssueReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/issue_reference_filter.rb)
is responsible for handling references to issues, such as
-`gitlab-org/gitlab#123` and `https://gitlab.com/gitlab-org/gitlab/issues/200048`.
+`gitlab-org/gitlab#123` and `https://gitlab.com/gitlab-org/gitlab/-/issues/200048`.
All reference filters are instances of [`HTML::Pipeline::Filter`](https://www.rubydoc.info/github/jch/html-pipeline/v1.11.0/HTML/Pipeline/Filter),
and inherit (often indirectly) from [`Banzai::Filter::ReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/reference_filter.rb).
diff --git a/doc/development/renaming_features.md b/doc/development/renaming_features.md
index 6a196921a5d..daf437027db 100644
--- a/doc/development/renaming_features.md
+++ b/doc/development/renaming_features.md
@@ -16,7 +16,7 @@ The more of the following that are true, the more likely you should choose the f
- You are not confident the new name is permanent.
- The feature is susceptible to bugs (large, complex, needing refactor, etc).
-- The renaming will be difficult to review (feature spans many lines/files/repos).
+- The renaming will be difficult to review (feature spans many lines, files, or repositories).
- The renaming will be disruptive in some way (database table renaming).
## Consider a façade-first approach
diff --git a/doc/development/routing.md b/doc/development/routing.md
index 97837a917a2..e164431853f 100644
--- a/doc/development/routing.md
+++ b/doc/development/routing.md
@@ -63,11 +63,19 @@ gitlab-org/gitlab/-/settings/repository
gitlab-org/serverless/runtimes/-/settings/repository
```
-Currently, only some project routes are placed under the `/-/` scope. However,
-you can help us migrate more of them! To migrate project routes:
+## Migrating unscoped routes
+
+Currently, the majority of routes are placed under the `/-/` scope. However,
+you can help us migrate the rest of them! To migrate routes:
1. Modify existing routes by adding `-` scope.
1. Add redirects for legacy routes by using `Gitlab::Routing.redirect_legacy_paths`.
1. Create a technical debt issue to remove deprecated routes in later releases.
To get started, see an [example merge request](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/28435).
+
+## Useful links
+
+- [Routing improvements master plan](https://gitlab.com/gitlab-org/gitlab/-/issues/215362)
+- [Scoped routing explained](https://gitlab.com/gitlab-org/gitlab/-/issues/214217)
+- [Removal of deprecated routes](https://gitlab.com/gitlab-org/gitlab/-/issues/28848)
diff --git a/doc/development/scalability.md b/doc/development/scalability.md
index ba25e169d66..c0c26df88b5 100644
--- a/doc/development/scalability.md
+++ b/doc/development/scalability.md
@@ -52,10 +52,10 @@ maintain and support one database with tables with many rows.
There are two ways to deal with this:
-- Partioning. Locally split up tables data.
+- Partitioning. Locally split up tables data.
- Sharding. Distribute data across multiple databases.
-Partioning is a built-in PostgreSQL feature and requires minimal changes
+Partitioning is a built-in PostgreSQL feature and requires minimal changes
in the application. However, it [requires PostgreSQL
11](https://www.2ndquadrant.com/en/blog/partitioning-evolution-postgresql-11/).
@@ -93,12 +93,12 @@ systems.
#### Database size
A recent [database checkup shows a breakdown of the table sizes on
-GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8022#master-1022016101-8).
+GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8022#master-1022016101-8).
Since `merge_request_diff_files` contains over 1 TB of data, we will want to
reduce/eliminate this table first. GitLab has support for [storing diffs in
object storage](../administration/merge_request_diffs.md), which we [will
want to do on
-GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7356).
+GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7356).
#### High availability
@@ -116,7 +116,7 @@ database has reached the target time.
On GitLab.com, Consul and Patroni work together to coordinate failovers with
the read replicas. [Omnibus ships with repmgr instead of
-Consul](../administration/high_availability/database.md).
+Patroni](../administration/postgresql/replication_and_failover.md).
#### Load-balancing
@@ -147,10 +147,10 @@ limitation:
- Run multiple PgBouncer instances.
- Use a multi-threaded connection pooler (e.g.
- [Odyssey](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7776).
+  [Odyssey](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7776)).
On some Linux systems, it's possible to run [multiple PgBouncer instances on
-the same port](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4796).
+the same port](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4796).
On GitLab.com, we run multiple PgBouncer instances on different ports to
avoid saturating a single core.
@@ -246,9 +246,9 @@ lifting of many activities, including:
- Processing CI builds and pipelines.
The full list of jobs can be found in the
-[app/workers](https://gitlab.com/gitlab-org/gitlab/tree/master/app/workers)
+[`app/workers`](https://gitlab.com/gitlab-org/gitlab/tree/master/app/workers)
and
-[ee/app/workers](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/workers)
+[`ee/app/workers`](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/workers)
directories in the GitLab code base.
#### Runaway Queues
@@ -275,13 +275,13 @@ in a timely manner:
- Redistribute/gerrymander Sidekiq processes by queue
types. Long-running jobs (e.g. relating to project import) can often
squeeze out jobs that run fast (e.g. delivering e-mail). [This technique
- was used in to optimize our existing Sidekiq deployment](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/7219#note_218019483).
+  was used to optimize our existing Sidekiq deployment](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7219#note_218019483).
- Optimize jobs. Eliminating unnecessary work, reducing network calls
(e.g. SQL, Gitaly, etc.), and optimizing processor time can yield significant
benefits.
From the Sidekiq logs, it's possible to see which jobs run the most
-frequently and/or take the longest. For example, theis Kibana
+frequently and/or take the longest. For example, these Kibana
visualizations show the jobs that consume the most total time:
![Most time-consuming Sidekiq jobs](img/sidekiq_most_time_consuming_jobs.png)
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index b473c310647..912b8fbf043 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -23,7 +23,7 @@ For more information about the permission model at GitLab, please see [the GitLa
### Impact
Improper permission handling can have significant impacts on the security of an application.
-Some situations may reveal [sensitive data](https://gitlab.com/gitlab-com/gl-infra/production/issues/477) or allow a malicious actor to perform [harmful actions](https://gitlab.com/gitlab-org/gitlab/issues/8180).
+Some situations may reveal [sensitive data](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/477) or allow a malicious actor to perform [harmful actions](https://gitlab.com/gitlab-org/gitlab/-/issues/8180).
The overall impact depends heavily on what resources can be accessed or modified improperly.
A common vulnerability when permission checks are missing is called [IDOR](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/05-Authorization_Testing/04-Testing_for_Insecure_Direct_Object_References) for Insecure Direct Object References.
@@ -48,11 +48,11 @@ Be careful to **also test [visibility levels](https://gitlab.com/gitlab-org/gitl
Some example of well implemented access controls and tests:
-1. [example1](https://dev.gitlab.org/gitlab/gitlab-ee/merge_requests/710/diffs?diff_id=13750#af40ef0eaae3c1e018809e1d88086e32bccaca40_43_43)
+1. [example1](https://dev.gitlab.org/gitlab/gitlab-ee/-/merge_requests/710/diffs?diff_id=13750#af40ef0eaae3c1e018809e1d88086e32bccaca40_43_43)
1. [example2](https://dev.gitlab.org/gitlab/gitlabhq/-/merge_requests/2511/diffs#ed3aaab1510f43b032ce345909a887e5b167e196_142_155)
1. [example3](https://dev.gitlab.org/gitlab/gitlabhq/-/merge_requests/3170/diffs?diff_id=17494)
-**NB:** any input from development team is welcome, e.g. about rubocop rules.
+**NB:** any input from the development team is welcome, e.g. about RuboCop rules.
## Regular Expressions guidelines
@@ -67,7 +67,7 @@ matches = re.findall("^bar$",text)
print(matches)
```
-The Python example will output an emtpy array (`[]`) as the matcher considers the whole string `foo\nbar` including the newline (`\n`). In contrast Ruby's Regular Expression engine acts differently:
+The Python example will output an empty array (`[]`) as the matcher considers the whole string `foo\nbar` including the newline (`\n`). In contrast Ruby's Regular Expression engine acts differently:
```ruby
text = "foo\nbar"
@@ -82,7 +82,7 @@ This Ruby Regex specialty can have security impact, as often regular expressions
#### Examples
-GitLab specific examples can be found [here](https://gitlab.com/gitlab-org/gitlab/issues/36029#note_251262187) and [there](https://gitlab.com/gitlab-org/gitlab/issues/33569).
+GitLab specific examples can be found [here](https://gitlab.com/gitlab-org/gitlab/-/issues/36029#note_251262187) and [there](https://gitlab.com/gitlab-org/gitlab/-/issues/33569).
Another example would be this fictional Ruby On Rails controller:
@@ -111,7 +111,7 @@ or controls the regular expression (regex) used, and is able to enter user input
### Impact
-The resource, for example Unicorn, Puma, or Sidekiq, can be made to hang as it takes a long time to evaulate the bad regex match.
+The resource, for example Unicorn, Puma, or Sidekiq, can be made to hang as it takes a long time to evaluate the bad regex match.
### Examples
@@ -140,9 +140,9 @@ class Email < ApplicationRecord
GitLab has `Gitlab::UntrustedRegexp` which internally uses the [`re2`](https://github.com/google/re2/wiki/Syntax) library.
By utilizing `re2`, we get a strict limit on total execution time, and a smaller subset of available regex features.
-All user-provided regexes should use `Gitlab::UntrustedRegexp`.
+All user-provided regular expressions should use `Gitlab::UntrustedRegexp`.
-For other regexes, here are a few guidelines:
+For other regular expressions, here are a few guidelines:
- Remove unnecessary backtracking.
- Avoid nested quantifiers if possible.
@@ -180,11 +180,11 @@ have been reported to GitLab include:
- Network mapping of internal services
- This can help an attacker gather information about internal services
- that could be used in further attacks. [More details](https://gitlab.com/gitlab-org/gitlab-foss/issues/51327).
+ that could be used in further attacks. [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51327).
- Reading internal services, including cloud service metadata.
- The latter can be a serious problem, because an attacker can obtain keys that allow control of the victim's cloud infrastructure. (This is also a good reason
- to give only necessary privileges to the token.). [More details](https://gitlab.com/gitlab-org/gitlab-foss/issues/51490).
-- When combined with CRLF vulnerability, remote code execution. [More details](https://gitlab.com/gitlab-org/gitlab-foss/issues/41293)
+ to give only necessary privileges to the token.). [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/51490).
+- When combined with CRLF vulnerability, remote code execution. [More details](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41293)
### When to Consider
@@ -206,14 +206,14 @@ The [GitLab::HTTP](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab
`Outbound requests` options that allow instance administrators to block all internal connections, or limit the networks to which connections can be made.
In some cases, it has been possible to configure GitLab::HTTP as the HTTP
-connection library for 3rd-party gems. This is preferrable over re-implementing
+connection library for 3rd-party gems. This is preferable to re-implementing
the mitigations for a new feature.
- [More details](https://dev.gitlab.org/gitlab/gitlabhq/-/merge_requests/2530/diffs)
#### Feature-specific Mitigations
-For situtions in which a whitelist or GitLab:HTTP cannot be used, it will be necessary to implement mitigations directly in the feature. It is best to validate the destination IP addresses themselves, not just domain names, as DNS can be controlled by the attacker. Below are a list of mitigations that should be implemented.
+For situations in which an allowlist or GitLab::HTTP cannot be used, it will be necessary to implement mitigations directly in the feature. It is best to validate the destination IP addresses themselves, not just domain names, as DNS can be controlled by the attacker. Below is a list of mitigations that should be implemented.
**Important Note:** There are many tricks to bypass common SSRF validations. If feature-specific mitigations are necessary, they should be reviewed by the AppSec team, or a developer who has worked on SSRF mitigations previously.
@@ -230,7 +230,7 @@ For situtions in which a whitelist or GitLab:HTTP cannot be used, it will be nec
- For HTTP connections: Disable redirects or validate the redirect destination
- To mitigate DNS rebinding attacks, validate and use the first IP address received
-See [url_blocker_spec.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/url_blocker_spec.rb) for examples of SSRF payloads
+See [`url_blocker_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/url_blocker_spec.rb) for examples of SSRF payloads
## XSS guidelines
@@ -276,10 +276,10 @@ For any and all input fields, ensure to define expectations on the type/format o
- Treat all user input as untrusted.
- Based on the expectations you [defined above](#setting-expectations):
- Validate the [input size limits](https://youtu.be/2VFavqfDS6w?t=7582).
- - Validate the input using a [whitelist approach](https://youtu.be/2VFavqfDS6w?t=7816) to only allow characters through which you are expecting to receive for the field.
+  - Validate the input using an [allowlist approach](https://youtu.be/2VFavqfDS6w?t=7816) to only allow the characters that you expect to receive for the field.
- Input which fails validation should be **rejected**, and not sanitized.
-Note that blacklists should be avoided, as it is near impossible to block all [variations of XSS](https://owasp.org/www-community/xss-filter-evasion-cheatsheet).
+Note that denylists should be avoided, as it is near impossible to block all [variations of XSS](https://owasp.org/www-community/xss-filter-evasion-cheatsheet).
#### Output encoding
@@ -308,7 +308,7 @@ Once you've [determined when and where](#setting-expectations) the user submitte
#### Content Security Policy
- [Content Security Policy](https://www.youtube.com/watch?v=2VFavqfDS6w&t=12991s)
-- [Use nonce-based Content Security Policy for inline JavaScript](https://gitlab.com/gitlab-org/gitlab-foss/issues/65330)
+- [Use nonce-based Content Security Policy for inline JavaScript](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/65330)
#### Free form input fields
@@ -323,7 +323,7 @@ Once you've [determined when and where](#setting-expectations) the user submitte
### Select examples of past XSS issues affecting GitLab
-- [Stored XSS in user status](https://gitlab.com/gitlab-org/gitlab-foss/issues/55320)
+- [Stored XSS in user status](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/55320)
### Developer Training
@@ -345,5 +345,5 @@ Once you've [determined when and where](#setting-expectations) the user submitte
- [Input Validation](https://youtu.be/2VFavqfDS6w?t=7489)
- [Validate size limits](https://youtu.be/2VFavqfDS6w?t=7582)
- [RoR model validators](https://youtu.be/2VFavqfDS6w?t=7636)
-- [Whitelist input validation](https://youtu.be/2VFavqfDS6w?t=7816)
+- [Allowlist input validation](https://youtu.be/2VFavqfDS6w?t=7816)
- [Content Security Policy](https://www.youtube.com/watch?v=2VFavqfDS6w&t=12991s)
diff --git a/doc/development/service_measurement.md b/doc/development/service_measurement.md
new file mode 100644
index 00000000000..e53864c8640
--- /dev/null
+++ b/doc/development/service_measurement.md
@@ -0,0 +1,81 @@
+# GitLab Developers Guide to service measurement
+
+You can enable service measurement in order to debug any slow service's execution time, number of SQL calls, garbage collection stats, memory usage, etc.
+
+## Measuring module
+
+The measuring module is a tool that allows you to measure a service's execution and log the following:
+
+- Service class name
+- Execution time
+- Number of SQL calls
+- Detailed `gc` stats and diffs
+- RSS memory usage
+- Server worker ID
+
+The measuring module will log these measurements into a structured log called [`service_measurement.log`](../administration/logs.md#service_measurementlog),
+as a single entry for each service execution.
+
+NOTE: **Note:**
+For GitLab.com, `service_measurement.log` is ingested in Elasticsearch and Kibana as part of our monitoring solution.
+
+## How to use it
+
+The measuring module allows you to easily measure and log the execution of any service,
+by prepending `Measurable` to any service class, on the last line of the file in which the class resides.
+
+For example, to prepend a module into the `DummyService` class, you would use the following approach:
+
+```ruby
+class DummyService
+ def execute
+ # ...
+ end
+end
+
+DummyService.prepend(Measurable)
+```
+
+When you are prepending a module from the `EE` namespace with EE features, you need to prepend `Measurable` after prepending the `EE` module.
+
+This way, `Measurable` will be at the bottom of the ancestor chain, in order to measure execution of `EE` features as well:
+
+```ruby
+class DummyService
+ def execute
+ # ...
+ end
+end
+
+DummyService.prepend_if_ee('EE::DummyService')
+DummyService.prepend(Measurable)
+```
+
+### Log additional attributes
+
+If you need to log additional attributes, you can define `extra_attributes_for_measurement` in the service class:
+
+```ruby
+def extra_attributes_for_measurement
+ {
+ project_path: @project.full_path,
+ user: current_user.name
+ }
+end
+```
+
+NOTE: **Note:**
+Once the measurement module is injected in the service, it will be behind a generic feature flag.
+To actually use it, you need to enable measuring for the desired service by enabling the feature flag.
+
+### Enabling measurement using feature flags
+
+In the following example, the `:gitlab_service_measuring_projects_import_service`
+[feature flag](feature_flags/development.md#enabling-a-feature-flag-in-development) is used to enable the measuring feature
+for `Projects::ImportService`.
+
+From ChatOps:
+
+```shell
+/chatops run feature set gitlab_service_measuring_projects_import_service true
+```
diff --git a/doc/development/shell_scripting_guide/index.md b/doc/development/shell_scripting_guide/index.md
index 99cd1b9d67f..c04a4e90e59 100644
--- a/doc/development/shell_scripting_guide/index.md
+++ b/doc/development/shell_scripting_guide/index.md
@@ -106,7 +106,7 @@ and ignore files starting with a period. To override this, use `-ln` flag to spe
NOTE: **Note:**
This is a work in progress.
-It is an [ongoing effort](https://gitlab.com/gitlab-org/gitlab-foss/issues/64016) to evaluate different tools for the
+It is an [ongoing effort](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64016) to evaluate different tools for the
automated testing of shell scripts (like [BATS](https://github.com/bats-core/bats-core)).
## Code Review
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
index a5d0eecdc7b..7ae3c9e9de2 100644
--- a/doc/development/sidekiq_style_guide.md
+++ b/doc/development/sidekiq_style_guide.md
@@ -78,7 +78,7 @@ As a general rule, a worker can be considered idempotent if:
- It can safely run multiple times with the same arguments.
- Application side-effects are expected to happen only once
- (or side-effects of a second run are not impactful).
+  (or the side-effects of a second run have no additional effect).
A good example of that would be a cache expiration worker.
@@ -147,7 +147,7 @@ GitLab doesn't skip jobs scheduled in the future, as we assume that
the state will have changed by the time the job is scheduled to
execute.
-More [deduplication strategies have been suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/issues/195). If you are implementing a worker that
+More [deduplication strategies have been suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/195). If you are implementing a worker that
could benefit from a different strategy, please comment in the issue.
If the automatic deduplication were to cause issues in certain
@@ -156,7 +156,7 @@ named `disable_<queue name>_deduplication`. For example to disable
deduplication for the `AuthorizedProjectsWorker`, we would enable the
feature flag `disable_authorized_projects_deduplication`.
-From chatops:
+From ChatOps:
```shell
/chatops run feature set disable_authorized_projects_deduplication true
@@ -272,10 +272,10 @@ annotated with the `worker_resource_boundary` method.
Most workers tend to spend most of their time blocked, wait on network responses
from other services such as Redis, PostgreSQL, and Gitaly. Since Sidekiq is a
-multithreaded environment, these jobs can be scheduled with high concurrency.
+multi-threaded environment, these jobs can be scheduled with high concurrency.
Some workers, however, spend large amounts of time _on-CPU_ running logic in
-Ruby. Ruby MRI does not support true multithreading - it relies on the
+Ruby. Ruby MRI does not support true multi-threading - it relies on the
[GIL](https://thoughtbot.com/blog/untangling-ruby-threads#the-global-interpreter-lock)
to greatly simplify application development by only allowing one section of Ruby
code in a process to run at a time, no matter how many cores the machine
@@ -395,7 +395,7 @@ in the default execution mode - using
does not account for weights.
As we are [moving towards using `sidekiq-cluster` in
-Core](https://gitlab.com/gitlab-org/gitlab/issues/34396), newly-added
+Core](https://gitlab.com/gitlab-org/gitlab/-/issues/34396), newly-added
workers do not need to have weights specified. They can simply use the
default weight, which is 1.
@@ -427,7 +427,7 @@ isn't picked up by the cops. In any case, please leave a code-comment
pointing to which context will be used when disabling the cops.
When you do provide objects to the context, please make sure that the
-route for namespaces and projects is preloaded. This can be done using
+route for namespaces and projects is pre-loaded. This can be done using
the `.with_route` scope defined on all `Routable`s.
### Cron-Workers
@@ -518,6 +518,34 @@ job needs to be scheduled with.
The `context_proc` which needs to return a hash with the context
information for the job.
+## Arguments logging
+
+When [`SIDEKIQ_LOG_ARGUMENTS`](../administration/troubleshooting/sidekiq.md#log-arguments-to-sidekiq-jobs)
+is enabled, Sidekiq job arguments will be logged.
+
+By default, the only arguments logged are numeric arguments, because
+arguments of other types could contain sensitive information. To
+override this, use `loggable_arguments` inside a worker with the indexes
+of the arguments to be logged. (Numeric arguments do not need to be
+specified here.)
+
+For example:
+
+```ruby
+class MyWorker
+ include ApplicationWorker
+
+ loggable_arguments 1, 3
+
+ # object_id will be logged as it's numeric
+ # string_a will be logged due to the loggable_arguments call
+ # string_b will be filtered from logs
+ # string_c will be logged due to the loggable_arguments call
+ def perform(object_id, string_a, string_b, string_c)
+ end
+end
+```
+
## Tests
Each Sidekiq worker must be tested using RSpec, just like any other class. These
@@ -537,18 +565,74 @@ possible situations:
### Changing the arguments for a worker
-Jobs need to be backwards- and forwards-compatible between consecutive versions
-of the application.
+Jobs need to be backward and forward compatible between consecutive versions
+of the application. Adding or removing an argument may cause problems
+during deployment before all Rails and Sidekiq nodes have the updated code.
+
+#### Remove an argument
+
+**Do not remove arguments from the `perform` function.** Instead, use the
+following approach:
+
+1. Provide a default value (usually `nil`) and use a comment to mark the
+   argument as deprecated.
+1. Stop using the argument in `perform_async`.
+1. Ignore the value in the worker class, but do not remove it until the next
+ major release.
+
+In the following example, if you want to remove `arg2`, first set a `nil` default value,
+and then update locations where `ExampleWorker.perform_async` is called.
+
+```ruby
+class ExampleWorker
+ def perform(object_id, arg1, arg2 = nil)
+ # ...
+ end
+end
+```
+
+#### Add an argument
+
+There are two options for safely adding new arguments to Sidekiq workers:
+
+1. Set up a [multi-step deployment](#multi-step-deployment) in which the new argument is first added to the worker.
+1. Use a [parameter hash](#parameter-hash) for additional arguments. This is perhaps the most flexible option.
+
+##### Multi-step deployment
+
+This approach requires multiple merge requests and for the first merge request
+to be merged and deployed before additional changes are merged.
-This can be done by following this process:
+1. In an initial merge request, add the argument to the worker with a default
+ value:
-1. **Do not remove arguments from the `perform` function.**. Instead, use the
- following approach
- 1. Provide a default value (usually `nil`) and use a comment to mark the
- argument as deprecated
- 1. Stop using the argument in `perform_async`.
- 1. Ignore the value in the worker class, but do not remove it until the next
- major release.
+ ```ruby
+ class ExampleWorker
+ def perform(object_id, new_arg = nil)
+ # ...
+ end
+ end
+ ```
+
+1. Merge and deploy the worker with the new argument.
+1. In a further merge request, update `ExampleWorker.perform_async` calls to
+ use the new argument.
+
+##### Parameter hash
+
+This approach will not require multiple deployments if an existing worker already
+utilizes a parameter hash.
+
+1. Use a parameter hash in the worker to allow for future flexibility:
+
+ ```ruby
+ class ExampleWorker
+ def perform(object_id, params = {})
+ # ...
+ end
+ end
+ ```
### Removing workers
diff --git a/doc/development/telemetry/index.md b/doc/development/telemetry/index.md
index 32f63d5221e..aee16e4049a 100644
--- a/doc/development/telemetry/index.md
+++ b/doc/development/telemetry/index.md
@@ -1,3 +1,9 @@
+---
+stage: Growth
+group: Telemetry
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Telemetry Guide
At GitLab, we collect telemetry for the purpose of helping us build a better GitLab. Data about how GitLab is used is collected to better understand what parts of GitLab needs improvement and what features to build next. Telemetry also helps our team better understand the reasons why people use GitLab and with this knowledge we are able to make better product decisions.
@@ -19,11 +25,11 @@ Telemetry Guide:
1. [What is Usage Ping](usage_ping.md#what-is-usage-ping)
1. [Usage Ping payload](usage_ping.md#usage-ping-payload)
- 1. [Disabling Usage Ping](usage_ping.md#disabling-usage-ping)
+ 1. [Disable Usage Ping](usage_ping.md#disable-usage-ping)
1. [Usage Ping request flow](usage_ping.md#usage-ping-request-flow)
1. [How Usage Ping works](usage_ping.md#how-usage-ping-works)
1. [Implementing Usage Ping](usage_ping.md#implementing-usage-ping)
- 1. [Developing and testing usage ping](usage_ping.md#developing-and-testing-usage-ping)
+ 1. [Developing and testing Usage Ping](usage_ping.md#developing-and-testing-usage-ping)
[Snowplow Guide](snowplow.md)
@@ -44,27 +50,27 @@ More useful links:
## Our tracking tools
-In this section we will explain the six different technologies we use to gather product usage data.
+We use several different technologies to gather product usage data.
-**Snowplow JS (Frontend)**
+### Snowplow JS (Frontend)
Snowplow is an enterprise-grade marketing and product analytics platform which helps track the way users engage with our website and application. [Snowplow JS](https://github.com/snowplow/snowplow/wiki/javascript-tracker) is a frontend tracker for client-side events.
-**Snowplow Ruby (Backend)**
+### Snowplow Ruby (Backend)
Snowplow is an enterprise-grade marketing and product analytics platform which helps track the way users engage with our website and application. [Snowplow Ruby](https://github.com/snowplow/snowplow/wiki/ruby-tracker) is a backend tracker for server-side events.
-**Usage Ping**
+### Usage Ping
Usage Ping is a method for GitLab Inc to collect usage data on a GitLab instance. Usage Ping is primarily composed of row counts for different tables in the instance’s database. By comparing these counts month over month (or week over week), we can get a rough sense for how an instance is using the different features within the product. This high-level data is used to help our product, support, and sales teams.
-Read more about how this works in the [Usage Ping guide](usage_ping.md)
+For more details, read the [Usage Ping](usage_ping.md) guide.
-**Database import**
+### Database import
Database imports are full imports of data into GitLab's data warehouse. For GitLab.com, the PostgreSQL database is loaded into Snowflake data warehouse every 6 hours. For more details, see the [data team handbook](https://about.gitlab.com/handbook/business-ops/data-team/#extract-and-load).
-**Log system**
+### Log system
System logs are the application logs generated from running the GitLab Rails application. For more details, see the [log system](../../administration/logs.md) and [logging infrastructure](https://gitlab.com/gitlab-com/runbooks/tree/master/logging/doc#logging-infrastructure-overview).
@@ -72,63 +78,63 @@ System logs are the application logs generated from running the GitLab Rails app
Our different tracking tools allow us to track different types of events. The event types and examples of what data can be tracked are outlined below.
-| Event Type | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Log system |
-| ------ | ------ | ------ | ------ | ------ | ------ |
-| Database counts | ❌ | ❌ | ✅ | ✅ | ❌ |
-| Pageview events | ✅ | ✅ | ❌ | ❌ | ❌ |
-| UI events | ✅ | ❌ | ❌ | ❌ | ❌ |
-| CRUD and API events | ❌ | ✅ | ❌ | ❌ | ❌ |
-| Event funnels | ✅ | ✅ | ❌ | ❌ | ❌ |
-| PostgreSQL Data | ❌ | ❌ | ❌ | ✅ | ❌ |
-| Logs | ❌ | ❌ | ❌ | ❌ | ✅ |
-| External services | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Event Type | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Log system |
+|---------------------|------------------------|-------------------------|---------------------|---------------------|---------------------|
+| Database counts | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** | **{check-circle}** | **{dotted-circle}** |
+| Pageview events | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
+| UI events | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
+| CRUD and API events | **{dotted-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
+| Event funnels | **{check-circle}** | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
+| PostgreSQL Data | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** | **{dotted-circle}** |
+| Logs | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{check-circle}** |
+| External services | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** | **{dotted-circle}** |
-**Database counts**
+### Database counts
-- How many Projects have been created by unique users
-- How many users logged in the past 28 day
+- Number of Projects created by unique users
+- Number of users who logged in during the past 28 days
-Database counts are row counts for different tables in an instance’s database. These are SQL count queries which have been filtered, grouped, or aggregated which provide high level usage data. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql)
+Database counts are row counts for different tables in an instance’s database. These are SQL count queries which have been filtered, grouped, or aggregated to provide high-level usage data. The full list of available tables can be found in [`structure.sql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql).
-**Pageview events**
+### Pageview events
-- How many sessions visited the /dashboard/groups page
+- Number of sessions that visited the `/dashboard/groups` page
-**UI Events**
+### UI Events
-- How many sessions clicked on a button or link
-- How many sessions closed a modal
+- Number of sessions that clicked on a button or link
+- Number of sessions that closed a modal
UI events are any interface-driven actions from the browser including click data.
-**CRUD or API events**
+### CRUD or API events
-- How many Git pushes were made
-- How many GraphQL queries were made
-- How many requests were made to a Rails action or controller.
+- Number of Git pushes
+- Number of GraphQL queries
+- Number of requests to a Rails action or controller
-These are backend events that include the creation, read, update, deletion of records and other events that might be triggered from layers that aren't necessarily only available in the interface.
+These are backend events that include the creation, read, update, deletion of records, and other events that might be triggered from layers other than those available in the interface.
-**Event funnels**
+### Event funnels
-- How many sessions performed action A, B, then C
-- What is our conversion rate from step A to B?
+- Number of sessions that performed action A, B, then C
+- Conversion rate from step A to B
-**PostgreSQL data**
+### PostgreSQL data
-These are raw database records which can be explored using business intelligence tools like Sisense. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql)
+These are raw database records which can be explored using business intelligence tools like Sisense. The full list of available tables can be found in [structure.sql](https://gitlab.com/gitlab-org/gitlab/-/blob/master/db/structure.sql).
-**Logs**
+### Logs
These are raw logs such as the [Production logs](../../administration/logs.md#production_jsonlog), [API logs](../../administration/logs.md#api_jsonlog), or [Sidekiq logs](../../administration/logs.md#sidekiqlog). See the [overview of Logging Infrastructure](https://gitlab.com/gitlab-com/runbooks/tree/master/logging/doc#logging-infrastructure-overview) for more details.
-**External services**
+### External services
These are external services a GitLab instance interacts with such as an [external storage provider](../../administration/static_objects_external_storage.md) or an [external container registry](../../administration/packages/container_registry.md#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). These services must be able to send data back into a GitLab instance for data to be tracked.
## Telemetry systems overview
-The systems overview is a simplified diagram showing the interactions between GitLab Inc and self-managed nstances.
+The systems overview is a simplified diagram showing the interactions between GitLab Inc and self-managed instances.
![Telemetry_Overview](../img/telemetry_system_overview.png)
@@ -140,7 +146,7 @@ For Telemetry purposes, GitLab Inc has three major components:
1. [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/data-infrastructure/): This contains everything managed by our data team including Sisense Dashboards for visualization, Snowflake for Data Warehousing, incoming data sources such as PostgreSQL Pipeline and S3 Bucket, and lastly our data collectors [GitLab.com's Snowplow Collector](https://about.gitlab.com/handbook/engineering/infrastructure/library/snowplow/) and GitLab's Versions Application.
1. GitLab.com: This is the production GitLab application which is made up of a Client and Server. On the Client or browser side, a Snowplow JS Tracker (Frontend) is used to track client-side events. On the Server or application side, a Snowplow Ruby Tracker (Backend) is used to track server-side events. The server also contains Usage Ping which leverages a PostgreSQL database and a Redis in-memory data store to report on usage data. Lastly, the server also contains System Logs which are generated from running the GitLab application.
-1. [Monitoring infrastructure](https://about.gitlab.com/handbook/engineering/monitoring/): This is the infrastructure used to ensure GitLab.com is operating smoothly. System Logs are sent from GitLab.com to our monitoring infrastructure and collected by a FluentD collector. From FluentD, logs are either sent to long term Google Cloud Services cold storage via Stackdriver, or, they are sent to our Elastic Cluster via Cloud Pub/Sub which can be explored in real-time using Kibana
+1. [Monitoring infrastructure](https://about.gitlab.com/handbook/engineering/monitoring/): This is the infrastructure used to ensure GitLab.com is operating smoothly. System Logs are sent from GitLab.com to our monitoring infrastructure and collected by a FluentD collector. From FluentD, logs are either sent to long term Google Cloud Services cold storage via Stackdriver, or, they are sent to our Elastic Cluster via Cloud Pub/Sub which can be explored in real-time using Kibana.
### Self-managed
@@ -151,15 +157,15 @@ For Telemetry purposes, self-managed instances have two major components:
### Differences between GitLab Inc and Self-managed
-As shown by the orange lines, on GitLab.com Snowplow JS, Snowplow Ruby, Usage Ping, and PostgreSQL database imports all flow into GitLab Inc's data fnfrastructure. However, on self-managed, only Usage Ping flows into GitLab Inc's data infrastructure.
+As shown by the orange lines, on GitLab.com Snowplow JS, Snowplow Ruby, Usage Ping, and PostgreSQL database imports all flow into GitLab Inc's data infrastructure. However, on self-managed, only Usage Ping flows into GitLab Inc's data infrastructure.
As shown by the green lines, on GitLab.com system logs flow into GitLab Inc's monitoring infrastructure. On self-managed, there are no logs sent to GitLab Inc's monitoring infrastructure.
The differences between GitLab.com and self-managed are summarized below:
-| Environment | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Logs system |
-| ------ | ------ | ------ | ------ | ------ | ------ |
-| GitLab.com | ✅ | ✅ | ✅ | ✅ | ✅ |
-| Self-Managed | ❌(1) | ❌(1) | ✅ | ❌ | ❌ |
+| Environment | Snowplow JS (Frontend) | Snowplow Ruby (Backend) | Usage Ping | Database import | Logs system |
+|--------------|------------------------|-------------------------|--------------------|---------------------|---------------------|
+| GitLab.com | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** | **{check-circle}** |
+| Self-Managed | **{dotted-circle}**(1) | **{dotted-circle}**(1) | **{check-circle}** | **{dotted-circle}** | **{dotted-circle}** |
Note (1): Snowplow JS and Snowplow Ruby are available on self-managed, however, the Snowplow Collector endpoint is set to a self-managed Snowplow Collector which GitLab Inc does not have access to.
diff --git a/doc/development/telemetry/snowplow.md b/doc/development/telemetry/snowplow.md
index aeaad6e5624..b7090ee4d20 100644
--- a/doc/development/telemetry/snowplow.md
+++ b/doc/development/telemetry/snowplow.md
@@ -1,3 +1,9 @@
+---
+stage: Growth
+group: Telemetry
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
# Snowplow Guide
This guide provides details about how Snowplow works. It includes the following sections:
@@ -40,7 +46,7 @@ From [Snowplow's documentation](https://github.com/snowplow/snowplow), Snowplow
## Snowplow schema
-We currently have many definitions of Snowplow's schema. We have an active issue to [standardize this schema](https://gitlab.com/gitlab-org/gitlab/issues/207930) including the following definitions:
+We currently have many definitions of Snowplow's schema. We have an active issue to [standardize this schema](https://gitlab.com/gitlab-org/gitlab/-/issues/207930) including the following definitions:
- Frontend and backend taxonomy as listed below
- [Feature instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy)
@@ -121,7 +127,7 @@ Below is an example of `data-track-*` attributes assigned to a button:
/>
```
-Event listeners are bound at the document level to handle click events on or within elements with these data attributes. This allows for them to be properly handled on rerendering and changes to the DOM, but it's important to know that because of the way these events are bound, click events shouldn't be stopped from propagating up the DOM tree. If for any reason click events are being stopped from propagating, you'll need to implement your own listeners and follow the instructions in [Tracking in raw JavaScript](#tracking-in-raw-javascript).
+Event listeners are bound at the document level to handle click events on or within elements with these data attributes. This allows them to be properly handled on re-rendering and changes to the DOM, but it's important to know that because of the way these events are bound, click events shouldn't be stopped from propagating up the DOM tree. If for any reason click events are being stopped from propagating, you'll need to implement your own listeners and follow the instructions in [Tracking in raw JavaScript](#tracking-in-raw-javascript).
Below is a list of supported `data-track-*` attributes:
@@ -213,7 +219,7 @@ button.addEventListener('click', () => {
### Tests and test helpers
-In Jest particularly in vue tests, you can use the following:
+In Jest, particularly in Vue tests, you can use the following:
```javascript
import { mockTracking } from 'helpers/tracking_helper';
@@ -330,10 +336,10 @@ Snowplow Inspector Chrome Extension is a browser extension for testing frontend
Snowplow Micro is a very small version of a full Snowplow data collection pipeline: small enough that it can be launched by a test suite. Events can be recorded into Snowplow Micro just as they can a full Snowplow pipeline. Micro then exposes an API that can be queried.
-Snowplow Micro is a docker-based solution for testing frontend and backend events in a local development environment. You need to modify GDK using the instructions below to set this up.
+Snowplow Micro is a Docker-based solution for testing frontend and backend events in a local development environment. You need to modify GDK using the instructions below to set this up.
- Read [Introducing Snowplow Micro](https://snowplowanalytics.com/blog/2019/07/17/introducing-snowplow-micro/)
-- Look at the [Snowplow Micro repo](https://github.com/snowplow-incubator/snowplow-micro)
+- Look at the [Snowplow Micro repository](https://github.com/snowplow-incubator/snowplow-micro)
- Watch our [installation guide recording](https://www.youtube.com/watch?v=OX46fo_A0Ag)
1. Install [Snowplow Micro](https://github.com/snowplow-incubator/snowplow-micro)
diff --git a/doc/development/telemetry/usage_ping.md b/doc/development/telemetry/usage_ping.md
index e9b959eaa96..0f438e02772 100644
--- a/doc/development/telemetry/usage_ping.md
+++ b/doc/development/telemetry/usage_ping.md
@@ -1,19 +1,17 @@
-# Usage Ping Guide
+---
+stage: Growth
+group: Telemetry
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
-> - [Introduced][ee-557] in GitLab Enterprise Edition 8.10.
-> - More statistics [were added][ee-735] in GitLab Enterprise Edition 8.12.
-> - [Moved to GitLab Core][ce-23361] in 9.1.
-> - More statistics [were added][ee-6602] in GitLab Ultimate 11.2.
+# Usage Ping Guide
-This guide provides a details about how usage ping works. It includes the following sections:
+> - Introduced in GitLab Enterprise Edition 8.10.
+> - More statistics were added in GitLab Enterprise Edition 8.12.
+> - Moved to GitLab Core in 9.1.
+> - More statistics were added in GitLab Ultimate 11.2.
-1. [What is Usage Ping](#what-is-usage-ping)
-1. [Usage Ping payload](#usage-ping-payload)
-1. [Disabling Usage Ping](#disabling-usage-ping)
-1. [Usage Ping request flow](#usage-ping-request-flow)
-1. [How Usage Ping works](#how-usage-ping-works)
-1. [Implementing Usage Ping](#implementing-usage-ping)
-1. [Developing and testing usage ping](#developing-and-testing-usage-ping)
+This guide describes Usage Ping's purpose and how it's implemented.
For more information about Telemetry, see:
@@ -27,237 +25,50 @@ More useful links:
- [Data for Product Managers](https://about.gitlab.com/handbook/business-ops/data-team/data-for-product-managers/)
- [Data Infrastructure](https://about.gitlab.com/handbook/business-ops/data-team/data-infrastructure/)
-## What is Usage Ping
+## What is Usage Ping?
-- GitLab sends a weekly payload containing usage data to GitLab Inc. The usage ping uses high-level data to help our product, support, and sales teams. It does not send any project names, usernames, or any other specific data. The information from the usage ping is not anonymous, it is linked to the hostname of the instance. Sending usage ping is optional, and any instance can disable analytics.
-- The usage data is primarily composed of row counts for different tables in the instance’s database. By comparing these counts month over month (or week over week), we can get a rough sense for how an instance is using the different features within the product.
-- Usage ping is important to GitLab as we use it to calculate our and Stage Monthly Active Users (SMAU) which helps us measure the success of our stages and features.
+- GitLab sends a weekly payload containing usage data to GitLab Inc. Usage Ping provides high-level data to help our product, support, and sales teams. It does not send any project names, usernames, or any other specific data. The information from the usage ping is not anonymous; it is linked to the hostname of the instance. Sending usage ping is optional, and any instance can disable analytics.
+- The usage data is primarily composed of row counts for different tables in the instance’s database. By comparing these counts month over month (or week over week), we can get a rough sense for how an instance is using the different features within the product. In addition to counts, other facts
+ that help us classify and understand GitLab installations are collected.
+- Usage ping is important to GitLab as we use it to calculate our Stage Monthly Active Users (SMAU), which helps us measure the success of our stages and features.
- Once usage ping is enabled, GitLab will gather data from the other instances and will be able to show usage statistics of your instance to your users.
-### Why Should We Enable Usage Ping?
+### Why should we enable Usage Ping?
-- The main purpose of Usage Ping is to build a better GitLab. Data about how GitLab is used is collected to better understand feature/stage adoption and usage, which helps us understand how GitLab is adding value and helps our team better understand the reasons why people use GitLab and with this knowledge we are able to make better product decisions.
+- The main purpose of Usage Ping is to build a better GitLab. Data about how GitLab is used is collected to better understand feature/stage adoption and usage, which helps us understand how GitLab is adding value and helps our team better understand the reasons why people use GitLab. With this knowledge, we're able to make better product decisions.
- As a benefit of having the usage ping active, GitLab lets you analyze the users’ activities over time of your GitLab installation.
- As a benefit of having the usage ping active, GitLab provides you with The DevOps Score, which gives you an overview of your entire instance’s adoption of Concurrent DevOps from planning to monitoring.
- You will get better, more proactive support. (assuming that our TAMs and support organization used the data to deliver more value)
- You will get insight and advice into how to get the most value out of your investment in GitLab. Wouldn't you want to know that a number of features or values are not being adopted in your organization?
- You get a report that illustrates how you compare against other similar organizations (anonymized), with specific advice and recommendations on how to improve your DevOps processes.
+- Usage Ping is enabled by default. To disable it, see [Disable Usage Ping](#disable-usage-ping).
### Limitations
-- Usage Ping does not track frontend events things like page views, link clicks, or user sessions and only focuses on aggregated backend events.
+- Usage Ping does not track frontend events such as page views, link clicks, or user sessions; it only focuses on aggregated backend events.
- Because of these limitations we recommend instrumenting your products with Snowplow for more detailed analytics on GitLab.com and use Usage Ping to track aggregated backend events on self-managed.
## Usage Ping payload
You can view the exact JSON payload sent to GitLab Inc. in the administration panel. To view the payload:
-1. Navigate to the **Admin Area > Settings > Metrics and profiling**.
+1. Navigate to **Admin Area > Settings > Metrics and profiling**.
1. Expand the **Usage statistics** section.
1. Click the **Preview payload** button.
-Here is an example of the payload structure
-
-``` json
-{
- "uuid": "0000000-0000-0000-0000-000000000000",
- "hostname": "example.com",
- "version": "12.10.0-pre",
- "installation_type": "omnibus-gitlab",
- "active_user_count": 999,
- "recorded_at": "2020-04-17T07:43:54.162+00:00",
- "edition": "EEU",
- "license_md5": "00000000000000000000000000000000",
- "license_id": null,
- "historical_max_users": 999,
- "licensee": {
- "Name": "ABC, Inc.",
- "Email": "email@example.com",
- "Company": "ABC, Inc."
- },
- "license_user_count": 999,
- "license_starts_at": "2020-01-01",
- "license_expires_at": "2021-01-01",
- "license_plan": "ultimate",
- "license_add_ons": {
- },
- "license_trial": false,
- "counts": {
- "assignee_lists": 999,
- "boards": 999,
- "ci_builds": 999,
- ...
- },
- "container_registry_enabled": true,
- "dependency_proxy_enabled": false,
- "gitlab_shared_runners_enabled": true,
- "gravatar_enabled": true,
- "influxdb_metrics_enabled": true,
- "ldap_enabled": false,
- "mattermost_enabled": false,
- "omniauth_enabled": true,
- "prometheus_metrics_enabled": false,
- "reply_by_email_enabled": "incoming+%{key}@incoming.gitlab.com",
- "signup_enabled": true,
- "web_ide_clientside_preview_enabled": true,
- "ingress_modsecurity_enabled": true,
- "projects_with_expiration_policy_disabled": 999,
- "projects_with_expiration_policy_enabled": 999,
- ...
- "elasticsearch_enabled": true,
- "license_trial_ends_on": null,
- "geo_enabled": false,
- "git": {
- "version": {
- "major": 2,
- "minor": 26,
- "patch": 1
- }
- },
- "gitaly": {
- "version": "12.10.0-rc1-93-g40980d40",
- "servers": 56,
- "filesystems": [
- "EXT_2_3_4"
- ]
- },
- "gitlab_pages": {
- "enabled": true,
- "version": "1.17.0"
- },
- "database": {
- "adapter": "postgresql",
- "version": "9.6.15"
- },
- "app_server": {
- "type": "console"
- },
- "avg_cycle_analytics": {
- "issue": {
- "average": 999,
- "sd": 999,
- "missing": 999
- },
- "plan": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "code": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "test": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "review": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "staging": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "production": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "total": 999
- },
- "usage_activity_by_stage": {
- "configure": {
- "project_clusters_enabled": 999,
- ...
- },
- "create": {
- "merge_requests": 999,
- ...
- },
- "manage": {
- "events": 999,
- ...
- },
- "monitor": {
- "clusters": 999,
- ...
- },
- "package": {
- "projects_with_packages": 999
- },
- "plan": {
- "issues": 999,
- ...
- },
- "release": {
- "deployments": 999,
- ...
- },
- "secure": {
- "user_container_scanning_jobs": 999,
- ...
- },
- "verify": {
- "ci_builds": 999,
- ...
- }
- },
- "usage_activity_by_stage_monthly": {
- "configure": {
- "project_clusters_enabled": 999,
- ...
- },
- "create": {
- "merge_requests": 999,
- ...
- },
- "manage": {
- "events": 999,
- ...
- },
- "monitor": {
- "clusters": 999,
- ...
- },
- "package": {
- "projects_with_packages": 999
- },
- "plan": {
- "issues": 999,
- ...
- },
- "release": {
- "deployments": 999,
- ...
- },
- "secure": {
- "user_container_scanning_jobs": 999,
- ...
- },
- "verify": {
- "ci_builds": 999,
- ...
- }
- }
-}
-```
+For an example payload, see [Example Usage Ping payload](#example-usage-ping-payload).
-## Disabling usage ping
+## Disable Usage Ping
-The usage ping is opt-out. If you want to deactivate this feature, go to the Settings page of your administration panel and uncheck the Usage Ping checkbox.
+To disable Usage Ping in the GitLab UI, go to the **Settings** page of your administration panel and uncheck the **Usage Ping** checkbox.
-To disable the usage ping and prevent it from being configured in future through the administration panel, Omnibus installs can set the following in [`gitlab.rb`](https://docs.gitlab.com/omnibus/settings/configuration.html#configuration-options):
+To disable Usage Ping and prevent it from being configured in the future through the administration panel, Omnibus installs can set the following in [`gitlab.rb`](https://docs.gitlab.com/omnibus/settings/configuration.html#configuration-options):
```ruby
gitlab_rails['usage_ping_enabled'] = false
```
-And source installs can set the following in `gitlab.yml`:
+Source installations can set the following in `gitlab.yml`:
```yaml
production: &base
@@ -267,9 +78,9 @@ production: &base
usage_ping_enabled: false
```
-## Usage Ping Request Flow
+## Usage Ping request flow
-The following example shows a basic request/response flow between a GitLab Instance, the Versions Application, the License Application, Salesforce, GitLab's S3 Bucket, GitLab's Snowflake Data Warehouse, and Sisense.:
+The following example shows a basic request/response flow between a GitLab instance, the Versions Application, the License Application, Salesforce, GitLab's S3 Bucket, GitLab's Snowflake Data Warehouse, and Sisense:
```mermaid
sequenceDiagram
@@ -303,34 +114,40 @@ sequenceDiagram
## How Usage Ping works
1. The Usage Ping [cron job](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/gitlab_usage_ping_worker.rb#L30) is set in Sidekiq to run weekly.
-1. When the cron job runs, it calls [GitLab::UsageData.to_json](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
-1. GitLab::UsageData.to_json [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
-1. The response of all methods calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in GitLab::UsageData.to_json.
+1. When the cron job runs, it calls [`GitLab::UsageData.to_json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
+1. `GitLab::UsageData.to_json` [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
+1. The response of all methods calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in `GitLab::UsageData.to_json`.
1. The JSON payload is then [posted to the Versions application]( https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L20).
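+
+Conceptually, the cascade-and-merge step above can be pictured with the following hedged sketch. It is not the actual implementation; the fragment contents are illustrative:
+
+```ruby
+require 'json'
+require 'time'
+
+# Each counter method returns a small hash fragment. The fragments are merged
+# into a single payload, which is serialized to JSON and posted to the
+# Versions application.
+fragments = [
+  { counts: { boards: 999 } },
+  { counts: { ci_builds: 999 } },
+  { recorded_at: Time.now.utc.iso8601 }
+]
+
+payload = fragments.reduce({}) do |data, fragment|
+  data.merge(fragment) { |_key, old, new| old.is_a?(Hash) ? old.merge(new) : new }
+end
+
+payload.to_json
+# => {"counts":{"boards":999,"ci_builds":999},"recorded_at":"..."}
+```
+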
## Implementing Usage Ping
-Usage Ping consists of four types of counters which are all found in `usage_data.rb`:
+Usage Ping consists of two kinds of data: counters and observations. Counters track how often a certain event
+happened over time, such as how many CI pipelines have run. They are monotonic and always trend up.
+Observations are facts collected from one or more GitLab instances and can carry arbitrary data. There are no
+general guidelines around how to collect those, due to the individual nature of that data.
+
+There are four types of counters which are all found in `usage_data.rb`:
- **Ordinary Batch Counters:** Simple count of a given ActiveRecord_Relation
- **Distinct Batch Counters:** Distinct count of a given ActiveRecord_Relation on given column
- **Alternative Counters:** Used for settings and configurations
- **Redis Counters:** Used for in-memory counts. This method is being deprecated due to data inaccuracies and will be replaced with a persistent method.
-Note: Only use the provided counter methods. Each counter method contains a built in fail safe to isolate each counter to avoid breaking the entire Usage Ping.
+NOTE: **Note:**
+Only use the provided counter methods. Each counter method contains a built-in fail-safe that isolates each counter, so a single failure doesn't break the entire Usage Ping.
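+
+To make the four counter types concrete, here is a hedged sketch of how they might be used inside `usage_data.rb`. Only the helper method names are taken from this guide; the metric names, models, and the Redis counter class are illustrative:
+
+```ruby
+# Ordinary Batch Counter: simple count of a relation
+boards_count = count(Board)
+
+# Distinct Batch Counter: distinct count of a relation on a given column
+merge_request_authors = distinct_count(MergeRequest, :author_id)
+
+# Redis Counter: in-memory counts kept by a counter class (illustrative class name)
+web_ide_counts = redis_usage_data(Gitlab::UsageDataCounters::WebIdeCounter)
+
+# Alternative Counter: settings and configuration values
+instance_uuid = alt_usage_data { Gitlab::CurrentSettings.uuid }
+```
+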
### Why batch counting
For large tables, PostgreSQL can take a long time to count rows due to MVCC [(Multi-version Concurrency Control)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control). Batch counting is a counting method where a single large query is broken into multiple smaller queries. For example, instead of a single query querying 1,000,000 records, with batch counting, you can execute 100 queries of 10,000 records each. Batch counting is useful for avoiding database timeouts as each batch query is significantly shorter than one single long running query.
-For GitLab.com, there are extremely large tables with 15 second query timeouts, so, we use batch counting to avoid encountering timeouts. Here are the sizes of some GitLab.com tables:
+For GitLab.com, there are extremely large tables with 15 second query timeouts, so we use batch counting to avoid encountering timeouts. Here are the sizes of some GitLab.com tables:
-| Table | Row counts in millions |
-| ------ | ------ |
-| merge_request_diff_commits | 2280 |
-| ci_build_trace_sections | 1764 |
-| merge_request_diff_files | 1082 |
-| events | 514 |
+| Table | Row counts in millions |
+|------------------------------|------------------------|
+| `merge_request_diff_commits` | 2280 |
+| `ci_build_trace_sections` | 1764 |
+| `merge_request_diff_files` | 1082 |
+| `events` | 514 |
There are two batch counting methods provided, `Ordinary Batch Counters` and `Distinct Batch Counters`. Batch counting requires indexes on columns to calculate max, min, and range queries. In some cases, a specialized index may need to be added on the columns involved in a counter.
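+
+The batch counting idea itself can be pictured with the following hedged sketch. It is a simplified illustration, not the actual `count`/`distinct_count` implementation:
+
+```ruby
+# Sum per-slice counts over the relation's id range instead of issuing one
+# large COUNT, mirroring the MIN/MAX/COUNT ... BETWEEN queries shown later
+# in this guide. The batch size of 100,000 is an illustrative default.
+def naive_batch_count(relation, column: :id, batch_size: 100_000)
+  start  = relation.minimum(column) || 0
+  finish = relation.maximum(column) || 0
+  total  = 0
+
+  (start..finish).step(batch_size) do |lower|
+    total += relation.where(column => lower..(lower + batch_size - 1)).count
+  end
+
+  total
+end
+
+naive_batch_count(User.active) # roughly User.active.count, in many short queries
+```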
@@ -393,7 +210,7 @@ Method: `redis_usage_data(counter, &block)`
Arguments:
- `counter`: a counter from `Gitlab::UsageDataCounters`, that has `fallback_totals` method implemented
-- or a `block`: wich is evaluated
+- or a `block`: which is evaluated
Example of usage:
@@ -413,8 +230,8 @@ Method: `alt_usage_data(value = nil, fallback: -1, &block)`
Arguments:
-- `value`: a simple static value in wich case the value is simply returned.
-- or a `block`: wich is evaluated
+- `value`: a simple static value, in which case the value is simply returned.
+- or a `block`: which is evaluated
- `fallback: -1`: the common value used for any metrics that are failing.
Example of usage:
@@ -425,6 +242,29 @@ alt_usage_data { Gitlab::CurrentSettings.uuid }
alt_usage_data(999)
```
+### Prometheus Queries
+
+In those cases where operational metrics should be part of Usage Ping, a database or Redis query is unlikely
+to provide useful data. Instead, Prometheus might be more appropriate, since most of GitLab's architectural
+components publish metrics to it that can be queried back, aggregated, and included as usage data.
+
+NOTE: **Note:**
+Prometheus as a data source for Usage Ping is currently only available for single-node Omnibus installations
+that are running the [bundled Prometheus](../../administration/monitoring/prometheus/index.md) instance.
+
+To query Prometheus for metrics, a helper method is available that will `yield` a fully configured
+`PrometheusClient`, provided it is available as described in the note above:
+
+```ruby
+with_prometheus_client do |client|
+ response = client.query('<your query>')
+ ...
+end
+```
+
+Please refer to [the `PrometheusClient` definition](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/prometheus_client.rb)
+for how to use its API to query for data.
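+
+As an illustration, the sketch below wraps a query result into a single usage value. The PromQL expression, the assumed result shape (an instant vector of `{ "metric" => ..., "value" => [timestamp, value] }` hashes), and the `-1` fallback are assumptions for this example, not part of the documented API:
+
+```ruby
+def prometheus_node_memory_total
+  with_prometheus_client do |client|
+    result = client.query('sum(node_memory_MemTotal_bytes)')
+    # Take the value of the first (and only) series, if any.
+    result&.first&.dig('value', 1).to_f
+  end || -1 # assumed to degrade to -1 when no Prometheus server is available
+end
+```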
+
## Developing and testing Usage Ping
### 1. Use your Rails console to manually test counters
@@ -441,49 +281,591 @@ Gitlab::UsageData.distinct_count(::Note.with_suggestions.where(time_period), :au
### 2. Generate the SQL query
-Your Rails console will give back the generated SQL queries.
+Your Rails console will return the generated SQL queries.
Example:
```ruby
- pry(main)> Gitlab::UsageData.count(User.active)
- (0.4ms) SELECT "features"."key" FROM "features"
- (0.7ms) SELECT MIN("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND (ghost IS NOT TRUE) AND ("users"."user_type" IS NULL OR "users"."user_type" NOT IN (2, 1, 3))
- (0.6ms) SELECT MAX("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND (ghost IS NOT TRUE) AND ("users"."user_type" IS NULL OR "users"."user_type" NOT IN (2, 1, 3))
- (0.5ms) SELECT COUNT("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND (ghost IS NOT TRUE) AND ("users"."user_type" IS NULL OR "users"."user_type" NOT IN (2, 1, 3)) AND "users"."id" BETWEEN 0 AND 99999
+pry(main)> Gitlab::UsageData.count(User.active)
+ (2.6ms) SELECT "features"."key" FROM "features"
+ (15.3ms) SELECT MIN("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND ("users"."user_type" IS NULL OR "users"."user_type" IN (6, 4))
+ (2.4ms) SELECT MAX("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND ("users"."user_type" IS NULL OR "users"."user_type" IN (6, 4))
+ (1.9ms) SELECT COUNT("users"."id") FROM "users" WHERE ("users"."state" IN ('active')) AND ("users"."user_type" IS NULL OR "users"."user_type" IN (6, 4)) AND "users"."id" BETWEEN 1 AND 100000
```
### 3. Optimize queries with #database-lab
Paste the SQL query into `#database-lab` to see how the query performs at scale.
-- #database-lab is a Slack channel which uses a production-sized environment to test your queries
+- `#database-lab` is a Slack channel which uses a production-sized environment to test your queries.
- GitLab.com’s production database has a 15 second timeout.
-- For each query we require an execution time of under 1 second due do cold caches which can 10x this time.
-- Add a specialized index on columns involved to reduce your the execution time.
-
-In order to have an understanding of the queries execution we add in the MR description the following information
-
-For counters that have a `time_period` test and add information for both cases.
-
-- with `time_period = {}` for all time period
-- and `time_period = { created_at: 28.days.ago..Time.current }` for last 28 days period
-
-Execution plan and query time before and after optimization
-
-Using database-lab and [explain.depesz.com](https://explain.depesz.com/) see more details in [database review guide](../database_review.md#preparation-when-adding-or-modifying-queries)
+- For each query, we require an execution time of under 1 second, because cold caches can increase query time by up to 10x.
+- Add a specialized index on the columns involved to reduce the execution time (see the example migration below).
-Query generated for the index and time
+To provide an understanding of the query's execution, we add the following information to the MR description:
-Using database-lab
+- For counters that have a `time_period`, we test and add information for both cases:
+  - `time_period = {}` for all time periods
+  - `time_period = { created_at: 28.days.ago..Time.current }` for the last 28 days
+- Execution plan and query time before and after optimization
+- Query generated for the index and time
+- Migration output for up and down execution
-Migration output for up and down execution
+We also use `#database-lab` and [explain.depesz.com](https://explain.depesz.com/). For more details, see the [database review guide](../database_review.md#preparation-when-adding-or-modifying-queries).
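+
+When a specialized index is needed, it is typically added in a migration using GitLab's concurrent index helpers. The following is a hedged sketch only; the table, column, index name, and class name are illustrative:
+
+```ruby
+# Illustrative migration: adds the index concurrently so the table stays writable.
+class AddIndexForUsagePingCounter < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+
+  DOWNTIME = false
+  INDEX_NAME = 'index_events_on_created_at_for_usage_ping'
+
+  disable_ddl_transaction!
+
+  def up
+    add_concurrent_index :events, :created_at, name: INDEX_NAME
+  end
+
+  def down
+    remove_concurrent_index_by_name :events, INDEX_NAME
+  end
+end
+```
+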
Examples of query optimization work:
- [Example 1](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26445)
- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26871)
-### 4. Ask for a Telemetry Review
-
-On GitLab.com, we have DangerBot setup to monitor Telemetry related files and DangerBot will recommend a Telemetry review. Simply `@gitlab-org/growth/telemetry/engineers` in your MR for a review.
+### 4. Add the metric definition
+
+When adding, changing, or updating metrics, please update the [Usage Statistics definition table](#usage-statistics-definitions).
+
+### 5. Add new metric to Versions Application
+
+Check if new metrics need to be added to the Versions Application. See `usage_data` [schema](https://gitlab.com/gitlab-services/version-gitlab-com/-/blob/master/db/schema.rb#L147) and usage data [parameters accepted](https://gitlab.com/gitlab-services/version-gitlab-com/-/blob/master/app/services/usage_ping.rb). Any metrics added under the `counts` key are saved in the `counts` column.
+
+### 6. Ask for a Telemetry Review
+
+On GitLab.com, we have DangerBot set up to monitor Telemetry-related files, and DangerBot will recommend a Telemetry review. Mention `@gitlab-org/growth/telemetry/engineers` in your MR for a review.
+
+### Optional: Test Prometheus-based Usage Ping
+
+If the data submitted includes metrics [queried from Prometheus](#prometheus-queries) that you would like to inspect and verify,
+then you need to ensure that a Prometheus server is running locally, and that the respective GitLab components
+are exporting metrics to it. If you do not need to test data coming from Prometheus, no further action
+is necessary, since Usage Ping should degrade gracefully in the absence of a running Prometheus server.
+
+There are currently three kinds of components that may export data to Prometheus and that are included in Usage Ping:
+
+- [`node_exporter`](https://github.com/prometheus/node_exporter) - Exports node metrics from the host machine
+- [`gitlab-exporter`](https://gitlab.com/gitlab-org/gitlab-exporter) - Exports process metrics from various GitLab components
+- various GitLab services such as Sidekiq and the Rails server that export their own metrics
+
+#### Test with an Omnibus container
+
+This is the recommended approach to test Prometheus-based Usage Ping.
+
+The easiest way to verify your changes is to build a new Omnibus image from your code branch via CI, then download the image
+and run a local container instance:
+
+1. From your merge request, click on the `qa` stage, then trigger the `package-and-qa` job. This job will trigger an Omnibus
+build in a [downstream pipeline of the `omnibus-gitlab-mirror` project](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/-/pipelines).
+1. In the downstream pipeline, wait for the `gitlab-docker` job to finish.
+1. Open the job logs and locate the full container name including the version. It will take the following form: `registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`.
+1. On your local machine, make sure you are logged in to the GitLab Docker registry. You can find the instructions for this in
+[Authenticating to the GitLab Container Registry](../../user/packages/container_registry/index.md#authenticating-to-the-gitlab-container-registry).
+1. Once logged in, download the new image via `docker pull registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`.
+1. For more information about working with and running Omnibus GitLab containers in Docker, please refer to [GitLab Docker images](https://docs.gitlab.com/omnibus/docker/README.html) in the Omnibus documentation.
+
+#### Test with GitLab development toolkits
+
+This is the less recommended approach, since it comes with a number of difficulties when emulating a real GitLab deployment.
+
+The [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) is not currently set up to run a Prometheus server or `node_exporter` alongside other GitLab components. If you would
+like to do so, [Monitoring the GDK with Prometheus](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/doc/howto/prometheus/index.md#monitoring-the-gdk-with-prometheus) is a good start.
+
+The [GCK](https://gitlab.com/gitlab-org/gitlab-compose-kit) has limited support for testing Prometheus-based Usage Ping.
+By default, it already comes with a fully configured Prometheus service that is set up to scrape a number of components,
+but with the following limitations:
+
+- It does not currently run a `gitlab-exporter` instance, so several `process_*` metrics from services such as Gitaly may be missing.
+- While it runs a `node_exporter`, `docker-compose` services emulate hosts, meaning that it would normally report itself as not being associated
+with any of the other services that are running. That is not how node metrics are reported in a production setup, where `node_exporter`
+always runs as a process alongside other GitLab components on any given node. From Usage Ping's perspective, none of the node data would therefore
+appear to be associated with any of the running services, since they all appear to be running on different hosts. To alleviate this problem, the `node_exporter` in GCK was arbitrarily "assigned" to the `web` service, meaning that `node_*` metrics will only appear in Usage Ping for this service.
+
+## Usage Statistics definitions
+
+| Statistic | Section | Stage | Tier | Description |
+|:--------------------------------------------------------|:-----------------------------------|:------------|:---------------|:--------------------------------------------------|
+| `uuid` | | | | |
+| `hostname` | | | | |
+| `version` | | | | |
+| `installation_type` | | | | |
+| `active_user_count` | | | | |
+| `recorded_at` | | | | |
+| `edition` | | | | |
+| `license_md5` | | | | |
+| `license_id` | | | | |
+| `historical_max_users` | | | | |
+| `Name` | `licensee` | | | |
+| `Email` | `licensee` | | | |
+| `Company` | `licensee` | | | |
+| `license_user_count` | | | | |
+| `license_starts_at` | | | | |
+| `license_expires_at` | | | | |
+| `license_plan` | | | | |
+| `license_trial` | | | | |
+| `assignee_lists` | `counts` | | | |
+| `boards` | `counts` | | | |
+| `ci_builds` | `counts` | `verify` | | Unique builds in project |
+| `ci_internal_pipelines` | `counts` | `verify` | | Total pipelines in GitLab repositories |
+| `ci_external_pipelines` | `counts` | `verify` | | Total pipelines in external repositories |
+| `ci_pipeline_config_auto_devops` | `counts` | `verify` | | Total pipelines from an Auto DevOps template |
+| `ci_pipeline_config_repository` | `counts` | `verify` | | Total Pipelines from templates in repository |
+| `ci_runners` | `counts` | `verify` | | Total configured Runners in project |
+| `ci_triggers` | `counts` | `verify` | | Total configured Triggers in project |
+| `ci_pipeline_schedules` | `counts` | `verify` | | Pipeline schedules in GitLab |
+| `auto_devops_enabled` | `counts` |`configure` | | Projects with Auto DevOps template enabled |
+| `auto_devops_disabled` | `counts` |`configure` | | Projects with Auto DevOps template disabled |
+| `deploy_keys` | `counts` | | | |
+| `deployments` | `counts` |`release` | | Total deployments |
+| `dast_jobs` | `counts` | | | |
+| `successful_deployments` | `counts` |`release` | | Total successful deployments |
+| `failed_deployments` | `counts` |`release` | | Total failed deployments |
+| `environments` | `counts` |`release` | | Total available and stopped environments |
+| `clusters` | `counts` |`configure` | | Total GitLab Managed clusters both enabled and disabled |
+| `clusters_enabled` | `counts` |`configure` | | Total GitLab Managed clusters currently enabled |
+| `project_clusters_enabled` | `counts` |`configure` | | Total GitLab Managed clusters attached to projects|
+| `group_clusters_enabled` | `counts` |`configure` | | Total GitLab Managed clusters attached to groups |
+| `instance_clusters_enabled` | `counts` |`configure` | | Total GitLab Managed clusters attached to the instance |
+| `clusters_disabled` | `counts` |`configure` | | Total GitLab Managed disabled clusters |
+| `project_clusters_disabled` | `counts` |`configure` | | Total GitLab Managed disabled clusters previously attached to projects |
+| `group_clusters_disabled` | `counts` |`configure` | | Total GitLab Managed disabled clusters previously attached to groups |
+| `instance_clusters_disabled` | `counts` |`configure` | | Total GitLab Managed disabled clusters previously attached to the instance |
+| `clusters_platforms_eks` | `counts` |`configure` | | Total GitLab Managed clusters provisioned with GitLab on AWS EKS |
+| `clusters_platforms_gke` | `counts` |`configure` | | Total GitLab Managed clusters provisioned with GitLab on GCE GKE |
+| `clusters_platforms_user` | `counts` |`configure` | | Total GitLab Managed clusters that are user provisioned |
+| `clusters_applications_helm` | `counts` |`configure` | | Total GitLab Managed clusters with Helm enabled |
+| `clusters_applications_ingress` | `counts` |`configure` | | Total GitLab Managed clusters with Ingress enabled |
+| `clusters_applications_cert_managers` | `counts` |`configure` | | Total GitLab Managed clusters with Cert Manager enabled |
+| `clusters_applications_crossplane` | `counts` |`configure` | | Total GitLab Managed clusters with Crossplane enabled |
+| `clusters_applications_prometheus` | `counts` |`configure` | | Total GitLab Managed clusters with Prometheus enabled |
+| `clusters_applications_runner` | `counts` |`configure` | | Total GitLab Managed clusters with Runner enabled |
+| `clusters_applications_knative` | `counts` |`configure` | | Total GitLab Managed clusters with Knative enabled |
+| `clusters_applications_elastic_stack` | `counts` |`configure` | | Total GitLab Managed clusters with Elastic Stack enabled |
+| `clusters_management_project` | `counts` |`configure` | | Total GitLab Managed clusters with defined cluster management project |
+| `in_review_folder` | `counts` | | | |
+| `grafana_integrated_projects` | `counts` | | | |
+| `groups` | `counts` | | | |
+| `issues` | `counts` | | | |
+| `issues_created_from_gitlab_error_tracking_ui` | `counts` | `monitor` | | |
+| `issues_with_associated_zoom_link` | `counts` | `monitor` | | |
+| `issues_using_zoom_quick_actions` | `counts` | `monitor` | | |
+| `issues_with_embedded_grafana_charts_approx` | `counts` | `monitor` | | |
+| `issues_with_health_status` | `counts` | | | |
+| `keys` | `counts` | | | |
+| `label_lists` | `counts` | | | |
+| `lfs_objects` | `counts` | | | |
+| `milestone_lists` | `counts` | | | |
+| `milestones` | `counts` | | | |
+| `pages_domains` | `counts` |`release` | | Total GitLab Pages domains |
+| `pool_repositories` | `counts` | | | |
+| `projects` | `counts` | | | |
+| `projects_imported_from_github` | `counts` | | | |
+| `projects_with_repositories_enabled` | `counts` | | | |
+| `projects_with_error_tracking_enabled` | `counts` | `monitor` | | |
+| `protected_branches` | `counts` | | | |
+| `releases` | `counts` |`release` | | Unique release tags |
+| `remote_mirrors` | `counts` | | | |
+| `requirements_created` | `counts` | | | |
+| `snippets` | `counts` | | | |
+| `suggestions` | `counts` | | | |
+| `todos` | `counts` | | | |
+| `uploads` | `counts` | | | |
+| `web_hooks` | `counts` | | | |
+| `projects_alerts_active` | `counts` | | | |
+| `projects_asana_active` | `counts` | | | |
+| `projects_assembla_active` | `counts` | | | |
+| `projects_bamboo_active` | `counts` | | | |
+| `projects_bugzilla_active` | `counts` | | | |
+| `projects_buildkite_active` | `counts` | | | |
+| `projects_campfire_active` | `counts` | | | |
+| `projects_custom_issue_tracker_active` | `counts` | | | |
+| `projects_discord_active` | `counts` | | | |
+| `projects_drone_ci_active` | `counts` | | | |
+| `projects_emails_on_push_active` | `counts` | | | |
+| `projects_external_wiki_active` | `counts` | | | |
+| `projects_flowdock_active` | `counts` | | | |
+| `projects_github_active` | `counts` | | | |
+| `projects_hangouts_chat_active` | `counts` | | | |
+| `projects_hipchat_active` | `counts` | | | |
+| `projects_irker_active` | `counts` | | | |
+| `projects_jenkins_active` | `counts` | | | |
+| `projects_jira_active` | `counts` | | | |
+| `projects_mattermost_active` | `counts` | | | |
+| `projects_mattermost_slash_commands_active` | `counts` | | | |
+| `projects_microsoft_teams_active` | `counts` | | | |
+| `projects_packagist_active` | `counts` | | | |
+| `projects_pipelines_email_active` | `counts` | | | |
+| `projects_pivotaltracker_active` | `counts` | | | |
+| `projects_prometheus_active` | `counts` | | | |
+| `projects_pushover_active` | `counts` | | | |
+| `projects_redmine_active` | `counts` | | | |
+| `projects_slack_active` | `counts` | | | |
+| `projects_slack_slash_commands_active` | `counts` | | | |
+| `projects_teamcity_active` | `counts` | | | |
+| `projects_unify_circuit_active` | `counts` | | | |
+| `projects_webex_teams_active` | `counts` | | | |
+| `projects_youtrack_active` | `counts` | | | |
+| `projects_slack_notifications_active` | `counts` | | | |
+| `projects_slack_slash_active` | `counts` | | | |
+| `projects_jira_server_active` | `counts` | | | |
+| `projects_jira_cloud_active` | `counts` | | | |
+| `projects_jira_dvcs_cloud_active` | `counts` | | | |
+| `projects_jira_dvcs_server_active` | `counts` | | | |
+| `labels` | `counts` | | | |
+| `merge_requests` | `counts` | | | |
+| `merge_requests_users` | `counts` | | | |
+| `notes` | `counts` | | | |
+| `wiki_pages_create` | `counts` | | | |
+| `wiki_pages_update` | `counts` | | | |
+| `wiki_pages_delete` | `counts` | | | |
+| `web_ide_commits` | `counts` | | | |
+| `web_ide_views` | `counts` | | | |
+| `web_ide_merge_requests` | `counts` | | | |
+| `web_ide_previews` | `counts` | | | |
+| `snippet_comment` | `counts` | | | |
+| `commit_comment` | `counts` | | | |
+| `merge_request_comment` | `counts` | | | |
+| `snippet_create` | `counts` | | | |
+| `snippet_update` | `counts` | | | |
+| `navbar_searches` | `counts` | | | |
+| `cycle_analytics_views` | `counts` | | | |
+| `productivity_analytics_views` | `counts` | | | |
+| `source_code_pushes` | `counts` | | | |
+| `merge_request_create` | `counts` | | | |
+| `design_management_designs_create` | `counts` | | | |
+| `design_management_designs_update` | `counts` | | | |
+| `design_management_designs_delete` | `counts` | | | |
+| `licenses_list_views` | `counts` | | | |
+| `user_preferences_group_overview_details` | `counts` | | | |
+| `user_preferences_group_overview_security_dashboard` | `counts` | | | |
+| `ingress_modsecurity_logging` | `counts` | | | |
+| `ingress_modsecurity_blocking` | `counts` | | | |
+| `ingress_modsecurity_disabled` | `counts` | | | |
+| `ingress_modsecurity_not_installed` | `counts` | | | |
+| `dependency_list_usages_total` | `counts` | | | |
+| `epics` | `counts` | | | |
+| `feature_flags` | `counts` | | | |
+| `geo_nodes` | `counts` | `geo` | | Number of sites in a Geo deployment |
+| `geo_event_log_max_id` | `counts` | `geo` | | Number of replication events on a Geo primary |
+| `incident_issues` | `counts` | `monitor` | | Issues created by the alert bot |
+| `alert_bot_incident_issues` | `counts` | `monitor` | | Issues created by the alert bot |
+| `incident_labeled_issues` | `counts` | `monitor` | | Issues with the incident label |
+| `issues_created_gitlab_alerts` | `counts` | `monitor` | | Issues created from alerts by non-alert bot users |
+| `issues_created_manually_from_alerts` | `counts` | `monitor` | | Issues created from alerts by non-alert bot users |
+| `issues_created_from_alerts` | `counts` | `monitor` | | Issues created from Prometheus and alert management alerts |
+| `ldap_group_links` | `counts` | | | |
+| `ldap_keys` | `counts` | | | |
+| `ldap_users` | `counts` | | | |
+| `pod_logs_usages_total` | `counts` | | | |
+| `projects_enforcing_code_owner_approval` | `counts` | | | |
+| `projects_mirrored_with_pipelines_enabled` | `counts` |`release` | | Projects with repository mirroring enabled |
+| `projects_reporting_ci_cd_back_to_github` | `counts` |`verify` | | Projects with a GitHub service pipeline enabled |
+| `projects_with_packages` | `counts` |`package` | | Projects with package registry configured |
+| `projects_with_prometheus_alerts` | `counts` |`monitor` | | Projects with Prometheus alerting enabled |
+| `projects_with_tracing_enabled` | `counts` |`monitor` | | Projects with tracing enabled |
+| `projects_with_alerts_service_enabled` | `counts` |`monitor` | | Projects with alerting service enabled |
+| `template_repositories` | `counts` | | | |
+| `container_scanning_jobs` | `counts` | | | |
+| `dependency_scanning_jobs` | `counts` | | | |
+| `license_management_jobs` | `counts` | | | |
+| `sast_jobs` | `counts` | | | |
+| `status_page_projects` | `counts` | `monitor` | | Projects with status page enabled |
+| `status_page_issues` | `counts` | `monitor` | | Issues published to a Status Page |
+| `status_page_incident_publishes` | `counts` | `monitor` | | Cumulative count of usages of publish operation |
+| `status_page_incident_unpublishes` | `counts` | `monitor` | | Cumulative count of usages of unpublish operation |
+| `epics_deepest_relationship_level` | `counts` | | | |
+| `operations_dashboard_default_dashboard` | `counts` | `monitor` | | Active users with enabled operations dashboard |
+| `operations_dashboard_users_with_projects_added` | `counts` | `monitor` | | Active users with projects on operations dashboard|
+| `container_registry_enabled` | | | | |
+| `dependency_proxy_enabled` | | | | |
+| `gitlab_shared_runners_enabled` | | | | |
+| `gravatar_enabled` | | | | |
+| `ldap_enabled` | | | | |
+| `mattermost_enabled` | | | | |
+| `omniauth_enabled` | | | | |
+| `prometheus_metrics_enabled` | | | | |
+| `reply_by_email_enabled` | | | | |
+| `average` | `avg_cycle_analytics - code` | | | |
+| `sd` | `avg_cycle_analytics - code` | | | |
+| `missing` | `avg_cycle_analytics - code` | | | |
+| `average` | `avg_cycle_analytics - test` | | | |
+| `sd` | `avg_cycle_analytics - test` | | | |
+| `missing` | `avg_cycle_analytics - test` | | | |
+| `average` | `avg_cycle_analytics - review` | | | |
+| `sd` | `avg_cycle_analytics - review` | | | |
+| `missing` | `avg_cycle_analytics - review` | | | |
+| `average` | `avg_cycle_analytics - staging` | | | |
+| `sd` | `avg_cycle_analytics - staging` | | | |
+| `missing` | `avg_cycle_analytics - staging` | | | |
+| `average` | `avg_cycle_analytics - production` | | | |
+| `sd` | `avg_cycle_analytics - production` | | | |
+| `missing` | `avg_cycle_analytics - production` | | | |
+| `total` | `avg_cycle_analytics` | | | |
+| `clusters_applications_cert_managers` | `usage_activity_by_stage` | `configure` | | Unique clusters with certificate managers enabled |
+| `clusters_applications_helm` | `usage_activity_by_stage` | `configure` | | Unique clusters with Helm enabled |
+| `clusters_applications_ingress` | `usage_activity_by_stage` | `configure` | | Unique clusters with Ingress enabled |
+| `clusters_applications_knative` | `usage_activity_by_stage` | `configure` | | Unique clusters with Knative enabled |
+| `clusters_management_project` | `usage_activity_by_stage` | `configure` | | Unique clusters with project management enabled |
+| `clusters_disabled` | `usage_activity_by_stage` | `configure` | | Total non-"GitLab Managed clusters" |
+| `clusters_enabled` | `usage_activity_by_stage` | `configure` | | Total GitLab Managed clusters |
+| `clusters_platforms_gke` | `usage_activity_by_stage` | `configure` | | Unique clusters with Google Cloud installed |
+| `clusters_platforms_eks` | `usage_activity_by_stage` | `configure` | | Unique clusters with AWS installed |
+| `clusters_platforms_user` | `usage_activity_by_stage` | `configure` | | Unique clusters that are user provided |
+| `instance_clusters_disabled` | `usage_activity_by_stage` | `configure` | | Unique clusters disabled on instance |
+| `instance_clusters_enabled` | `usage_activity_by_stage` | `configure` | | Unique clusters enabled on instance |
+| `group_clusters_disabled` | `usage_activity_by_stage` | `configure` | | Unique clusters disabled on group |
+| `group_clusters_enabled` | `usage_activity_by_stage` | `configure` | | Unique clusters enabled on group |
+| `project_clusters_disabled` | `usage_activity_by_stage` | `configure` | | Unique clusters disabled on project |
+| `project_clusters_enabled` | `usage_activity_by_stage` | `configure` | | Unique clusters enabled on project |
+| `projects_slack_notifications_active` | `usage_activity_by_stage` | `configure` | | Unique projects with Slack service enabled |
+| `projects_slack_slash_active` | `usage_activity_by_stage` | `configure` | | Unique projects with Slack '/' commands enabled |
+| `projects_with_prometheus_alerts: 0` | `usage_activity_by_stage` | `monitor` | | Projects with Prometheus enabled and no alerts |
+| `deploy_keys` | `usage_activity_by_stage` | `create` | | |
+| `keys` | `usage_activity_by_stage` | `create` | | |
+| `projects_jira_dvcs_server_active` | `usage_activity_by_stage` | `plan` | | |
+| `service_desk_enabled_projects` | `usage_activity_by_stage` | `plan` | | |
+| `service_desk_issues` | `usage_activity_by_stage` | `plan` | | |
+| `todos: 0` | `usage_activity_by_stage` | `plan` | | |
+| `deployments` | `usage_activity_by_stage` | `release` | | Total deployments |
+| `failed_deployments` | `usage_activity_by_stage` | `release` | | Total failed deployments |
+| `projects_mirrored_with_pipelines_enabled` | `usage_activity_by_stage` | `release` | | Projects with repository mirroring enabled |
+| `releases` | `usage_activity_by_stage` | `release` | | Unique release tags in project |
+| `successful_deployments: 0` | `usage_activity_by_stage` | `release` | | Total successful deployments |
+| `user_preferences_group_overview_security_dashboard: 0` | `usage_activity_by_stage` | `secure` | | |
+| `ci_builds` | `usage_activity_by_stage` | `verify` | | Unique builds in project |
+| `ci_external_pipelines` | `usage_activity_by_stage` | `verify` | | Total pipelines in external repositories |
+| `ci_internal_pipelines` | `usage_activity_by_stage` | `verify` | | Total pipelines in GitLab repositories |
+| `ci_pipeline_config_auto_devops` | `usage_activity_by_stage` | `verify` | | Total pipelines from an Auto DevOps template |
+| `ci_pipeline_config_repository` | `usage_activity_by_stage` | `verify` | | Pipelines from templates in repository |
+| `ci_pipeline_schedules` | `usage_activity_by_stage` | `verify` | | Pipeline schedules in GitLab |
+| `ci_pipelines` | `usage_activity_by_stage` | `verify` | | Total pipelines |
+| `ci_triggers` | `usage_activity_by_stage` | `verify` | | Triggers enabled |
+| `clusters_applications_runner` | `usage_activity_by_stage` | `verify` | | Unique clusters with Runner enabled |
+| `projects_reporting_ci_cd_back_to_github: 0` | `usage_activity_by_stage` | `verify` | | Unique projects with a GitHub pipeline enabled |
+| `nodes` | `topology` | `enablement`| | The list of server nodes on which GitLab components are running |
+| `duration_s` | `topology` | `enablement`| | Time it took to collect topology data |
+| `node_memory_total_bytes` | `topology > nodes` | `enablement`| | The total available memory of this node |
+| `node_cpus` | `topology > nodes` | `enablement`| | The number of CPU cores of this node |
+| `node_services` | `topology > nodes` | `enablement`| | The list of GitLab services running on this node |
+| `name` | `topology > nodes > node_services` | `enablement`| | The name of the GitLab service running on this node |
+| `process_count` | `topology > nodes > node_services` | `enablement`| | The number of processes running for this service |
+| `process_memory_rss` | `topology > nodes > node_services` | `enablement`| | The average Resident Set Size of a service process |
+| `process_memory_uss` | `topology > nodes > node_services` | `enablement`| | The average Unique Set Size of a service process |
+| `process_memory_pss` | `topology > nodes > node_services` | `enablement`| | The average Proportional Set Size of a service process |
+
+## Example Usage Ping payload
+
+The following is example content of the Usage Ping payload.
+
+```json
+{
+ "uuid": "0000000-0000-0000-0000-000000000000",
+ "hostname": "example.com",
+ "version": "12.10.0-pre",
+ "installation_type": "omnibus-gitlab",
+ "active_user_count": 999,
+ "recorded_at": "2020-04-17T07:43:54.162+00:00",
+ "edition": "EEU",
+ "license_md5": "00000000000000000000000000000000",
+ "license_id": null,
+ "historical_max_users": 999,
+ "licensee": {
+ "Name": "ABC, Inc.",
+ "Email": "email@example.com",
+ "Company": "ABC, Inc."
+ },
+ "license_user_count": 999,
+ "license_starts_at": "2020-01-01",
+ "license_expires_at": "2021-01-01",
+ "license_plan": "ultimate",
+ "license_add_ons": {
+ },
+ "license_trial": false,
+ "counts": {
+ "assignee_lists": 999,
+ "boards": 999,
+ "ci_builds": 999,
+ ...
+ },
+ "container_registry_enabled": true,
+ "dependency_proxy_enabled": false,
+ "gitlab_shared_runners_enabled": true,
+ "gravatar_enabled": true,
+ "influxdb_metrics_enabled": true,
+ "ldap_enabled": false,
+ "mattermost_enabled": false,
+ "omniauth_enabled": true,
+ "prometheus_metrics_enabled": false,
+ "reply_by_email_enabled": "incoming+%{key}@incoming.gitlab.com",
+ "signup_enabled": true,
+ "web_ide_clientside_preview_enabled": true,
+ "ingress_modsecurity_enabled": true,
+ "projects_with_expiration_policy_disabled": 999,
+ "projects_with_expiration_policy_enabled": 999,
+ ...
+ "elasticsearch_enabled": true,
+ "license_trial_ends_on": null,
+ "geo_enabled": false,
+ "git": {
+ "version": {
+ "major": 2,
+ "minor": 26,
+ "patch": 1
+ }
+ },
+ "gitaly": {
+ "version": "12.10.0-rc1-93-g40980d40",
+ "servers": 56,
+ "clusters": 14,
+ "filesystems": [
+ "EXT_2_3_4"
+ ]
+ },
+ "gitlab_pages": {
+ "enabled": true,
+ "version": "1.17.0"
+ },
+ "database": {
+ "adapter": "postgresql",
+ "version": "9.6.15"
+ },
+ "app_server": {
+ "type": "console"
+ },
+ "avg_cycle_analytics": {
+ "issue": {
+ "average": 999,
+ "sd": 999,
+ "missing": 999
+ },
+ "plan": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "code": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "test": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "review": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "staging": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "production": {
+ "average": null,
+ "sd": 999,
+ "missing": 999
+ },
+ "total": 999
+ },
+ "usage_activity_by_stage": {
+ "configure": {
+ "project_clusters_enabled": 999,
+ ...
+ },
+ "create": {
+ "merge_requests": 999,
+ ...
+ },
+ "manage": {
+ "events": 999,
+ ...
+ },
+ "monitor": {
+ "clusters": 999,
+ ...
+ },
+ "package": {
+ "projects_with_packages": 999
+ },
+ "plan": {
+ "issues": 999,
+ ...
+ },
+ "release": {
+ "deployments": 999,
+ ...
+ },
+ "secure": {
+ "user_container_scanning_jobs": 999,
+ ...
+ },
+ "verify": {
+ "ci_builds": 999,
+ ...
+ }
+ },
+ "usage_activity_by_stage_monthly": {
+ "configure": {
+ "project_clusters_enabled": 999,
+ ...
+ },
+ "create": {
+ "merge_requests": 999,
+ ...
+ },
+ "manage": {
+ "events": 999,
+ ...
+ },
+ "monitor": {
+ "clusters": 999,
+ ...
+ },
+ "package": {
+ "projects_with_packages": 999
+ },
+ "plan": {
+ "issues": 999,
+ ...
+ },
+ "release": {
+ "deployments": 999,
+ ...
+ },
+ "secure": {
+ "user_container_scanning_jobs": 999,
+ ...
+ },
+ "verify": {
+ "ci_builds": 999,
+ ...
+ }
+ },
+ "topology": {
+ "nodes": [
+ {
+ "node_memory_total_bytes": 33269903360,
+ "node_cpus": 16,
+ "node_services": [
+ {
+ "name": "web",
+ "process_count": 16,
+ "process_memory_pss": 233349888,
+ "process_memory_rss": 788220927,
+ "process_memory_uss": 195295487
+ },
+ {
+ "name": "sidekiq",
+ "process_count": 1,
+ "process_memory_pss": 734080000,
+ "process_memory_rss": 750051328,
+ "process_memory_uss": 731533312
+ },
+ ...
+ ],
+ ...
+ },
+ ...
+ ],
+ "duration_s": 0.013836685999194742
+ }
+}
+```
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index f0137e542cc..7bb8473117f 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -35,11 +35,18 @@ Here are some things to keep in mind regarding test performance:
To run RSpec tests:
```shell
-# run all tests
+# run test for a file
+bin/rspec spec/models/project_spec.rb
+
+# run test for the example on line 10 of that file
+bin/rspec spec/models/project_spec.rb:10
+
+# run tests whose example names match the given string
+bin/rspec spec/models/project_spec.rb -e associations
+
+# run all tests; this will take hours for the GitLab codebase!
bin/rspec
-# run test for path
-bin/rspec spec/[path]/[to]/[spec].rb
```
Use [Guard](https://github.com/guard/guard) to continuously monitor for changes and only run matching tests:
@@ -59,7 +66,7 @@ FDOC=1 bin/rspec spec/[path]/[to]/[spec].rb
### General guidelines
-- Use a single, top-level `describe ClassName` block.
+- Use a single, top-level `RSpec.describe ClassName` block.
- Use `.method` to describe class methods and `#method` to describe instance
methods.
- Use `context` to test branching logic.
@@ -323,7 +330,7 @@ Feature.enabled?(:ci_live_trace) # => false
If you wish to set up a test where a feature flag is enabled only
for some actors and not others, you can specify this in options
passed to the helper. For example, to enable the `ci_live_trace`
-feature flag for a specifc project:
+feature flag for a specific project:
```ruby
project1, project2 = build_list(:project, 2)
@@ -340,7 +347,7 @@ This represents an actual behavior of FlipperGate:
1. You can enable an override for a specified actor to be enabled
1. You can disable (remove) an override for a specified actor,
- fallbacking to default state
+ falling back to default state
1. There's no way to model that you explicitly disable a specified actor
```ruby
@@ -357,6 +364,60 @@ Feature.enabled?(:my_feature2) # => false
Feature.enabled?(:my_feature2, project1) # => true
```
+#### `stub_feature_flags` vs `Feature.enable*`
+
+It is preferred to use `stub_feature_flags` to enable feature flags
+in the testing environment. This method provides a simple, well-described
+interface for simple use cases.
+
+However, in some cases more complex behavior needs to be tested,
+such as feature flag percentage rollouts. This can be achieved using
+the `.enable_percentage_of_time` and `.enable_percentage_of_actors` methods:
+
+```ruby
+# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined
+stub_feature_flags(my_feature: false)
+stub_feature_flags(my_feature: true)
+stub_feature_flags(my_feature: project)
+stub_feature_flags(my_feature: [project, project2])
+
+# Bad
+Feature.enable(:my_feature_2)
+
+# Good: enable my_feature for 50% of time
+Feature.enable_percentage_of_time(:my_feature_3, 50)
+
+# Good: enable my_feature for 50% of actors/gates/things
+Feature.enable_percentage_of_actors(:my_feature_4, 50)
+```
+
+Each feature flag that has a defined state is persisted
+for the duration of the test:
+
+```ruby
+Feature.persisted_names.include?('my_feature') # => true
+Feature.persisted_names.include?('my_feature_2') # => true
+Feature.persisted_names.include?('my_feature_3') # => true
+Feature.persisted_names.include?('my_feature_4') # => true
+```
+
+#### Stubbing gate
+
+A gate that is passed as an argument to `Feature.enabled?`
+and `Feature.disabled?` must be an object that includes `FeatureGate`.
+
+In specs, you can use the `stub_feature_flag_gate` method to quickly create
+your own custom gate:
+
+```ruby
+gate = stub_feature_flag_gate('CustomActor')
+
+stub_feature_flags(ci_live_trace: gate)
+
+Feature.enabled?(:ci_live_trace) # => false
+Feature.enabled?(:ci_live_trace, gate) # => true
+```
+
### Pristine test environments
The code exercised by a single GitLab test may access and modify many items of
@@ -406,7 +467,7 @@ However, if a spec makes direct Redis calls, it should mark itself with the
#### Background jobs / Sidekiq
By default, Sidekiq jobs are enqueued into a jobs array and aren't processed.
-If a test enqueues Sidekiq jobs and need them to be processed, the
+If a test queues Sidekiq jobs and needs them to be processed, the
`:sidekiq_inline` trait can be used.
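+
+As a rough sketch (the spec description and worker name are hypothetical), the
+trait is applied as RSpec metadata:
+
+```ruby
+RSpec.describe 'background processing', :sidekiq_inline do
+  it 'runs enqueued jobs immediately' do
+    # With :sidekiq_inline, this hypothetical worker executes inline instead
+    # of being collected in the jobs array.
+    SomeWorker.perform_async(42)
+  end
+end
+```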
The `:sidekiq_might_not_need_inline` trait was added when [Sidekiq inline mode was
@@ -662,7 +723,7 @@ module Spec
end
```
-Helpers should not change the RSpec config. For instance, the helpers module
+Helpers should not change the RSpec configuration. For instance, the helpers module
described above should not include:
```ruby
@@ -723,9 +784,9 @@ end
This will create a repository containing two files, with default permissions and
the specified content.
-### Config
+### Configuration
-RSpec config files are files that change the RSpec config (i.e.
+RSpec configuration files are files that change the RSpec configuration (i.e.
`RSpec.configure do |config|` blocks). They should be placed under
`spec/support/`.
@@ -744,7 +805,7 @@ RSpec.configure do |config|
end
```
-If a config file only consists of `config.include`, you can add these
+If a configuration file only consists of `config.include`, you can add these
`config.include` directly in `spec/spec_helper.rb`.
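+
+For example, a support file that only contained a single `config.include` could
+instead become (the helper module name here is illustrative):
+
+```ruby
+# spec/spec_helper.rb
+RSpec.configure do |config|
+  # Hypothetical helper module, previously included from its own file
+  # under spec/support/.
+  config.include Spec::Support::Helpers::FakeDateHelpers
+end
+```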
For very generic helpers, consider including them in the `spec/support/rspec.rb`
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 57cfcf34726..7df3cd614c7 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -91,7 +91,7 @@ point of failure and so the screenshot would not be captured at the right moment
All tests expect to be able to log in at the start of the test.
-For an example see: <https://gitlab.com/gitlab-org/gitlab/issues/34736>
+For an example see: <https://gitlab.com/gitlab-org/gitlab/-/issues/34736>
Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-the-ui-in-beforecontext-and-after-hooks)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block.
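+
+A minimal sketch of that guidance (the page object and its method are
+assumptions used for illustration only):
+
+```ruby
+after(:context) do
+  # UI clean-up actions would go here, followed by an explicit sign out so the
+  # next test can log in as expected.
+  Page::Main::Menu.perform(&:sign_out)
+end
+```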
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index 3bd07f17207..ada20cc9dad 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -26,4 +26,4 @@ end
It's also possible to run an entire scenario with a feature flag enabled, without having to edit existing tests or write new ones.
-Please see the [QA readme](https://gitlab.com/gitlab-org/gitlab/tree/master/qa#running-tests-with-a-feature-flag-enabled) for details.
+Please see the [QA README](https://gitlab.com/gitlab-org/gitlab/tree/master/qa#running-tests-with-a-feature-flag-enabled) for details.
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index e6a683e9148..ac051b827d2 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -65,18 +65,18 @@ subgraph "gitlab-qa-mirror pipeline"
end
```
-1. Developer triggers a manual action, that can be found in CE / EE merge
+1. Developer triggers a manual action that can be found in GitLab merge
requests. This starts a chain of pipelines in multiple projects.
1. The script being executed triggers a pipeline in
- [Omnibus GitLab Mirror](https://gitlab.com/gitlab-org/omnibus-gitlab-mirror)
+ [Omnibus GitLab Mirror](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror)
and waits for the resulting status. We call this a _status attribution_.
-1. GitLab packages are being built in the [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab)
+1. GitLab packages are being built in the [Omnibus GitLab Mirror](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror)
pipeline. Packages are then pushed to its Container Registry.
1. When packages are ready, and available in the registry, a final step in the
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) pipeline, triggers a new
+ [Omnibus GitLab Mirror](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror) pipeline, triggers a new
GitLab QA pipeline (those with access can view them at `https://gitlab.com/gitlab-org/gitlab-qa-mirror/pipelines`). It also waits for a resulting status.
1. GitLab QA pulls images from the registry, spins-up containers and runs tests
@@ -84,9 +84,9 @@ subgraph "gitlab-qa-mirror pipeline"
tool.
1. The result of the GitLab QA pipeline is being
- propagated upstream, through Omnibus, back to the CE / EE merge request.
+ propagated upstream, through Omnibus, back to the GitLab merge request.
-Please note, we plan to [add more specific information](https://gitlab.com/gitlab-org/quality/team-tasks/issues/156)
+Please note, we plan to [add more specific information](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/156)
about the tests included in each job/scenario that runs in `gitlab-qa-mirror`.
#### With Pipeline for Merged Results
@@ -132,7 +132,7 @@ as well as these:
| `QA_RSPEC_TAGS` | The RSpec tags to add (no default) |
For now [manual jobs with custom variables will not use the same variable
-when retried](https://gitlab.com/gitlab-org/gitlab/issues/31367), so if you want to run the same test(s) multiple times,
+when retried](https://gitlab.com/gitlab-org/gitlab/-/issues/31367), so if you want to run the same test(s) multiple times,
specify the same variables in each `custom-parallel` job (up to as
many of the 10 available jobs that you want to run).
@@ -155,7 +155,7 @@ See [Review Apps](../review_apps.md) for more details about Review Apps.
If you are not [testing code in a merge request](#testing-code-in-merge-requests),
there are two main options for running the tests. If you simply want to run
-the existing tests against a live GitLab instance or against a pre-built docker image
+the existing tests against a live GitLab instance or against a pre-built Docker image
you can use the [GitLab QA orchestrator](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md). See also [examples
of the test scenarios you can run via the orchestrator](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#examples).
@@ -191,5 +191,5 @@ Continued reading:
You can ask questions in the `#quality` channel on Slack (GitLab internal) or
you can find an issue you would like to work on in
-[the `gitlab` issue tracker](https://gitlab.com/gitlab-org/gitlab/issues?label_name%5B%5D=QA&label_name%5B%5D=test), or
-[the `gitlab-qa` issue tracker](https://gitlab.com/gitlab-org/gitlab-qa/issues?label_name%5B%5D=new+scenario).
+[the `gitlab` issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues?label_name%5B%5D=QA&label_name%5B%5D=test), or
+[the `gitlab-qa` issue tracker](https://gitlab.com/gitlab-org/gitlab-qa/-/issues?label_name%5B%5D=new+scenario).
diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md
index 9d4fa5316c4..d43d88779c7 100644
--- a/doc/development/testing_guide/end_to_end/page_objects.md
+++ b/doc/development/testing_guide/end_to_end/page_objects.md
@@ -89,7 +89,7 @@ end
### Defining Elements
-The `view` DSL method will correspond to the rails View, partial, or vue component that renders the elements.
+The `view` DSL method will correspond to the Rails view, partial, or Vue component that renders the elements.
The `element` DSL method in turn declares an element for which a corresponding
`data-qa-selector=element_name_snaked` data attribute will need to be added to the view file.
@@ -134,7 +134,7 @@ view 'app/views/my/view.html.haml' do
end
```
-To add these elements to the view, you must change the rails View, partial, or vue component by adding a `data-qa-selector` attribute
+To add these elements to the view, you must change the Rails view, partial, or Vue component by adding a `data-qa-selector` attribute
for each element defined.
In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field"` and `data-qa-selector="sign_in_button"`
@@ -149,7 +149,7 @@ In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field
Things to note:
-- The name of the element and the qa_selector must match and be snake_cased
+- The name of the element and the `qa_selector` must match and be snake_cased
- If the element appears on the page unconditionally, add `required: true` to the element. See
[Dynamic element validation](dynamic_element_validation.md)
- You may see `.qa-selector` classes in existing Page Objects. We should prefer the [`data-qa-selector`](#data-qa-selector-vs-qa-selector)
@@ -255,7 +255,7 @@ These steps ensure the sanity selectors check will detect problems properly.
For example, `qa/qa/ee/page/merge_request/show.rb` adds EE-specific methods to `qa/qa/page/merge_request/show.rb` (with
`QA::Page::MergeRequest::Show.prepend_if_ee('QA::EE::Page::MergeRequest::Show')`) and following is how it's implemented
-(only showing the relevant part and refering to the 4 steps described above with inline comments):
+(only showing the relevant part and referring to the 4 steps described above with inline comments):
```ruby
module QA
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index b7c93d205a3..b1a8a14163c 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -9,10 +9,11 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
|-----|-------------|
| `:elasticsearch` | The test requires an Elasticsearch service. It is used by the [instance-level scenario](https://gitlab.com/gitlab-org/gitlab-qa#definitions) [`Test::Integration::Elasticsearch`](https://gitlab.com/gitlab-org/gitlab/-/blob/72b62b51bdf513e2936301cb6c7c91ec27c35b4d/qa/qa/ee/scenario/test/integration/elasticsearch.rb) to include only tests that require Elasticsearch. |
| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test will also include provisioning of at least one Kubernetes cluster to test against. *This tag is often paired with `:orchestrated`.* |
-| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify GitLab's configuration (for example, Staging). |
+| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate Docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify GitLab's configuration (for example, Staging). |
| `:quarantine` | The test has been [quarantined](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests), will run in a separate job that only includes quarantined tests, and is allowed to fail. The test will be skipped in its regular job so that if it fails it will not hold up the pipeline. |
| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/guidelines/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
| `:requires_admin` | The test requires an admin account. Tests with the tag are excluded when run against Canary and Production environments. |
| `:runner` | The test depends on and will set up a GitLab Runner instance, typically to run a pipeline. |
| `:gitaly_ha` | The test will run against a GitLab instance where repositories are stored on redundant Gitaly nodes behind a Praefect node. All nodes are [separate containers](../../../administration/gitaly/praefect.md#requirements-for-configuring-a-gitaly-cluster). Tests that use this tag have a longer setup time since there are three additional containers that need to be started. |
| `:skip_live_env` | The test will be excluded when run against live deployed environments such as Staging, Canary, and Production. |
+| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) will provision the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run. |
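+
+These tags are applied as RSpec metadata on an example or example group, for
+instance (the description and tag combination here are illustrative):
+
+```ruby
+RSpec.describe 'Jira integration', :jira, :orchestrated, :requires_admin do
+  # ...
+end
+```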
diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
index f360226d922..77d820e1686 100644
--- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
+++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
@@ -2,13 +2,13 @@
## Jenkins spec
-The [`jenkins_build_status_spec`](https://gitlab.com/gitlab-org/gitlab/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb) spins up a Jenkins instance in a docker container based on an image stored in the [GitLab-QA container registry](https://gitlab.com/gitlab-org/gitlab-qa/container_registry).
-The docker image it uses is preconfigured with some base data and plugins.
+The [`jenkins_build_status_spec`](https://gitlab.com/gitlab-org/gitlab/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb) spins up a Jenkins instance in a Docker container based on an image stored in the [GitLab-QA container registry](https://gitlab.com/gitlab-org/gitlab-qa/container_registry).
+The Docker image it uses is preconfigured with some base data and plugins.
The test then configures the GitLab plugin in Jenkins with a URL of the GitLab instance that will be used
to run the tests. Unfortunately, the GitLab Jenkins plugin does not accept ports so `http://localhost:3000` would
-not be accepted. Therefore, this requires us to run GitLab on port 80 or inside a docker container.
+not be accepted. Therefore, this requires us to run GitLab on port 80 or inside a Docker container.
-To start a docker container for GitLab based on the nightly image:
+To start a Docker container for GitLab based on the nightly image:
```shell
docker run \
@@ -24,7 +24,7 @@ To run the tests from the `/qa` directory:
CHROME_HEADLESS=false bin/qa Test::Instance::All http://localhost -- qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb
```
-The test will automatically spinup a docker container for Jenkins and tear down once the test completes.
+The test will automatically spin up a Docker container for Jenkins and tear it down once the test completes.
However, if you need to run Jenkins manually outside of the tests, use this command:
@@ -46,5 +46,5 @@ only to prevent it from running in the pipelines for live environments such as S
### Troubleshooting
-If Jenkins docker container exits without providing any information in the logs, try increasing the memory used by
+If the Jenkins Docker container exits without providing any information in the logs, try increasing the memory used by
the Docker Engine.
diff --git a/doc/development/testing_guide/end_to_end/style_guide.md b/doc/development/testing_guide/end_to_end/style_guide.md
index 9c02af12d5d..7e9f097f624 100644
--- a/doc/development/testing_guide/end_to_end/style_guide.md
+++ b/doc/development/testing_guide/end_to_end/style_guide.md
@@ -49,7 +49,7 @@ Notice that in the above example, before clicking the `:operations_environments_
When adding new elements to a page, it's important that we have a uniform element naming convention.
-We follow a simple formula roughly based on hungarian notation.
+We follow a simple formula roughly based on Hungarian notation.
*Formula*: `element :<descriptor>_<type>`
@@ -109,7 +109,7 @@ we use the name of the page object in [snake_case](https://en.wikipedia.org/wiki
(all lowercase, with words separated by an underscore). See good and bad examples below.
While we prefer to follow the standard in most cases, it is also acceptable to
-use common abbreviations (e.g., mr) or other alternatives, as long as
+use common abbreviations (e.g., `mr`) or other alternatives, as long as
the name is not ambiguous. This can include appending `_page` if it helps to
avoid confusion or make the code more readable. For example, if a page object is
named `New`, it could be confusing to name the block argument `new` because that
@@ -123,7 +123,7 @@ Capybara DSL, potentially leading to confusion and bugs.
**Good**
```ruby
-Page::Project::Settings::Members.perform do |members|
+Page::Project::Members.perform do |members|
members.do_something
end
```
@@ -149,7 +149,7 @@ end
**Bad**
```ruby
-Page::Project::Settings::Members.perform do |project_settings_members_page|
+Page::Project::Members.perform do |project_settings_members_page|
project_settings_members_page.do_something
end
```
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index 6c1b06ce59a..7aed908c4f6 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -49,20 +49,20 @@ examples in a JSON report file on `master` (`retrieve-tests-metadata` and
This was originally implemented in: <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/13021>.
-If you want to enable retries locally, you can use the `RETRIES` env variable.
+If you want to enable retries locally, you can use the `RETRIES` environment variable.
For instance `RETRIES=1 bin/rspec ...` would retry the failing examples once.
## Problems we had in the past at GitLab
-- [`rspec-retry` is biting us when some API specs fail](https://gitlab.com/gitlab-org/gitlab-foss/issues/29242): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9825>
-- [Sporadic RSpec failures due to `PG::UniqueViolation`](https://gitlab.com/gitlab-org/gitlab-foss/issues/28307#note_24958837): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9846>
+- [`rspec-retry` is biting us when some API specs fail](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/29242): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9825>
+- [Sporadic RSpec failures due to `PG::UniqueViolation`](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/28307#note_24958837): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9846>
- Follow-up: <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10688>
- - [Capybara.reset_session! should be called before requests are blocked](https://gitlab.com/gitlab-org/gitlab-foss/issues/33779): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12224>
+ - [Capybara.reset_session! should be called before requests are blocked](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/33779): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12224>
- FFaker generates funky data that tests are not ready to handle (and tests should be predictable so that's bad!):
- - [Make `spec/mailers/notify_spec.rb` more robust](https://gitlab.com/gitlab-org/gitlab-foss/issues/20121): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10015>
- - [Transient failure in `spec/requests/api/commits_spec.rb`](https://gitlab.com/gitlab-org/gitlab-foss/issues/27988#note_25342521): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9944>
- - [Replace FFaker factory data with sequences](https://gitlab.com/gitlab-org/gitlab-foss/issues/29643): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10184>
- - [Transient failure in spec/finders/issues_finder_spec.rb](https://gitlab.com/gitlab-org/gitlab-foss/issues/30211#note_26707685): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10404>
+ - [Make `spec/mailers/notify_spec.rb` more robust](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/20121): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10015>
+ - [Transient failure in `spec/requests/api/commits_spec.rb`](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/27988#note_25342521): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9944>
+ - [Replace FFaker factory data with sequences](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/29643): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10184>
+ - [Transient failure in spec/finders/issues_finder_spec.rb](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/30211#note_26707685): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10404>
### Time-sensitive flaky tests
@@ -75,24 +75,24 @@ For instance `RETRIES=1 bin/rspec ...` would retry the failing examples once.
### Feature tests
-- [Be sure to create all the data the test need before starting exercise](https://gitlab.com/gitlab-org/gitlab-foss/issues/32622#note_31128195): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12059>
-- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/issues/34609#note_34048715): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12604>
-- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/issues/34698#note_34276286): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12664>
-- [Assert against the underlying database state instead of against a page's content](https://gitlab.com/gitlab-org/gitlab-foss/issues/31437): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10934>
-- In JS tests, shifting elements can cause Capybara to misclick when the element moves at the exact time Capybara sends the click
+- [Be sure to create all the data the test need before starting exercise](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/32622#note_31128195): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12059>
+- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34609#note_34048715): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12604>
+- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34698#note_34276286): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12664>
+- [Assert against the underlying database state instead of against a page's content](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31437): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10934>
+- In JS tests, shifting elements can cause Capybara to mis-click when the element moves at the exact time Capybara sends the click
- [Dropdowns rendering upward or downward due to window size and scroll position](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17660)
- - [Lazy loaded images can cause Capybara to misclick](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18713)
+ - [Lazy loaded images can cause Capybara to mis-click](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18713)
- [Triggering JS events before the event handlers are set up](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18742)
-- [Wait for the image to be lazy-loaded when asserting on a Markdown image's src attribute](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25408)
+- [Wait for the image to be lazy-loaded when asserting on a Markdown image's `src` attribute](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25408)
#### Capybara viewport size related issues
-- [Transient failure of spec/features/issues/filtered_search/filter_issues_spec.rb](https://gitlab.com/gitlab-org/gitlab-foss/issues/29241#note_26743936): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10411>
+- [Transient failure of spec/features/issues/filtered_search/filter_issues_spec.rb](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/29241#note_26743936): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10411>
#### Capybara JS driver related issues
-- [Don't wait for AJAX when no AJAX request is fired](https://gitlab.com/gitlab-org/gitlab-foss/issues/30461): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10454>
-- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/issues/34647): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12626>
+- [Don't wait for AJAX when no AJAX request is fired](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/30461): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10454>
+- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34647): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12626>
#### PhantomJS / WebKit related issues
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 52d538c7159..37e1066e7aa 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -26,7 +26,7 @@ Jest tests can be found in `/spec/frontend` and `/ee/spec/frontend` in EE.
> **Note:**
>
-> Most examples have a Jest and Karma example. See the Karma examples only as explanation to what's going on in the code, should you stumble over some usescases during your discovery. The Jest examples are the one you should follow.
+> Most examples have a Jest and Karma example. See the Karma examples only as an explanation of what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the ones you should follow.
## Karma test suite
@@ -61,7 +61,7 @@ which could arise (especially with testing against browser specific features).
- Jest runs in a Node.js environment, not in a browser. Support for running Jest tests in a browser [is planned](https://gitlab.com/gitlab-org/gitlab/-/issues/26982).
- Because Jest runs in a Node.js environment, it uses [jsdom](https://github.com/jsdom/jsdom) by default. See also its [limitations](#limitations-of-jsdom) below.
- Jest does not have access to Webpack loaders or aliases.
- The aliases used by Jest are defined in its [own config](https://gitlab.com/gitlab-org/gitlab/blob/master/jest.config.js).
+ The aliases used by Jest are defined in its [own configuration](https://gitlab.com/gitlab-org/gitlab/blob/master/jest.config.js).
- All calls to `setTimeout` and `setInterval` are mocked away. See also [Jest Timer Mocks](https://jestjs.io/docs/en/timer-mocks).
- `rewire` is not required because Jest supports mocking modules. See also [Manual Mocks](https://jestjs.io/docs/en/manual-mocks).
- No [context object](https://jasmine.github.io/tutorials/your_first_suite#section-The_%3Ccode%3Ethis%3C/code%3E_keyword) is passed to tests in Jest.
@@ -200,15 +200,15 @@ For example, it's better to use the generated markup to trigger a button click a
## Common practices
-Following you'll find some general common practices you will find as part of our testsuite. Should you stumble over something not following this guide, ideally fix it right away. 🎉
+Below are some common practices you will find as part of our test suite. Should you stumble over something not following this guide, ideally fix it right away. 🎉
### How to query DOM elements
-When it comes to querying DOM elements in your tests, it is best to uniquely target the element, without adding additional attributes specifically for testing purposes. Sometimes this cannot be done feasibly. In these cases, adding test attributes to simplify the selectors might be the best option.
+When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target the element. Sometimes this is not feasible. In these cases, adding test attributes to simplify the selectors might be the best option.
Preferentially, in component testing with `@vue/test-utils`, you should query for child components using the component itself. This helps enforce that specific behavior can be covered by that component's individual unit tests. Otherwise, try to use:
-- A behavioral attribute like `name` (also verifies that `name` was setup properly)
+- A semantic attribute like `name` (also verifies that `name` was set up properly)
- A `data-testid` attribute ([recommended by maintainers of `@vue/test-utils`](https://github.com/vuejs/vue-test-utils/issues/1498#issuecomment-610133465))
- a Vue `ref` (if using `@vue/test-utils`)
@@ -216,11 +216,17 @@ Examples:
```javascript
it('exists', () => {
+ // Good
wrapper.find(FooComponent);
wrapper.find('input[name=foo]');
wrapper.find('[data-testid="foo"]');
wrapper.find({ ref: 'foo'});
+
+ // Bad
wrapper.find('.js-foo');
+ wrapper.find('.btn-primary');
+ wrapper.find('.qa-foo-component');
+ wrapper.find('[data-qa-selector="foo"]');
});
```
@@ -556,7 +562,7 @@ The more challenging part are mocks, which can be used for functions or even dep
### Manual module mocks
Manual mocks are used to mock modules across the entire Jest environment. This is a very powerful testing tool that helps simplify
-unit testing by mocking out modules which cannot be easily consumned in our test environment.
+unit testing by mocking out modules which cannot be easily consumed in our test environment.
> **WARNING:** Do not use manual mocks if a mock should not be consistently applied in every spec (i.e. it's only needed by a few specs).
> Instead, consider using [`jest.mock(..)`](https://jestjs.io/docs/en/jest-object#jestmockmodulename-factory-options)
@@ -582,10 +588,10 @@ If a manual mock is needed for a CE module, please place it in `spec/frontend/mo
- [`mocks/axios_utils`](https://gitlab.com/gitlab-org/gitlab/blob/bd20aeb64c4eed117831556c54b40ff4aee9bfd1/spec/frontend/mocks/ce/lib/utils/axios_utils.js#L1) -
This mock is helpful because we don't want any unmocked requests to pass any tests. Also, we are able to inject some test helpers such as `axios.waitForAll`.
- [`__mocks__/mousetrap/index.js`](https://gitlab.com/gitlab-org/gitlab/blob/cd4c086d894226445be9d18294a060ba46572435/spec/frontend/__mocks__/mousetrap/index.js#L1) -
- This mock is helpful because the module itself uses amd format which webpack understands, but is incompatible with the jest environment. This mock doesn't remove
+ This mock is helpful because the module itself uses the AMD format, which webpack understands but which is incompatible with the Jest environment. This mock doesn't remove
any behavior, only provides a nice es6 compatible wrapper.
- [`__mocks__/monaco-editor/index.js`](https://gitlab.com/gitlab-org/gitlab/blob/b7f914cddec9fc5971238cdf12766e79fa1629d7/spec/frontend/__mocks__/monaco-editor/index.js) -
- This mock is helpful because the monaco package is completely incompatible in a Jest environment. In fact, webpack requires a special loader to make it work. This mock
+ This mock is helpful because the Monaco package is completely incompatible in a Jest environment. In fact, webpack requires a special loader to make it work. This mock
simply makes this package consumable by Jest.
### Keep mocks light
@@ -611,7 +617,7 @@ As long as the fixtures don't change, `yarn test` is sufficient (and saves you s
### Live testing and focused testing -- Jest
-While you work on a testsuite, you may want to run these specs in watch mode, so they rerun automatically on every save.
+While you work on a test suite, you may want to run these specs in watch mode, so they rerun automatically on every save.
```shell
# Watch and rerun all specs matching the name icon
@@ -801,9 +807,9 @@ Tests relevant for frontend development can be found at the following places:
RSpec runs complete [feature tests](testing_levels.md#frontend-feature-tests), while the Jest and Karma directories contain [frontend unit tests](testing_levels.md#frontend-unit-tests), [frontend component tests](testing_levels.md#frontend-component-tests), and [frontend integration tests](testing_levels.md#frontend-integration-tests).
-All tests in `spec/javascripts/` will eventually be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-foss/issues/52483)).
+All tests in `spec/javascripts/` will eventually be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52483)).
-Before May 2018, `features/` also contained feature tests run by Spinach. These tests were removed from the codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-foss/issues/23036)).
+Before May 2018, `features/` also contained feature tests run by Spinach. These tests were removed from the codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/23036)).
See also [Notes on testing Vue components](../fe_guide/vue.md#testing-vue-components).
@@ -830,11 +836,11 @@ testAction(
);
```
-Check an example in [spec/javascripts/ide/stores/actions_spec.jsspec/javascripts/ide/stores/actions_spec.js](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/javascripts/ide/stores/actions_spec.js).
+Check an example in [`spec/javascripts/ide/stores/actions_spec.js`](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/javascripts/ide/stores/actions_spec.js).
-### Wait until axios requests finish
+### Wait until Axios requests finish
-The axios utils mock module located in `spec/frontend/mocks/ce/lib/utils/axios_utils.js` contains two helper methods for Jest tests that spawn HTTP requests.
+The Axios Utils mock module located in `spec/frontend/mocks/ce/lib/utils/axios_utils.js` contains two helper methods for Jest tests that spawn HTTP requests.
These are very useful if you don't have a handle to the request's Promise, for example when a Vue component does a request as part of its life cycle.
- `waitFor(url, callback)`: Runs `callback` after a request to `url` finishes (either successfully or unsuccessfully).
@@ -844,11 +850,11 @@ Both functions run `callback` on the next tick after the requests finish (using
## Testing with older browsers
-Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or Browserstack using the following steps:
+Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or BrowserStack using the following steps:
-### Browserstack
+### BrowserStack
-[Browserstack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers.
+[BrowserStack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers.
You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access.
You can find the credentials on 1Password, under `frontendteam@gitlab.com`.
@@ -860,7 +866,7 @@ You can download any older version of Firefox from the releases FTP server, <htt
1. From the website, select a version, in this case `50.0.1`.
1. Go to the mac folder.
-1. Select your preferred language, you will find the dmg package inside, download it.
+1. Select your preferred language; you will find the DMG package inside. Download it.
1. Drag and drop the application to any other folder but the `Applications` folder.
1. Rename the application to something like `Firefox_Old`.
1. Move the application to the `Applications` folder.
diff --git a/doc/development/testing_guide/index.md b/doc/development/testing_guide/index.md
index f0fb06910f8..0d470e0e737 100644
--- a/doc/development/testing_guide/index.md
+++ b/doc/development/testing_guide/index.md
@@ -20,7 +20,7 @@ Following are two great articles that everyone should read to understand what
automated testing means, and what are its principles:
- [Five Factor Testing](https://madeintandem.com/blog/five-factor-testing/): Why do we need tests?
-- [Principles of Automated Testing](http://www.lihaoyi.com/post/PrinciplesofAutomatedTesting.html): Levels of testing. Prioritize tests. Cost of tests.
+- [Principles of Automated Testing](https://www.lihaoyi.com/post/PrinciplesofAutomatedTesting.html): Levels of testing. Prioritize tests. Cost of tests.
## [Testing levels](testing_levels.md)
@@ -58,7 +58,7 @@ Everything you should know about how to test Rake tasks.
## [End-to-end tests](end_to_end/index.md)
Everything you should know about how to run end-to-end tests using
-[GitLab QA](ttps://gitlab.com/gitlab-org/gitlab-qa) testing framework.
+[GitLab QA](https://gitlab.com/gitlab-org/gitlab-qa) testing framework.
## [Migrations tests](testing_migrations_guide.md)
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 58acf937d77..3d7aea89e73 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -55,8 +55,8 @@ subgraph "CNG-mirror pipeline"
each component (e.g. `gitlab-rails-ee`, `gitlab-shell`, `gitaly` etc.)
based on the commit from the [GitLab pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) and stores
them in its [registry](https://gitlab.com/gitlab-org/build/CNG-mirror/container_registry).
- - We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG`, (**C**loud
- **N**ative **G**itLab), project's registry is not overloaded with a
+ - We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG` (Cloud
+ Native GitLab) project's registry is not overloaded with a
lot of transient Docker images.
- Note that the official CNG images are built by the `cloud-native-image`
job, which runs only for tags, and triggers itself a [`CNG`](https://gitlab.com/gitlab-org/build/CNG) pipeline.
@@ -139,8 +139,8 @@ browser performance testing using a
The `review-apps-ee` and `review-apps-ce` clusters are currently set up with
the following node pools:
-- `review-apps-ee` of preemptible `e2-highcpu-16` (16 vCPU, 16 GB memory) nodes with autoscaling
-- `review-apps-ce` of preemptible `n1-standard-8` (8 vCPU, 16 GB memory) nodes with autoscaling
+- `review-apps-ee` of pre-emptible `e2-highcpu-16` (16 vCPU, 16 GB memory) nodes with autoscaling
+- `review-apps-ce` of pre-emptible `n1-standard-8` (8 vCPU, 16 GB memory) nodes with autoscaling
### Helm
@@ -152,7 +152,7 @@ used by the `review-deploy` and `review-stop` jobs.
### Get access to the GCP Review Apps cluster
-You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/issues/new)
+You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new)
for the `gcp-review-apps-sg` GCP group. In order to join a group, you must specify the desired GCP role in your access request.
The role is what will grant you specific permissions in order to engage with Review App containers.
@@ -212,7 +212,7 @@ If [Review App Stability](https://app.periscopedata.com/app/gitlab/496118/Engine
dips this may be a signal that the `review-apps-ce/ee` cluster is unhealthy.
Leading indicators may be health check failures leading to restarts or majority failure for Review App deployments.
-The [Review Apps Overview dashboard](https://app.google.stackdriver.com/dashboards/6798952013815386466?project=gitlab-review-apps&timeDomain=1d)
+The [Review Apps Overview dashboard](https://console.cloud.google.com/monitoring/classic/dashboards/6798952013815386466?project=gitlab-review-apps&timeDomain=1d)
aids in identifying load spikes on the cluster, and if nodes are problematic or the entire cluster is trending towards unhealthy.
### Release failed with `ImagePullBackOff`
@@ -278,14 +278,14 @@ kubectl top pods | sort --key 2 --numeric
**Potential cause:**
-This could be a sign that there are too many stale secrets and/or config maps.
+This could be a sign that there are too many stale secrets and/or configuration maps.
**Where to look for further debugging:**
Look at [the list of Configurations](https://console.cloud.google.com/kubernetes/config?project=gitlab-review-apps)
or `kubectl get secret,cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-'`.
-Any secrets or config maps older than 5 days are suspect and should be deleted.
+Any secrets or configuration maps older than 5 days are suspect and should be deleted.
**Useful commands:**
@@ -321,7 +321,7 @@ kubectl get cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-' | grep
#### Finding the problem
-[In the past](https://gitlab.com/gitlab-org/gitlab-foss/issues/62834), it happened
+[In the past](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/62834), it happened
that the `dns-gitlab-review-app-external-dns` Deployment was in a pending state,
effectively preventing all the Review Apps from getting a DNS record assigned,
making them unreachable via domain name.
@@ -354,7 +354,7 @@ For the record, the debugging steps to find out this issue were:
1. Web search for exact error message, following rabbit hole to [a relevant Kubernetes bug report](https://github.com/kubernetes/kubernetes/issues/57345)
1. Access the node over SSH via the GCP console (**Computer Engine > VM
instances** then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs)
-1. In the node: `systemctl --version` => systemd 232
+1. In the node: `systemctl --version` => `systemd 232`
1. Gather some more information:
- `mount | grep kube | wc -l` => e.g. 290
- `systemctl list-units --all | grep -i var-lib-kube | wc -l` => e.g. 142
@@ -406,7 +406,7 @@ find a way to limit it to only us.**
## Other resources
- [Review Apps integration for CE/EE (presentation)](https://docs.google.com/presentation/d/1QPLr6FO4LduROU8pQIPkX1yfGvD13GEJIBOenqoKxR8/edit?usp=sharing)
-- [Stability issues](https://gitlab.com/gitlab-org/quality/team-tasks/issues/212)
+- [Stability issues](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/212)
### Helpful command line tools
diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md
index 9285a910ecf..2210ec94696 100644
--- a/doc/development/testing_guide/testing_levels.md
+++ b/doc/development/testing_guide/testing_levels.md
@@ -295,7 +295,7 @@ graph RL
- **DOM**:
Testing on the real DOM ensures your components work in the intended environment.
- Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/team-tasks/issues/45).
+ Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/45).
- **Properties or state of components**:
On this level, all tests can only perform actions a user would do.
For example: to change the state of a component, a click event would be fired.
@@ -366,7 +366,7 @@ See also:
- The [RSpec testing guidelines](../testing_guide/best_practices.md#rspec).
- System / Feature tests in the [Testing Best Practices](best_practices.md#system--feature-tests).
-- [Issue #26159](https://gitlab.com/gitlab-org/gitlab/issues/26159) which aims at combining those guidelines with this page.
+- [Issue #26159](https://gitlab.com/gitlab-org/gitlab/-/issues/26159) which aims at combining those guidelines with this page.
```mermaid
graph RL
diff --git a/doc/development/understanding_explain_plans.md b/doc/development/understanding_explain_plans.md
index 8c71a27540d..797dfc676eb 100644
--- a/doc/development/understanding_explain_plans.md
+++ b/doc/development/understanding_explain_plans.md
@@ -198,8 +198,7 @@ There are quite a few different types of nodes, so we only cover some of the
more common ones here.
A full list of all the available nodes and their descriptions can be found in
-the [PostgreSQL source file
-"plannodes.h"](https://gitlab.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h)
+the [PostgreSQL source file `plannodes.h`](https://gitlab.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h)
### Seq Scan
@@ -315,25 +314,30 @@ the `Indexes:` section:
```sql
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
- "users_confirmation_token_key" UNIQUE CONSTRAINT, btree (confirmation_token)
- "users_email_key" UNIQUE CONSTRAINT, btree (email)
- "users_reset_password_token_key" UNIQUE CONSTRAINT, btree (reset_password_token)
- "index_on_users_lower_email" btree (lower(email::text))
- "index_on_users_lower_username" btree (lower(username::text))
+ "index_users_on_confirmation_token" UNIQUE, btree (confirmation_token)
+ "index_users_on_email" UNIQUE, btree (email)
+ "index_users_on_reset_password_token" UNIQUE, btree (reset_password_token)
+ "index_users_on_static_object_token" UNIQUE, btree (static_object_token)
+ "index_users_on_unlock_token" UNIQUE, btree (unlock_token)
"index_on_users_name_lower" btree (lower(name::text))
+ "index_users_on_accepted_term_id" btree (accepted_term_id)
"index_users_on_admin" btree (admin)
"index_users_on_created_at" btree (created_at)
"index_users_on_email_trigram" gin (email gin_trgm_ops)
"index_users_on_feed_token" btree (feed_token)
- "index_users_on_ghost" btree (ghost)
+ "index_users_on_group_view" btree (group_view)
"index_users_on_incoming_email_token" btree (incoming_email_token)
+ "index_users_on_managing_group_id" btree (managing_group_id)
"index_users_on_name" btree (name)
"index_users_on_name_trigram" gin (name gin_trgm_ops)
+ "index_users_on_public_email" btree (public_email) WHERE public_email::text <> ''::text
"index_users_on_state" btree (state)
- "index_users_on_state_and_internal_attrs" btree (state) WHERE ghost <> true AND support_bot <> true
- "index_users_on_support_bot" btree (support_bot)
+ "index_users_on_state_and_user_type" btree (state, user_type)
+ "index_users_on_unconfirmed_email" btree (unconfirmed_email) WHERE unconfirmed_email IS NOT NULL
+ "index_users_on_user_type" btree (user_type)
"index_users_on_username" btree (username)
"index_users_on_username_trigram" gin (username gin_trgm_ops)
+ "tmp_idx_on_user_id_where_bio_is_filled" btree (id) WHERE COALESCE(bio, ''::character varying)::text IS DISTINCT FROM ''::text
```
Here we can see there is no index on the `twitter` column, which means
@@ -652,7 +656,7 @@ different queries. The only _rule_ is that you _must always measure_ your query
and related tools such as:
- [`explain.depesz.com`](https://explain.depesz.com/).
-- [Pev](http://tatiyants.com/postgres-query-plan-visualization/).
+- [`explain.dalibo.com/`](https://explain.dalibo.com/).
## Producing query plans
@@ -681,11 +685,11 @@ Planning time: 0.411 ms
Execution time: 0.113 ms
```
-### Chatops
+### ChatOps
-[GitLab employees can also use our chatops solution, available in Slack using the
+[GitLab employees can also use our ChatOps solution, available in Slack using the
`/chatops` slash command](chatops_on_gitlabcom.md).
-You can use chatops to get a query plan by running the following:
+You can use ChatOps to get a query plan by running the following:
```sql
/chatops run explain SELECT COUNT(*) FROM projects WHERE visibility_level IN (0, 20)
@@ -714,7 +718,7 @@ with their own clone of the production database.
Joe is available in the
[`#database-lab`](https://gitlab.slack.com/archives/CLJMDRD8C) channel on Slack.
-Unlike chatops, it gives you a way to execute DDL statements (like creating indexes and tables) and get query plan not only for `SELECT` but also `UPDATE` and `DELETE`.
+Unlike ChatOps, it gives you a way to execute DDL statements (like creating indexes and tables) and get a query plan not only for `SELECT` but also for `UPDATE` and `DELETE`.
For example, in order to test new index you can do the following:
diff --git a/doc/development/uploads.md b/doc/development/uploads.md
index 1dd6ab5496a..4693c93e3d0 100644
--- a/doc/development/uploads.md
+++ b/doc/development/uploads.md
@@ -92,7 +92,7 @@ We can identify three major use-cases for an upload:
1. **storage:** if we are uploading for storing a file (i.e. artifacts, packages, discussion attachments). In this case [direct upload](#direct-upload) is the proper level as it's the less resource-intensive operation. Additional information can be found on [File Storage in GitLab](file_storage.md).
1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
-1. **Sidekiq/asynchronous processing:** Async processing must implement [direct upload](#direct-upload), the reason being that it's the only way to support Cloud Native deployments without a shared NFS.
+1. **Sidekiq/asynchronous processing:** Asynchronous processing must implement [direct upload](#direct-upload), because it's the only way to support Cloud Native deployments without a shared NFS.
For more details about currently broken feature see [epic &1802](https://gitlab.com/groups/gitlab-org/-/epics/1802).
@@ -114,7 +114,7 @@ We have three kinds of file encoding in our uploads:
1. <i class="fa fa-check-circle"></i> **multipart**: `multipart/form-data` is the most common, a file is encoded as a part of a multipart encoded request.
1. <i class="fa fa-check-circle"></i> **body**: some APIs uploads files as the whole request body.
-1. <i class="fa fa-times-circle"></i> **JSON**: some JSON API uploads files as base64 encoded strings. This will require a change to GitLab Workhorse, which [is planned](https://gitlab.com/gitlab-org/gitlab-workhorse/issues/226).
+1. <i class="fa fa-times-circle"></i> **JSON**: some JSON API uploads files as base64 encoded strings. This will require a change to GitLab Workhorse, which [is planned](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/226).
## Uploading technologies
@@ -128,7 +128,7 @@ This is the default kind of upload, and it's most expensive in terms of resource
In this case, workhorse is unaware of files being uploaded and acts as a regular proxy.
-When a multipart request reaches the rails application, `Rack::Multipart` leaves behind tempfiles in `/tmp` and uses valuable Ruby process time to copy files around.
+When a multipart request reaches the Rails application, `Rack::Multipart` leaves behind temporary files in `/tmp` and uses valuable Ruby process time to copy files around.
```mermaid
sequenceDiagram
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index dfdc5c66114..52151d37038 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -49,7 +49,7 @@ Refer to: <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/mer
## `Override`
-Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/override.rb>:
+Refer to [`override.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/override.rb):
- This utility can help you check if one method would override
another or not. It is the same concept as Java's `@Override` annotation
@@ -153,7 +153,7 @@ Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/stro
## `RequestCache`
-Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/cache/request_cache.rb>.
+Refer to [`request_cache.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/cache/request_cache.rb).
This module provides a simple way to cache values in RequestStore,
and the cache key would be based on the class name, method name,
diff --git a/doc/development/value_stream_analytics.md b/doc/development/value_stream_analytics.md
index 69b07eb7c86..70b16cc1739 100644
--- a/doc/development/value_stream_analytics.md
+++ b/doc/development/value_stream_analytics.md
@@ -183,7 +183,7 @@ Currently supported parents:
### Default stages
-The [original implementation](https://gitlab.com/gitlab-org/gitlab/issues/847) of value stream analytics defined 7 stages. These stages are always available for each parent, however altering these stages is not possible.
+The [original implementation](https://gitlab.com/gitlab-org/gitlab/-/issues/847) of value stream analytics defined 7 stages. These stages are always available for each parent; however, altering these stages is not possible.
To make things efficient and reduce the number of records created, the default stages are expressed as in-memory objects (not persisted). When the user creates a custom stage for the first time, all the stages will be persisted. This behavior is implemented in the value stream analytics service objects.
diff --git a/doc/development/what_requires_downtime.md b/doc/development/what_requires_downtime.md
index 8ea9f70fc7a..407899b23d5 100644
--- a/doc/development/what_requires_downtime.md
+++ b/doc/development/what_requires_downtime.md
@@ -130,12 +130,12 @@ class CleanupUsersUpdatedAtRename < ActiveRecord::Migration[4.2]
end
```
-NOTE: **Note:** If you're renaming a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/migration_helpers.rb#L9), please carefully consider the state when the first migration has run but the second cleanup migration hasn't been run yet.
+NOTE: **Note:** If you're renaming a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3), please carefully consider the state when the first migration has run but the second cleanup migration hasn't been run yet.
With [Canary](https://about.gitlab.com/handbook/engineering/infrastructure/library/canary/) it is possible that the system runs in this state for a significant amount of time.
## Changing Column Constraints
-Adding or removing a NOT NULL clause (or another constraint) can typically be
+Adding or removing a `NOT NULL` clause (or another constraint) can typically be
done without requiring downtime. However, this does require that any application
changes are deployed _first_. Thus, changing the constraints of a column should
happen in a post-deployment migration.
@@ -143,35 +143,11 @@ happen in a post-deployment migration.
NOTE: Avoid using `change_column` as it produces an inefficient query because it re-defines
the whole column type.
-To add a NOT NULL constraint, use the `add_not_null_constraint` migration helper:
+You can check the following guides for each specific use case:
-```ruby
-# A post-deployment migration in db/post_migrate
-class AddNotNull < ActiveRecord::Migration[4.2]
- include Gitlab::Database::MigrationHelpers
-
- disable_ddl_transaction!
-
- def up
- add_not_null_constraint :users, :username
- end
-
- def down
- remove_not_null_constraint :users, :username
- end
-end
-```
-
-If the column to be updated requires cleaning first (e.g. there are `NULL` values), you should:
-
-1. Add the `NOT NULL` constraint with `validate: false`
-
- `add_not_null_constraint :users, :username, validate: false`
-
-1. Clean up the data with a data migration
-1. Validate the `NOT NULL` constraint with a followup migration
-
- `validate_not_null_constraint :users, :username`
+- [Adding foreign-key constraints](migration_style_guide.md#adding-foreign-key-constraints)
+- [Adding `NOT NULL` constraints](database/not_null_constraints.md)
+- [Adding limits to text columns](database/strings_and_the_text_data_type.md)
## Changing Column Types
diff --git a/doc/development/windows.md b/doc/development/windows.md
index b5309002222..c92a468fad3 100644
--- a/doc/development/windows.md
+++ b/doc/development/windows.md
@@ -47,7 +47,7 @@ There is a chance that your Google Cloud group may already have an image
built. Search the available images before you do the work to build your
own.
-Build a Google Cloud image with the above shared runners repo by doing the following:
+Build a Google Cloud image with the above shared runners repository by doing the following:
1. Install [Packer](https://www.packer.io/) (tested to work with version 1.5.1).
1. Install Packer Windows Update Provisioner.
@@ -55,7 +55,7 @@ Build a Google Cloud image with the above shared runners repo by doing the follo
1. Run the command `go build -o packer-provisioner-windows-update` (requires `go` to be installed).
1. Verify `packer-provisioner-windows-update` is in the `PATH` environment variable.
1. Add all [required environment variables](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/-/blob/master/packer.json#L2-10)
- in the `packer.json` file to your environment (perhaps use [direnv](https://direnv.net/)).
+ in the `packer.json` file to your environment (perhaps use [`direnv`](https://direnv.net/)).
1. Build the image by running the command: `packer build packer.json`.
## How to use a Windows image in GCP
@@ -96,7 +96,7 @@ Here are a few tips on GCP and Windows.
### GCP cost savings
To minimise the cost of your GCP VM instance, stop it when you're not using it.
-If you do, you'll need to redownload the RDP file from the console as the IP
+If you do, you'll need to re-download the RDP file from the console as the IP
address changes every time you stop and start it.
### chocolatey
@@ -119,13 +119,13 @@ You can install .NET version 3 support with the following `DISM` command:
`DISM /Online /Enable-Feature /FeatureName:NetFx3 /All`
-### nix -> Windows cmd tips
+### nix -> Windows `cmd` tips
-The first tip for using the Windows command shell is to open Powershell and use that instead.
+The first tip for using the Windows command shell is to open PowerShell and use that instead.
-Start Powershell: `start powershell`.
+Start PowerShell: `start powershell`.
-Powershell has aliases for all of the following commands so you don't have to learn the native commands:
+PowerShell has aliases for all of the following commands so you don't have to learn the native commands:
- `ls` ---> `dir`
- `rm` ---> `del`