Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/application_limits.md | 3
-rw-r--r--  doc/development/architecture.md | 4
-rw-r--r--  doc/development/avoiding_downtime_in_migrations.md | 97
-rw-r--r--  doc/development/background_migrations.md | 99
-rw-r--r--  doc/development/cascading_settings.md | 2
-rw-r--r--  doc/development/cicd/index.md | 13
-rw-r--r--  doc/development/code_review.md | 13
-rw-r--r--  doc/development/database/efficient_in_operator_queries.md | 81
-rw-r--r--  doc/development/database/loose_foreign_keys.md | 33
-rw-r--r--  doc/development/database_review.md | 12
-rw-r--r--  doc/development/documentation/feature_flags.md | 3
-rw-r--r--  doc/development/documentation/redirects.md | 1
-rw-r--r--  doc/development/documentation/restful_api_styleguide.md | 2
-rw-r--r--  doc/development/documentation/styleguide/index.md | 102
-rw-r--r--  doc/development/documentation/styleguide/word_list.md | 32
-rw-r--r--  doc/development/documentation/testing.md | 11
-rw-r--r--  doc/development/ee_features.md | 28
-rw-r--r--  doc/development/emails.md | 38
-rw-r--r--  doc/development/event_store.md | 292
-rw-r--r--  doc/development/experiment_guide/experimentation.md | 403
-rw-r--r--  doc/development/experiment_guide/gitlab_experiment.md | 2
-rw-r--r--  doc/development/experiment_guide/index.md | 5
-rw-r--r--  doc/development/fe_guide/style/javascript.md | 18
-rw-r--r--  doc/development/fe_guide/vue3_migration.md | 26
-rw-r--r--  doc/development/feature_flags/controls.md | 15
-rw-r--r--  doc/development/feature_flags/index.md | 1
-rw-r--r--  doc/development/features_inside_dot_gitlab.md | 2
-rw-r--r--  doc/development/geo.md | 128
-rw-r--r--  doc/development/geo/framework.md | 8
-rw-r--r--  doc/development/gitaly.md | 11
-rw-r--r--  doc/development/import_project.md | 2
-rw-r--r--  doc/development/index.md | 4
-rw-r--r--  doc/development/integrations/jenkins.md | 4
-rw-r--r--  doc/development/integrations/jira_connect.md | 3
-rw-r--r--  doc/development/integrations/secure.md | 78
-rw-r--r--  doc/development/internal_api/index.md | 32
-rw-r--r--  doc/development/licensing.md | 2
-rw-r--r--  doc/development/merge_request_performance_guidelines.md | 4
-rw-r--r--  doc/development/migration_style_guide.md | 29
-rw-r--r--  doc/development/permissions.md | 4
-rw-r--r--  doc/development/pipelines.md | 373
-rw-r--r--  doc/development/policies.md | 2
-rw-r--r--  doc/development/profiling.md | 14
-rw-r--r--  doc/development/redis/new_redis_instance.md | 125
-rw-r--r--  doc/development/secure_coding_guidelines.md | 351
-rw-r--r--  doc/development/service_ping/index.md | 7
-rw-r--r--  doc/development/service_ping/metrics_instrumentation.md | 6
-rw-r--r--  doc/development/service_ping/performance_indicator_metrics.md | 17
-rw-r--r--  doc/development/sidekiq_style_guide.md | 19
-rw-r--r--  doc/development/snowplow/dictionary.md | 4
-rw-r--r--  doc/development/snowplow/implementation.md | 1
-rw-r--r--  doc/development/snowplow/schemas.md | 4
-rw-r--r--  doc/development/testing_guide/best_practices.md | 5
-rw-r--r--  doc/development/testing_guide/ci.md | 9
-rw-r--r--  doc/development/testing_guide/end_to_end/feature_flags.md | 26
-rw-r--r--  doc/development/testing_guide/end_to_end/index.md | 29
-rw-r--r--  doc/development/testing_guide/end_to_end/rspec_metadata_tests.md | 4
-rw-r--r--  doc/development/testing_guide/review_apps.md | 15
-rw-r--r--  doc/development/wikis.md | 3
59 files changed, 1524 insertions, 1137 deletions
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index 2075e7cda3c..15d21883bb8 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -141,6 +141,8 @@ end
### Subscription Plans
+> The `opensource` plan was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/346399) in GitLab 14.7.
+
Self-managed:
- `default`: Everyone.
@@ -156,5 +158,6 @@ GitLab.com:
- `gold`: Namespaces and projects with an Ultimate subscription.
- `ultimate`: Namespaces and projects with an Ultimate subscription.
- `ultimate_trial`: Namespaces and projects with an Ultimate Trial subscription.
+- `opensource`: Namespaces and projects that are members of the GitLab Open Source Program.
The `test` environment doesn't have any plans.
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 078eb5dd0de..36d3655ba93 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -339,7 +339,7 @@ Component statuses are linked to configuration documentation for each component.
### Component list
-| Component | Description | [Omnibus GitLab](https://docs.gitlab.com/omnibus/) | [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit) | [GitLab chart](https://docs.gitlab.com/charts/) | [Minikube Minimal](https://docs.gitlab.com/charts/development/minikube/#deploying-gitlab-with-minimal-settings) | [GitLab.com](https://gitlab.com) | [Source](../install/installation.md) | [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) | [CE/EE](https://about.gitlab.com/install/ce-or-ee/) |
+| Component | Description | [Omnibus GitLab](https://docs.gitlab.com/omnibus/) | [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit) | [GitLab chart](https://docs.gitlab.com/charts/) | [minikube Minimal](https://docs.gitlab.com/charts/development/minikube/#deploying-gitlab-with-minimal-settings) | [GitLab.com](https://gitlab.com) | [Source](../install/installation.md) | [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) | [CE/EE](https://about.gitlab.com/install/ce-or-ee/) |
|-------------------------------------------------------|----------------------------------------------------------------------|:--------------:|:--------------:|:------------:|:----------------:|:----------:|:------:|:---:|:-------:|
| [Certificate Management](#certificate-management) | TLS Settings, Let's Encrypt | ✅ | ✅ | ✅ | ⚙ | ✅ | ⚙ | ⚙ | CE & EE |
| [Consul](#consul) | Database node discovery, failover | ⚙ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | EE Only |
@@ -797,7 +797,7 @@ For monitoring deployed apps, see the [Sentry integration docs](../operations/er
- Configuration:
- [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template)
- [Charts](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/)
- - [Minikube Minimal](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/index.html)
+ - [minikube Minimal](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/index.html)
- [Source](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/gitlab.yml.example)
- [GDK](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/gitlab.yml.example)
- Layer: Core Service (Processor)
diff --git a/doc/development/avoiding_downtime_in_migrations.md b/doc/development/avoiding_downtime_in_migrations.md
index a5fc1909551..1d36bbf1212 100644
--- a/doc/development/avoiding_downtime_in_migrations.md
+++ b/doc/development/avoiding_downtime_in_migrations.md
@@ -228,100 +228,9 @@ can take a very long time to complete, preventing a deployment from proceeding.
They can also produce a lot of pressure on the database due to it rapidly
updating many rows in sequence.
-To reduce database pressure you should instead use
-`change_column_type_using_background_migration` or `rename_column_using_background_migration`
-when migrating a column in a large table (for example, `issues`). These methods work
-similarly to the concurrent counterparts but uses background migration to spread
-the work / load over a longer time period, without slowing down deployments.
-
-For example, to change the column type using a background migration:
-
-```ruby
-class ExampleMigration < Gitlab::Database::Migration[1.0]
- disable_ddl_transaction!
-
- class Issue < ActiveRecord::Base
- self.table_name = 'issues'
-
- include EachBatch
-
- def self.to_migrate
- where('closed_at IS NOT NULL')
- end
- end
-
- def up
- change_column_type_using_background_migration(
- Issue.to_migrate,
- :closed_at,
- :datetime_with_timezone
- )
- end
-
- def down
- change_column_type_using_background_migration(
- Issue.to_migrate,
- :closed_at,
- :datetime
- )
- end
-end
-```
-
-This would change the type of `issues.closed_at` to `timestamp with time zone`.
-
-Keep in mind that the relation passed to
-`change_column_type_using_background_migration` _must_ include `EachBatch`,
-otherwise it will raise a `TypeError`.
-
-This migration then needs to be followed in a separate release (_not_ a patch
-release) by a cleanup migration, which should steal from the queue and handle
-any remaining rows. For example:
-
-```ruby
-class MigrateRemainingIssuesClosedAt < Gitlab::Database::Migration[1.0]
- disable_ddl_transaction!
-
- class Issue < ActiveRecord::Base
- self.table_name = 'issues'
- include EachBatch
- end
-
- def up
- Gitlab::BackgroundMigration.steal('CopyColumn')
- Gitlab::BackgroundMigration.steal('CleanupConcurrentTypeChange')
-
- migrate_remaining_rows if migrate_column_type?
- end
-
- def down
- # Previous migrations already revert the changes made here.
- end
-
- def migrate_remaining_rows
- Issue.where('closed_at_for_type_change IS NULL AND closed_at IS NOT NULL').each_batch do |batch|
- batch.update_all('closed_at_for_type_change = closed_at')
- end
-
- cleanup_concurrent_column_type_change(:issues, :closed_at)
- end
-
- def migrate_column_type?
- # Some environments may have already executed the previous version of this
- # migration, thus we don't need to migrate those environments again.
- column_for('issues', 'closed_at').type == :datetime # rubocop:disable Migration/Datetime
- end
-end
-```
-
-The same applies to `rename_column_using_background_migration`:
-
-1. Create a migration using the helper, which will schedule background
- migrations to spread the writes over a longer period of time.
-1. In the next monthly release, create a clean-up migration to steal from the
- Sidekiq queues, migrate any missing rows, and cleanup the rename. This
- migration should skip the steps after stealing from the Sidekiq queues if the
- column has already been renamed.
+To reduce database pressure you should instead use a background migration
+when migrating a column in a large table (for example, `issues`). This
+spreads the work over a longer time period, without slowing down deployments.
For more information, see [the documentation on cleaning up background
migrations](background_migrations.md#cleaning-up).
diff --git a/doc/development/background_migrations.md b/doc/development/background_migrations.md
index 4a18b2123da..49835085f96 100644
--- a/doc/development/background_migrations.md
+++ b/doc/development/background_migrations.md
@@ -83,23 +83,11 @@ replacing the class name and arguments with whatever values are necessary for
your migration:
```ruby
-migrate_async('BackgroundMigrationClassName', [arg1, arg2, ...])
+migrate_in(2.minutes, 'BackgroundMigrationClassName', [arg1, arg2, ...])
```
-Usually it's better to enqueue jobs in bulk, for this you can use
-`bulk_migrate_async`:
-
-```ruby
-bulk_migrate_async(
- [['BackgroundMigrationClassName', [1]],
- ['BackgroundMigrationClassName', [2]]]
-)
-```
-
-Note that this will queue a Sidekiq job immediately: if you have a large number
-of records, this may not be what you want. You can use the function
-`queue_background_migration_jobs_by_range_at_intervals` to split the job into
-batches:
+You can use the function `queue_background_migration_jobs_by_range_at_intervals`
+to automatically split the job into batches:
```ruby
queue_background_migration_jobs_by_range_at_intervals(
@@ -117,16 +105,6 @@ consuming migrations it's best to schedule a background job using an
updates. Removals in turn can be handled by simply defining foreign keys with
cascading deletes.
-If you would like to schedule jobs in bulk with a delay, you can use
-`BackgroundMigrationWorker.bulk_perform_in`:
-
-```ruby
-jobs = [['BackgroundMigrationClassName', [1]],
- ['BackgroundMigrationClassName', [2]]]
-
-bulk_migrate_in(5.minutes, jobs)
-```
-
### Rescheduling background migrations
If one of the background migrations contains a bug that is fixed in a patch
@@ -197,53 +175,47 @@ the new format.
## Example
-To explain all this, let's use the following example: the table `services` has a
+To explain all this, let's use the following example: the table `integrations` has a
field called `properties` which is stored in JSON. For all rows you want to
-extract the `url` key from this JSON object and store it in the `services.url`
-column. There are millions of services and parsing JSON is slow, thus you can't
+extract the `url` key from this JSON object and store it in the `integrations.url`
+column. There are millions of integrations and parsing JSON is slow, thus you can't
do this in a regular migration.
To do this using a background migration we'll start with defining our migration
class:
```ruby
-class Gitlab::BackgroundMigration::ExtractServicesUrl
- class Service < ActiveRecord::Base
- self.table_name = 'services'
+class Gitlab::BackgroundMigration::ExtractIntegrationsUrl
+ class Integration < ActiveRecord::Base
+ self.table_name = 'integrations'
end
- def perform(service_id)
- # A row may be removed between scheduling and starting of a job, thus we
- # need to make sure the data is still present before doing any work.
- service = Service.select(:properties).find_by(id: service_id)
+ def perform(start_id, end_id)
+ Integration.where(id: start_id..end_id).each do |integration|
+ json = JSON.load(integration.properties)
- return unless service
-
- begin
- json = JSON.load(service.properties)
+ integration.update(url: json['url']) if json['url']
rescue JSON::ParserError
# If the JSON is invalid we don't want to keep the job around forever,
# instead we'll just leave the "url" field to whatever the default value
# is.
- return
+ next
end
-
- service.update(url: json['url']) if json['url']
end
end
```
Next we'll need to adjust our code so we schedule the above migration for newly
-created and updated services. We can do this using something along the lines of
+created and updated integrations. We can do this using something along the lines of
the following:
```ruby
-class Service < ActiveRecord::Base
- after_commit :schedule_service_migration, on: :update
- after_commit :schedule_service_migration, on: :create
+class Integration < ActiveRecord::Base
+ after_commit :schedule_integration_migration, on: :update
+ after_commit :schedule_integration_migration, on: :create
- def schedule_service_migration
- BackgroundMigrationWorker.perform_async('ExtractServicesUrl', [id])
+ def schedule_integration_migration
+ BackgroundMigrationWorker.perform_async('ExtractIntegrationsUrl', [id, id])
end
end
```
@@ -253,21 +225,20 @@ before the transaction completes as doing so can lead to race conditions where
the changes are not yet visible to the worker.
Next we'll need a post-deployment migration that schedules the migration for
-existing data. Since we're dealing with a lot of rows we'll schedule jobs in
-batches instead of doing this one by one:
+existing data.
```ruby
-class ScheduleExtractServicesUrl < Gitlab::Database::Migration[1.0]
+class ScheduleExtractIntegrationsUrl < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
- def up
- define_batchable_model('services').select(:id).in_batches do |relation|
- jobs = relation.pluck(:id).map do |id|
- ['ExtractServicesUrl', [id]]
- end
+ MIGRATION = 'ExtractIntegrationsUrl'
+ DELAY_INTERVAL = 2.minutes
- BackgroundMigrationWorker.bulk_perform_async(jobs)
- end
+ def up
+ queue_background_migration_jobs_by_range_at_intervals(
+ define_batchable_model('integrations'),
+ MIGRATION,
+ DELAY_INTERVAL)
end
def down
@@ -284,18 +255,18 @@ jobs and manually run on any un-migrated rows. Such a migration would look like
this:
```ruby
-class ConsumeRemainingExtractServicesUrlJobs < Gitlab::Database::Migration[1.0]
+class ConsumeRemainingExtractIntegrationsUrlJobs < Gitlab::Database::Migration[1.0]
disable_ddl_transaction!
def up
# This must be included
- Gitlab::BackgroundMigration.steal('ExtractServicesUrl')
+ Gitlab::BackgroundMigration.steal('ExtractIntegrationsUrl')
# This should be included, but can be skipped - see below
- define_batchable_model('services').where(url: nil).each_batch(of: 50) do |batch|
+ define_batchable_model('integrations').where(url: nil).each_batch(of: 50) do |batch|
range = batch.pluck('MIN(id)', 'MAX(id)').first
- Gitlab::BackgroundMigration::ExtractServicesUrl.new.perform(*range)
+ Gitlab::BackgroundMigration::ExtractIntegrationsUrl.new.perform(*range)
end
end
@@ -313,9 +284,9 @@ If the application does not depend on the data being 100% migrated (for
instance, the data is advisory, and not mission-critical), then this final step
can be skipped.
-This migration will then process any jobs for the ExtractServicesUrl migration
+This migration will then process any jobs for the ExtractIntegrationsUrl migration
and continue once all jobs have been processed. Once done you can safely remove
-the `services.properties` column.
+the `integrations.properties` column.
## Testing
diff --git a/doc/development/cascading_settings.md b/doc/development/cascading_settings.md
index d04761400ac..f9c96e414d0 100644
--- a/doc/development/cascading_settings.md
+++ b/doc/development/cascading_settings.md
@@ -1,6 +1,6 @@
---
stage: Manage
-group: Access
+group: Authentication & Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index 82fd37eacaf..17a70393c96 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -109,6 +109,11 @@ After the server receives the request it selects a `pending` job based on the [`
Once all jobs are completed for the current stage, the server "unlocks" all the jobs from the next stage by changing their state to `pending`. These can now be picked by the scheduling algorithm when the runner requests new jobs, and continues like this until all stages are completed.
+If a job is not picked up by a runner within 24 hours, it is automatically removed
+from the processing queue. If a pending job is stuck because no runner that can
+process it is available, it is removed from the queue after 1 hour.
+In both cases the job's status is changed to `failed` with an appropriate failure reason.
+
### Communication between runner and GitLab server
After the runner is [registered](https://docs.gitlab.com/runner/register/) using the registration token, the server knows what type of jobs it can execute. This depends on:
@@ -169,18 +174,18 @@ We have a few inconsistencies in our codebase that should be refactored.
For example, `CommitStatus` should be `Ci::Job` and `Ci::JobArtifact` should be `Ci::BuildArtifact`.
See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/16111) for the full refactoring plan.
-## CI Minutes
+## CI/CD Minutes
-This diagram shows how the [CI minutes](../../subscriptions/gitlab_com/index.md#ci-pipeline-minutes)
+This diagram shows how the [CI/CD minutes](../../ci/pipelines/cicd_minutes.md)
feature and its components work.
-![CI Minutes architecture](img/ci_minutes.png)
+![CI/CD minutes architecture](img/ci_minutes.png)
<!-- Editable diagram available at https://app.diagrams.net/?libs=general;flowchart#G1XjLPvJXbzMofrC3eKRyDEk95clV6ypOb -->
Watch a detailed walkthrough of this feature in the video below.
<div class="video-fallback">
- See the video: <a href="https://www.youtube.com/watch?v=NmdWRGT8kZg">CI Minutes - architectural overview</a>.
+ See the video: <a href="https://www.youtube.com/watch?v=NmdWRGT8kZg">CI/CD minutes - architectural overview</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube.com/embed/NmdWRGT8kZg" frameborder="0" allowfullscreen="true"> </iframe>
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 742d183e15c..1ed510c6ad7 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -394,9 +394,9 @@ the roulette is not available, choose someone else from that list.
It is the author's responsibility to get the merge request reviewed. If it stays in the `ready for review` state too long, it is recommended to request a review from a specific reviewer.
-#### List of merge requests ready for review
+### Volunteering to review
-Developers who have capacity can regularly check the list of [merge requests to review](https://gitlab.com/groups/gitlab-org/-/merge_requests?state=opened&label_name%5B%5D=workflow%3A%3Aready%20for%20review) and add themselves as a reviewer for any merge request they want to review.
+GitLab engineers who have capacity can regularly check the list of [merge requests to review](https://gitlab.com/groups/gitlab-org/-/merge_requests?state=opened&label_name%5B%5D=workflow%3A%3Aready%20for%20review) and add themselves as a reviewer for any merge request they want to review.
### Reviewing a merge request
@@ -501,7 +501,7 @@ Merge Results against the latest `main` at the time of the pipeline creation.
WARNING:
**Review all changes thoroughly for malicious code before starting a
-[Pipeline for Merged Results](../ci/pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project-for-merge-requests-from-a-forked-project).**
+[Pipeline for Merged Results](../ci/pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project).**
When reviewing merge requests added by wider community contributors:
@@ -522,9 +522,12 @@ If the MR source branch is more than 1,000 commits behind the target branch:
or check with the contributor first when they're actively working on the MR.
- The rebase can usually be done inside GitLab with the `/rebase` [quick action](../user/project/quick_actions.md).
+#### Taking over a community merge request
+
When an MR needs further changes but the author is not responding for a long period of time,
-or unable to finish the MR, we can take it over in accordance with our
-[Closing policy for issues and merge requests](contributing/#closing-policy-for-issues-and-merge-requests):
+or is unable to finish the MR, GitLab can take it over in accordance with our
+[Closing policy for issues and merge requests](contributing/#closing-policy-for-issues-and-merge-requests).
+A GitLab engineer (generally the merge request coach) will:
1. Add a comment to their MR saying they'll take it over to get it merged.
1. Add the label `~"coach will finish"` to their MR.
diff --git a/doc/development/database/efficient_in_operator_queries.md b/doc/development/database/efficient_in_operator_queries.md
index 1e706890f64..76518a6f5e2 100644
--- a/doc/development/database/efficient_in_operator_queries.md
+++ b/doc/development/database/efficient_in_operator_queries.md
@@ -589,6 +589,87 @@ LIMIT 20
NOTE:
To make the query efficient, the following columns need to be covered with an index: `project_id`, `issue_type`, `created_at`, and `id`.
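+
+For example, a minimal sketch of a migration that could add such a covering index. The
+migration class and index names here are illustrative, not taken from the GitLab codebase:
+
+```ruby
+# Hypothetical migration adding an index that covers the filter and ORDER BY columns.
+class AddIssuesCoveringIndexForInOperatorQueries < Gitlab::Database::Migration[1.0]
+  disable_ddl_transaction!
+
+  INDEX_NAME = 'index_issues_on_project_id_issue_type_created_at_and_id'
+
+  def up
+    add_concurrent_index :issues, [:project_id, :issue_type, :created_at, :id], name: INDEX_NAME
+  end
+
+  def down
+    remove_concurrent_index_by_name :issues, INDEX_NAME
+  end
+end
+```
+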
+#### Using calculated ORDER BY expression
+
+The following example orders epic records by the duration between the creation time and closed
+time. It is calculated with the following formula:
+
+```sql
+SELECT EXTRACT('epoch' FROM epics.closed_at - epics.created_at) FROM epics
+```
+
+The query above returns the duration in seconds (`double precision`) between the two timestamp
+columns. To order the records by this expression, you must reference it
+in the `ORDER BY` clause:
+
+```sql
+SELECT EXTRACT('epoch' FROM epics.closed_at - epics.created_at)
+FROM epics
+ORDER BY EXTRACT('epoch' FROM epics.closed_at - epics.created_at) DESC
+```
+
+To make this ordering efficient on the group-level with the in-operator optimization, use a
+custom `ORDER BY` configuration. Since the duration is not a distinct value (no unique index
+present), you must add a tie-breaker column (`id`).
+
+The following example shows the final `ORDER BY` clause:
+
+```sql
+ORDER BY extract('epoch' FROM epics.closed_at - epics.created_at) DESC, epics.id DESC
+```
+
+Snippet for loading records ordered by the calculated duration:
+
+```ruby
+arel_table = Epic.arel_table
+order = Gitlab::Pagination::Keyset::Order.build([
+ Gitlab::Pagination::Keyset::ColumnOrderDefinition.new(
+ attribute_name: 'duration_in_seconds',
+ order_expression: Arel.sql('EXTRACT(EPOCH FROM epics.closed_at - epics.created_at)').desc,
+ distinct: false,
+ sql_type: 'double precision' # important for calculated SQL expressions
+ ),
+ Gitlab::Pagination::Keyset::ColumnOrderDefinition.new(
+ attribute_name: 'id',
+ order_expression: arel_table[:id].desc
+ )
+])
+
+records = Gitlab::Pagination::Keyset::InOperatorOptimization::QueryBuilder.new(
+ scope: Epic.where.not(closed_at: nil).reorder(order), # filter out NULL values
+ array_scope: Group.find(9970).self_and_descendants.select(:id),
+ array_mapping_scope: -> (id_expression) { Epic.where(Epic.arel_table[:group_id].eq(id_expression)) }
+).execute.limit(20)
+
+puts records.pluck(:duration_in_seconds, :id) # other columns are not available
+```
+
+Building the query requires quite a bit of configuration. For the order configuration you
+can find more information within the
+[complex order configuration](keyset_pagination.md#complex-order-configuration)
+section for keyset paginated database queries.
+
+The query requires a specialized database index:
+
+```sql
+CREATE INDEX index_epics_on_duration ON epics USING btree (group_id, EXTRACT(EPOCH FROM epics.closed_at - epics.created_at) DESC, id DESC) WHERE (closed_at IS NOT NULL);
+```
+
+Notice that the `finder_query` parameter is not used. The query only returns the `ORDER BY` columns,
+which are the `duration_in_seconds` (calculated column) and the `id` columns. This is a limitation
+of the feature: defining the `finder_query` with calculated `ORDER BY` expressions is not supported.
+To get the complete database records, an extra query can be invoked by the returned `id` column:
+
+```ruby
+records_by_id = records.index_by(&:id)
+complete_records = Epic.where(id: records_by_id.keys).index_by(&:id)
+
+# Printing the complete records according to the `ORDER BY` clause
+records_by_id.each do |id, _|
+ puts complete_records[id].attributes
+end
+```
+
#### Batch iteration
Batch iteration over the records is possible via the keyset `Iterator` class.
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index c60989f225d..97d150b1a18 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -43,7 +43,7 @@ we can:
1. Create a `DELETE` trigger on the `projects` table.
Record the deletions in a separate table (`deleted_records`).
-1. A job checks the `deleted_records` table every 5 minutes.
+1. A job checks the `deleted_records` table every minute or two.
1. For each record in the table, delete the associated `ci_pipelines` records
using the `project_id` column.
@@ -89,9 +89,10 @@ ci_pipelines:
### Track record changes
-To know about deletions in the `projects` table, configure a `DELETE` trigger using a database
-migration (post-migration). The trigger needs to be configured only once. If the model already has
-at least one `loose_foreign_key` definition, then this step can be skipped:
+To know about deletions in the `projects` table, configure a `DELETE` trigger
+using a [post-deployment migration](../post_deployment_migrations.md). The
+trigger needs to be configured only once. If the model already has at least one
+`loose_foreign_key` definition, then this step can be skipped:
```ruby
class TrackProjectRecordChanges < Gitlab::Database::Migration[1.0]
@@ -122,15 +123,20 @@ REFERENCES projects(id)
ON DELETE CASCADE;
```
-The migration should run after the `DELETE` trigger is installed. If the foreign key is deleted
-earlier, there is a good chance of introducing data inconsistency which needs manual cleanup:
+The migration must run after the `DELETE` trigger is installed and the loose
+foreign key definition is deployed. As such, it must be a [post-deployment
+migration](../post_deployment_migrations.md) dated after the migration for the
+trigger. If the foreign key is deleted earlier, there is a good chance of
+introducing data inconsistency which needs manual cleanup:
```ruby
class RemoveProjectsCiPipelineFk < Gitlab::Database::Migration[1.0]
- enable_lock_retries!
+ disable_ddl_transaction!
def up
- remove_foreign_key_if_exists(:ci_pipelines, :projects, name: "fk_86635dbd80")
+ with_lock_retries do
+ remove_foreign_key_if_exists(:ci_pipelines, :projects, name: "fk_86635dbd80")
+ end
end
def down
@@ -155,6 +161,17 @@ it_behaves_like 'it has loose foreign keys' do
end
```
+**After** [removing a foreign key](#remove-the-foreign-key),
+use the "`cleanup by a loose foreign key`" shared example to test a child record's deletion or nullification
+via the added loose foreign key:
+
+```ruby
+it_behaves_like 'cleanup by a loose foreign key' do
+ let!(:model) { create(:ci_pipeline, user: create(:user)) }
+ let!(:parent) { model.user }
+end
+```
+
## Caveats of loose foreign keys
### Record creation
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index 45be797b720..082c843a12c 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -132,6 +132,18 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
- This job runs migrations in a production-like environment (similar to `#database_lab`) and posts to the MR its findings (queries, runtime, size change).
- Review migration runtimes and any warnings.
+#### Preparation when adding data migrations
+
+Data migrations are inherently risky. Additional actions are required to reduce the possibility
+of error that would result in corruption or loss of production data.
+
+Include in the MR description:
+
+- If the migration itself is not reversible, details of how data changes could be reverted in the event of an incident. For example, in the case of a migration that deletes records (an operation that most of the time is not automatically reversible), how _could_ the deleted records be recovered.
+- If the migration deletes data, apply the label `~data-deletion`.
+- Concise descriptions of possible user experience impact of an error; for example, "Issues would unexpectedly go missing from Epics".
+- Relevant data from the [query plans](#query-plans) that indicate the query works as expected, such as the approximate number of records that will be modified or deleted.
+
#### Preparation when adding or modifying queries
##### Raw SQL
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index 6a618e47211..1e4698ff867 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -1,6 +1,7 @@
---
-type: reference, dev
info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-other-projects-and-subjects.
+stage: none
+group: unassigned
description: "GitLab development - how to document features deployed behind feature flags"
---
diff --git a/doc/development/documentation/redirects.md b/doc/development/documentation/redirects.md
index ef94c33a276..8f13048f663 100644
--- a/doc/development/documentation/redirects.md
+++ b/doc/development/documentation/redirects.md
@@ -94,7 +94,6 @@ To add a redirect:
```
If you don't want to use the Rake task, you can use the following template.
- However, the file paths must be relative to the `doc` or `docs` directory.
Replace the value of `redirect_to` with the new file path and `YYYY-MM-DD`
with the date the file should be removed.
diff --git a/doc/development/documentation/restful_api_styleguide.md b/doc/development/documentation/restful_api_styleguide.md
index 5dc627c93e9..4d654b6b901 100644
--- a/doc/development/documentation/restful_api_styleguide.md
+++ b/doc/development/documentation/restful_api_styleguide.md
@@ -1,5 +1,7 @@
---
info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-other-projects-and-subjects.
+stage: none
+group: unassigned
description: 'Writing styles, markup, formatting, and other standards for the GitLab RESTful APIs.'
---
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 7f2e5a1ba26..e57ffb90028 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -1,5 +1,7 @@
---
info: For assistance with this Style Guide page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-other-projects-and-subjects.
+stage: none
+group: unassigned
description: 'Writing styles, markup, formatting, and other standards for GitLab Documentation.'
---
@@ -13,7 +15,7 @@ use the `#docs-processes` channel.
In addition to this page, the following resources can help you craft and contribute to documentation:
- [Doc contribution guidelines](../index.md)
-- [A-Z word list](word_list.md)
+- [Recommended word list](word_list.md)
- [Doc style and consistency testing](../testing.md)
- [UI text guidelines](https://design.gitlab.com/content/error-messages/)
- [GitLab Handbook style guidelines](https://about.gitlab.com/handbook/communication/#writing-style-guidelines)
@@ -392,39 +394,26 @@ especially in tutorials, instructional documentation, and
Some contractions, however, should be avoided:
-- Do not use the word "GitLab" in a contraction.
+<!-- vale gitlab.Possessive = NO -->
-- Do not use contractions with a proper noun and a verb. For example:
+| Do not use a contraction | Example | Use instead |
+|-------------------------------|--------------------------------------------------|------------------------------------------------------------------|
+| With a proper noun and a verb | The **Container Registry's** a powerful feature. | The **Container Registry** is a powerful feature. |
+| To emphasize a negative | **Don't** install X with Y. | **Do not** install X with Y. |
+| In reference documentation | **Don't** set a limit. | **Do not** set a limit. |
+| In error messages | Requests to localhost **aren't** allowed. | Requests to localhost **are not** allowed. |
- | Do | Don't |
- |------------------------------------------|-----------------------------------------|
- | Canada is establishing X. | Canada's establishing X. |
-
-- Do not use contractions when you need to emphasize a negative. For example:
-
- | Do | Don't |
- |------------------------------------------|-----------------------------------------|
- | Do not install X with Y. | Don't install X with Y. |
-
-- Do not use contractions in reference documentation. For example:
-
- | Do | Don't |
- |------------------------------------------|-----------------------------------------|
- | Do not set a limit greater than 1000. | Don't set a limit greater than 1000. |
- | For `parameter1`, the default is 10. | For `parameter1`, the default's 10. |
-
-- Avoid contractions in error messages. Examples:
-
- | Do | Don't |
- |------------------------------------------|-----------------------------------------|
- | Requests to localhost are not allowed. | Requests to localhost aren't allowed. |
- | Specified URL cannot be used. | Specified URL can't be used. |
+<!-- vale gitlab.Possessive = YES -->
### Acronyms
If you use an acronym, spell it out on first use on a page. You do not need to spell it out more than once on a page.
When possible, try to avoid acronyms in headings.
+### Numbers
+
+When using numbers in text, spell out zero through nine, and use numbers for 10 and greater. For details, see the [Microsoft Style Guide](https://docs.microsoft.com/en-us/style-guide/numbers).
+
## Text
- [Write in Markdown](#markdown).
@@ -1626,7 +1615,7 @@ the section. The version information must:
- Be surrounded by blank lines.
- Start with `>`. If there are multiple bullets, each line must start with `> -`.
- The string must include these words in this order (capitalization doesn't matter):
- - `introduced`, `deprecated`, `changed`, `moved`, `recommended` (as in the
+ - `introduced`, `enabled`, `deprecated`, `changed`, `moved`, `recommended` (as in the
[feature flag documentation](../feature_flags.md)), `removed`, or `renamed`
- `in` or `to`
- `GitLab`
@@ -1667,24 +1656,6 @@ If a feature is moved to another tier:
> - [Moved](<link-to-issue>) from GitLab Premium to GitLab Free in 12.0.
```
-If a feature is deprecated, include a link to a replacement (when available):
-
-```markdown
-> - [Deprecated](<link-to-issue>) in GitLab 11.3. Replaced by [meaningful text](<link-to-appropriate-documentation>).
-```
-
-You can also describe the replacement in surrounding text, if available. If the
-deprecation isn't obvious in existing text, you may want to include a warning:
-
-```markdown
-WARNING:
-This feature was [deprecated](link-to-issue) in GitLab 12.3 and replaced by
-[Feature name](link-to-feature-documentation).
-```
-
-In the first major GitLab version after the feature was deprecated, be sure to
-remove information about that deprecated feature.
-
#### Inline version text
If you're adding content to an existing topic, you can add version information
@@ -1784,6 +1755,47 @@ To view historical information about a feature, review GitLab
[release posts](https://about.gitlab.com/releases/), or search for the issue or
merge request where the work was done.
+### Deprecated features
+
+When a feature is deprecated, add `(DEPRECATED)` to the page title or to
+the heading of the section documenting the feature, immediately before
+the tier badge:
+
+```markdown
+<!-- Page title example: -->
+# Feature A (DEPRECATED) **(ALL TIERS)**
+
+<!-- Doc section example: -->
+## Feature B (DEPRECATED) **(PREMIUM SELF)**
+```
+
+Add the deprecation to the version history note (you can include a link
+to a replacement when available):
+
+```markdown
+> - [Deprecated](<link-to-issue>) in GitLab 11.3. Replaced by [meaningful text](<link-to-appropriate-documentation>).
+```
+
+You can also describe the replacement in surrounding text, if available. If the
+deprecation isn't obvious in existing text, you may want to include a warning:
+
+```markdown
+WARNING:
+This feature was [deprecated](link-to-issue) in GitLab 12.3 and replaced by
+[Feature name](link-to-feature-documentation).
+```
+
+If you add `(DEPRECATED)` to the page's title and the document is linked from the docs
+navigation, either remove the page from the nav or update the nav item to include the
+same text before the feature name:
+
+```yaml
+ - doc_title: (DEPRECATED) Feature A
+```
+
+In the first major GitLab version after the feature was deprecated, be sure to
+remove information about that deprecated feature.
+
## Products and features
Refer to the information in this section when describing products and features
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index 9c375379685..dc3dcba0c95 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -25,6 +25,13 @@ Try to avoid **`@mention`**. Say **mention** instead, and consider linking to th
[mentions topic](../../../user/discussions/index.md#mentions).
Don't use backticks.
+## 2FA, two-factor authentication
+
+Spell out **two-factor authentication** in sentence case for the first use and in section headings, and **2FA**
+thereafter. If the term begins a sentence, do not capitalize `factor` or `authentication`. For example:
+
+- Two-factor authentication (2FA) helps secure your account. Set up 2FA when you first log in.
+
## above
Try to avoid using **above** when referring to an example or table in a documentation page. If required, use **previous** instead. For example:
@@ -182,6 +189,11 @@ Use **check out** as a verb. For the Git command, use `checkout`.
CI/CD is always uppercase. No need to spell it out on first use.
+## CI/CD minutes
+
+Use **CI/CD minutes** instead of **CI minutes**, **pipeline minutes**, **pipeline minutes quota**, or
+**CI pipeline minutes**. This decision was made in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/342813).
+
## click
Do not use **click**. Instead, use **select** with buttons, links, menu items, and lists.
@@ -197,11 +209,19 @@ Use **confirmation dialog** to describe the dialog box that asks you to confirm
- On the confirmation dialog, select **OK**.
+## Container Registry
+
+Use title case for the GitLab Container Registry.
+
## currently
Do not use **currently** when talking about the product or its features. The documentation describes the product as it is today.
([Vale](../testing.md#vale) rule: [`CurrentStatus.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/CurrentStatus.yml))
+## Dependency Proxy
+
+Use title case for the GitLab Dependency Proxy.
+
## deploy board
Use lowercase for **deploy board**.
@@ -614,6 +634,10 @@ When writing about the Owner role:
Do not use **Owner permissions**. A user who is assigned the Owner role has a set of associated permissions.
+## Package Registry
+
+Use title case for the GitLab Package Registry.
+
## permissions
Do not use [**roles**](#roles) and **permissions** interchangeably. Each user is assigned a role. Each role includes a set of permissions.
@@ -634,6 +658,10 @@ Use **press** when talking about keyboard keys. For example:
Do not use profanity. Doing so may negatively affect other users and contributors, which is contrary to the GitLab value of [Diversity, Inclusion, and Belonging](https://about.gitlab.com/handbook/values/#diversity-inclusion).
+## push rules
+
+Use lowercase for **push rules**.
+
## Reporter
When writing about the Reporter role:
@@ -825,6 +853,10 @@ You **turn on** or **turn off** a toggle. For example:
- Turn on the **blah** toggle.
+## TFA, two-factor authentication
+
+Use [**2FA** and **two-factor authentication**](#2fa-two-factor-authentication) instead.
+
## type
Do not use **type** if you can avoid it. Use **enter** instead.
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index 13648a7c710..134183a0e75 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -234,15 +234,8 @@ As a general guideline, the lower the score, the more readable the documentation
For example, a page that scores `12` before a set of changes, and `9` after, indicates an iterative improvement to readability. The score is not an exact science, but is meant to help indicate the
general complexity level of the page.
-The readability score is calculated by using the following formula:
-
-```plaintext
-(.39 x ASL) + (11.8 x ASW) – 15.59
-```
-
-- `ASL` is average sentence length (the number of words divided by the number of sentences).
-- `ASW` is the average number of syllables per word (the number of syllables divided by the number of words).
-- The score excludes headings, code blocks, and lists.
+The readability score is calculated based on the number of words per sentence, and the number
+of syllables per word. For more information, see [the Vale documentation](https://docs.errata.ai/vale/styles#metric).
### Install linters
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 70d7ea7c1a5..9f705f2c6f1 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -23,7 +23,33 @@ when no license is active. So EE features always should be guarded by
`project.feature_available?` or `group.licensed_feature_available?` (or
`License.feature_available?` if it is a system-wide feature).
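+
+For example, a minimal backend sketch, assuming a licensed feature named `my_ee_feature`
+(a hypothetical name; real names are defined in `ee/app/models/license.rb`):
+
+```ruby
+# Hypothetical EE-only code path, guarded by a license check.
+if project.feature_available?(:my_ee_feature)
+  # Run the EE-specific behavior.
+else
+  # Fall back to the CE behavior.
+end
+```
+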
-Frontend features should be guarded by pushing a flag from the backend by [using `push_licensed_feature`](licensed_feature_availability.md#restricting-frontend-features), and checked using `this.glFeatures.someFeature` in the frontend.
+Frontend features should be guarded by pushing a flag from the backend by [using `push_licensed_feature`](licensed_feature_availability.md#restricting-frontend-features), and checked using `this.glFeatures.someFeature` in the frontend. For example:
+
+```html
+<script>
+import glFeatureFlagMixin from '~/vue_shared/mixins/gl_feature_flags_mixin';
+
+export default {
+ mixins: [glFeatureFlagMixin()],
+ components: {
+ EEComponent: () => import('ee_component/components/test.vue'),
+ },
+ computed: {
+ shouldRenderComponent() {
+ return this.glFeatures.myEEFeature;
+ }
+ },
+};
+</script>
+
+<template>
+ <div>
+ <ee-component v-if="shouldRenderComponent"/>
+ </div>
+</template>
+```
+
+Look in `ee/app/models/license.rb` for the names of the licensed features.
CE specs should remain untouched as much as possible and extra specs
should be added for EE. Licensed features can be stubbed using the
diff --git a/doc/development/emails.md b/doc/development/emails.md
index f99b914e2de..811619bb0ff 100644
--- a/doc/development/emails.md
+++ b/doc/development/emails.md
@@ -139,6 +139,44 @@ These are the only valid legacy formats for an email handler:
In GitLab, the handler for the Service Desk feature is `path/to/project`.
+### MailRoom Gem updates
+
+We use [`gitlab-mail_room`](https://gitlab.com/gitlab-org/gitlab-mail_room), a
+fork of [`MailRoom`](https://github.com/tpitale/mail_room/), to ensure
+that we can make updates quickly to the gem if necessary. We try to upstream
+changes as soon as possible and keep the two projects in sync.
+
+Updating the `gitlab-mail_room` dependency in `Gemfile` is deprecated at
+the moment in favor of updating the version in
+[Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/config/software/mail_room.rb)
+(see [example merge request](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5816))
+and Helm Chart configuration (see [example merge request](https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/854)).
+
+#### Rationale
+
+This was done because, to avoid [thread deadlocks](https://github.com/ruby/net-imap/issues/14), `MailRoom` needs
+an updated version of the `net-imap` gem. However, this [version of `net-imap` cannot be installed by an unprivileged
+user](https://github.com/ruby/net-imap/issues/14) due to [an error installing the `digest`
+gem](https://github.com/ruby/digest/issues/14). [This bug in the Ruby interpreter](https://bugs.ruby-lang.org/issues/17761) was fixed in Ruby
+3.0.2.
+
+Updating the gem directly in the GitLab Rails `Gemfile` caused a [production incident](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4053)
+since Kubernetes pods do not run as privileged users.
+
+Note that Omnibus GitLab runs `/opt/gitlab/embedded/bin/mail_room`
+instead of `bundle exec rake` to avoid loading the older version. With a
+Kubernetes install, the MailRoom version has always been explicitly set
+in the Helm Chart configuration rather than the `Gemfile`.
+
+#### Preserving backwards compatibility
+
+Removing the gem from the `Gemfile` would break incoming e-mail processing for source
+installs. For now, source installs are advised to upgrade manually to
+the version specified in Omnibus and run `bin/mail_room` directly as
+done with Omnibus.
+
+We can reconsider this deprecation once we upgrade to Ruby 3.0.
+
---
[Return to Development documentation](index.md)
diff --git a/doc/development/event_store.md b/doc/development/event_store.md
new file mode 100644
index 00000000000..7f2b9c86d27
--- /dev/null
+++ b/doc/development/event_store.md
@@ -0,0 +1,292 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# GitLab EventStore
+
+## Background
+
+The monolithic GitLab project is becoming larger and more domains are being defined.
+As a result, these domains are becoming entangled with each other due to temporal coupling.
+
+An emblematic example is the [`PostReceive`](https://gitlab.com/gitlab-org/gitlab/blob/master/app/workers/post_receive.rb)
+worker where a lot happens across multiple domains. If a new behavior reacts to
+a new commit being pushed, then we add code somewhere in `PostReceive` or its sub-components
+(`Git::ProcessRefChangesService`, for example).
+
+This type of architecture:
+
+- Is a violation of the Single Responsibility Principle.
+- Increases the risk of adding code to a codebase you are not familiar with.
+ There may be nuances you don't know about which may introduce bugs or a performance degradation.
+- Violates domain boundaries. Inside a specific namespace (for example `Git::`) we suddenly see
+ classes from other domains chiming in (like `Ci::` or `MergeRequests::`).
+
+## What is EventStore?
+
+`Gitlab::EventStore` is a basic pub-sub system built on top of the existing Sidekiq workers and observability we have today.
+We use this system to apply an event-driven approach when modeling a domain while keeping coupling
+to a minimum.
+
+This essentially leaves the existing Sidekiq workers as-is to perform asynchronous work but inverts
+the dependency.
+
+### EventStore example
+
+When a CI pipeline is created we update the head pipeline for any merge request matching the
+pipeline's `ref`. The merge request can then display the status of the latest pipeline.
+
+#### Without the EventStore
+
+We change `Ci::CreatePipelineService` and add logic (like an `if` statement) to check if the
+pipeline is created. Then we schedule a worker to run some side-effects for the `MergeRequests::` domain.
+
+This style violates the [Open-Closed Principle](https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle)
+and unnecessarily adds side-effect logic from other domains, increasing coupling:
+
+```mermaid
+graph LR
+ subgraph ci[CI]
+ cp[CreatePipelineService]
+ end
+
+ subgraph mr[MergeRequests]
+ upw[UpdateHeadPipelineWorker]
+ end
+
+ subgraph no[Namespaces::Onboarding]
+ pow[PipelinesOnboardedWorker]
+ end
+
+ cp -- perform_async --> upw
+ cp -- perform_async --> pow
+```
+
+#### With the EventStore
+
+`Ci::CreatePipelineService` publishes an event `Ci::PipelineCreatedEvent` and its responsibility stops here.
+
+The `MergeRequests::` domain can subscribe to this event with a worker `MergeRequests::UpdateHeadPipelineWorker`, so:
+
+- Side-effects are scheduled asynchronously and don't impact the main business transaction that
+ emits the domain event.
+- More side-effects can be added without modifying the main business transaction.
+- We can clearly see what domains are involved and their ownership.
+- We can identify what events occur in the system because they are explicitly declared.
+
+With `Gitlab::EventStore` there is still coupling between the subscriber (Sidekiq worker) and the schema of the domain event.
+This level of coupling is much smaller than having the main transaction (`Ci::CreatePipelineService`) coupled to:
+
+- multiple subscribers.
+- multiple ways of invoking subscribers (including conditional invocations).
+- multiple ways of passing parameters.
+
+```mermaid
+graph LR
+ subgraph ci[CI]
+ cp[CreatePipelineService]
+    cp -- publish --> e[PipelineCreatedEvent]
+ end
+
+ subgraph mr[MergeRequests]
+ upw[UpdateHeadPipelineWorker]
+ end
+
+ subgraph no[Namespaces::Onboarding]
+ pow[PipelinesOnboardedWorker]
+ end
+
+ upw -. subscribe .-> e
+ pow -. subscribe .-> e
+```
+
+Each subscriber, being itself a Sidekiq worker, can specify any attributes that are related
+to the type of work they are responsible for. For example, one subscriber could define
+`urgency: high` while another one less critical could set `urgency: low`.
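+
+For example, a sketch of a subscriber that marks its jobs as critical. The worker name
+matches the examples below; the urgency value is illustrative:
+
+```ruby
+module MergeRequests
+  class UpdateHeadPipelineWorker
+    include ApplicationWorker
+    include Gitlab::EventStore::Subscriber
+
+    # Schedule this side-effect with high priority because it is critical.
+    urgency :high
+  end
+end
+```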
+
+The EventStore is only an abstraction that allows us to have Dependency Inversion. This helps
+separate a business transaction from its side-effects (often executed in other domains).
+
+When an event is published, the EventStore calls `perform_async` on each subscribed worker,
+passing in the event information as arguments. This essentially schedules a Sidekiq job on each
+subscriber's queue.
+
+This means that nothing else changes with regards to how subscribers work, as they are just
+Sidekiq workers. For example: if a worker (subscriber) fails to execute a job, the job is put
+back into Sidekiq to be retried.
+
+## EventStore advantages
+
+- Subscribers (Sidekiq workers) can be set to run quicker by changing the worker weight
+ if the side-effect is critical.
+- Automatically enforce the fact that side-effects run asynchronously.
+ This makes it safe for other domains to subscribe to events without affecting the performance of the
+ main business transaction.
+
+## Define an event
+
+An `Event` object represents a domain event that occurred in a bounded context.
+Notify other bounded contexts about something
+that happened by publishing events, so that they can react to it.
+
+Define new event classes under `app/events/<namespace>/` with a name representing something that happened in the past:
+
+```ruby
+class Ci::PipelineCreatedEvent < Gitlab::EventStore::Event
+ def schema
+ {
+ 'type' => 'object',
+ 'required' => ['pipeline_id'],
+ 'properties' => {
+ 'pipeline_id' => { 'type' => 'integer' },
+ 'ref' => { 'type' => 'string' }
+ }
+ }
+ end
+end
+```
+
+The schema is validated immediately when we initialize the event object so we can ensure that
+publishers follow the contract with the subscribers.
+
+We recommend using optional properties as much as possible, because they require fewer rollouts for schema changes.
+However, `required` properties could be used for unique identifiers of the event's subject. For example:
+
+- `pipeline_id` can be a required property for a `Ci::PipelineCreatedEvent`.
+- `project_id` can be a required property for a `Projects::ProjectDeletedEvent`.
+
+Publish only properties that are needed by the subscribers without tailoring the payload to specific subscribers.
+The payload should fully represent the event and not contain loosely related properties. For example:
+
+```ruby
+Ci::PipelineCreatedEvent.new(data: {
+ pipeline_id: pipeline.id,
+ # unless all subscribers need merge request IDs,
+ # this is data that can be fetched by the subscriber.
+ merge_request_ids: pipeline.all_merge_requests.pluck(:id)
+})
+```
+
+Publishing events with more properties provides the subscribers with the data
+they need in the first place. Otherwise subscribers have to fetch the additional data from the database.
+However, this can lead to continuous changes to the schema and possibly adding properties that may not
+represent the single source of truth.
+It's best to use this technique as a performance optimization. For example: when an event has many
+subscribers that all fetch the same data again from the database.
+
+### Update the schema
+
+Changes to the schema require multiple rollouts. While the new version is being deployed:
+
+- Existing publishers can publish events using the old version.
+- Existing subscribers can consume events using the old version.
+- Events get persisted in the Sidekiq queue as job arguments, so we could have 2 versions of the schema during deployments.
+
+As changing the schema ultimately impacts the Sidekiq arguments, please refer to our
+[Sidekiq style guide](sidekiq_style_guide.md#changing-the-arguments-for-a-worker) with regards to multiple rollouts.
+
+#### Add properties
+
+1. Rollout 1:
+ - Add new properties as optional (not `required`).
+ - Update the subscriber so it can consume events with and without the new properties.
+1. Rollout 2:
+ - Change the publisher to provide the new property.
+1. Rollout 3 (if the property should be `required`):
+ - Change the schema and the subscriber code to always expect it.
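+
+For example, a sketch of rollout 1 for the event defined earlier, adding a hypothetical
+optional `source` property to the schema:
+
+```ruby
+class Ci::PipelineCreatedEvent < Gitlab::EventStore::Event
+  def schema
+    {
+      'type' => 'object',
+      'required' => ['pipeline_id'],
+      'properties' => {
+        'pipeline_id' => { 'type' => 'integer' },
+        'ref' => { 'type' => 'string' },
+        # New property: optional for now so that events published by the
+        # old code remain valid during the rollout.
+        'source' => { 'type' => 'string' }
+      }
+    }
+  end
+end
+```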
+
+#### Remove properties
+
+1. Rollout 1:
+ - If the property is `required`, make it optional.
+ - Update the subscriber so it does not always expect the property.
+1. Rollout 2:
+ - Remove the property from the event publishing.
+ - Remove the code from the subscriber that processes the property.
+
+#### Other changes
+
+For other changes, like renaming a property, use the same steps:
+
+1. Remove the old property.
+1. Add the new property.
+
+## Publish an event
+
+To publish the event from the [previous example](#define-an-event):
+
+```ruby
+Gitlab::EventStore.publish(
+ Ci::PipelineCreatedEvent.new(data: { pipeline_id: pipeline.id })
+)
+```
+
+## Create a subscriber
+
+A subscriber is a Sidekiq worker that includes the `Gitlab::EventStore::Subscriber` module.
+This module takes care of the `perform` method and provides a better abstraction to handle
+the event safely via the `handle_event` method. For example:
+
+```ruby
+module MergeRequests
+ class UpdateHeadPipelineWorker
+ include ApplicationWorker
+ include Gitlab::EventStore::Subscriber
+
+ def handle_event(event)
+ Ci::Pipeline.find_by_id(event.data[:pipeline_id]).try do |pipeline|
+ # ...
+ end
+ end
+ end
+end
+```
+
+## Register the subscriber to the event
+
+To subscribe the worker to a specific event in `lib/gitlab/event_store.rb`,
+add a line like this to the `Gitlab::EventStore.configure!` method:
+
+```ruby
+module Gitlab
+ module EventStore
+ def self.configure!
+ Store.new.tap do |store|
+ # ...
+
+ store.subscribe ::MergeRequests::UpdateHeadPipelineWorker, to: ::Ci::PipelineCreatedEvent
+
+ # ...
+ end
+ end
+ end
+end
+```
+
+Subscriptions are stored in memory when the Rails app is loaded and they are immediately frozen.
+It's not possible to modify subscriptions at runtime.
+
+### Conditional dispatch of events
+
+A subscription can specify a condition that determines when to accept an event:
+
+```ruby
+store.subscribe ::MergeRequests::UpdateHeadPipelineWorker,
+ to: ::Ci::PipelineCreatedEvent,
+ if: -> (event) { event.data[:merge_request_id].present? }
+```
+
+This tells the event store to dispatch `Ci::PipelineCreatedEvent`s to the subscriber if
+the condition is met.
+
+This technique can avoid scheduling Sidekiq jobs if the subscriber is interested in only a
+small subset of events.
+
+WARNING:
+When using conditional dispatch, the condition must be cheap because it is
+executed synchronously every time the given event is published.
+
+For complex conditions it's best to subscribe to all the events and then handle the logic
+in the `handle_event` method of the subscriber worker.
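+
+For example, rather than loading records in the subscription block, a sketch of
+guarding inside the worker (`requires_head_pipeline_update?` is a hypothetical check):
+
+```ruby
+def handle_event(event)
+  pipeline = Ci::Pipeline.find_by_id(event.data[:pipeline_id])
+  return unless pipeline
+
+  # An expensive or complex check runs here, inside the Sidekiq job,
+  # instead of synchronously at publish time.
+  return unless requires_head_pipeline_update?(pipeline)
+
+  # ...
+end
+```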
diff --git a/doc/development/experiment_guide/experimentation.md b/doc/development/experiment_guide/experimentation.md
index b242646c549..bb782af80cc 100644
--- a/doc/development/experiment_guide/experimentation.md
+++ b/doc/development/experiment_guide/experimentation.md
@@ -1,402 +1,9 @@
---
-stage: Growth
-group: Activation
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+redirect_to: 'gitlab_experiment.md'
+remove_date: '2022-04-13'
---
-# Create an A/B test with `Experimentation Module`
+This document was moved to [another location](gitlab_experiment.md).
-NOTE:
-We recommend using [GLEX](gitlab_experiment.md) for new experiments.
-
-## Implement the experiment
-
-1. Add the experiment to the `Gitlab::Experimentation::EXPERIMENTS` hash in
- [`experimentation.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib%2Fgitlab%2Fexperimentation.rb):
-
- ```ruby
- EXPERIMENTS = {
- other_experiment: {
- #...
- },
- # Add your experiment here:
- signup_flow: {
- tracking_category: 'Growth::Activation::Experiment::SignUpFlow' # Used for providing the category when setting up tracking data
- }
- }.freeze
- ```
-
-1. Use the experiment in the code.
-
- Experiments can be performed on a `subject`. The provided `subject` should
- respond to `to_global_id` or `to_s`.
- The resulting string is bucketed and assigned to either the control or the
- experimental group, so you must always provide the same `subject`
- for an experiment to have the same experience.
-
- 1. Use this standard for the experiment in a controller:
-
- - Experiment run for a user:
-
- ```ruby
- class ProjectController < ApplicationController
- def show
- # experiment_enabled?(:experiment_key) is also available in views and helpers
- if experiment_enabled?(:signup_flow, subject: current_user)
- # render the experiment
- else
- # render the original version
- end
- end
- end
- ```
-
- - Experiment run for a namespace:
-
- ```ruby
- if experiment_enabled?(:signup_flow, subject: namespace)
- # experiment code
- else
- # control code
- end
- ```
-
- When no subject is given, it falls back to a cookie that gets set and is consistent until
- the cookie gets deleted.
-
- ```ruby
- class RegistrationController < ApplicationController
- def show
- # falls back to a cookie
- if experiment_enabled?(:signup_flow)
- # render the experiment
- else
- # render the original version
- end
- end
- end
- ```
-
- 1. Make the experiment available to the frontend in a controller. This example
- checks whether the experiment is enabled and pushes the result to the frontend:
-
- ```ruby
- before_action do
- push_frontend_experiment(:signup_flow, subject: current_user)
- end
- ```
-
- You can check the state of the feature flag in JavaScript:
-
- ```javascript
- import { isExperimentEnabled } from '~/experimentation';
-
- if ( isExperimentEnabled('signupFlow') ) {
- // ...
- }
- ```
-
-You can also run an experiment outside of the controller scope, such as in a worker:
-
-```ruby
-class SomeWorker
- def perform
- # Check if the experiment is active at all (the percentage_of_time_value > 0)
- return unless Gitlab::Experimentation.active?(:experiment_key)
-
- # Since we cannot access cookies in a worker, we need to bucket models
- # based on a unique, unchanging attribute instead.
- # It is therefore necessary to always provide the same subject.
- if Gitlab::Experimentation.in_experiment_group?(:experiment_key, subject: user)
- # execute experimental code
- else
- # execute control code
- end
- end
-end
-```
-
-## Implement tracking events
-
-To determine whether the experiment is a success or not, we must implement tracking events
-to acquire data for analyzing. We can send events to Snowplow via either the backend or frontend.
-Read the [product intelligence guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/) for more details.
-
-### Track backend events
-
-The framework provides a helper method that is available in controllers:
-
-```ruby
-before_action do
- track_experiment_event(:signup_flow, 'action', 'value', subject: current_user)
-end
-```
-
-To test it:
-
-```ruby
-context 'when the experiment is active and the user is in the experimental group' do
- before do
- stub_experiment(signup_flow: true)
- stub_experiment_for_subject(signup_flow: true)
- end
-
- it 'tracks an event', :snowplow do
- subject
-
- expect_snowplow_event(
- category: 'Growth::Activation::Experiment::SignUpFlow',
- action: 'action',
- value: 'value',
- label: 'experimentation_subject_id',
- property: 'experimental_group'
- )
- end
-end
-```
-
-### Track frontend events
-
-The framework provides a helper method that is available in controllers:
-
-```ruby
-before_action do
- push_frontend_experiment(:signup_flow, subject: current_user)
- frontend_experimentation_tracking_data(:signup_flow, 'action', 'value', subject: current_user)
-end
-```
-
-This pushes tracking data to `gon.experiments` and `gon.tracking_data`.
-
-```ruby
-expect(Gon.experiments['signupFlow']).to eq(true)
-
-expect(Gon.tracking_data).to eq(
- {
- category: 'Growth::Activation::Experiment::SignUpFlow',
- action: 'action',
- value: 'value',
- label: 'experimentation_subject_id',
- property: 'experimental_group'
- }
-)
-```
-
-To track it:
-
-```javascript
-import { isExperimentEnabled } from '~/lib/utils/experimentation';
-import Tracking from '~/tracking';
-
-document.addEventListener('DOMContentLoaded', () => {
- const signupFlowExperimentEnabled = isExperimentEnabled('signupFlow');
-
- if (signupFlowExperimentEnabled && gon.tracking_data) {
- const { category, action, ...data } = gon.tracking_data;
-
- Tracking.event(category, action, data);
- }
-}
-```
-
-To test it in Jest:
-
-```javascript
-import { withGonExperiment } from 'helpers/experimentation_helper';
-import Tracking from '~/tracking';
-
-describe('event tracking', () => {
- describe('with tracking data', () => {
- withGonExperiment('signupFlow');
-
- beforeEach(() => {
- jest.spyOn(Tracking, 'event').mockImplementation(() => {});
-
- gon.tracking_data = {
- category: 'Growth::Activation::Experiment::SignUpFlow',
- action: 'action',
- value: 'value',
- label: 'experimentation_subject_id',
- property: 'experimental_group'
- };
- });
-
- it('should track data', () => {
- performAction()
-
- expect(Tracking.event).toHaveBeenCalledWith(
- 'Growth::Activation::Experiment::SignUpFlow',
- 'action',
- {
- value: 'value',
- label: 'experimentation_subject_id',
- property: 'experimental_group'
- },
- );
- });
- });
-});
-```
-
-## Record experiment user
-
-In addition to the anonymous tracking of events, we can also record which users
-have participated in which experiments, and whether they were given the control
-experience or the experimental experience.
-
-The `record_experiment_user` helper method is available to all controllers, and it
-enables you to record these experiment participants (the current user) and which
-experience they were given:
-
-```ruby
-before_action do
- record_experiment_user(:signup_flow)
-end
-```
-
-Subsequent calls to this method for the same experiment and the same user have no
-effect unless the user is then enrolled into a different experience. This happens
-when we roll out the experimental experience to a greater percentage of users.
-
-This data is completely separate from the [events tracking data](#implement-tracking-events).
-They are not linked together in any way.
-
-### Add context
-
-You can add arbitrary context data in a hash which gets stored as part of the experiment
-user record. New calls to the `record_experiment_user` with newer contexts are merged
-deeply into the existing context.
-
-This data can then be used by data analytics dashboards.
-
-```ruby
-before_action do
- record_experiment_user(:signup_flow, foo: 42, bar: { a: 22})
- # context is { "foo" => 42, "bar" => { "a" => 22 }}
-end
-
-# Additional contexts for newer record calls are merged deeply
-record_experiment_user(:signup_flow, foo: 40, bar: { b: 2 }, thor: 3)
-# context becomes { "foo" => 40, "bar" => { "a" => 22, "b" => 2 }, "thor" => 3}
-```
-
-## Record experiment conversion event
-
-Along with the tracking of backend and frontend events and the
-[recording of experiment participants](#record-experiment-user), we can also record
-when a user performs the desired conversion event action. For example:
-
-- **Experimental experience:** Show an in-product nudge to test if the change causes more
- people to sign up for trials.
-- **Conversion event:** The user starts a trial.
-
-The `record_experiment_conversion_event` helper method is available to all controllers.
-Use it to record the conversion event for the current user, regardless of whether
-the user is in the control or experimental group:
-
-```ruby
-before_action do
- record_experiment_conversion_event(:signup_flow)
-end
-```
-
-Note that the use of this method requires that we have first
-[recorded the user](#record-experiment-user) as being part of the experiment.
-
-## Enable the experiment
-
-After all merge requests have been merged, use [ChatOps](../../ci/chatops/index.md) in the
-[appropriate channel](../feature_flags/controls.md#communicate-the-change) to start the experiment for 10% of the users.
-The feature flag should have the name of the experiment with the `_experiment_percentage` suffix appended.
-For visibility, share any commands run against production in the `#s_growth` channel:
-
- ```shell
- /chatops run feature set signup_flow_experiment_percentage 10
- ```
-
- If you notice issues with the experiment, you can disable the experiment by removing the feature flag:
-
- ```shell
- /chatops run feature delete signup_flow_experiment_percentage
- ```
-
-## Add user to experiment group manually
-
-To force the application to add your current user into the experiment group,
-add a query string parameter to the path where the experiment runs. If you add the
-query string parameter, the experiment works only for this request, and doesn't work
-after following links or submitting forms.
-
-For example, to forcibly enable the `EXPERIMENT_KEY` experiment, add `force_experiment=EXPERIMENT_KEY`
-to the URL:
-
-```shell
-https://gitlab.com/<EXPERIMENT_ENTRY_URL>?force_experiment=<EXPERIMENT_KEY>
-```
-
-## Add user to experiment group with a cookie
-
-You can force the current user into the experiment group for `<EXPERIMENT_KEY>`
-during the browser session by using your browser's developer tools:
-
-```javascript
-document.cookie = "force_experiment=<EXPERIMENT_KEY>; path=/";
-```
-
-Use a comma to list more than one experiment to be forced:
-
-```javascript
-document.cookie = "force_experiment=<EXPERIMENT_KEY>,<ANOTHER_EXPERIMENT_KEY>; path=/";
-```
-
-To clear the experiments, unset the `force_experiment` cookie:
-
-```javascript
-document.cookie = "force_experiment=; path=/";
-```
-
-## Testing and test helpers
-
-### RSpec
-
-Use the following in RSpec to mock the experiment:
-
-```ruby
-context 'when the experiment is active' do
- before do
- stub_experiment(signup_flow: true)
- end
-
- context 'when the user is in the experimental group' do
- before do
- stub_experiment_for_subject(signup_flow: true)
- end
-
- it { is_expected.to do_experimental_thing }
- end
-
- context 'when the user is in the control group' do
- before do
- stub_experiment_for_subject(signup_flow: false)
- end
-
- it { is_expected.to do_control_thing }
- end
-end
-```
-
-### Jest
-
-Use the following in Jest to mock the experiment:
-
-```javascript
-import { withGonExperiment } from 'helpers/experimentation_helper';
-
-describe('given experiment is enabled', () => {
- withGonExperiment('signupFlow');
-
- it('should do the experimental thing', () => {
- expect(wrapper.find('.js-some-experiment-triggered-element')).toEqual(expect.any(Element));
- });
-});
-```
+<!-- This redirect file can be deleted after <2022-04-13>. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/experiment_guide/gitlab_experiment.md b/doc/development/experiment_guide/gitlab_experiment.md
index 288823bb41f..36d2a4f6fbf 100644
--- a/doc/development/experiment_guide/gitlab_experiment.md
+++ b/doc/development/experiment_guide/gitlab_experiment.md
@@ -71,6 +71,8 @@ class Cached cached
## Implement an experiment
+[Examples](https://gitlab.com/gitlab-org/growth/growth/-/wikis/GLEX-Framework-code-examples)
+
Start by generating a feature flag using the `bin/feature-flag` command as you
normally would for a development feature flag, making sure to use `experiment` for
the type. For the sake of documentation let's name our feature flag (and experiment)
diff --git a/doc/development/experiment_guide/index.md b/doc/development/experiment_guide/index.md
index 9937cb2ebd1..8d62f92e0b9 100644
--- a/doc/development/experiment_guide/index.md
+++ b/doc/development/experiment_guide/index.md
@@ -48,10 +48,7 @@ If the experiment is successful and becomes part of the product, any items that
For more information, see [Implementing an A/B/n experiment using GLEX](gitlab_experiment.md).
-There are still some longer running experiments using the [`Exerimentation Module`](experimentation.md).
-
-Both approaches use [experiment](../feature_flags/index.md#experiment-type) feature flags.
-`GLEX` is the preferred option for new experiments.
+This uses [experiment](../feature_flags/index.md#experiment-type) feature flags.
### Add new icons and illustrations for experiments
diff --git a/doc/development/fe_guide/style/javascript.md b/doc/development/fe_guide/style/javascript.md
index a2035eb1b55..f22e3ea6395 100644
--- a/doc/development/fe_guide/style/javascript.md
+++ b/doc/development/fe_guide/style/javascript.md
@@ -189,24 +189,6 @@ are loaded dynamically with webpack.
Do not use `innerHTML`, `append()` or `html()` to set content. It opens up too many
vulnerabilities.
-## Avoid single-line conditional statements
-
-Indentation is important when scanning code as it gives a quick indication of the existence of branches, loops, and return points.
-This can help to quickly understand the control flow.
-
-```javascript
-// bad
-if (isThingNull) return '';
-
-if (isThingNull)
- return '';
-
-// good
-if (isThingNull) {
- return '';
-}
-```
-
## ESLint
ESLint behavior can be found in our [tooling guide](../tooling.md).
diff --git a/doc/development/fe_guide/vue3_migration.md b/doc/development/fe_guide/vue3_migration.md
index 2b783eb21b7..6e994d5e95d 100644
--- a/doc/development/fe_guide/vue3_migration.md
+++ b/doc/development/fe_guide/vue3_migration.md
@@ -37,32 +37,8 @@ If you need cross-component communication (between different Vue apps), then per
**What to use instead**
-Vue documentation recommends using the [mitt](https://github.com/developit/mitt) library. It's relatively small (200 bytes, compressed) and has a clear API:
+We have created a factory that you can use to instantiate a new [mitt](https://github.com/developit/mitt)-like event hub.
-```javascript
-import mitt from 'mitt'
-
-const emitter = mitt()
-
-// listen to an event
-emitter.on('foo', e => console.log('foo', e) )
-
-// listen to all events
-emitter.on('*', (type, e) => console.log(type, e) )
-
-// fire an event
-emitter.emit('foo', { a: 'b' })
-
-// working with handler references:
-function onFoo() {}
-
-emitter.on('foo', onFoo) // listen
-emitter.off('foo', onFoo) // unlisten
-```
-
-**Event hub factory**
-
-We have created a factory that you can use to instantiate a new mitt-based event hub.
This makes it easier to migrate existing event hubs to the new recommended approach, or
to create new ones.
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 4843b58c3fd..6bf5be23ace 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -69,7 +69,7 @@ Slack channel for the stage the feature is relevant to. For example, use the
`#s_monitor` channel for features developed by the Monitor stage, Health
group.
-To enable a feature for 25% of all users, run the following in Slack:
+To enable a feature for 25% of the time, run the following in Slack:
```shell
/chatops run feature set new_navigation_bar 25 --dev
@@ -303,6 +303,19 @@ and reduces confidence in our testing suite covering all possible combinations.
Additionally, a feature flag overwritten in some of the environments can result
in undefined and untested system behavior.
+`development` type feature flags should have a short life-cycle because their purpose
+is to roll out a persistent change. `development` feature flags that are older
+than 2 milestones are reported to engineering managers. The
+[report tool](https://gitlab.com/gitlab-org/gitlab-feature-flag-alert) runs on a
+monthly basis. For example, see [the report for December 2021](https://gitlab.com/gitlab-org/quality/triage-reports/-/issues/5480).
+
+If a `development` feature flag is still present in the codebase after 6 months, we should
+take one of the following actions:
+
+- Enable the feature flag by default and remove it.
+- Convert it to an instance, group, or project setting.
+- Revert the changes if it's still disabled and not needed anymore.
+
To remove a feature flag, open **one merge request** to make the changes. In the MR:
1. Add the ~"feature flag" label so release managers are aware the changes are hidden behind a feature flag.
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 86dc4c73062..820b4bab802 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -171,6 +171,7 @@ Each feature flag is defined in a separate YAML file consisting of a number of f
| `default_enabled` | yes | The default state of the feature flag that is strictly validated, with `default_enabled:` passed as an argument. |
| `introduced_by_url` | no | The URL to the Merge Request that introduced the feature flag. |
| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
+| `milestone` | no | Milestone in which the feature was added. |
| `group` | no | The [group](https://about.gitlab.com/handbook/product/categories/#devops-stages) that owns the feature flag. |
NOTE:
diff --git a/doc/development/features_inside_dot_gitlab.md b/doc/development/features_inside_dot_gitlab.md
index 3a9decaad69..73ba9cbd674 100644
--- a/doc/development/features_inside_dot_gitlab.md
+++ b/doc/development/features_inside_dot_gitlab.md
@@ -16,7 +16,7 @@ When implementing new features, please refer to these existing features to avoid
- [CODEOWNERS](../user/project/code_owners.md#set-up-code-owners): `.gitlab/CODEOWNERS`.
- [Route Maps](../ci/review_apps/#route-maps): `.gitlab/route-map.yml`.
- [Customize Auto DevOps Helm Values](../topics/autodevops/customize.md#customize-values-for-helm-chart): `.gitlab/auto-deploy-values.yaml`.
-- [GitLab managed apps CI/CD](../user/clusters/applications.md#usage): `.gitlab/managed-apps/config.yaml`.
+- [GitLab managed apps CI/CD](../user/clusters/applications.md#prerequisites): `.gitlab/managed-apps/config.yaml`.
- [Insights](../user/project/insights/index.md#configure-your-insights): `.gitlab/insights.yml`.
- [Service Desk Templates](../user/project/service_desk.md#using-customized-email-templates): `.gitlab/service_desk_templates/`.
- [Web IDE](../user/project/web_ide/#web-ide-configuration-file): `.gitlab/.gitlab-webide.yml`.
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 7d5aae49749..9f5fd674d38 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -7,8 +7,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Geo (development) **(PREMIUM SELF)**
Geo connects GitLab instances together. One GitLab instance is
-designated as a **primary** node and can be run with multiple
-**secondary** nodes. Geo orchestrates quite a few components that can be seen on
+designated as a **primary** site and can be run with multiple
+**secondary** sites. Geo orchestrates quite a few components that can be seen on
the diagram below and are described in more detail within this document.
![Geo Architecture Diagram](../administration/geo/replication/img/geo_architecture.png)
@@ -22,16 +22,16 @@ Geo handles replication for different components:
- [Uploaded blobs](#uploads-replication): includes anything from images attached on issues
to raw logs and assets from CI.
-With the exception of the Database replication, on a *secondary* node, everything is coordinated
+With the exception of the Database replication, on a *secondary* site, everything is coordinated
by the [Geo Log Cursor](#geo-log-cursor).
### Geo Log Cursor daemon
The [Geo Log Cursor daemon](#geo-log-cursor-daemon) is a separate process running on
-each **secondary** node. It monitors the [Geo Event Log](#geo-event-log)
+each **secondary** site. It monitors the [Geo Event Log](#geo-event-log)
for new events and creates background jobs for each specific event type.
-For example when a repository is updated, the Geo **primary** node creates
+For example when a repository is updated, the Geo **primary** site creates
a Geo event with an associated repository updated event. The Geo Log Cursor daemon
picks the event up and schedules a `Geo::ProjectSyncWorker` job which will
use the `Geo::RepositorySyncService` and `Geo::WikiSyncService` classes
@@ -41,7 +41,7 @@ The Geo Log Cursor daemon can operate in High Availability mode automatically.
The daemon will try to acquire a lock from time to time and once acquired, it
will behave as the *active* daemon.
-Any additional running daemons on the same node, will be in standby
+Any additional running daemons on the same site, will be in standby
mode, ready to resume work if the *active* daemon releases its lock.
We use the [`ExclusiveLease`](https://www.rubydoc.info/github/gitlabhq/gitlabhq/Gitlab/ExclusiveLease) lock type with a small TTL, that is renewed at every
@@ -53,14 +53,14 @@ the lock, it switches to standby mode.
### Database replication
Geo uses [streaming replication](#streaming-replication) to replicate
-the database from the **primary** to the **secondary** nodes. This
-replication gives the **secondary** nodes access to all the data saved
+the database from the **primary** to the **secondary** sites. This
+replication gives the **secondary** sites access to all the data saved
in the database. So users can log in on the **secondary** and read all
-the issues, merge requests, and so on, on the **secondary** node.
+the issues, merge requests, and so on, on the **secondary** site.
### Repository replication
-Geo also replicates repositories. Each **secondary** node keeps track of
+Geo also replicates repositories. Each **secondary** site keeps track of
the state of every repository in the [tracking database](#tracking-database).
There are a few ways a repository gets replicated by the:
@@ -92,7 +92,7 @@ background and it searches the `Geo::ProjectRegistry` model for
projects that need updating. Those projects can be:
- Unsynced: Projects that have never been synced on the **secondary**
- node and so do not exist yet.
+ site and so do not exist yet.
- Updated recently: Projects that have a `last_repository_updated_at`
timestamp that is more recent than the `last_repository_successful_sync_at`
timestamp in the `Geo::ProjectRegistry` model.
@@ -106,7 +106,7 @@ it's successful, we replace the main repository with the newly cloned one.
### Uploads replication
-File uploads are also being replicated to the **secondary** node. To
+File uploads are also being replicated to the **secondary** site. To
track the state of syncing, the `Geo::UploadRegistry` model is used.
#### Upload Registry
@@ -123,7 +123,7 @@ models respectively.
Also similar to the [Repository Sync worker](#repository-sync-worker),
there is a `Geo::FileDownloadDispatchWorker` class that is run
periodically to sync all uploads that aren't synced to the Geo
-**secondary** node yet.
+**secondary** site yet.
Files are copied via HTTP(s) and initiated via the
`/api/v4/geo/transfers/:type/:id` endpoint,
@@ -136,18 +136,18 @@ To authenticate file transfers, each `GeoNode` record has two fields:
- A public access key (`access_key` field).
- A secret access key (`secret_access_key` field).
-The **secondary** node authenticates itself via a [JWT request](https://jwt.io/).
-When the **secondary** node wishes to download a file, it sends an
+The **secondary** site authenticates itself via a [JWT request](https://jwt.io/).
+When the **secondary** site wishes to download a file, it sends an
HTTP request with the `Authorization` header:
```plaintext
Authorization: GL-Geo <access_key>:<JWT payload>
```
-The **primary** node uses the `access_key` field to look up the
-corresponding **secondary** node and decrypts the JWT payload,
+The **primary** site uses the `access_key` field to look up the
+corresponding **secondary** site and decrypts the JWT payload,
which contains additional information to identify the file
-request. This ensures that the **secondary** node downloads the right
+request. This ensures that the **secondary** site downloads the right
file for the right database ID. For example, for an LFS object, the
request must also include the SHA256 sum of the file. An example JWT
payload looks like:
@@ -157,7 +157,7 @@ payload looks like:
```
If the requested file matches the requested SHA256 sum, then the Geo
-**primary** node sends data via the [X-Sendfile](https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/)
+**primary** site sends data via the [X-Sendfile](https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/)
feature, which allows NGINX to handle the file transfer without tying
up Rails or Workhorse.
@@ -168,34 +168,34 @@ involved, otherwise it may fail with an encryption error.
## Git Push to Geo secondary
The Git Push Proxy exists as a functionality built inside the `gitlab-shell` component.
-It is active on a **secondary** node only. It allows the user that has cloned a repository
-from the secondary node to push to the same URL.
+It is active on a **secondary** site only. It allows the user that has cloned a repository
+from the secondary site to push to the same URL.
-Git `push` requests directed to a **secondary** node will be sent over to the **primary** node,
-while `pull` requests will continue to be served by the **secondary** node for maximum efficiency.
+Git `push` requests directed to a **secondary** site will be sent over to the **primary** site,
+while `pull` requests will continue to be served by the **secondary** site for maximum efficiency.
HTTPS and SSH requests are handled differently:
-- With HTTPS, we will give the user a `HTTP 302 Redirect` pointing to the project on the **primary** node.
+- With HTTPS, we will give the user a `HTTP 302 Redirect` pointing to the project on the **primary** site.
The Git client is wise enough to understand that status code and process the redirection.
- With SSH, because there is no equivalent way to perform a redirect, we have to proxy the request.
This is done inside [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell), by first translating the request
- to the HTTP protocol, and then proxying it to the **primary** node.
+ to the HTTP protocol, and then proxying it to the **primary** site.
The [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell) daemon knows when to proxy based on the response
from `/api/v4/allowed`. A special `HTTP 300` status code is returned and we execute a "custom action",
specified in the response body. The response contains additional data that allows the proxied `push` operation
-to happen on the **primary** node.
+to happen on the **primary** site.
## Using the Tracking Database
Along with the main database that is replicated, a Geo **secondary**
-node has its own separate [Tracking database](#tracking-database).
+site has its own separate [Tracking database](#tracking-database).
-The tracking database contains the state of the **secondary** node.
+The tracking database contains the state of the **secondary** site.
Any database migration that needs to be run as part of an upgrade
-needs to be applied to the tracking database on each **secondary** node.
+needs to be applied to the tracking database on each **secondary** site.
### Configuration
@@ -223,12 +223,12 @@ projects/attachments/ and so on, in the tracking database and main database.
## Redis
-Redis on the **secondary** node works the same as on the **primary**
-node. It is used for caching, storing sessions, and other persistent
+Redis on the **secondary** site works the same as on the **primary**
+site. It is used for caching, storing sessions, and other persistent
data.
-Redis data replication between **primary** and **secondary** node is
-not used, so sessions and so on, aren't shared between nodes.
+Redis data replication between **primary** and **secondary** site is
+not used, so sessions and so on, aren't shared between sites.
## Object Storage
@@ -244,7 +244,7 @@ ignores items in object storage. Either:
- The object storage layer should take care of its own geographical
replication.
-- All secondary nodes should use the same storage node.
+- All secondary sites should use the same storage site.
## Verification
@@ -252,29 +252,29 @@ ignores items in object storage. Either:
Repositories are verified with a checksum.
-The **primary** node calculates a checksum on the repository. It
+The **primary** site calculates a checksum on the repository. It
basically hashes all Git refs together and stores that hash in the
`project_repository_states` table of the database.
-The **secondary** node does the same to calculate the hash of its
-clone, and compares the hash with the value the **primary** node
+The **secondary** site does the same to calculate the hash of its
+clone, and compares the hash with the value the **primary** site
calculated. If there is a mismatch, Geo will mark this as a mismatch
and the administrator can see this in the [Geo admin panel](../user/admin_area/geo_nodes.md).
## Glossary
-### Primary node
+### Primary site
-A **primary** node is the single node in a Geo setup that read-write
+A **primary** site is the single site in a Geo setup that has read-write
capabilities. It's the single source of truth and the Geo
-**secondary** nodes replicate their data from there.
+**secondary** sites replicate their data from there.
-In a Geo setup, there can only be one **primary** node. All
-**secondary** nodes connect to that **primary**.
+In a Geo setup, there can only be one **primary** site. All
+**secondary** sites connect to that **primary**.
-### Secondary node
+### Secondary site
-A **secondary** node is a read-only replica of the **primary** node
+A **secondary** site is a read-only replica of the **primary** site
running in a different geographical location.
### Streaming replication
@@ -287,11 +287,11 @@ Streaming replication depends on the Write Ahead Logs, or WAL. Those
logs are copied over to the replica and replayed there.
Since streaming replication also replicates the schema, the database
-migration do not need to run on the secondary nodes.
+migrations do not need to run on the secondary sites.
### Tracking database
-A database on each Geo **secondary** node that keeps state for the node
+A database on each Geo **secondary** site that keeps state for the site
on which it resides. Read more in [Using the Tracking database](#using-the-tracking-database).
## Geo Event Log
@@ -312,7 +312,7 @@ events include:
### Geo Log Cursor
-The process running on the **secondary** node that looks for new
+The process running on the **secondary** site that looks for new
`Geo::EventLog` rows.
## Code features
@@ -327,51 +327,51 @@ Many of these methods are cached using the `RequestStore` class, to
reduce the performance impact of using the methods throughout the
codebase.
-#### Current node
+#### Current site
The class method `.current_node` returns the `GeoNode` record for the
-current node.
+current site.
We use the `host`, `port`, and `relative_url_root` values from
-`gitlab.yml` and search in the database to identify which node we are
+`gitlab.yml` and search in the database to identify which site we are
in (see `GeoNode.current_node`).
#### Primary or secondary
-To determine whether the current node is a **primary** node or a
-**secondary** node use the `.primary?` and `.secondary?` class
+To determine whether the current site is a **primary** site or a
+**secondary** site use the `.primary?` and `.secondary?` class
methods.
-It is possible for these methods to both return `false` on a node when
-the node is not enabled. See [Enablement](#enablement).
+It is possible for these methods to both return `false` on a site when
+the site is not enabled. See [Enablement](#enablement).
#### Geo Database configured?
There is also an additional gotcha when dealing with things that
happen during initialization time. In a few places, we use the
-`Gitlab::Geo.geo_database_configured?` method to check if the node has
+`Gitlab::Geo.geo_database_configured?` method to check if the site has
the tracking database, which only exists on the **secondary**
-node. This overcomes race conditions that could happen during
-bootstrapping of a new node.
+site. This overcomes race conditions that could happen during
+bootstrapping of a new site.
#### Enablement
We consider Geo feature enabled when the user has a valid license with the
-feature included, and they have at least one node defined at the Geo Nodes
+feature included, and they have at least one site defined at the Geo Nodes
screen.
See `Gitlab::Geo.enabled?` and `Gitlab::Geo.license_allows?` methods.
#### Read-only
-All Geo **secondary** nodes are read-only.
+All Geo **secondary** sites are read-only.
The general principle of a [read-only database](verifying_database_capabilities.md#read-only-database)
-applies to all Geo **secondary** nodes. So the
+applies to all Geo **secondary** sites. So the
`Gitlab::Database.read_only?` method will always return `true` on a
-**secondary** node.
+**secondary** site.
-When some write actions are not allowed because the node is a
+When some write actions are not allowed because the site is a
**secondary**, consider adding the `Gitlab::Database.read_only?` or
`Gitlab::Database.read_write?` guard, instead of `Gitlab::Geo.secondary?`.
@@ -393,7 +393,7 @@ that need to be taken care of:
- Verification.
- Cleaner. When sync settings are changed for the secondary site, some resources need to be cleaned up.
- Geo Node Status. We need to provide API endpoints as well as some presentation in the GitLab Admin Area.
-- Health Check. If we can perform some pre-cheсks and make node unhealthy if something is wrong, we should do that.
+- Health Check. If we can perform some pre-checks and make the site unhealthy if something is wrong, we should do that.
The `rake gitlab:geo:check` command has to be updated too.
## History of communication channel
@@ -404,7 +404,7 @@ check here historic decisions and why we moved to new implementations.
### Custom code (GitLab 8.6 and earlier)
In GitLab versions before 8.6, custom code is used to handle
-notification from **primary** node to **secondary** nodes by HTTP
+notification from **primary** site to **secondary** sites by HTTP
requests.
### System hooks (GitLab 8.7 to 9.5)
diff --git a/doc/development/geo/framework.md b/doc/development/geo/framework.md
index 4460e6f66a8..778387986d8 100644
--- a/doc/development/geo/framework.md
+++ b/doc/development/geo/framework.md
@@ -14,7 +14,7 @@ team to discuss the options. You can contact them in `#g_geo` on Slack
or mention `@geo-team` in the issue or merge request.
Geo provides an API to make it possible to easily replicate data types
-across Geo nodes. This API is presented as a Ruby Domain-Specific
+across Geo sites. This API is presented as a Ruby Domain-Specific
Language (DSL) and aims to make it possible to replicate data with
minimal effort of the engineer who created a data type.
@@ -43,7 +43,7 @@ naming conventions:
For more detail, see [Data types](../../administration/geo/replication/datatypes.md).
- **Geo Replicable**:
- A Replicable is a resource Geo wants to sync across Geo nodes. There
+ A Replicable is a resource Geo wants to sync across Geo sites. There
is a limited set of supported data types of replicables. The effort
required to implement replication of a resource that belongs to one
of the known data types is minimal.
@@ -57,7 +57,7 @@ naming conventions:
It's tied to the Geo Replicable data type. All replicators have a
common interface that can be used to process (that is, produce and
consume) events. It takes care of the communication between the
- primary node (where events are produced) and the secondary node
+ primary site (where events are produced) and the secondary site
(where events are consumed). The engineer who wants to incorporate
Geo in their feature will use the API of replicators to make this
happen.
@@ -117,7 +117,7 @@ the model code:
```ruby
class Packages::PackageFile < ApplicationRecord
- include ::Gitlab::Geo::ReplicableModel
+ include ::Geo::ReplicableModel
with_replicator Geo::PackageFileReplicator
end
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 5a7a5a6abcb..275e9421983 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -112,10 +112,9 @@ bundle exec rake gitlab:features:disable_rugged
Most of this code exists in the `lib/gitlab/git/rugged_impl` directory.
NOTE:
-You should *not* need to add or modify code related to
-Rugged unless explicitly discussed with the
-[Gitaly Team](https://gitlab.com/groups/gl-gitaly/group_members). This code does
-NOT work on GitLab.com or other GitLab instances that do not use NFS.
+You should *not* have to add or modify code related to Rugged unless explicitly discussed with the
+[Gitaly Team](https://gitlab.com/groups/gl-gitaly/group_members). This code does not work on GitLab.com or other GitLab
+instances that do not use NFS.
## `TooManyInvocationsError` errors
@@ -197,7 +196,7 @@ If you make changes to your local Gitaly in between test runs you need
to manually run `make` again.
Note that CI tests do not use your locally modified version of
-Gitaly. To use a custom Gitaly version in CI you need to update
+Gitaly. To use a custom Gitaly version in CI, you must update
GITALY_SERVER_VERSION as described at the beginning of this section.
To use a different Gitaly repository, such as if your changes are present
@@ -326,7 +325,7 @@ default value. The default value depends on the GitLab version.
To be sure that the flag is set correctly and it goes into Gitaly, you can check
the integration by using GDK:
-1. The state of the flag must be observable. To check it, you need to enable it
+1. The state of the flag must be observable. To check it, you must enable it
by fetching the Prometheus metrics:
1. Navigate to GDK's root directory.
1. Make sure you have the proper branch checked out for Gitaly.
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index 9872aa239dc..9e236b4cfce 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -16,7 +16,7 @@ There are several ways to import a project.
### Importing via UI
-The first option is to simply [import the Project tarball file via the GitLab UI](../user/project/settings/import_export.md#import-the-project):
+The first option is to simply [import the Project tarball file via the GitLab UI](../user/project/settings/import_export.md#import-a-project-and-its-data):
1. Create the group `qa-perf-testing`
1. Import the [GitLab FOSS project tarball](https://gitlab.com/gitlab-org/quality/performance-data/-/blob/master/projects_export/gitlabhq_export.tar.gz) into the Group.
diff --git a/doc/development/index.md b/doc/development/index.md
index 1398104abda..197c7f48398 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -164,6 +164,7 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
### General
- [Directory structure](directory_structure.md)
+- [GitLab EventStore](event_store.md) to publish/subscribe to domain events
- [GitLab utilities](utilities.md)
- [Newlines style guide](newlines_styleguide.md)
- [Logging](logging.md)
@@ -263,8 +264,7 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
- [Caching guidelines](caching.md) for using caching in Rails under a GitLab environment.
- [Merge request performance guidelines](merge_request_performance_guidelines.md)
for ensuring merge requests do not negatively impact GitLab performance
-- [Profiling](profiling.md) a URL, measuring performance using Sherlock, or
- tracking down N+1 queries using Bullet.
+- [Profiling](profiling.md) a URL or tracking down N+1 queries using Bullet.
- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries
masked by query caching, memory profiling and why should we avoid cached
queries.
diff --git a/doc/development/integrations/jenkins.md b/doc/development/integrations/jenkins.md
index 3987c6658c3..8a3f64f0a0d 100644
--- a/doc/development/integrations/jenkins.md
+++ b/doc/development/integrations/jenkins.md
@@ -54,8 +54,8 @@ To set up the Jenkins project you intend to run your build on, read
You can configure your integration between Jenkins and GitLab:
-- With the [recommended approach for Jenkins integration](../../integration/jenkins.md#recommended-jenkins-integration).
-- [Using a webhook](../../integration/jenkins.md#webhook-integration).
+- With the [recommended approach for Jenkins integration](../../integration/jenkins.md#configure-a-jenkins-integration-recommended).
+- [Using a webhook](../../integration/jenkins.md#configure-a-webhook).
## Test your setup
diff --git a/doc/development/integrations/jira_connect.md b/doc/development/integrations/jira_connect.md
index ca3dc3660ee..fc7204fdd5a 100644
--- a/doc/development/integrations/jira_connect.md
+++ b/doc/development/integrations/jira_connect.md
@@ -89,6 +89,3 @@ To add a [namespace](../../user/group/index.md#namespaces) to Jira:
`--cookies "<cookies from the request>"`.
1. Submit the cURL request.
1. If the response is `{"success":true}`, the namespace was added.
-1. Append the cookies to the cURL command in your terminal `--cookies "PASTE COOKIES HERE"`.
-1. Submit the cURL request.
-1. If the response is `{"success":true}` the namespace was added.
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 356e731aa87..147d379cec7 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -327,21 +327,42 @@ You can find the schemas for these scanners here:
- [Coverage Fuzzing](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json)
- [Secret Detection](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json)
-### Version
+### Enable report validation
-This field specifies the version of the [Security Report Schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas) you are using. Please refer to the [releases](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/releases) of the schemas for the specific versions to use.
+In GitLab 14.10 and later, report validation against the schemas is enabled. To enable report validation for versions earlier than 14.10,
+set [`VALIDATE_SCHEMA`](../../user/application_security/#enable-security-report-validation) to
+`"true"`.
-### Vulnerabilities
+Reports that don't pass validation are not ingested by GitLab, and an error message
+displays on the corresponding pipeline.
+
+You should ensure that reports generated by the scanner pass validation against the schema version
+declared in your reports. GitLab uses the
+[`json_schemer`](https://www.rubydoc.info/gems/json_schemer) gem to perform validation.
+
+Ongoing improvements to report validation are tracked [in this epic](https://gitlab.com/groups/gitlab-org/-/epics/6968).
+
+### Report Fields
+
+#### Version
+
+This field specifies which [Security Report Schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas) version you are using. For information about the versions to use, see [releases](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/releases).
+
+In GitLab 14.10 and later, GitLab validates your report against the version of the schema specified by this value.
+The versions supported by GitLab can be found in
+[`gitlab/ee/lib/ee/gitlab/ci/parsers/security/validators/schemas`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/ee/gitlab/ci/parsers/security/validators/schemas).
+
+#### Vulnerabilities
The `vulnerabilities` field of the report is an array of vulnerability objects.
-#### ID
+##### ID
The `id` field is the unique identifier of the vulnerability.
It is used to reference a fixed vulnerability from [remediation objects](#remediations).
We recommend that you generate a UUID and use it as the `id` field's value.
-#### Category
+##### Category
The value of the `category` field matches the report type:
@@ -351,12 +372,12 @@ The value of the `category` field matches the report type:
- `sast`
- `dast`
-#### Scanner
+##### Scanner
The `scanner` field is an object that embeds a human-readable `name` and a technical `id`.
The `id` should not collide with any other scanner another integrator would provide.
-#### Name, message, and description
+##### Name, message, and description
The `name` and `message` fields contain a short description of the vulnerability.
The `description` field provides more details.
@@ -400,13 +421,13 @@ It should not repeat the other fields of the vulnerability object.
In particular, the `description` should not repeat the `location` (what is affected)
or the `solution` (how to mitigate the risk).
-#### Solution
+##### Solution
You can use the `solution` field to instruct users how to fix the identified vulnerability or to mitigate
the risk. End-users interact with this field, whereas GitLab automatically processes the
`remediations` objects.
-#### Identifiers
+##### Identifiers
The `identifiers` array describes the detected vulnerability. An identifier object's `type` and
`value` fields are used to tell if two identifiers are the same. The user interface uses the
@@ -440,15 +461,14 @@ Not all vulnerabilities have CVEs, and a CVE can be identified multiple times. A
isn't a stable identifier and you shouldn't assume it as such when tracking vulnerabilities.
The maximum number of identifiers for a vulnerability is set as 20. If a vulnerability has more than 20 identifiers,
-the system saves only the first 20 of them. Note that vulnerabilities in the [Pipeline
-Security](../../user/application_security/security_dashboard/#pipeline-security)
+the system saves only the first 20 of them. Note that vulnerabilities in the [Pipeline Security](../../user/application_security/security_dashboard/#view-vulnerabilities-in-a-pipeline)
tab do not enforce this limit and all identifiers present in the report artifact are displayed.
-### Details
+#### Details
The `details` field is an object that supports many different content elements that are displayed when viewing vulnerability information. An example of the various data elements can be seen in the [security-reports repository](https://gitlab.com/gitlab-examples/security/security-reports/-/tree/master/samples/details-example).
-### Location
+#### Location
The `location` indicates where the vulnerability has been detected.
The format of the location depends on the type of scanning.
@@ -458,7 +478,7 @@ which is used to track vulnerabilities
as new commits are pushed to the repository.
The attributes used to generate the location fingerprint also depend on the type of scanning.
-#### Dependency Scanning
+##### Dependency Scanning
The `location` of a Dependency Scanning vulnerability is composed of a `dependency` and a `file`.
The `dependency` object describes the affected `package` and the dependency `version`.
@@ -488,7 +508,7 @@ combines the `file` and the package `name`,
so these attributes are mandatory.
All other attributes are optional.
-#### Container Scanning
+##### Container Scanning
Similar to Dependency Scanning,
the `location` of a Container Scanning vulnerability has a `dependency` and a `file`.
@@ -519,7 +539,7 @@ so these attributes are mandatory.
The `image` is also mandatory.
All other attributes are optional.
-#### Cluster Image Scanning
+##### Cluster Image Scanning
The `location` of a `cluster_image_scanning` vulnerability has a `dependency` field. It also has
an `operating_system` field. For example, here is the `location` object for a vulnerability
@@ -553,7 +573,7 @@ as well as the package `name`, so these fields are required. The `image` field i
The `cluster_id` and `agent_id` are mutually exclusive, and one of them must be present.
All other fields are optional.
-#### SAST
+##### SAST
The `location` of a SAST vulnerability must have a `file` and a `start_line` field,
giving the path of the affected file, and the affected line number, respectively.
@@ -578,7 +598,7 @@ combines `file`, `start_line`, and `end_line`,
so these attributes are mandatory.
All other attributes are optional.
-### Tracking and merging vulnerabilities
+#### Tracking and merging vulnerabilities
Users may give feedback on a vulnerability:
@@ -606,7 +626,7 @@ and at least one [identifier](#identifiers). Two identifiers are the same if the
CWE and WASC identifiers are not considered because they describe categories of vulnerability flaws,
but not specific security flaws.
-#### Severity and confidence
+##### Severity and confidence
The `severity` field describes how much the vulnerability impacts the software,
whereas the `confidence` field describes how reliable the assessment of the vulnerability is.
@@ -623,7 +643,7 @@ Valid values are: `Ignore`, `Unknown`, `Experimental`, `Low`, `Medium`, `High`,
and needs to be investigated. We have [provided a chart](../../user/application_security/sast/analyzers.md#analyzers-data)
of the available SAST Analyzers and what data is currently available.
-### Remediations
+#### Remediations
The `remediations` field of the report is an array of remediation objects.
Each remediation describes a patch that can be applied to
@@ -667,28 +687,16 @@ Here is an example of a report that contains remediations.
}
```
-#### Summary
+##### Summary
The `summary` field is an overview of how the vulnerabilities can be fixed. This field is required.
-#### Fixed vulnerabilities
+##### Fixed vulnerabilities
The `fixes` field is an array of objects that reference the vulnerabilities fixed by the
remediation. `fixes[].id` contains a fixed vulnerability's [unique identifier](#id). This field is required.
-#### Diff
+##### Diff
The `diff` field is a base64-encoded remediation code diff, compatible with
[`git apply`](https://git-scm.com/docs/git-format-patch#_discussion). This field is required.
-
-## Limitations
-
-### Container Scanning
-
-Container Scanning currently has these limitations:
-
-- Although the Security Dashboard can display scan results from multiple images, if multiple
- vulnerabilities have the same fingerprint, only the first instance of that vulnerability is
- displayed. We're working on removing this limitation. You can follow our progress on the issue
- [Change location fingerprint for Container Scanning](https://gitlab.com/gitlab-org/gitlab/-/issues/215466).
-- Different scanners may each report the same vulnerability, resulting in duplicate findings.
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index 543c5f40f88..96910892022 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -703,10 +703,10 @@ Example response:
- CustomersDot
-## CI minute provisioning
+## CI/CD minutes provisioning
-The CI Minute endpoints are used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`)
-to apply additional packs of CI minutes, for personal namespaces or top-level groups within GitLab.com.
+The CI/CD minutes endpoints are used by [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (`customers.gitlab.com`)
+to apply additional packs of CI/CD minutes for personal namespaces or top-level groups within GitLab.com.
### Creating an additional pack
@@ -826,6 +826,32 @@ Example response:
200
```
+### Deleting an `upcoming_reconciliation`
+
+Use a DELETE request to delete an `upcoming_reconciliation`.
+
+```plaintext
+DELETE /internal/upcoming_reconciliations
+```
+
+| Attribute | Type | Required | Description |
+|:---------------|:--------|:---------|:----------------------------------------------------------------------------------|
+| `namespace_id` | integer | yes | The ID of the GitLab.com namespace that no longer has an upcoming reconciliation. |
+
+Example request:
+
+```shell
+curl --request DELETE \
+ --url "http://localhost:3000/api/v4/internal/upcoming_reconciliations?namespace_id=22" \
+ --header 'PRIVATE-TOKEN: <admin_access_token>'
+```
+
+Example response:
+
+```plaintext
+204
+```
+
### Known consumers
- CustomersDot
diff --git a/doc/development/licensing.md b/doc/development/licensing.md
index 23871bf3c68..a50c514733e 100644
--- a/doc/development/licensing.md
+++ b/doc/development/licensing.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Licensing and Compatibility
-[GitLab Community Edition](https://gitlab.com/gitlab-org/gitlab-foss/) (CE) is licensed [under the terms of the MIT License](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/LICENSE). [GitLab Enterprise Edition](https://gitlab.com/gitlab-org/gitlab/) (EE) is licensed under "[The GitLab Enterprise Edition (EE) license](https://gitlab.com/gitlab-org/gitlab/-/blob/master/LICENSE)" wherein there are more restrictions.
+[GitLab Community Edition](https://gitlab.com/gitlab-org/gitlab-foss/) (CE) is licensed [under the terms of the MIT License](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/LICENSE). [GitLab Enterprise Edition](https://gitlab.com/gitlab-org/gitlab/) (EE) is licensed under "[The GitLab Enterprise Edition (EE) license](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/LICENSE)" wherein there are more restrictions.
## Automated Testing
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index 69e9f7d16e3..106db862122 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -13,9 +13,7 @@ _every_ merge request **should** adhere to the guidelines outlined in this
document. There are no exceptions to this rule unless specifically discussed
with and agreed upon by backend maintainers and performance specialists.
-To measure the impact of a merge request you can use
-[Sherlock](profiling.md#sherlock). It's also highly recommended that you read
-the following guides:
+It's also highly recommended that you read the following guides:
- [Performance Guidelines](performance.md)
- [Avoiding downtime in migrations](avoiding_downtime_in_migrations.md)
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 0bd9979e790..d85b7372814 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -162,6 +162,9 @@ def down
end
```
+Migrations like this are inherently risky and [additional actions](database_review.md#preparation-when-adding-data-migrations)
+are required when preparing the migration for review.
+
## Atomicity
By default, migrations are single transaction. That is, a transaction is opened
@@ -769,6 +772,32 @@ to run on a large table, as long as it is only updating a small subset of the
rows in the table, but do not ignore that without validating on the GitLab.com
staging environment - or asking someone else to do so for you - beforehand.
+## Removing a foreign key constraint
+
+When removing a foreign key constraint, we need to acquire a lock on both tables
+related to the foreign key. For tables with heavy write patterns, it's a good idea
+to use `with_lock_retries`; otherwise you might fail to acquire a lock in time.
+You might also run into deadlocks when acquiring a lock, because the application
+ordinarily writes in `parent,child` order, while removing a foreign key acquires
+the lock in `child,parent` order. To resolve this, explicitly acquire the lock in
+`parent,child` order, for example:
+
+```ruby
+disable_ddl_transaction!
+
+def up
+ with_lock_retries do
+ execute('lock table ci_pipelines, ci_builds in access exclusive mode')
+
+ remove_foreign_key :ci_builds, to_table: :ci_pipelines, column: :pipeline_id, on_delete: :cascade, name: 'the_fk_name'
+ end
+end
+
+def down
+ add_concurrent_foreign_key :ci_builds, :ci_pipelines, column: :pipeline_id, on_delete: :cascade, name: 'the_fk_name'
+end
+```
+
## Dropping a database table
Dropping a database table is uncommon, and the `drop_table` method
diff --git a/doc/development/permissions.md b/doc/development/permissions.md
index b7079e9fb8e..72e20d31cf8 100644
--- a/doc/development/permissions.md
+++ b/doc/development/permissions.md
@@ -1,10 +1,10 @@
---
stage: Manage
-group: Access
+group: Authentication & Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# GitLab permissions guide
+# Implementing permissions
There are multiple types of permissions across GitLab, and when implementing
anything that deals with permissions, all of them should be considered.
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index c7443032d78..a9c68905095 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -170,10 +170,9 @@ After that, the next pipeline uses the up-to-date `knapsack/report-master.json`
### Flaky tests
-Tests that are [known to be flaky](testing_guide/flaky_tests.md#automatic-retries-and-flaky-tests-detection) are:
-
-- skipped if the `$SKIP_FLAKY_TESTS_AUTOMATICALLY` variable is set to `true` (`false` by default)
-- run if `$SKIP_FLAKY_TESTS_AUTOMATICALLY` variable is not set to `true` or if the `~"pipeline:run-flaky-tests"` label is set on the MR
+Tests that are [known to be flaky](testing_guide/flaky_tests.md#automatic-retries-and-flaky-tests-detection) are
+skipped unless the `$SKIP_FLAKY_TESTS_AUTOMATICALLY` variable is set to `false` or the `~"pipeline:run-flaky-tests"`
+label is set on the MR.
### Monitoring
@@ -222,6 +221,13 @@ The `* as-if-jh` jobs are run in addition to the regular EE-context jobs. The `j
The intent is to ensure that a change doesn't introduce a failure after the `gitlab-org/gitlab` project is synced to
the `gitlab-jh/gitlab` project.
+### Corresponding JH branch
+
+You can create a corresponding JH branch on the `gitlab-jh/gitlab` project by
+appending `-jh` to the branch name. If a corresponding JH branch is found,
+`* as-if-jh` jobs grab the `jh` folder from the respective branch,
+rather than from the default branch.
+
## `undercover` RSpec test
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/74859) in GitLab 14.6.
@@ -266,318 +272,123 @@ We follow the [PostgreSQL versions shipped with Omnibus GitLab](../administratio
## Pipelines types for merge requests
-In general, pipelines for an MR fall into one or more of the following types,
-depending on the changes made in the MR:
+In general, pipelines for an MR fall into one of the following types (from shortest to longest), depending on the changes made in the MR:
+
+- [Documentation pipeline](#documentation-pipeline): For MRs that touch documentation.
+- [Backend pipeline](#backend-pipeline): For MRs that touch backend code.
+- [Frontend pipeline](#frontend-pipeline): For MRs that touch frontend code.
+- [End-to-end pipeline](#end-to-end-pipeline): For MRs that touch code in the `qa/` folder.
-- [Documentation only MR pipeline](#documentation-only-mr-pipeline): This is typically created for an MR that only changes documentation.
-- [Code-only MR pipeline](#code-only-mr-pipeline): This is typically created for an MR that only changes code, either backend or frontend.
-- [Frontend-only MR pipeline](#frontend-only-mr-pipeline): This is typically created for an MR that only changes frontend code.
-- [QA-only MR pipeline](#qa-only-mr-pipeline): This is typically created for an MR that only changes end to end tests related code.
+A "pipeline type" is an abstract term that mostly describes the "critical path" (i.e. the chain of jobs for which the sum
+of individual duration equals the pipeline's duration).
+We use these "pipeline types" in [metrics dashboards](https://app.periscopedata.com/app/gitlab/858266/GitLab-Pipeline-Durations)
+in order to detect what types and jobs need to be optimized first.
+
+An MR that touches multiple areas would be associated with the longest type applicable. For instance, an MR that touches backend
+and frontend would fall into the "Frontend" pipeline type since this type takes longer to finish than the "Backend" pipeline type.
We use the [`rules:`](../ci/yaml/index.md#rules) and [`needs:`](../ci/yaml/index.md#needs) keywords extensively
to determine the jobs that need to be run in a pipeline. Note that an MR that includes multiple types of changes would
have a pipeline that includes jobs from multiple types (for example, a combination of docs-only and code-only pipelines).
-### Documentation only MR pipeline
+The following are graphs of the critical path for each pipeline type. Jobs that aren't part of the critical path are omitted.
+
+### Documentation pipeline
-[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/250546928):
+[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/432049110).
```mermaid
graph LR
- subgraph "No needed jobs";
- 1-1["danger-review (2.3 minutes)"];
- click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
- 1-2["docs-lint markdown (1.5 minutes)"];
- click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=10224335&udv=0"
- 1-3["docs-lint links (5 minutes)"];
- click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356757&udv=0"
- 1-4["ui-docs-links lint (2.5 minutes)"];
- click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=10823717&udv=1020379"
- end
+ classDef criticalPath fill:#f66;
+
+ 1-3["docs-lint links (5 minutes)"];
+ class 1-3 criticalPath;
+ click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356757&udv=0"
```
-### Code-only MR pipeline
+### Backend pipeline
-[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/136295694)
+[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/433316063).
```mermaid
graph RL;
classDef criticalPath fill:#f66;
- subgraph "No needed jobs";
- 1-1["danger-review (2.3 minutes)"];
- click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
- 1-2["build-qa-image (2 minutes)"];
- click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-test-assets (6 minutes)"];
- click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-test-assets as-if-foss (7 minutes)"];
- click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["compile-production-assets (14 minutes)"];
- click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
- 1-6["setup-test-env (4 minutes)"];
- click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
- 1-7["review-delete-deployment"];
- 1-8["dependency_scanning"];
- 1-9["qa:internal, qa:internal-as-if-foss"];
- 1-11["qa:selectors, qa:selectors-as-if-foss"];
- 1-14["retrieve-tests-metadata (1 minutes)"];
- click 1-14 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356697&udv=0"
- 1-15["code_quality"];
- 1-16["brakeman-sast"];
- 1-17["eslint-sast"];
- 1-18["kubesec-sast"];
- 1-20["secrets-sast"];
- 1-21["static-analysis (14 minutes)"];
- click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
-
- class 1-3 criticalPath;
- class 1-6 criticalPath;
- end
-
- 2_1-1["graphql-verify (2.3 minutes)"];
- click 2_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356715&udv=0"
- 2_1-2["memory-static (4.75 minutes)"];
- click 2_1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356721&udv=0"
- 2_1-3["run-dev-fixtures (3 minutes)"];
- click 2_1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356729&udv=0"
- 2_1-4["run-dev-fixtures-ee (4 minutes)"];
- click 2_1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356731&udv=0"
- subgraph "Needs `setup-test-env`";
- 2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
- end
-
- 2_2-2["rspec-all frontend_fixture (7 minutes)"];
- class 2_2-2 criticalPath;
- click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
- 2_2-4["memory-on-boot (3.5 minutes)"];
- click 2_2-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356727&udv=0"
- 2_2-5["webpack-dev-server (4 minutes)"];
- click 2_2-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8404303&udv=0"
- subgraph "Needs `setup-test-env` & `compile-test-assets`";
- 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
- end
-
- 2_3-1["build-assets-image (1.6 minutes)"];
- subgraph "Needs `compile-production-assets`";
- 2_3-1 --> 1-5
- end
-
- 2_4-1["package-and-qa (manual)"];
- subgraph "Needs `build-qa-image`";
- 2_4-1 --> 1-2;
- click 2_4-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914305&udv=0"
- end
-
- 2_5-1["rspec & db jobs (12-22 minutes)"];
- subgraph "Needs `compile-test-assets`, `setup-test-env`, & `retrieve-tests-metadata`";
- 2_5-1 --> 1-3 & 1-6 & 1-14;
- class 2_5-1 criticalPath;
- click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
- end
-
- 3_1-1["jest (14.5 minutes)"];
- class 3_1-1 criticalPath;
- click 3_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914204&udv=0"
- subgraph "Needs `rspec-all frontend_fixture`";
- 3_1-1 --> 2_2-2;
- end
-
- 3_2-1["rspec:coverage (4 minutes)"];
- subgraph "Depends on `rspec` jobs";
- 3_2-1 -.->|"(don't use needs because of limitations)"| 2_5-1;
- click 3_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7248745&udv=0"
- end
-
- 4_1-1["coverage-frontend (2 minutes)"];
- subgraph "Needs `jest`";
- 4_1-1 --> 3_1-1;
- class 4_1-1 criticalPath;
- click 4_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910777&udv=0"
- end
+ 1-3["compile-test-assets (6 minutes)"];
+ class 1-3 criticalPath;
+ click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
+ 1-6["setup-test-env (4 minutes)"];
+ click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
+ 1-14["retrieve-tests-metadata"];
+ click 1-14 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356697&udv=0"
+ 1-15["detect-tests"];
+ click 1-15 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=10113603&udv=1005715"
+
+ 2_5-1["rspec & db jobs (24 minutes)"];
+ class 2_5-1 criticalPath;
+ click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
+ 2_5-1 --> 1-3 & 1-6 & 1-14 & 1-15;
+
+ 3_2-1["rspec:coverage (5.35 minutes)"];
+ class 3_2-1 criticalPath;
+ click 3_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7248745&udv=0"
+ 3_2-1 -.->|"(don't use needs<br/>because of limitations)"| 2_5-1;
+
+ 4_3-1["rspec:undercoverage (3.5 minutes)"];
+ class 4_3-1 criticalPath;
+ click 4_3-1 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=13446492&udv=1005715"
+ 4_3-1 --> 3_2-1;
+
```
-### Frontend-only MR pipeline
+### Frontend pipeline
-[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/134661039):
+[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/431913287).
```mermaid
graph RL;
classDef criticalPath fill:#f66;
- subgraph "No needed jobs";
- 1-1["danger-review (2.3 minutes)"];
- click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
- 1-2["build-qa-image (2 minutes)"];
- click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-test-assets (6 minutes)"];
- click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-test-assets as-if-foss (7 minutes)"];
- click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["compile-production-assets (14 minutes)"];
- click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
- 1-6["setup-test-env (4 minutes)"];
- click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
- 1-7["review-stop-failed-deployment"];
- 1-8["dependency_scanning"];
- 1-9["qa:internal, qa:internal-as-if-foss"];
- 1-11["qa:selectors, qa:selectors-as-if-foss"];
- 1-14["retrieve-tests-metadata (1 minutes)"];
- click 1-14 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356697&udv=0"
- 1-15["code_quality"];
- 1-16["brakeman-sast"];
- 1-17["eslint-sast"];
- 1-18["kubesec-sast"];
- 1-20["secrets-sast"];
- 1-21["static-analysis (14 minutes)"];
- click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
-
- class 1-3 criticalPath;
- class 1-5 criticalPath;
- class 1-6 criticalPath;
- end
-
- 2_1-1["graphql-verify (2.3 minutes)"];
- click 2_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356715&udv=0"
- 2_1-2["memory-static (4.75 minutes)"];
- click 2_1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356721&udv=0"
- 2_1-3["run-dev-fixtures (3 minutes)"];
- click 2_1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356729&udv=0"
- 2_1-4["run-dev-fixtures-ee (4 minutes)"];
- click 2_1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356731&udv=0"
- subgraph "Needs `setup-test-env`";
- 2_1-1 & 2_1-2 & 2_1-3 & 2_1-4 --> 1-6;
- end
-
- 2_2-2["rspec-all frontend_fixture (7 minutes)"];
- class 2_2-2 criticalPath;
- click 2_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910143&udv=0"
- 2_2-4["memory-on-boot (3.5 minutes)"];
- click 2_2-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356727&udv=0"
- 2_2-5["webpack-dev-server (4 minutes)"];
- click 2_2-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8404303&udv=0"
- subgraph "Needs `setup-test-env` & `compile-test-assets`";
- 2_2-2 & 2_2-4 & 2_2-5 --> 1-6 & 1-3;
- end
-
- 2_3-1["build-assets-image (1.6 minutes)"];
+ 1-2["build-qa-image (2 minutes)"];
+ click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
+ 1-5["compile-production-assets (16 minutes)"];
+ class 1-5 criticalPath;
+ click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
+
+ 2_3-1["build-assets-image (1.3 minutes)"];
class 2_3-1 criticalPath;
- subgraph "Needs `compile-production-assets`";
- 2_3-1 --> 1-5
- end
-
- 2_4-1["package-and-qa (manual)"];
- subgraph "Needs `build-qa-image` & `build-assets-image`";
- 2_4-1 --> 1-2 & 2_3-1;
- click 2_4-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914305&udv=0"
- end
-
- 2_5-1["rspec & db jobs (12-22 minutes)"];
- subgraph "Needs `compile-test-assets`, `setup-test-env, & `retrieve-tests-metadata`";
- 2_5-1 --> 1-3 & 1-6 & 1-14;
- class 2_5-1 criticalPath;
- click 2_5-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
- end
-
- 2_6-1["review-build-cng (27 minutes)"];
- subgraph "Needs `build-assets-image`";
- 2_6-1 --> 2_3-1;
- class 2_6-1 criticalPath;
- click 2_6-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914314&udv=0"
- end
-
- 3_1-1["jest (14.5 minutes)"];
- class 3_1-1 criticalPath;
- click 3_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914204&udv=0"
- subgraph "Needs `rspec-all frontend_fixture`";
- 3_1-1 --> 2_2-2;
- end
-
- 3_2-1["rspec:coverage (4 minutes)"];
- subgraph "Depends on `rspec` jobs";
- 3_2-1 -.->|"(don't use needs because of limitations)"| 2_5-1;
- click 3_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7248745&udv=0"
- end
-
- 4_1-1["coverage-frontend (2 minutes)"];
- subgraph "Needs `jest`";
- 4_1-1 --> 3_1-1;
- class 4_1-1 criticalPath;
- click 4_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=7910777&udv=0"
- end
-
- 3_3-1["review-deploy (9 minutes)"];
- subgraph "Played by `review-build-cng`";
- 3_3-1 --> 2_6-1;
- class 3_3-1 criticalPath;
- click 3_3-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6721130&udv=0"
- end
-
- 4_2-1["review-qa-smoke (7.4 minutes)"];
- click 4_2-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6729805&udv=0"
- 4_2-2["review-performance (2.5 minutes)"];
- click 4_2-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356817&udv=0"
- subgraph "Played by `review-deploy`";
- 4_2-1 & 4_2-2 --> 3_3-1;
- end
+ 2_3-1 --> 1-5
+
+ 2_6-1["start-review-app-pipeline (49 minutes)"];
+ class 2_6-1 criticalPath;
+ click 2_6-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations"
+ 2_6-1 --> 2_3-1 & 1-2;
```
-### QA-only MR pipeline
+### End-to-end pipeline
-[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/134645109):
+[Reference pipeline](https://gitlab.com/gitlab-org/gitlab/-/pipelines/431918463).
```mermaid
graph RL;
classDef criticalPath fill:#f66;
- subgraph "No needed jobs";
- 1-1["danger-review (2.3 minutes)"];
- click 1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8100542&udv=0"
- 1-2["build-qa-image (2 minutes)"];
- click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
- 1-3["compile-test-assets (6 minutes)"];
- click 1-3 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914317&udv=0"
- 1-4["compile-test-assets as-if-foss (7 minutes)"];
- click 1-4 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356616&udv=0"
- 1-5["compile-production-assets (14 minutes)"];
- click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
- 1-6["setup-test-env (4 minutes)"];
- click 1-6 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914315&udv=0"
- 1-7["review-stop-failed-deployment"];
- 1-8["dependency_scanning"];
- 1-9["qa:internal, qa:internal-as-if-foss"];
- 1-11["qa:selectors, qa:selectors-as-if-foss"];
- 1-14["retrieve-tests-metadata (1 minutes)"];
- click 1-14 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356697&udv=0"
- 1-15["code_quality"];
- 1-16["brakeman-sast"];
- 1-17["eslint-sast"];
- 1-18["kubesec-sast"];
- 1-20["secrets-sast"];
- 1-21["static-analysis (14 minutes)"];
- click 1-21 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914471&udv=0"
-
- class 1-5 criticalPath;
- end
-
- 2_1-1["graphql-verify (2.3 minutes)"];
- click 2_1-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=8356715&udv=0"
- subgraph "Needs `setup-test-env`";
- 2_1-1 --> 1-6;
- end
-
- 2_3-1["build-assets-image (1.6 minutes)"];
- subgraph "Needs `compile-production-assets`";
- 2_3-1 --> 1-5
- class 2_3-1 criticalPath;
- end
-
- 2_4-1["package-and-qa (113 minutes)"];
- subgraph "Needs `build-qa-image` & `build-assets-image`";
- 2_4-1 --> 1-2 & 2_3-1;
- class 2_4-1 criticalPath;
- click 2_4-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914305&udv=0"
- end
+ 1-2["build-qa-image (2 minutes)"];
+ click 1-2 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914325&udv=0"
+ 1-5["compile-production-assets (16 minutes)"];
+ class 1-5 criticalPath;
+ click 1-5 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914312&udv=0"
+ 1-15["detect-tests"];
+ click 1-15 "https://app.periscopedata.com/app/gitlab/652085/EP---Jobs-Durations?widget=10113603&udv=1005715"
+
+ 2_3-1["build-assets-image (1.3 minutes)"];
+ class 2_3-1 criticalPath;
+ 2_3-1 --> 1-5
+
+ 2_4-1["package-and-qa (102 minutes)"];
+ class 2_4-1 criticalPath;
+ click 2_4-1 "https://app.periscopedata.com/app/gitlab/652085/Engineering-Productivity---Pipeline-Build-Durations?widget=6914305&udv=0"
+ 2_4-1 --> 1-2 & 2_3-1 & 1-15;
```
## CI configuration internals
diff --git a/doc/development/policies.md b/doc/development/policies.md
index 9a977a49329..61cc6e0edf5 100644
--- a/doc/development/policies.md
+++ b/doc/development/policies.md
@@ -1,6 +1,6 @@
---
stage: Manage
-group: Access
+group: Authentication & Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
diff --git a/doc/development/profiling.md b/doc/development/profiling.md
index 656b30402a6..789e0640933 100644
--- a/doc/development/profiling.md
+++ b/doc/development/profiling.md
@@ -108,20 +108,6 @@ Find more information about different sampling modes in the [Stackprof docs](htt
This is enabled for all users that can access the performance bar.
-## Sherlock
-
-Sherlock is a custom profiling tool built into GitLab. Sherlock is _only_
-available when running GitLab in development mode _and_ when setting the
-environment variable `ENABLE_SHERLOCK` to a non empty value. For example:
-
-```shell
-ENABLE_SHERLOCK=1 bundle exec rails s
-```
-
-Sherlock is also [available though the GitLab GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/sherlock.md).
-
-Recorded transactions can be found by navigating to `/sherlock/transactions`.
-
## Bullet
Bullet is a Gem that can be used to track down N+1 query problems. Bullet section is
diff --git a/doc/development/redis/new_redis_instance.md b/doc/development/redis/new_redis_instance.md
index 37ee51ebb82..dcd79be0e5c 100644
--- a/doc/development/redis/new_redis_instance.md
+++ b/doc/development/redis/new_redis_instance.md
@@ -110,6 +110,131 @@ documentation for feature flags.
When we have been using the new instance 100% of the time in production for a
while and there are no issues, we can proceed.
+### Proposed solution: Migrate data by using MultiStore with the fallback strategy
+
+We need a way to migrate users to a new Redis store without causing any inconvenience from a UX perspective.
+We also want the ability to fall back to the "old" Redis instance if something goes wrong with the new instance.
+
+Migration Requirements:
+
+- No downtime.
+- No loss of stored data until the TTL for storing data expires.
+- Partial rollout using feature flags, environment variables, or a combination of both.
+- Monitoring of the switch.
+- Prometheus metrics in place.
+- Easy rollback without downtime in case the new instance or logic does not behave as expected.
+
+It is somewhat similar to a zero-downtime DB table rename.
+We need to write data into both Redis instances (old + new).
+We read from the new instance, but fall back to the old instance when a read from the new dedicated Redis instance fails.
+We need to log any issues or exceptions with the new instance, but still fall back to the old instance.
+
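+Conceptually, the read path behaves like this (a simplified sketch, not the actual `MultiStore` code):
+
+```ruby
+# Read from the new (primary) store first. When the value is missing,
+# fall back to the old (secondary) store and track the fallback.
+def fallback_read(command, *args)
+  value = primary_store.public_send(command, *args)
+
+  if value.nil?
+    increment_read_fallback_count(command) # surfaced as a Prometheus counter
+    value = secondary_store.public_send(command, *args)
+  end
+
+  value
+end
+```
+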
+The proposed migration strategy is to implement and use the [MultiStore](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/multi_store.rb).
+We used this approach with [adding new dedicated Redis instance for session keys](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/579).
+MultiStore also comes with corresponding [specs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/multi_store_spec.rb).
+
+The MultiStore looks like a `redis-rb ::Redis` instance.
+
+In the new Redis instance class you added in [Step 1](#step-1-support-configuring-the-new-instance),
+override the [`redis`](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/sessions.rb#L20-28) method from `::Gitlab::Redis::Wrapper`:
+
+```ruby
+module Gitlab
+ module Redis
+ class Foo < ::Gitlab::Redis::Wrapper
+ ...
+ def self.redis
+ # Don't use multistore if redis.foo configuration is not provided
+ return super if config_fallback?
+
+ primary_store = ::Redis.new(params)
+ secondary_store = ::Redis.new(config_fallback.params)
+
+ MultiStore.new(primary_store, secondary_store, store_name)
+ end
+ end
+ end
+end
+```
+
+MultiStore is initialized by providing the new Redis instance as a primary store, and the [old (fallback-instance)](#fallback-instance) as a secondary store.
+The third argument is `store_name`, which is used for logs, metrics, and feature flag names, in case we use the MultiStore implementation for different Redis stores at the same time.
+
+By default, the MultiStore reads from and writes to only the default Redis store.
+The default Redis store is `secondary_store` (the old fallback-instance).
+This allows us to introduce MultiStore without changing the default behavior.
+
+MultiStore uses two feature flags to control the actual migration:
+
+- `use_primary_and_secondary_stores_for_[store_name]`
+- `use_primary_store_as_default_for_[store_name]`
+
+For example, if our new Redis instance is called `Gitlab::Redis::Foo`, we can [create](../../../ee/development/feature_flags/#create-a-new-feature-flag) two feature flags by executing:
+
+```shell
+bin/feature-flag use_primary_and_secondary_stores_for_foo
+bin/feature-flag use_primary_store_as_default_for_foo
+```
+
+By enabling the `use_primary_and_secondary_stores_for_foo` feature flag, our `Gitlab::Redis::Foo` uses `MultiStore` to write to both the new Redis instance
+and the [old (fallback-instance)](#fallback-instance).
+If we fail to fetch data from the new instance, we fall back and read from the old Redis instance.
+
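+For example, to start the rollout we can enable the first feature flag from the Rails console
+(a sketch; in practice, flags are usually rolled out gradually via ChatOps):
+
+```ruby
+Feature.enable(:use_primary_and_secondary_stores_for_foo)
+```
+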
+We can monitor logs for `Gitlab::Redis::MultiStore::ReadFromPrimaryError`, and also the Prometheus counter `gitlab_redis_multi_store_read_fallback_total`.
+Once we stop seeing them, we are no longer relying on the data stored in the old Redis store.
+At this point, we are probably safe to move the traffic to the new Redis store.
+
+By enabling the `use_primary_store_as_default_for_foo` feature flag, the `MultiStore` uses `primary_store` (the new instance) as the default Redis store.
+
+Once this feature flag is enabled, we can disable the `use_primary_and_secondary_stores_for_foo` feature flag.
+The MultiStore then reads from and writes to only the primary Redis store (the new store), moving all the traffic to the new Redis store.
+
+Once we have moved all our traffic to the primary store, our data migration is complete.
+We can safely remove the MultiStore implementation and continue to use the newly introduced Redis store instance.
+
+#### Implementation details
+
+MultiStore implements read and write Redis commands separately.
+
+##### Read commands
+
+- `get`
+- `mget`
+- `smembers`
+- `scard`
+
+##### Write commands
+
+- `set`
+- `setnx`
+- `setex`
+- `sadd`
+- `srem`
+- `del`
+- `pipelined`
+- `flushdb`
+
+When a command outside the supported list is used, `method_missing` passes it to the old Redis instance and keeps track of it.
+This ensures that anything unexpected behaves as it did before.
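+
+Conceptually, that handling looks like this (a simplified sketch, not the actual `MultiStore` code):
+
+```ruby
+# Forward any unsupported Redis command to the old (secondary) store
+# and track it, so missing implementations surface in logs and metrics.
+def method_missing(command_name, *args, &block)
+  increment_method_missing_count(command_name) # surfaced as a Prometheus counter
+  secondary_store.public_send(command_name, *args, &block)
+end
+```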
+
+NOTE:
+Track the `gitlab_redis_multi_store_method_missing_total` counter and `Gitlab::Redis::MultiStore::MethodMissingError` occurrences
+to find the Redis commands that need an implementation added before proceeding with the migration.
+
+##### Errors
+
+| Error | Message |
+|-------|---------|
+| `Gitlab::Redis::MultiStore::ReadFromPrimaryError` | Value not found on the Redis primary store. Read from the Redis secondary store successful. |
+| `Gitlab::Redis::MultiStore::MethodMissingError` | Method missing. Falling back to execute method on the Redis secondary store. |
+
+##### Metrics
+
+| Metric name | Type | Labels | Description |
+|-------------|------|--------|-------------|
+| `gitlab_redis_multi_store_read_fallback_total` | Prometheus Counter | `command`, `instance_name` | Client-side Redis MultiStore read fallback total |
+| `gitlab_redis_multi_store_method_missing_total` | Prometheus Counter | `command`, `instance_name` | Client-side Redis MultiStore method missing total |
+
## Step 4: clean up after the migration
<!-- markdownlint-disable MD044 -->
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index 21655d6a8fb..51d3338e5ed 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -767,3 +767,354 @@ In the example above, the `is_admin?` method is overwritten when passing it to t
- If you must, be **very** confident that you've sanitized the values correctly.
Consider creating an allowlist of values, and validating the user input against that.
- When extending classes that use metaprogramming, make sure you don't inadvertently override any method definition safety checks.
+
+## Working with archive files
+
+Working with archive files like `zip`, `tar`, `jar`, `war`, `cpio`, `apk`, `rar`, and `7z` presents an area where potentially critical security vulnerabilities can sneak into an application.
+
+### Zip Slip
+
+In 2018, the security company Snyk [released a blog post](https://snyk.io/research/zip-slip-vulnerability) describing research into a widespread and critical vulnerability present in many libraries and applications, which allows an attacker to overwrite arbitrary files on the server file system and, in many cases, leverage that to achieve remote code execution. The vulnerability was dubbed Zip Slip.
+
+A Zip Slip vulnerability happens when an application extracts an archive without validating and sanitizing the filenames inside the archive for directory traversal sequences that change the file location when the file is extracted.
+
+Example malicious file names:
+
+- `../../etc/passwd`
+- `../../root/.ssh/authorized_keys`
+- `../../etc/gitlab/gitlab.rb`
+
+If a vulnerable application extracts an archive file with any of these file names, the attacker can overwrite these files with arbitrary content.
+
+### Insecure archive extraction examples
+
+#### Ruby
+
+For zip files, the [rubyzip](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against the Zip Slip vulnerability and will refuse to extract files that try to perform directory traversal, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
+
+```ruby
+# Vulnerable tar.gz extraction example!
+
+begin
+ tar_extract = Gem::Package::TarReader.new(Zlib::GzipReader.open("/tmp/uploaded.tar.gz"))
+rescue Errno::ENOENT
+ STDERR.puts("archive file does not exist or is not readable")
+ exit(false)
+end
+tar_extract.rewind
+
+tar_extract.each do |entry|
+ next unless entry.file? # Only process files in this example for simplicity.
+
+ destination = "/tmp/extracted/#{entry.full_name}" # Oops! We blindly use the entry file name for the destination.
+ File.open(destination, "wb") do |out|
+ out.write(entry.read)
+ end
+end
+```
+
+#### Go
+
+```golang
+// unzip INSECURELY extracts source zip file to destination.
+func unzip(src, dest string) error {
+ r, err := zip.OpenReader(src)
+ if err != nil {
+ return err
+ }
+ defer r.Close()
+
+ os.MkdirAll(dest, 0750)
+
+ for _, f := range r.File {
+ if f.FileInfo().IsDir() { // Skip directories in this example for simplicity.
+ continue
+ }
+
+ rc, err := f.Open()
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ path := filepath.Join(dest, f.Name) // Oops! We blindly use the entry file name for the destination.
+ os.MkdirAll(filepath.Dir(path), f.Mode())
+ f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+
+ if _, err := io.Copy(f, rc); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+```
+
+#### Best practices
+
+Always expand the destination file path by resolving all potential directory traversals and other sequences that can alter the path, and refuse extraction if the final destination path does not start with the intended destination directory.
+
+##### Ruby
+
+```ruby
+# tar.gz extraction example with protection against Zip Slip attacks.
+
+begin
+ tar_extract = Gem::Package::TarReader.new(Zlib::GzipReader.open("/tmp/uploaded.tar.gz"))
+rescue Errno::ENOENT
+ STDERR.puts("archive file does not exist or is not readable")
+ exit(false)
+end
+tar_extract.rewind
+
+tar_extract.each do |entry|
+ next unless entry.file? # Only process files in this example for simplicity.
+
+ # safe_destination will raise an exception in case of Zip Slip / directory traversal.
+ destination = safe_destination(entry.full_name, "/tmp/extracted")
+
+ File.open(destination, "wb") do |out|
+ out.write(entry.read)
+ end
+end
+
+def safe_destination(filename, destination_dir)
+ raise "filename cannot start with '/'" if filename.start_with?("/")
+
+ destination_dir = File.realpath(destination_dir)
+ destination = File.expand_path(filename, destination_dir)
+
+  raise "filename is outside of destination directory" unless
+    destination.start_with?(destination_dir + "/")
+
+ destination
+end
+```
+
+```ruby
+# zip extraction example using rubyzip with built-in protection against Zip Slip attacks.
+require 'zip'
+
+Zip::File.open("/tmp/uploaded.zip") do |zip_file|
+ zip_file.each do |entry|
+ # Extract entry to /tmp/extracted directory.
+ entry.extract("/tmp/extracted")
+ end
+end
+```
+
+##### Go
+
+You are encouraged to use the secure archive utilities provided by [LabSec](https://gitlab.com/gitlab-com/gl-security/appsec/labsec), which handle Zip Slip and other types of vulnerabilities for you. The LabSec utilities are also context aware, which makes it possible to cancel or time out extractions:
+
+```golang
+package main
+
+import "gitlab-com/gl-security/appsec/labsec/archive/zip"
+
+func main() {
+ f, err := os.Open("/tmp/uploaded.zip")
+ if err != nil {
+ panic(err)
+ }
+ defer f.Close()
+
+ fi, err := f.Stat()
+ if err != nil {
+ panic(err)
+ }
+
+ if err := zip.Extract(context.Background(), f, fi.Size(), "/tmp/extracted"); err != nil {
+ panic(err)
+ }
+}
+```
+
+In case the LabSec utilities do not fit your needs, here is an example for extracting a zip file with protection against Zip Slip attacks:
+
+```golang
+// unzip extracts source zip file to destination with protection against Zip Slip attacks.
+func unzip(src, dest string) error {
+ r, err := zip.OpenReader(src)
+ if err != nil {
+ return err
+ }
+ defer r.Close()
+
+ os.MkdirAll(dest, 0750)
+
+ for _, f := range r.File {
+ if f.FileInfo().IsDir() { // Skip directories in this example for simplicity.
+ continue
+ }
+
+ rc, err := f.Open()
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ path := filepath.Join(dest, f.Name)
+
+ // Check for Zip Slip / directory traversal
+ if !strings.HasPrefix(path, filepath.Clean(dest) + string(os.PathSeparator)) {
+ return fmt.Errorf("illegal file path: %s", path)
+ }
+
+ os.MkdirAll(filepath.Dir(path), f.Mode())
+ f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+
+ if _, err := io.Copy(f, rc); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+```
+
+### Symlink attacks
+
+Symlink attacks make it possible for an attacker to read the contents of arbitrary files on the server of a vulnerable application. While this is a high-severity vulnerability that can often lead to remote code execution and other critical vulnerabilities, it is only exploitable in scenarios where a vulnerable application accepts archive files from the attacker and somehow displays the extracted contents back to the attacker without any validation or sanitization of symbolic links inside the archive.
+
+### Insecure archive symlink extraction examples
+
+#### Ruby
+
+For zip files, the [rubyzip](https://rubygems.org/gems/rubyzip) Ruby gem is already patched against symlink attacks as it simply ignores symbolic links, so for this vulnerable example we will extract a `tar.gz` file with `Gem::Package::TarReader`:
+
+```ruby
+# Vulnerable tar.gz extraction example!
+
+begin
+ tar_extract = Gem::Package::TarReader.new(Zlib::GzipReader.open("/tmp/uploaded.tar.gz"))
+rescue Errno::ENOENT
+ STDERR.puts("archive file does not exist or is not readable")
+ exit(false)
+end
+tar_extract.rewind
+
+# Loop over each entry and output file contents
+tar_extract.each do |entry|
+ next if entry.directory?
+
+ # Oops! We don't check if the file is actually a symbolic link to a potentially sensitive file.
+ puts entry.read
+end
+```
+
+#### Go
+
+```golang
+// printZipContents INSECURELY prints contents of files in a zip file.
+func printZipContents(src string) error {
+ r, err := zip.OpenReader(src)
+ if err != nil {
+ return err
+ }
+ defer r.Close()
+
+ // Loop over each entry and output file contents
+ for _, f := range r.File {
+ if f.FileInfo().IsDir() {
+ continue
+ }
+
+ rc, err := f.Open()
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ // Oops! We don't check if the file is actually a symbolic link to a potentially sensitive file.
+ buf, err := ioutil.ReadAll(rc)
+ if err != nil {
+ return err
+ }
+
+		fmt.Println(string(buf))
+ }
+
+ return nil
+}
+```
+
+#### Best practices
+
+Always check the type of the archive entry before reading the contents and ignore entries that are not plain files. If you absolutely must support symbolic links, ensure that they only point to files inside the archive and nowhere else.
+
+##### Ruby
+
+```ruby
+# tar.gz extraction example with protection against symlink attacks.
+
+begin
+ tar_extract = Gem::Package::TarReader.new(Zlib::GzipReader.open("/tmp/uploaded.tar.gz"))
+rescue Errno::ENOENT
+ STDERR.puts("archive file does not exist or is not readable")
+ exit(false)
+end
+tar_extract.rewind
+
+# Loop over each entry and output file contents
+tar_extract.each do |entry|
+ next if entry.directory?
+
+ # By skipping symbolic links entirely, we are sure they can't cause any trouble!
+ next if entry.symlink?
+
+ puts entry.read
+end
+```
+
+##### Go
+
+You are encouraged to use the secure archive utilities provided by [LabSec](https://gitlab.com/gitlab-com/gl-security/appsec/labsec), which handle Zip Slip and symlink vulnerabilities for you. The LabSec utilities are also context aware, which makes it possible to cancel or time out extractions.
+
+In case the LabSec utilities do not fit your needs, here is an example of printing the contents of a zip file with protection against symlink attacks:
+
+```golang
+// printZipContents prints contents of files in a zip file with protection against symlink attacks.
+func printZipContents(src string) error {
+ r, err := zip.OpenReader(src)
+ if err != nil {
+ return err
+ }
+ defer r.Close()
+
+ // Loop over each entry and output file contents
+ for _, f := range r.File {
+ if f.FileInfo().IsDir() {
+ continue
+ }
+
+ // By skipping all irregular file types (including symbolic links), we are sure they can't cause any trouble!
+		if !f.Mode().IsRegular() {
+ continue
+ }
+
+ rc, err := f.Open()
+ if err != nil {
+ return err
+ }
+ defer rc.Close()
+
+ buf, err := ioutil.ReadAll(rc)
+ if err != nil {
+ return err
+ }
+
+		fmt.Println(string(buf))
+ }
+
+ return nil
+}
+```
diff --git a/doc/development/service_ping/index.md b/doc/development/service_ping/index.md
index 1f751eea4d8..315ff2b090c 100644
--- a/doc/development/service_ping/index.md
+++ b/doc/development/service_ping/index.md
@@ -311,7 +311,8 @@ The following is example content of the Service Ping payload.
"database": {
"adapter": "postgresql",
"version": "9.6.15",
- "pg_system_id": 6842684531675334351
+ "pg_system_id": 6842684531675334351,
+ "flavor": "Cloud SQL for PostgreSQL"
},
"analytics_unique_visits": {
"g_analytics_contribution": 999,
@@ -435,6 +436,10 @@ The following is example content of the Service Ping payload.
## Notable changes
+In GitLab 14.6, [`flavor`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/75587) was added to try to detect the underlying managed database variant.
+Possible values are "Amazon Aurora PostgreSQL", "PostgreSQL on Amazon RDS", "Cloud SQL for PostgreSQL",
+"Azure Database for PostgreSQL - Flexible Server", or "null".
+
In GitLab 13.5, `pg_system_id` was added to send the [PostgreSQL system identifier](https://www.2ndquadrant.com/en/blog/support-for-postgresqls-system-identifier-in-barman/).
## Export Service Ping SQL queries and definitions
diff --git a/doc/development/service_ping/metrics_instrumentation.md b/doc/development/service_ping/metrics_instrumentation.md
index 6fdbd1eea31..c98b0df92aa 100644
--- a/doc/development/service_ping/metrics_instrumentation.md
+++ b/doc/development/service_ping/metrics_instrumentation.md
@@ -33,6 +33,12 @@ We have built a domain-specific language (DSL) to define the metrics instrumenta
## Database metrics
+- `operation`: Operation for the given `relation`, one of `count` or `distinct_count`.
+- `relation`: The `ActiveRecord::Relation` of the objects we want to perform the `operation` on.
+- `start`: Specifies the start value of the batch counting. Defaults to `relation.minimum(:id)`.
+- `finish`: Specifies the end value of the batch counting. Defaults to `relation.maximum(:id)`.
+- `cache_start_and_finish_as`: Specifies the cache key for `start` and `finish` values and sets up caching them. Use this call when `start` and `finish` are expensive queries that should be reused between different metric calculations.
+
[Example of a merge request that adds a database metric](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60022).
```ruby
diff --git a/doc/development/service_ping/performance_indicator_metrics.md b/doc/development/service_ping/performance_indicator_metrics.md
new file mode 100644
index 00000000000..48c123171fa
--- /dev/null
+++ b/doc/development/service_ping/performance_indicator_metrics.md
@@ -0,0 +1,17 @@
+---
+stage: Growth
+group: Product Intelligence
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Performance Indicator Metrics guide
+
+This guide describes how to use metrics definitions to define [performance indicator](https://about.gitlab.com/handbook/product/product-intelligence-guide/#implementing-product-performance-indicators) metrics.
+
+To use a metric definition to manage a performance indicator:
+
+1. Create a new issue and use the [Performance Indicator Metric issue template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Performance%20Indicator%20Metric).
+1. Use labels `~"product intelligence"`, `"~Data Warehouse::Impact Check"`.
+1. Create a merge request that includes changes related only to the metric performance indicator.
+1. Update the metric definition `performance_indicator_type` [field](metrics_dictionary.md#metrics-definition-and-validation).
+1. Create an issue in the GitLab Data Team project with the [Product Performance Indicator template](https://gitlab.com/gitlab-data/analytics/-/issues/new?issuable_template=Product%20Performance%20Indicator%20Template).
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
index 2137a7d83e6..af994e7138d 100644
--- a/doc/development/sidekiq_style_guide.md
+++ b/doc/development/sidekiq_style_guide.md
@@ -13,7 +13,7 @@ modifying Sidekiq workers.
All workers should include `ApplicationWorker` instead of `Sidekiq::Worker`,
which adds some convenience methods and automatically sets the queue based on
-the worker's name.
+the [routing rules](../administration/operations/extra_sidekiq_routing.md#queue-routing-rules).
## Retries
@@ -45,19 +45,26 @@ Each retry for a worker is counted as a failure in our metrics. A worker
which always fails 9 times and succeeds on the 10th would have a 90%
error rate.
-## Dedicated Queues
+## Sidekiq Queues
-All workers should use their own queue, which is automatically set based on the
+Previously, each worker had its own queue, which was automatically set based on the
worker class name. For a worker named `ProcessSomethingWorker`, the queue name
-would be `process_something`. If you're not sure what queue a worker uses,
+would be `process_something`. You can now route workers to a specific queue using
+[queue routing rules](../administration/operations/extra_sidekiq_routing.md#queue-routing-rules).
+In GDK, new workers are routed to a queue named `default`.
+
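+For example, a minimal worker might look like this (a sketch; with routing rules, the queue
+is assigned by the rules, such as `default` in GDK, rather than derived from the class name):
+
+```ruby
+class ProcessSomethingWorker
+  include ApplicationWorker
+
+  data_consistency :always
+
+  def perform(something_id)
+    # Do the work. The queue this runs in comes from the routing rules.
+  end
+end
+```
+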
+If you're not sure what queue a worker uses,
you can find it using `SomeWorker.queue`. There is almost never a reason to
manually override the queue name using `sidekiq_options queue: :some_queue`.
-After adding a new queue, run `bin/rake
+After adding a new worker, run `bin/rake
gitlab:sidekiq:all_queues_yml:generate` to regenerate
`app/workers/all_queues.yml` or `ee/app/workers/all_queues.yml` so that
it can be picked up by
-[`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.md).
+[`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.md)
+in installations that don't use routing rules. To learn more about potential changes,
+read [Use routing rules by default and deprecate queue selectors for self-managed](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/596).
+
Additionally, run
`bin/rake gitlab:sidekiq:sidekiq_queues_yml:generate` to regenerate
`config/sidekiq_queues.yml`.
diff --git a/doc/development/snowplow/dictionary.md b/doc/development/snowplow/dictionary.md
deleted file mode 100644
index 02e9ba5ce20..00000000000
--- a/doc/development/snowplow/dictionary.md
+++ /dev/null
@@ -1,4 +0,0 @@
----
-redirect_to: 'https://metrics.gitlab.com/snowplow.html'
-remove_date: '2021-12-28'
----
diff --git a/doc/development/snowplow/implementation.md b/doc/development/snowplow/implementation.md
index 6da4896c7e7..439485c9e73 100644
--- a/doc/development/snowplow/implementation.md
+++ b/doc/development/snowplow/implementation.md
@@ -397,6 +397,7 @@ Before you test frontend events in development, you must:
All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#Setting_a_custom_page_URL_and_referrer_URL) personally identifiable
information (PII). PII includes usernames, group, and project names.
+Page titles are hardcoded as `GitLab` for the same reason.
#### Snowplow Analytics Debugger Chrome Extension
diff --git a/doc/development/snowplow/schemas.md b/doc/development/snowplow/schemas.md
index f66e0566a9c..eb57e7d98a5 100644
--- a/doc/development/snowplow/schemas.md
+++ b/doc/development/snowplow/schemas.md
@@ -19,6 +19,7 @@ The [`StandardContext`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/g
| `project_id` | **{dotted-circle}** | integer | |
| `namespace_id` | **{dotted-circle}** | integer | |
| `user_id` | **{dotted-circle}** | integer | User database record ID attribute. This file undergoes a pseudonymization process at the collector level. |
+| `context_generated_at` | **{dotted-circle}** | string (date time format) | Timestamp indicating when context was generated. |
| `environment` | **{check-circle}** | string (max 32 chars) | Name of the source environment, such as `production` or `staging` |
| `source` | **{check-circle}** | string (max 32 chars) | Name of the source application, such as `gitlab-rails` or `gitlab-javascript` |
| `plan` | **{dotted-circle}** | string (max 32 chars) | Name of the plan for the namespace, such as `free`, `premium`, or `ultimate`. Automatically picked from the `namespace`. |
@@ -30,6 +31,7 @@ The [`StandardContext`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/g
Frontend events include a [web-specific schema](https://docs.snowplowanalytics.com/docs/understanding-your-pipeline/canonical-event/#Web-specific_fields) provided by Snowplow.
All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#Setting_a_custom_page_URL_and_referrer_URL) personally identifiable
information (PII). PII includes usernames, group, and project names.
+Page titles are hardcoded as `GitLab` for the same reason.
| Field Name | Required | Type | Description |
|--------------------------|---------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------|
@@ -105,7 +107,7 @@ information (PII). PII includes usernames, group, and project names.
| `os_name` | **{dotted-circle}** | string | Name of operating system |
| `os_timezone` | **{dotted-circle}** | string | Client operating system time zone |
| `page_referrer` | **{dotted-circle}** | string | Referrer URL |
-| `page_title` | **{dotted-circle}** | string | Page title |
+| `page_title`              | **{dotted-circle}** | string    | To avoid exposing personally identifiable information, the page title is hardcoded as `GitLab` |
| `page_url` | **{dotted-circle}** | string | Page URL |
| `page_urlfragment` | **{dotted-circle}** | string | Fragment aka anchor |
| `page_urlhost` | **{dotted-circle}** | string | Host aka domain |
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 0f768a51b66..fe0c4c13ba2 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -642,6 +642,11 @@ should either:
It takes around one second to load tests that are using `fast_spec_helper`
instead of 30+ seconds in case of a regular `spec_helper`.
+WARNING:
+To verify that code and its specs are well-isolated from Rails, run the spec
+individually via `bin/rspec`. Don't use `bin/spring rspec` as it loads
+`spec_helper` automatically.
+
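+For example (the spec path is illustrative):
+
+```shell
+bin/rspec spec/lib/gitlab/some_class_spec.rb
+```
+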
### `subject` and `let` variables
The GitLab RSpec suite has made extensive use of `let`(along with its strict, non-lazy
diff --git a/doc/development/testing_guide/ci.md b/doc/development/testing_guide/ci.md
deleted file mode 100644
index de024084c9c..00000000000
--- a/doc/development/testing_guide/ci.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-redirect_to: '../pipelines.md'
-remove_date: '2022-01-12'
----
-
-This file was moved to [another location](../pipelines.md).
-
-<!-- This redirect file can be deleted after <2022-01-12>. -->
-<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index de34e6a1872..48157a961e1 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -118,6 +118,32 @@ view 'app/views/devise/passwords/new_edit_behind_ff.html.haml' do
end
```
+## Working with resource classes
+
+If a resource class must behave differently when a feature flag is active, add a
+variable named after the feature flag to the class and toggle it. This variable and
+its condition ensure all actions are handled appropriately.
+
+You can set this variable inside the `fabricate_via_api` call. For a consistent approach:
+
+- Use an `activated` check, not a deactivated one.
+- Add the word `activated` to the end of the variable's name.
+- Inside the `initialize` method, set the variable's default value.
+
+For example:
+
+```ruby
+def initialize
+  name_of_the_feature_flag_activated = false
+ ...
+end
+```
+
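+The variable can then guard flag-specific behavior, for example (a hypothetical sketch):
+
+```ruby
+def fabricate_via_api!
+  if name_of_the_feature_flag_activated
+    # Steps required while the feature flag is enabled.
+  else
+    # Pre-existing steps for the default state.
+  end
+end
+```
+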
+### Cleanup
+
+After the feature flag is removed, clean up the resource class and delete the variable.
+All methods should use the code path of the now-default state.
+
## Running a scenario with a feature flag enabled
It's also possible to run an entire scenario with a feature flag enabled, without having to edit
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 57c8c3bf59c..1fc9bc8258a 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -170,6 +170,35 @@ Helm chart](https://gitlab.com/gitlab-org/charts/gitlab/), itself deployed with
See [Review Apps](../review_apps.md) for more details about Review Apps.
+### Run tests in parallel
+
+To run tests in parallel on CI, we use the [Knapsack](https://github.com/KnapsackPro/knapsack)
+gem. Knapsack reports are generated automatically and stored in the GCS bucket
+`knapsack-reports` in the `gitlab-qa-resources` project. The [`KnapsackReport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/tools/knapsack_report.rb)
+helper handles automated report generation and upload.
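+
+A minimal sketch of the standard Knapsack wiring from the gem's README; the
+`KnapsackReport` helper builds the automated report generation and upload on
+top of this:
+
+```ruby
+# In spec_helper.rb: record example timings and, on CI nodes, run only the
+# subset of spec files allocated to the current node.
+require 'knapsack'
+
+Knapsack::Adapters::RSpecAdapter.bind
+
+# A new timing report can be generated with:
+#   KNAPSACK_GENERATE_REPORT=true bundle exec rspec
+```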
+
+## Test metrics
+
+For additional test health visibility, a custom setup exports test execution
+results to an [InfluxDB](https://influxdb.quality.gitlab.net/) instance, and visualizes
+them as [Grafana](https://dashboards.quality.gitlab.net/) dashboards.
+
+### Provisioning
+
+Provisioning of all components is performed by the
+[`engineering-productivity-infrastructure`](https://gitlab.com/gitlab-org/quality/engineering-productivity-infrastructure) project.
+
+### Exporting metrics in CI
+
+Use these environment variables to configure metrics export:
+
+| Variable | Required | Information |
+| -------- | -------- | ----------- |
+| `QA_INFLUXDB_URL` | `true` | Should be set to `https://influxdb.quality.gitlab.net`. No default value. |
+| `QA_INFLUXDB_TOKEN` | `true` | InfluxDB write token, found in the `Influxdb auth tokens` document in the `Gitlab-QA` 1Password vault. No default value. |
+| `QA_RUN_TYPE` | `false` | Arbitrary name for the test execution, such as `package-and-qa`. For live environment test executions, it is inferred automatically from the project name. No default value. |
+| `QA_EXPORT_TEST_METRICS` | `false` | Flag to enable or disable metrics export. Defaults to `true`. |
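+
+A hedged sketch of how an exporter might consume these variables; the actual
+implementation lives in the `qa/` tooling and may differ:
+
+```ruby
+# Required variables raise a KeyError if missing; optional ones have fallbacks.
+influxdb_url   = ENV.fetch('QA_INFLUXDB_URL')
+influxdb_token = ENV.fetch('QA_INFLUXDB_TOKEN')
+run_type       = ENV['QA_RUN_TYPE'] # optional; may be inferred from the project name
+export_enabled = ENV.fetch('QA_EXPORT_TEST_METRICS', 'true') == 'true'
+```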
+
## Test reports
### Allure report
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index cb15dbe023f..bb214976926 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -34,7 +34,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:packages` | The test requires a GitLab instance that has the [Package Registry](../../../administration/packages/#gitlab-package-registry-administration) enabled. |
| `:quarantine` | The test has been [quarantined](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests), runs in a separate job that only includes quarantined tests, and is allowed to fail. The test is skipped in its regular job so that if it fails it doesn't hold up the pipeline. Note that you can also [quarantine a test only when it runs in a specific context](execution_context_selection.md#quarantine-a-test-for-a-specific-environment). |
| `:relative_url` | The test requires a GitLab instance to be installed under a [relative URL](../../../install/relative_url.md). |
-| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/guidelines/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
+| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/quality-engineering/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
| `:repository_storage` | The test requires a GitLab instance to be configured to use multiple [repository storage paths](../../../administration/repository_storage_paths.md). Paired with the `:orchestrated` tag. |
| `:requires_admin` | The test requires an administrator account. Tests with the tag are excluded when run against Canary and Production environments. |
| `:requires_git_protocol_v2` | The test requires that Git protocol version 2 is enabled on the server. It's assumed to be enabled by default but if not the test can be skipped by setting `QA_CAN_TEST_GIT_PROTOCOL_V2` to `false`. |
@@ -46,4 +46,4 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:smtp` | The test requires a GitLab instance to be configured to use an SMTP server. Tests SMTP notification email delivery from GitLab by using MailHog. |
| `:testcase` | The link to the test case issue in the [GitLab Project Test Cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases). |
| `:transient` | The test tests transient bugs. It is excluded by default. |
-| `:issue`, `:issue_${num}` | Optional links to issues which might be related to the spec. Helps keeping track of related issues and can also be used by tools that create test reports. Currently added automatically to `Allure` test report. Multiple tags can be used by adding optional number postfix like `issue_1`, `issue_2` etc. |
+| `:issue`, `:issue_${num}` | Optional links to issues that might be related to the spec. Helps keep track of related issues and can also be used by tools that create test reports. Currently added automatically to the `Allure` test report. Multiple tags can be used by adding a numeric suffix, such as `issue_1`, `issue_2`, and so on. |
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 31a807697c5..aef83109b9b 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -14,6 +14,7 @@ For any of the following scenarios, the `start-review-app-pipeline` job would be
- for merge requests with CI config changes
- for merge requests with frontend changes
+- for merge requests with changes to `{,ee/,jh/}{app/controllers}/**/*`
- for merge requests with QA changes
- for scheduled pipelines
- the MR has the `pipeline:run-review-app` label set
@@ -21,13 +22,13 @@ For any of the following scenarios, the `start-review-app-pipeline` job would be
## QA runs on Review Apps
On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage (which comes after the
-`review` stage), the `review-qa-smoke` job is automatically started and it runs
-the QA smoke suite.
+`review` stage), the `review-qa-smoke` and `review-qa-reliable` jobs are automatically started. The `review-qa-smoke` job runs
+the QA smoke suite, and the `review-qa-reliable` job runs E2E tests identified as [reliable](https://about.gitlab.com/handbook/engineering/quality/quality-engineering/reliable-tests).
You can also manually start the `review-qa-all`: it runs the full QA suite.
After the end-to-end test runs have finished, [Allure reports](https://github.com/allure-framework/allure2) are generated and published by
-the `allure-report-qa-smoke` and `allure-report-qa-all` jobs. A comment with links to the reports are added to the merge request.
+the `allure-report-qa-smoke`, `allure-report-qa-reliable`, and `allure-report-qa-all` jobs. A comment with links to the reports is added to the merge request.
## Performance Metrics
@@ -121,7 +122,7 @@ graph TD
B[review-build-cng];
C[review-deploy];
D[CNG-mirror];
- E[review-qa-smoke];
+ E[review-qa-smoke, review-qa-reliable];
A -->|once the `prepare` stage is done| B
B -.->|triggers a CNG-mirror pipeline and wait for it to be done| D
@@ -142,7 +143,7 @@ subgraph "3. gitlab `review` stage"
end
subgraph "4. gitlab `qa` stage"
- E[review-qa-smoke<br><br>gitlab-qa runs the smoke suite against the Review App.]
+ E[review-qa-smoke, review-qa-reliable<br><br>gitlab-qa runs the smoke and reliable suites against the Review App.]
end
subgraph "CNG-mirror pipeline"
@@ -196,7 +197,7 @@ subgraph "CNG-mirror pipeline"
issue with a link to your merge request. Note that the deployment failure can
reveal an actual problem introduced in your merge request (that is, this isn't
necessarily a transient failure)!
-- If the `review-qa-smoke` job keeps failing (note that we already retry it twice),
+- If the `review-qa-smoke` or `review-qa-reliable` job keeps failing (note that these jobs are already retried once),
please check the job's logs: you could discover an actual problem introduced in
your merge request. You can also download the artifacts to see screenshots of
the page at the time the failures occurred. If you don't find the cause of the
@@ -250,7 +251,7 @@ Leading indicators may be health check failures leading to restarts or majority
The [Review Apps Overview dashboard](https://console.cloud.google.com/monitoring/classic/dashboards/6798952013815386466?project=gitlab-review-apps&timeDomain=1d)
aids in identifying load spikes on the cluster, and if nodes are problematic or the entire cluster is trending towards unhealthy.
-### Database related errors in `review-deploy` or `review-qa-smoke`
+### Database related errors in `review-deploy`, `review-qa-smoke`, or `review-qa-reliable`
Occasionally the state of a Review App's database could diverge from the database schema. This could be caused by
changes to migration files or schema, such as a migration being renamed or deleted. This typically manifests in migration errors such as:
diff --git a/doc/development/wikis.md b/doc/development/wikis.md
index 994312da98e..deea3d38470 100644
--- a/doc/development/wikis.md
+++ b/doc/development/wikis.md
@@ -1,5 +1,4 @@
---
-type: reference, dev
stage: Create
group: Editor
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
@@ -91,8 +90,6 @@ Only some data is persisted in the database:
The web UI uploads attachments through the REST API, which stores the files as commits in the wiki repository.
-Prior to GitLab 11.3 attachments were stored outside of the repository, [see this issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/33475).
-
## Related topics
- [Gollum installation instructions](https://github.com/gollum/gollum/wiki/Installation)