path: root/doc/development
author    GitLab Bot <gitlab-bot@gitlab.com>  2020-12-17 11:59:07 +0000
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-12-17 11:59:07 +0000
commit    8b573c94895dc0ac0e1d9d59cf3e8745e8b539ca (patch)
tree      544930fb309b30317ae9797a9683768705d664c4 /doc/development
parent    4b1de649d0168371549608993deac953eb692019 (diff)
download  gitlab-ce-8b573c94895dc0ac0e1d9d59cf3e8745e8b539ca.tar.gz
Add latest changes from gitlab-org/gitlab@13-7-stable-ee (tag: v13.7.0-rc42)
Diffstat (limited to 'doc/development')
-rw-r--r--doc/development/README.md123
-rw-r--r--doc/development/adding_database_indexes.md2
-rw-r--r--doc/development/adding_service_component.md6
-rw-r--r--doc/development/agent/gitops.md148
-rw-r--r--doc/development/agent/identity.md94
-rw-r--r--doc/development/agent/index.md87
-rw-r--r--doc/development/agent/local.md58
-rw-r--r--doc/development/agent/routing.md223
-rw-r--r--doc/development/agent/user_stories.md77
-rw-r--r--doc/development/api_graphql_styleguide.md264
-rw-r--r--doc/development/api_styleguide.md20
-rw-r--r--doc/development/application_limits.md6
-rw-r--r--doc/development/application_secrets.md13
-rw-r--r--doc/development/approval_rules.md4
-rw-r--r--doc/development/architecture.md36
-rw-r--r--doc/development/auto_devops.md2
-rw-r--r--doc/development/background_migrations.md4
-rw-r--r--doc/development/build_test_package.md4
-rw-r--r--doc/development/bulk_import.md53
-rw-r--r--doc/development/cached_queries.md144
-rw-r--r--doc/development/changelog.md14
-rw-r--r--doc/development/chaos_endpoints.md48
-rw-r--r--doc/development/chatops_on_gitlabcom.md53
-rw-r--r--doc/development/cicd/index.md20
-rw-r--r--doc/development/cicd/templates.md4
-rw-r--r--doc/development/code_comments.md4
-rw-r--r--doc/development/code_intelligence/index.md4
-rw-r--r--doc/development/code_review.md66
-rw-r--r--doc/development/contributing/community_roles.md4
-rw-r--r--doc/development/contributing/design.md2
-rw-r--r--doc/development/contributing/index.md8
-rw-r--r--doc/development/contributing/issue_workflow.md16
-rw-r--r--doc/development/contributing/merge_request_workflow.md6
-rw-r--r--doc/development/contributing/style_guides.md31
-rw-r--r--doc/development/creating_enums.md2
-rw-r--r--doc/development/cycle_analytics.md3
-rw-r--r--doc/development/dangerbot.md48
-rw-r--r--doc/development/database/add_foreign_key_to_existing_column.md6
-rw-r--r--doc/development/database/constraint_naming_convention.md4
-rw-r--r--doc/development/database/database_reviewer_guidelines.md2
-rw-r--r--doc/development/database/index.md3
-rw-r--r--doc/development/database/maintenance_operations.md2
-rw-r--r--doc/development/database/not_null_constraints.md6
-rw-r--r--doc/development/database/setting_multiple_values.md2
-rw-r--r--doc/development/database/strings_and_the_text_data_type.md10
-rw-r--r--doc/development/database/table_partitioning.md4
-rw-r--r--doc/development/database_debugging.md2
-rw-r--r--doc/development/database_query_comments.md2
-rw-r--r--doc/development/database_review.md67
-rw-r--r--doc/development/db_dump.md2
-rw-r--r--doc/development/deleting_migrations.md2
-rw-r--r--doc/development/deprecation_guidelines/index.md4
-rw-r--r--doc/development/diffs.md42
-rw-r--r--doc/development/distributed_tracing.md24
-rw-r--r--doc/development/doc_styleguide.md3
-rw-r--r--doc/development/documentation/feature-change-workflow.md3
-rw-r--r--doc/development/documentation/feature_flags.md10
-rw-r--r--doc/development/documentation/improvement-workflow.md3
-rw-r--r--doc/development/documentation/index.md172
-rw-r--r--doc/development/documentation/restful_api_styleguide.md6
-rw-r--r--doc/development/documentation/site_architecture/deployment_process.md8
-rw-r--r--doc/development/documentation/site_architecture/global_nav.md16
-rw-r--r--doc/development/documentation/site_architecture/index.md21
-rw-r--r--doc/development/documentation/site_architecture/release_process.md36
-rw-r--r--doc/development/documentation/structure.md8
-rw-r--r--doc/development/documentation/styleguide/img/tier_badge.pngbin0 -> 9592 bytes
-rw-r--r--doc/development/documentation/styleguide/index.md774
-rw-r--r--doc/development/documentation/testing.md130
-rw-r--r--doc/development/documentation/workflow.md16
-rw-r--r--doc/development/ee_features.md30
-rw-r--r--doc/development/elasticsearch.md43
-rw-r--r--doc/development/emails.md4
-rw-r--r--doc/development/event_tracking/backend.md3
-rw-r--r--doc/development/event_tracking/frontend.md3
-rw-r--r--doc/development/event_tracking/index.md3
-rw-r--r--doc/development/experiment_guide/index.md179
-rw-r--r--doc/development/export_csv.md21
-rw-r--r--doc/development/fe_guide/accessibility.md2
-rw-r--r--doc/development/fe_guide/architecture.md2
-rw-r--r--doc/development/fe_guide/axios.md2
-rw-r--r--doc/development/fe_guide/dependencies.md4
-rw-r--r--doc/development/fe_guide/design_patterns.md2
-rw-r--r--doc/development/fe_guide/development_process.md4
-rw-r--r--doc/development/fe_guide/droplab/droplab.md14
-rw-r--r--doc/development/fe_guide/droplab/plugins/ajax.md2
-rw-r--r--doc/development/fe_guide/droplab/plugins/filter.md4
-rw-r--r--doc/development/fe_guide/droplab/plugins/index.md2
-rw-r--r--doc/development/fe_guide/droplab/plugins/input_setter.md4
-rw-r--r--doc/development/fe_guide/editor_lite.md10
-rw-r--r--doc/development/fe_guide/emojis.md2
-rw-r--r--doc/development/fe_guide/event_tracking.md3
-rw-r--r--doc/development/fe_guide/frontend_faq.md13
-rw-r--r--doc/development/fe_guide/graphql.md527
-rw-r--r--doc/development/fe_guide/icons.md8
-rw-r--r--doc/development/fe_guide/index.md10
-rw-r--r--doc/development/fe_guide/keyboard_shortcuts.md2
-rw-r--r--doc/development/fe_guide/performance.md272
-rw-r--r--doc/development/fe_guide/principles.md6
-rw-r--r--doc/development/fe_guide/security.md8
-rw-r--r--doc/development/fe_guide/style/html.md2
-rw-r--r--doc/development/fe_guide/style/index.md2
-rw-r--r--doc/development/fe_guide/style/javascript.md8
-rw-r--r--doc/development/fe_guide/style/scss.md186
-rw-r--r--doc/development/fe_guide/style/vue.md6
-rw-r--r--doc/development/fe_guide/style_guide_js.md3
-rw-r--r--doc/development/fe_guide/style_guide_scss.md3
-rw-r--r--doc/development/fe_guide/testing.md3
-rw-r--r--doc/development/fe_guide/tooling.md20
-rw-r--r--doc/development/fe_guide/vue.md20
-rw-r--r--doc/development/fe_guide/vue3_migration.md4
-rw-r--r--doc/development/fe_guide/vuex.md60
-rw-r--r--doc/development/feature_categorization/index.md9
-rw-r--r--doc/development/feature_flags.md3
-rw-r--r--doc/development/feature_flags/controls.md20
-rw-r--r--doc/development/feature_flags/development.md49
-rw-r--r--doc/development/feature_flags/index.md31
-rw-r--r--doc/development/feature_flags/process.md27
-rw-r--r--doc/development/features_inside_dot_gitlab.md2
-rw-r--r--doc/development/file_storage.md11
-rw-r--r--doc/development/filtering_by_label.md2
-rw-r--r--doc/development/foreign_keys.md2
-rw-r--r--doc/development/frontend.md3
-rw-r--r--doc/development/gemfile.md2
-rw-r--r--doc/development/geo.md6
-rw-r--r--doc/development/geo/framework.md68
-rw-r--r--doc/development/git_object_deduplication.md52
-rw-r--r--doc/development/gitaly.md42
-rw-r--r--doc/development/github_importer.md46
-rw-r--r--doc/development/go_guide/dependencies.md28
-rw-r--r--doc/development/go_guide/index.md28
-rw-r--r--doc/development/gotchas.md14
-rw-r--r--doc/development/graphql_guide/batchloader.md2
-rw-r--r--doc/development/graphql_guide/index.md4
-rw-r--r--doc/development/graphql_guide/pagination.md42
-rw-r--r--doc/development/hash_indexes.md2
-rw-r--r--doc/development/i18n/externalization.md30
-rw-r--r--doc/development/i18n/index.md12
-rw-r--r--doc/development/i18n/merging_translations.md10
-rw-r--r--doc/development/i18n/proofreader.md6
-rw-r--r--doc/development/i18n/translation.md10
-rw-r--r--doc/development/i18n_guide.md3
-rw-r--r--doc/development/image_scaling.md4
-rw-r--r--doc/development/img/bulk_imports_overview_v13_7.pngbin0 -> 35495 bytes
-rw-r--r--doc/development/img/each_batch_users_table_bad_index_v13_7.pngbin0 -> 7386 bytes
-rw-r--r--doc/development/img/each_batch_users_table_filter_v13_7.pngbin0 -> 6940 bytes
-rw-r--r--doc/development/img/each_batch_users_table_filtered_index_v13_7.pngbin0 -> 5986 bytes
-rw-r--r--doc/development/img/each_batch_users_table_good_index_v13_7.pngbin0 -> 7046 bytes
-rw-r--r--doc/development/img/each_batch_users_table_iteration_1_v13_7.pngbin0 -> 5942 bytes
-rw-r--r--doc/development/img/each_batch_users_table_iteration_2_v13_7.pngbin0 -> 6539 bytes
-rw-r--r--doc/development/img/each_batch_users_table_iteration_3_v13_7.pngbin0 -> 6665 bytes
-rw-r--r--doc/development/img/each_batch_users_table_iteration_4_v13_7.pngbin0 -> 6620 bytes
-rw-r--r--doc/development/img/each_batch_users_table_iteration_5_v13_7.pngbin0 -> 6897 bytes
-rw-r--r--doc/development/img/each_batch_users_table_v13_7.pngbin0 -> 6361 bytes
-rw-r--r--doc/development/import_export.md14
-rw-r--r--doc/development/import_project.md20
-rw-r--r--doc/development/insert_into_tables_in_batches.md4
-rw-r--r--doc/development/instrumentation.md23
-rw-r--r--doc/development/integrations/codesandbox.md140
-rw-r--r--doc/development/integrations/img/copy_cookies.pngbin0 -> 44311 bytes
-rw-r--r--doc/development/integrations/img/copy_curl.pngbin0 -> 60175 bytes
-rw-r--r--doc/development/integrations/img/example_vuln.png (renamed from doc/development/integrations/example_vuln.png)bin36208 -> 36208 bytes
-rw-r--r--doc/development/integrations/jenkins.md8
-rw-r--r--doc/development/integrations/jira_connect.md36
-rw-r--r--doc/development/integrations/secure.md18
-rw-r--r--doc/development/integrations/secure_partner_integration.md14
-rw-r--r--doc/development/interacting_components.md2
-rw-r--r--doc/development/internal_api.md20
-rw-r--r--doc/development/internal_users.md2
-rw-r--r--doc/development/issuable-like-models.md2
-rw-r--r--doc/development/issue_types.md2
-rw-r--r--doc/development/iterating_tables_in_batches.md333
-rw-r--r--doc/development/kubernetes.md4
-rw-r--r--doc/development/lfs.md6
-rw-r--r--doc/development/licensed_feature_availability.md4
-rw-r--r--doc/development/licensing.md10
-rw-r--r--doc/development/logging.md47
-rw-r--r--doc/development/mass_insert.md4
-rw-r--r--doc/development/merge_request_performance_guidelines.md60
-rw-r--r--doc/development/migration_style_guide.md72
-rw-r--r--doc/development/module_with_instance_variables.md10
-rw-r--r--doc/development/multi_version_compatibility.md20
-rw-r--r--doc/development/namespaces_storage_statistics.md8
-rw-r--r--doc/development/new_fe_guide/dependencies.md2
-rw-r--r--doc/development/new_fe_guide/development/accessibility.md4
-rw-r--r--doc/development/new_fe_guide/development/components.md2
-rw-r--r--doc/development/new_fe_guide/development/index.md2
-rw-r--r--doc/development/new_fe_guide/development/performance.md4
-rw-r--r--doc/development/new_fe_guide/development/testing.md3
-rw-r--r--doc/development/new_fe_guide/index.md4
-rw-r--r--doc/development/new_fe_guide/modules/dirty_submit.md2
-rw-r--r--doc/development/new_fe_guide/modules/index.md2
-rw-r--r--doc/development/new_fe_guide/modules/widget_extensions.md4
-rw-r--r--doc/development/new_fe_guide/style/html.md3
-rw-r--r--doc/development/new_fe_guide/style/index.md3
-rw-r--r--doc/development/new_fe_guide/style/javascript.md3
-rw-r--r--doc/development/new_fe_guide/style/prettier.md3
-rw-r--r--doc/development/new_fe_guide/tips.md2
-rw-r--r--doc/development/newlines_styleguide.md2
-rw-r--r--doc/development/omnibus.md2
-rw-r--r--doc/development/ordering_table_columns.md2
-rw-r--r--doc/development/packages.md40
-rw-r--r--doc/development/performance.md60
-rw-r--r--doc/development/permissions.md10
-rw-r--r--doc/development/pipelines.md61
-rw-r--r--doc/development/policies.md48
-rw-r--r--doc/development/polling.md4
-rw-r--r--doc/development/polymorphic_associations.md16
-rw-r--r--doc/development/post_deployment_migrations.md10
-rw-r--r--doc/development/product_analytics/event_dictionary.md3
-rw-r--r--doc/development/product_analytics/index.md3
-rw-r--r--doc/development/product_analytics/snowplow.md241
-rw-r--r--doc/development/product_analytics/usage_ping.md170
-rw-r--r--doc/development/profiling.md10
-rw-r--r--doc/development/projections.md2
-rw-r--r--doc/development/prometheus.md3
-rw-r--r--doc/development/prometheus_metrics.md8
-rw-r--r--doc/development/pry_debugging.md8
-rw-r--r--doc/development/python_guide/index.md14
-rw-r--r--doc/development/query_count_limits.md10
-rw-r--r--doc/development/query_performance.md74
-rw-r--r--doc/development/query_recorder.md10
-rw-r--r--doc/development/rails_initializers.md2
-rw-r--r--doc/development/rake_tasks.md27
-rw-r--r--doc/development/reactive_caching.md126
-rw-r--r--doc/development/redis.md12
-rw-r--r--doc/development/refactoring_guide/index.md2
-rw-r--r--doc/development/reference_processing.md8
-rw-r--r--doc/development/renaming_features.md10
-rw-r--r--doc/development/repository_mirroring.md6
-rw-r--r--doc/development/reusing_abstractions.md2
-rw-r--r--doc/development/rolling_out_changes_using_feature_flags.md3
-rw-r--r--doc/development/routing.md2
-rw-r--r--doc/development/scalability.md65
-rw-r--r--doc/development/secure_coding_guidelines.md65
-rw-r--r--doc/development/serializing_data.md2
-rw-r--r--doc/development/service_measurement.md8
-rw-r--r--doc/development/session.md2
-rw-r--r--doc/development/sha1_as_binary.md2
-rw-r--r--doc/development/shared_files.md4
-rw-r--r--doc/development/shell_commands.md4
-rw-r--r--doc/development/shell_scripting_guide/index.md16
-rw-r--r--doc/development/sidekiq_debugging.md3
-rw-r--r--doc/development/sidekiq_style_guide.md97
-rw-r--r--doc/development/single_table_inheritance.md2
-rw-r--r--doc/development/sql.md2
-rw-r--r--doc/development/swapping_tables.md2
-rw-r--r--doc/development/telemetry/event_dictionary.md3
-rw-r--r--doc/development/telemetry/index.md3
-rw-r--r--doc/development/telemetry/snowplow.md3
-rw-r--r--doc/development/telemetry/usage_ping.md3
-rw-r--r--doc/development/testing.md3
-rw-r--r--doc/development/testing_guide/best_practices.md50
-rw-r--r--doc/development/testing_guide/ci.md10
-rw-r--r--doc/development/testing_guide/end_to_end/beginners_guide.md32
-rw-r--r--doc/development/testing_guide/end_to_end/best_practices.md12
-rw-r--r--doc/development/testing_guide/end_to_end/dynamic_element_validation.md10
-rw-r--r--doc/development/testing_guide/end_to_end/environment_selection.md8
-rw-r--r--doc/development/testing_guide/end_to_end/feature_flags.md8
-rw-r--r--doc/development/testing_guide/end_to_end/flows.md2
-rw-r--r--doc/development/testing_guide/end_to_end/index.md8
-rw-r--r--doc/development/testing_guide/end_to_end/page_objects.md26
-rw-r--r--doc/development/testing_guide/end_to_end/resources.md31
-rw-r--r--doc/development/testing_guide/end_to_end/rspec_metadata_tests.md18
-rw-r--r--doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md24
-rw-r--r--doc/development/testing_guide/end_to_end/style_guide.md5
-rw-r--r--doc/development/testing_guide/flaky_tests.md6
-rw-r--r--doc/development/testing_guide/frontend_testing.md193
-rw-r--r--doc/development/testing_guide/index.md2
-rw-r--r--doc/development/testing_guide/review_apps.md14
-rw-r--r--doc/development/testing_guide/smoke.md6
-rw-r--r--doc/development/testing_guide/testing_levels.md24
-rw-r--r--doc/development/testing_guide/testing_migrations_guide.md31
-rw-r--r--doc/development/testing_guide/testing_rake_tasks.md4
-rw-r--r--doc/development/understanding_explain_plans.md2
-rw-r--r--doc/development/uploads.md24
-rw-r--r--doc/development/utilities.md4
-rw-r--r--doc/development/ux_guide/users.md7
-rw-r--r--doc/development/value_stream_analytics.md20
-rw-r--r--doc/development/verifying_database_capabilities.md2
-rw-r--r--doc/development/what_requires_downtime.md4
-rw-r--r--doc/development/wikis.md2
-rw-r--r--doc/development/windows.md12
282 files changed, 5318 insertions, 2676 deletions
diff --git a/doc/development/README.md b/doc/development/README.md
index 75ea6ae5f3b..2e4674b5288 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -11,20 +11,24 @@ description: "Development Guidelines: learn how to contribute to GitLab."
Learn the processes and technical information needed for contributing to GitLab.
-This content is intended for members of the GitLab Team as well as community contributors.
-Content specific to the GitLab Team should instead be included in the [Handbook](https://about.gitlab.com/handbook/).
+This content is intended for members of the GitLab Team as well as community
+contributors. Content specific to the GitLab Team should instead be included in
+the [Handbook](https://about.gitlab.com/handbook/).
-For information on using GitLab to work on your own software projects, see the [GitLab user documentation](../user/index.md).
+For information on using GitLab to work on your own software projects, see the
+[GitLab user documentation](../user/index.md).
-For information on working with GitLab's API, see the [API documentation](../api/README.md).
+For information on working with the GitLab APIs, see the [API documentation](../api/README.md).
-For information on how to install, configure, update, and upgrade your own GitLab instance, see the [administration documentation](../administration/index.md).
+For information about how to install, configure, update, and upgrade your own
+GitLab instance, see the [administration documentation](../administration/index.md).
## Get started
-- Set up GitLab's development environment with [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/README.md)
+- Set up the GitLab development environment with the
+ [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/README.md)
- [GitLab contributing guide](contributing/index.md)
- - [Issues workflow](contributing/issue_workflow.md) for more information on:
+ - [Issues workflow](contributing/issue_workflow.md) for more information about:
- Issue tracker guidelines.
- Triaging.
- Labels.
@@ -33,7 +37,7 @@ For information on how to install, configure, update, and upgrade your own GitLa
- Regression issues.
- Technical or UX debt.
- [Merge requests workflow](contributing/merge_request_workflow.md) for more
- information on:
+ information about:
- Merge request guidelines.
- Contribution acceptance criteria.
- Definition of done.
@@ -48,8 +52,10 @@ For information on how to install, configure, update, and upgrade your own GitLa
**Must-reads:**
- [Guide on adapting existing and introducing new components](architecture.md#adapting-existing-and-introducing-new-components)
-- [Code review guidelines](code_review.md) for reviewing code and having code reviewed
-- [Database review guidelines](database_review.md) for reviewing database-related changes and complex SQL queries, and having them reviewed
+- [Code review guidelines](code_review.md) for reviewing code and having code
+ reviewed
+- [Database review guidelines](database_review.md) for reviewing
+ database-related changes and complex SQL queries, and having them reviewed
- [Secure coding guidelines](secure_coding_guidelines.md)
- [Pipelines for the GitLab project](pipelines.md)
@@ -60,26 +66,86 @@ Complementary reads:
- [Guidelines for implementing Enterprise Edition features](ee_features.md)
- [Danger bot](dangerbot.md)
- [Generate a changelog entry with `bin/changelog`](changelog.md)
-- [Requesting access to Chatops on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLab team members)
+- [Requesting access to ChatOps on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLab team members)
- [Patch release process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/patch/process.md#process-for-developers)
- [Adding a new service component to GitLab](adding_service_component.md)
### Development guidelines review
-When you submit a change to GitLab's development guidelines, request a review
-from:
+When you submit a change to the GitLab development guidelines, who
+you ask for reviews depends on the level of change.
-- A member of your team or group, to check for technical accuracy.
-- For **significant** changes or proposals, request review from:
- - Engineering managers (FE, BE, DB, Security, UX, and others), according to the subject or process you're proposing.
- - The VP of Development (DRI) ([@clefelhocz1](https://gitlab.com/clefelhocz1)), for
- final approval of the new or changed guidelines.
-- The [Technical Writer assigned to dev guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines),
- to review the content for consistency and adherence to documentation guidelines.
+#### Wording, style, or link changes
+
+Not all changes require extensive review. For example, MRs that don't change the
+content's meaning or function can be reviewed, approved, and merged by any
+maintainer or Technical Writer. These can include:
+
+- Typo fixes.
+- Clarifying links, such as to external programming language documentation.
+- Changes to comply with the [Documentation Style Guide](documentation/index.md)
+ that don't change the intent of the documentation page.
+
+#### Specific changes
+
+If the MR proposes changes that are limited to a particular stage, group, or team,
+request a review and approval from an experienced GitLab Team Member in that
+group. For example, if you're documenting a new internal API used exclusively by
+a given group, request an engineering review from one of the group's members.
+
+After the engineering review is complete, assign the MR to the
+[Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)
+in the modified documentation page's metadata.
+
+If you have questions or need further input, request a review from the
+Technical Writer assigned to the [Development Guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines).
+
+#### Broader changes
+
+Some changes affect more than one group. For example:
+
+- Changes to [code review guidelines](code_review.md).
+- Changes to [commit message guidelines](contributing/merge_request_workflow.md#commit-messages-guidelines).
+- Changes to guidelines in [feature flags in development of GitLab](feature_flags/).
+- Changes to [feature flags documentation guidelines](documentation/feature_flags.md).
+
+In these cases, use the following workflow:
+
+1. Request a peer review from a member of your team.
+1. Request a review and approval from an Engineering Manager (EM)
+ or Staff Engineer who's responsible for the area in question:
+
+ - [Frontend](https://about.gitlab.com/handbook/engineering/frontend/)
+ - [Backend](https://about.gitlab.com/handbook/engineering/)
+ - [Database](https://about.gitlab.com/handbook/engineering/development/database/)
+ - [User Experience (UX)](https://about.gitlab.com/handbook/engineering/ux/)
+ - [Security](https://about.gitlab.com/handbook/engineering/security/)
+ - [Quality](https://about.gitlab.com/handbook/engineering/quality/)
+ - [Infrastructure](https://about.gitlab.com/handbook/engineering/infrastructure/)
+ - [Technical Writing](https://about.gitlab.com/handbook/engineering/ux/technical-writing/)
+
+ You can skip this step for MRs authored by EMs or Staff Engineers responsible
+ for their area.
+
+ If there are several affected groups, you may need approvals at the
+ EM/Staff Engineer level from each affected area.
+
+1. After completing the reviews, consult with the EM or Staff Engineer who is
+   the author or approver of the MR.
+
+ If this is a significant change across multiple areas, request final review
+ and approval from the VP of Development, the DRI for Development Guidelines,
+ @clefelhocz1.
+
+1. After all approvals are complete, assign the merge request to the
+ Technical Writer for [Development Guidelines](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines)
+ for final content review and merge. The Technical Writer may ask for
+ additional approvals as previously suggested before merging the MR.
## UX and Frontend guides
-- [GitLab Design System](https://design.gitlab.com/) for building GitLab with existing CSS styles and elements
+- [GitLab Design System](https://design.gitlab.com/), for building GitLab with
+ existing CSS styles and elements
- [Frontend guidelines](fe_guide/index.md)
- [Emoji guide](fe_guide/emojis.md)
@@ -89,7 +155,8 @@ from:
- [Issuable-like Rails models](issuable-like-models.md)
- [Logging](logging.md)
- [API style guide](api_styleguide.md) for contributing to the API
-- [GraphQL API style guide](api_graphql_styleguide.md) for contributing to the [GraphQL API](../api/graphql/index.md)
+- [GraphQL API style guide](api_graphql_styleguide.md) for contributing to the
+ [GraphQL API](../api/graphql/index.md)
- [Sidekiq guidelines](sidekiq_style_guide.md) for working with Sidekiq workers
- [Working with Gitaly](gitaly.md)
- [Manage feature flags](feature_flags/index.md)
@@ -98,10 +165,11 @@ from:
- [Shell commands](shell_commands.md) in the GitLab codebase
- [`Gemfile` guidelines](gemfile.md)
- [Pry debugging](pry_debugging.md)
-- [Sidekiq debugging](sidekiq_debugging.md)
+- [Sidekiq debugging](../administration/troubleshooting/sidekiq.md)
- [Accessing session data](session.md)
- [Gotchas](gotchas.md) to avoid
-- [Avoid modules with instance variables](module_with_instance_variables.md) if possible
+- [Avoid modules with instance variables](module_with_instance_variables.md), if
+ possible
- [How to dump production data to staging](db_dump.md)
- [Working with the GitHub importer](github_importer.md)
- [Import/Export development documentation](import_export.md)
@@ -135,6 +203,7 @@ from:
- [Wikis development guide](wikis.md)
- [Newlines style guide](newlines_styleguide.md)
- [Image scaling guide](image_scaling.md)
+- [Export to CSV](export_csv.md)
## Performance guides
@@ -146,8 +215,9 @@ from:
for ensuring merge requests do not negatively impact GitLab performance
- [Profiling](profiling.md) a URL, measuring performance using Sherlock, or
tracking down N+1 queries using Bullet.
-- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries masked by query caching, memory profiling and why should
- we avoid cached queries.
+- [Cached queries guidelines](cached_queries.md), for tracking down N+1 queries
+ masked by query caching, memory profiling, and why we should avoid cached
+ queries.
## Database guides
@@ -159,6 +229,7 @@ See [database guidelines](database/index.md).
- [Security Scanners](integrations/secure.md)
- [Secure Partner Integration](integrations/secure_partner_integration.md)
- [How to run Jenkins in development environment](integrations/jenkins.md)
+- [How to run local Codesandbox integration for Web IDE Live Preview](integrations/codesandbox.md)
## Testing guides
diff --git a/doc/development/adding_database_indexes.md b/doc/development/adding_database_indexes.md
index d2526d6c4f2..0991c4740cc 100644
--- a/doc/development/adding_database_indexes.md
+++ b/doc/development/adding_database_indexes.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Adding Database Indexes
diff --git a/doc/development/adding_service_component.md b/doc/development/adding_service_component.md
index 74309a5c616..7e2add2c91f 100644
--- a/doc/development/adding_service_component.md
+++ b/doc/development/adding_service_component.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Adding a new Service Component to GitLab
@@ -47,14 +47,14 @@ Adding a new service follows the same [merge request workflow](contributing/merg
The first iteration should be to add the ability to connect and use the service as an externally installed component. Often this involves providing settings in GitLab to connect to the service, or allow connections from it. And then shipping documentation on how to install and configure the service with GitLab.
-TIP: **Tip:**
+NOTE:
[Elasticsearch](../integration/elasticsearch.md#installing-elasticsearch) is an example of a service that has been integrated this way. And many of the other services, including internal projects like Gitaly, started off as separately installed alternatives.
**For services that depend on the existing GitLab codebase:**
The first iteration should be opt-in, either through the `gitlab.yml` configuration or through [feature flags](feature_flags/index.md). For these types of services it is often necessary to [bundle the service and its dependencies with GitLab](#bundling-a-service-with-gitlab) as part of the initial integration.
-TIP: **Tip:**
+NOTE:
[ActionCable](https://docs.gitlab.com/omnibus/settings/actioncable.html) is an example of a service that has been added this way.
## Bundling a service with GitLab
diff --git a/doc/development/agent/gitops.md b/doc/development/agent/gitops.md
new file mode 100644
index 00000000000..8c8586326fa
--- /dev/null
+++ b/doc/development/agent/gitops.md
@@ -0,0 +1,148 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# GitOps with the Kubernetes Agent **(PREMIUM ONLY)**
+
+The [GitLab Kubernetes Agent](../../user/clusters/agent/index.md) supports the
+[pull-based version](https://www.gitops.tech/#pull-based-deployments) of
+[GitOps](https://www.gitops.tech/). To be useful, the feature must be able to perform these tasks:
+
+- Connect one or more Kubernetes clusters to a GitLab project or group.
+- Synchronize cluster-wide state from a Git repository.
+- Synchronize namespace-scoped state from a Git repository.
+- Control the following settings:
+
+ - The kinds of objects an agent can manage.
+ - Enabling the namespaced mode of operation for managing objects only in a specific namespace.
+ - Enabling the non-namespaced mode of operation for managing objects in any namespace, and
+ managing non-namespaced objects.
+
+- Synchronize state from one or more Git repositories into a cluster.
+- Configure multiple agents running in different clusters to synchronize state
+ from the same repository.
+
+## GitOps architecture
+
+In this architecture, the agent in the Kubernetes cluster (`agentk`) periodically
+fetches configuration from the server (`kas`), spawning a goroutine for each
+configured GitOps repository. Each goroutine makes a streaming `GetObjectsToSynchronize()` gRPC call.
+`kas` accepts these requests, then checks if this agent is authorized to access
+this GitLab repository. If authorized, `kas` polls Gitaly for repository updates
+and sends the latest manifests to the agent.
+
+Before each poll, `kas` verifies with GitLab that the agent's token is still valid.
+When `agentk` receives an updated manifest, it performs a synchronization using
+[`gitops-engine`](https://github.com/argoproj/gitops-engine).
+
+If a repository is removed from the list, `agentk` stops the `GetObjectsToSynchronize()`
+calls to that repository.
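+
+For illustration, the following is a minimal Go sketch of what such a streaming
+consumer loop could look like. The `GitopsClient` and `ObjectsStream` types and
+the message shapes are assumptions made for this example, not the agent's actual
+API:
+
+```go
+package gitopsexample
+
+import (
+    "context"
+    "time"
+)
+
+// ObjectsStream is a hypothetical stand-in for the generated server-streaming
+// gRPC client; only Recv() matters for this sketch.
+type ObjectsStream interface {
+    Recv() (manifest []byte, err error)
+}
+
+// GitopsClient is a hypothetical stand-in for the kas GitOps service client.
+type GitopsClient interface {
+    GetObjectsToSynchronize(ctx context.Context, repository string) (ObjectsStream, error)
+}
+
+// watchRepository keeps one streaming call open for a configured repository and
+// hands every received manifest to the synchronization callback (gitops-engine
+// in the real agent), reconnecting with a simple back-off when the stream drops.
+func watchRepository(ctx context.Context, client GitopsClient, repository string, synchronize func([]byte) error) {
+    for ctx.Err() == nil {
+        stream, err := client.GetObjectsToSynchronize(ctx, repository)
+        if err != nil {
+            time.Sleep(10 * time.Second) // kas unavailable; back off and retry
+            continue
+        }
+        for {
+            manifest, err := stream.Recv()
+            if err != nil {
+                break // stream closed or errored; reconnect
+            }
+            _ = synchronize(manifest) // error handling omitted for brevity
+        }
+    }
+}
+```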
+
+```mermaid
+graph TB
+ agentk -- fetch configuration --> kas
+ agentk -- fetch GitOps manifests --> kas
+
+ subgraph "GitLab"
+ kas[kas]
+ GitLabRoR[GitLab RoR]
+ Gitaly[Gitaly]
+ kas -- poll GitOps repositories --> Gitaly
+ kas -- authZ for agentk --> GitLabRoR
+ kas -- fetch configuration --> Gitaly
+ end
+
+ subgraph "Kubernetes cluster"
+ agentk[agentk]
+ end
+```
+
+## Architecture considered but not implemented
+
+As part of the implementation process, this architecture was considered, but ultimately
+not implemented.
+
+In this architecture, `agentk` periodically fetches configuration from `kas`. For each
+configured GitOps repository, it spawns a goroutine. Each goroutine then spawns a
+copy of [`git-sync`](https://github.com/kubernetes/git-sync). It polls a particular
+repository and invokes a corresponding webhook on `agentk` when it changes. When that
+happens, `agentk` performs a synchronization using
+[`gitops-engine`](https://github.com/argoproj/gitops-engine).
+
+For repositories no longer in the list, `agentk` stops corresponding goroutines
+and `git-sync` copies, also deleting their cloned repositories from disk:
+
+```mermaid
+graph TB
+ agentk -- fetch configuration --> kas
+ git-sync -- poll GitOps repositories --> GitLabRoR
+
+ subgraph "GitLab"
+ kas[kas]
+ GitLabRoR[GitLab RoR]
+ kas -- authZ for agentk --> GitLabRoR
+ kas -- fetch configuration --> Gitaly[Gitaly]
+ end
+
+ subgraph "Kubernetes cluster"
+ agentk[agentk]
+ git-sync[git-sync]
+ agentk -- control --> git-sync
+ git-sync -- notify about changes --> agentk
+ end
+```
+
+## Comparing implemented and non-implemented architectures
+
+Both architectures attempt to answer the same question: how to grant an agent
+access to a non-public repository?
+
+In the **implemented** architecture:
+
+- Favorable: Fewer moving parts, as `git-sync` and `git` are not used, making this
+ design more reliable.
+- Favorable: Existing connectivity and authentication mechanisms are used (gRPC + `agentk` token).
+- Favorable: No polling through external infrastructure. Saves traffic and avoids
+ noise in access logs.
+
+In the **unimplemented** architecture:
+
+- Favorable: `agentk` uses `git-sync` to access repositories with standard protocols
+ (either HTTPS, or SSH and Git) with accepted authentication and authorization methods.
+
+ - Unfavorable: The user must put credentials into a `secret`. GitLab doesn't have
+ a mechanism for per-repository tokens for robots.
+ - Unfavorable: Rotating all credentials is more work than rotating a single `agentk` token.
+
+- Unfavorable: A dependency on an external component (`git-sync`) that can be avoided.
+- Unfavorable: More network traffic and connections than the implemented design.
+
+### Ideas considered for the unimplemented design
+
+As part of the design process, these ideas were considered, and discarded:
+
+- Running `git-sync` and `gitops-engine` as part of `kas`.
+
+  - Favorable: More code and infrastructure under our control for GitLab.com.
+ - Unfavorable: Running an arbitrary number of `git-sync` processes would require
+ an unbounded amount of RAM and disk space.
+ - Unfavorable: Unclear which `kas` replica is responsible for which agent and
+ repository synchronization. If done as part of `agentk`, leader election can be
+ done using [client-go](https://pkg.go.dev/k8s.io/client-go/tools/leaderelection?tab=doc).
+
+- Running `git-sync` and a "`gitops-engine` driver" helper program as a separate
+ Kubernetes `Deployment`.
+
+ - Favorable: Better isolation and higher resiliency. For example, if the node
+ with `agentk` dies, not all synchronization stops.
+ - Favorable: Each deployment has its own memory and disk limits.
+ - Favorable: Per-repository synchronization identity (distinct `ServiceAccount`)
+ can be implemented.
+  - Unfavorable: Time-consuming to implement properly:
+
+    - Each `Deployment` needs CRUD (create, read, update, and delete) permissions.
+    - Users may want to customize a `Deployment`, or add and remove satellite objects
+      like `PodDisruptionBudget`, `HorizontalPodAutoscaler`, and `PodSecurityPolicy`.
+    - Metrics, monitoring, and logs for the `Deployment`.
diff --git a/doc/development/agent/identity.md b/doc/development/agent/identity.md
new file mode 100644
index 00000000000..65de1a6f0c8
--- /dev/null
+++ b/doc/development/agent/identity.md
@@ -0,0 +1,94 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Kubernetes Agent identity and authentication **(PREMIUM ONLY)**
+
+This page uses the word `agent` to describe the concept of the
+GitLab Kubernetes Agent. The program that implements the concept is called `agentk`.
+Read the
+[architecture page](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/architecture.md)
+for more information.
+
+## Agent identity and name
+
+In a GitLab installation, each agent must have a unique, immutable name. This
+name must be unique in the project the agent is attached to, and this name must
+follow the [DNS label standard from RFC 1123](https://tools.ietf.org/html/rfc1123).
+The name must:
+
+- Contain at most 63 characters.
+- Contain only lowercase alphanumeric characters or `-`.
+- Start with an alphanumeric character.
+- End with an alphanumeric character.
+
+Kubernetes uses the
+[same naming restriction](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names)
+for some names.
+
+The regex for names is: `/\A[a-z0-9]([-a-z0-9]*[a-z0-9])?\z/`.
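+
+For example, a small Go sketch that applies this restriction to a proposed agent
+name (illustrative only; the actual server-side validation may differ):
+
+```go
+package agentname
+
+import "regexp"
+
+// nameRegex mirrors the DNS label restriction described above: lowercase
+// alphanumeric characters or `-`, starting and ending with an alphanumeric character.
+var nameRegex = regexp.MustCompile(`\A[a-z0-9]([-a-z0-9]*[a-z0-9])?\z`)
+
+// IsValidAgentName reports whether name is an acceptable agent name.
+// The 63-character limit is enforced separately from the regex.
+func IsValidAgentName(name string) bool {
+    return len(name) <= 63 && nameRegex.MatchString(name)
+}
+```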
+
+## Multiple agents in a cluster
+
+A Kubernetes cluster may have 0 or more agents running in it. Each agent likely
+has a different configuration. Some may enable features A and B, and some may
+enable features B and C. This flexibility enables different groups of people to
+use different features of the agent in the same cluster.
+
+For example, [Priyanka (Platform Engineer)](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#priyanka-platform-engineer)
+may want to use cluster-wide features of the agent, while
+[Sasha (Software Developer)](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#sasha-software-developer)
+uses the agent that only has access to a particular namespace.
+
+Each agent is likely running using a
+[`ServiceAccount`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/),
+a distinct Kubernetes identity, with a distinct set of permissions attached to it.
+These permissions enable the agent administrator to follow the
+[principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)
+and minimize the permissions each particular agent needs.
+
+## Kubernetes Agent authentication
+
+When adding a new agent, GitLab provides the user with a bearer access token. The
+agent uses this token to authenticate with GitLab. This token is a random string
+and does not encode any information in it, but it is secret and must
+be treated with care. Store it as a `Secret` in Kubernetes.
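+
+As an illustration, here is a hedged client-go sketch of storing the token in a
+`Secret`. The secret name and key are assumptions made for this example; in
+practice you would usually create the `Secret` with `kubectl` or your
+installation tooling:
+
+```go
+package agenttoken
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+)
+
+// storeAgentToken saves the bearer token in a Secret so it can be mounted or
+// read at runtime instead of being kept in plain configuration.
+func storeAgentToken(ctx context.Context, clientset kubernetes.Interface, namespace, token string) error {
+    secret := &corev1.Secret{
+        ObjectMeta: metav1.ObjectMeta{Name: "gitlab-agent-token"}, // hypothetical name
+        StringData: map[string]string{"token": token},
+    }
+    _, err := clientset.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
+    return err
+}
+```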
+
+Each agent can have 0 or more tokens in a GitLab database. Having several valid
+tokens helps you rotate tokens without needing to re-register an agent. Each token
+record in the database has the following fields:
+
+- Agent identity it belongs to.
+- Token value. Encrypted at rest.
+- Creation time.
+- Who created it.
+- Revocation flag to mark token as revoked.
+- Revocation time.
+- Who revoked it.
+- A text field to store any comments the administrator may want to make about the token for future reference.
+
+Tokens can be managed by users with the `maintainer` role or a higher level of
+[permissions](../../user/permissions.md).
+
+Tokens are immutable except for the following fields, which can be updated:
+
+- Revocation flag. Can be updated to `true` only once, and is immutable after that.
+- Revocation time. Set to the current time when the revocation flag is set, and immutable after that.
+- Comments field. Can be updated any number of times, including after the token has been revoked.
+
+The agent sends its token, along with each request, to GitLab to authenticate itself.
+For each request, GitLab checks the token's validity:
+
+- Does the token exist in the database?
+- Has the token been revoked?
+
+This information may be cached for some time to reduce load on the database.
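+
+For example, a minimal Go sketch of how a gRPC client could attach such a bearer
+token to every request through per-RPC credentials (a sketch only; the real
+`agentk` implementation may differ):
+
+```go
+package agentauth
+
+import (
+    "context"
+    "crypto/tls"
+
+    "google.golang.org/grpc"
+    "google.golang.org/grpc/credentials"
+)
+
+// tokenCredentials sends the agent access token as a bearer token with every RPC.
+type tokenCredentials struct {
+    token string
+}
+
+func (c tokenCredentials) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
+    return map[string]string{"authorization": "Bearer " + c.token}, nil
+}
+
+// RequireTransportSecurity returns true so the token is never sent in plain text.
+func (c tokenCredentials) RequireTransportSecurity() bool { return true }
+
+// dialKas connects to kas with the token attached to each request.
+func dialKas(address, token string) (*grpc.ClientConn, error) {
+    return grpc.Dial(address,
+        grpc.WithPerRPCCredentials(tokenCredentials{token: token}),
+        grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
+    )
+}
+```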
+
+## Kubernetes Agent authorization
+
+GitLab provides the following information in its response for a given Agent access token:
+
+- Agent configuration Git repository. (The agent doesn't support per-folder authorization.)
+- Agent name.
diff --git a/doc/development/agent/index.md b/doc/development/agent/index.md
new file mode 100644
index 00000000000..95661c8ddbd
--- /dev/null
+++ b/doc/development/agent/index.md
@@ -0,0 +1,87 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Kubernetes Agent development **(PREMIUM ONLY)**
+
+This page contains developer-specific information about the GitLab Kubernetes Agent.
+[End-user documentation about the GitLab Kubernetes Agent](../../user/clusters/agent/index.md)
+is also available.
+
+The agent can help you perform tasks like these:
+
+- Integrate a cluster, located behind a firewall or NAT, with GitLab. To
+ learn more, read [issue #212810, Invert the model GitLab.com uses for Kubernetes integration by leveraging long lived reverse tunnels](https://gitlab.com/gitlab-org/gitlab/-/issues/212810).
+- Access API endpoints in a cluster in real time. For an example use case, read
+ [issue #218220, Allow Prometheus in K8s cluster to be installed manually](https://gitlab.com/gitlab-org/gitlab/-/issues/218220#note_348729266).
+- Enable real-time features by pushing information about events happening in a cluster.
+ For example, you could build a cluster view dashboard to visualize changes in progress
+ in a cluster. For more information about these efforts, read about the
+ [Real-Time Working Group](https://about.gitlab.com/company/team/structure/working-groups/real-time/).
+- Enable a [cache of Kubernetes objects through informers](https://github.com/kubernetes/client-go/blob/ccd5becdffb7fd8006e31341baaaacd14db2dcb7/tools/cache/shared_informer.go#L34-L183),
+ kept up-to-date with very low latency. This cache helps you:
+
+ - Reduce or eliminate information propagation latency by avoiding Kubernetes API calls
+ and polling, and only fetching data from an up-to-date cache.
+ - Lower the load placed on the Kubernetes API by removing polling.
+ - Eliminate any rate-limiting errors by removing polling.
+ - Simplify backend code by replacing polling code with cache access. While it's another
+ API call, no polling is needed. This example describes [fetching cached data synchronously from the front end](https://gitlab.com/gitlab-org/gitlab/-/issues/217792#note_348582537) instead of fetching data from the Kubernetes API.
+
+## Architecture of the Kubernetes Agent
+
+The GitLab Kubernetes Agent and the GitLab Kubernetes Agent Server use
+[bidirectional streaming](https://grpc.io/docs/what-is-grpc/core-concepts/#bidirectional-streaming-rpc)
+to allow the connection acceptor (the gRPC server, GitLab Kubernetes Agent Server) to
+act as a client. The connection acceptor sends requests as gRPC replies. The client-server
+relationship is inverted because the connection must be initiated from inside the
+Kubernetes cluster to bypass any firewall or NAT the cluster may be located behind.
+To learn more about this inversion, read
+[issue #212810](https://gitlab.com/gitlab-org/gitlab/-/issues/212810).
+
+This diagram describes how GitLab (`GitLab RoR`), the GitLab Kubernetes Agent (`agentk`), and the GitLab Kubernetes Agent Server (`kas`) work together.
+
+```mermaid
+graph TB
+ agentk -- gRPC bidirectional streaming --> kas
+
+ subgraph "GitLab"
+ kas[kas]
+ GitLabRoR[GitLab RoR] -- gRPC --> kas
+ kas -- gRPC --> Gitaly[Gitaly]
+ kas -- REST API --> GitLabRoR
+ end
+
+ subgraph "Kubernetes cluster"
+ agentk[agentk]
+ end
+```
+
+- `GitLab RoR` is the main GitLab application. It uses gRPC to talk to `kas`.
+- `agentk` is the GitLab Kubernetes Agent. It keeps a connection established to a
+ `kas` instance, waiting for requests to process. It may also actively send information
+ about things happening in the cluster.
+- `kas` is the GitLab Kubernetes Agent Server, and is responsible for:
+ - Accepting requests from `agentk`.
+ - [Authentication of requests](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/identity_and_auth.md) from `agentk` by querying `GitLab RoR`.
+  - Fetching the agent's configuration from a corresponding Git repository by querying Gitaly.
+ - Matching incoming requests from `GitLab RoR` with existing connections from
+ the right `agentk`, forwarding requests to it and forwarding responses back.
+ - (Optional) Sending notifications through ActionCable for events received from `agentk`.
+ - Polling manifest repositories for [GitOps support](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/gitops.md) by communicating with Gitaly.
+
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+To learn more about how the repository is structured, see
+[GitLab Kubernetes Agent repository overview](https://www.youtube.com/watch?v=j8CyaCWroUY).
+
+## Guiding principles
+
+GitLab prefers to add logic into `kas` rather than `agentk`. `agentk` should be kept
+streamlined and small to minimize the need for upgrades. On GitLab.com, `kas` is
+managed by GitLab, so upgrades and features can be added without requiring you
+to upgrade `agentk` in your clusters.
+
+`agentk` can't be viewed as a dumb reverse proxy because features are planned to be built
+[on top of the cache with informers](https://github.com/kubernetes/client-go/blob/ccd5becdffb7fd8006e31341baaaacd14db2dcb7/tools/cache/shared_informer.go#L34-L183).
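+
+For context, here is a minimal client-go sketch of the kind of informer-backed
+cache described above. It keeps an in-memory view of Pods up to date from watch
+events instead of polling (illustrative, not the agent's actual code):
+
+```go
+package informerexample
+
+import (
+    "time"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/client-go/informers"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/cache"
+)
+
+// watchPods builds an event-driven cache of Pods and calls onChange whenever a
+// Pod is added or updated, removing the need to poll the Kubernetes API.
+func watchPods(clientset kubernetes.Interface, stopCh <-chan struct{}, onChange func(*corev1.Pod)) {
+    factory := informers.NewSharedInformerFactory(clientset, 30*time.Minute)
+    podInformer := factory.Core().V1().Pods().Informer()
+    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+        AddFunc:    func(obj interface{}) { onChange(obj.(*corev1.Pod)) },
+        UpdateFunc: func(_, newObj interface{}) { onChange(newObj.(*corev1.Pod)) },
+    })
+    factory.Start(stopCh)
+    factory.WaitForCacheSync(stopCh) // block until the initial listing is cached
+}
+```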
diff --git a/doc/development/agent/local.md b/doc/development/agent/local.md
new file mode 100644
index 00000000000..75d45366238
--- /dev/null
+++ b/doc/development/agent/local.md
@@ -0,0 +1,58 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Run the Kubernetes Agent locally **(PREMIUM ONLY)**
+
+You can run `kas` and `agentk` locally to test the [Kubernetes Agent](index.md) yourself.
+
+1. Create a `cfg.yaml` file from the contents of
+ [`config_example.yaml`](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/pkg/kascfg/config_example.yaml), or this example:
+
+ ```yaml
+ agent:
+ listen:
+ network: tcp
+ address: 127.0.0.1:8150
+ websocket: false
+ gitops:
+ poll_period: "10s"
+ gitlab:
+ address: http://localhost:3000
+ authentication_secret_file: /Users/tkuah/code/ee-gdk/gitlab/.gitlab_kas_secret
+ ```
+
+1. Create a `token.txt`. This is the token for
+ [the agent you created](../../user/clusters/agent/index.md#create-an-agent-record-in-gitlab). This file must not contain a newline character. You can create the file with this command:
+
+ ```shell
+ echo -n "<TOKEN>" > token.txt
+ ```
+
+1. Start the binaries with the following commands:
+
+ ```shell
+ # Need GitLab to start
+ gdk start
+ # Stop GDK's version of kas
+ gdk stop gitlab-k8s-agent
+
+ # Start kas
+ bazel run //cmd/kas -- --configuration-file="$(pwd)/cfg.yaml"
+ ```
+
+1. In a new terminal window, run this command to start `agentk`:
+
+ ```shell
+ bazel run //cmd/agentk -- --kas-address=grpc://127.0.0.1:8150 --token-file="$(pwd)/token.txt"
+ ```
+
+You can also inspect the
+[Makefile](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/Makefile)
+for more targets.
+
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+To learn more about how the repository is structured, see
+[GitLab Kubernetes Agent repository overview](https://www.youtube.com/watch?v=j8CyaCWroUY).
diff --git a/doc/development/agent/routing.md b/doc/development/agent/routing.md
new file mode 100644
index 00000000000..43cc78ccdfb
--- /dev/null
+++ b/doc/development/agent/routing.md
@@ -0,0 +1,223 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Routing `kas` requests in the Kubernetes Agent **(PREMIUM ONLY)**
+
+This document describes how `kas` routes requests to concrete `agentk` instances.
+GitLab must talk to GitLab Kubernetes Agent Server (`kas`) to:
+
+- Get information about connected agents. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/249560).
+- Interact with agents. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/230571).
+- Interact with Kubernetes clusters. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/240918).
+
+Each agent connects to an instance of `kas` and keeps an open connection. When
+GitLab must talk to a particular agent, a `kas` instance connected to this agent must
+be found, and the request routed to it.
+
+## System design
+
+For an architecture overview, see
+[architecture.md](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/architecture.md).
+
+```mermaid
+flowchart LR
+ subgraph "Kubernetes 1"
+ agentk1p1["agentk 1, Pod1"]
+ agentk1p2["agentk 1, Pod2"]
+ end
+
+ subgraph "Kubernetes 2"
+ agentk2p1["agentk 2, Pod1"]
+ end
+
+ subgraph "Kubernetes 3"
+ agentk3p1["agentk 3, Pod1"]
+ end
+
+ subgraph kas
+ kas1["kas 1"]
+ kas2["kas 2"]
+ kas3["kas 3"]
+ end
+
+ GitLab["GitLab Rails"]
+ Redis
+
+ GitLab -- "gRPC to any kas" --> kas
+ kas1 -- register connected agents --> Redis
+ kas2 -- register connected agents --> Redis
+ kas1 -- lookup agent --> Redis
+
+ agentk1p1 -- "gRPC" --> kas1
+ agentk1p2 -- "gRPC" --> kas2
+ agentk2p1 -- "gRPC" --> kas1
+ agentk3p1 -- "gRPC" --> kas2
+```
+
+For this architecture, this diagram shows a request to `agentk 3, Pod1` for the list of pods:
+
+```mermaid
+sequenceDiagram
+ GitLab->>+kas1: Get list of running<br />Pods from agentk<br />with agent_id=3
+ Note right of kas1: kas1 checks whether an agent<br />with agent_id=3 is connected to it.<br />None is, so kas1 queries Redis.
+ kas1->>+Redis: Get list of connected agents<br />with agent_id=3
+ Redis-->-kas1: List of connected agents<br />with agent_id=3
+ Note right of kas1: kas1 picks a specific agentk instance<br />to address and talks to<br />the corresponding kas instance,<br />specifying which agentk instance<br />to route the request to.
+ kas1->>+kas2: Get the list of running Pods<br />from agentk 3, Pod1
+ kas2->>+agentk 3 Pod1: Get list of Pods
+ agentk 3 Pod1->>-kas2: List of Pods
+ kas2-->>-kas1: List of running Pods<br />from agentk 3, Pod1
+ kas1-->>-GitLab: List of running Pods<br />from agentk with agent_id=3
+```
+
+Each `kas` instance tracks the agents connected to it in Redis. For each agent, it
+stores a serialized protobuf object with information about the agent. When an agent
+disconnects, `kas` removes all corresponding information from Redis. For both events,
+`kas` publishes a notification to a Redis [pub-sub channel](https://redis.io/topics/pubsub).
+
+Each agent, while logically a single entity, can have multiple replicas (multiple pods)
+in a cluster. `kas` accommodates that and records per-replica (generally per-connection)
+information. Each open `GetConfiguration()` streaming request is given
+a unique identifier which, combined with agent ID, identifies an `agentk` instance.
+
+gRPC can keep multiple TCP connections open for a single target host. `agentk` only
+runs one `GetConfiguration()` streaming request. `kas` uses that connection, and
+doesn't see idle TCP connections because they are handled by the gRPC framework.
+
+Each `kas` instance provides information to Redis, so other `kas` instances can discover and access it.
+
+Information is stored in Redis with an [expiration time](https://redis.io/commands/expire),
+to expire information for `kas` instances that become unavailable. To prevent
+information from expiring too quickly, `kas` periodically updates the expiration time
+for valid entries. Before terminating, `kas` cleans up the information it adds into Redis.
+
+When `kas` must atomically update multiple data structures in Redis, it uses
+[transactions](https://redis.io/topics/transactions) to ensure data consistency.
+Grouped data items must have the same expiration time.
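+
+To make this concrete, the following is a hedged Go sketch of registering an
+agent connection with a shared expiration time inside a Redis transaction, and
+publishing a pub-sub notification afterwards. The `go-redis` client, the key
+layout, and the channel name are assumptions made for this example:
+
+```go
+package tracker
+
+import (
+    "context"
+    "fmt"
+    "strconv"
+    "time"
+
+    "github.com/go-redis/redis/v8"
+)
+
+const connectionTTL = 5 * time.Minute
+
+// registerConnection stores per-connection information for an agent, gives the
+// whole key a single expiration time (refreshed periodically while the agent
+// stays connected), and notifies other kas instances over pub-sub.
+func registerConnection(ctx context.Context, rdb *redis.Client, agentID, connectionID int64, info []byte) error {
+    key := fmt.Sprintf("connected_agents:%d", agentID) // hypothetical key layout
+    _, err := rdb.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
+        pipe.HSet(ctx, key, strconv.FormatInt(connectionID, 10), info)
+        pipe.Expire(ctx, key, connectionTTL)
+        return nil
+    })
+    if err != nil {
+        return err
+    }
+    return rdb.Publish(ctx, "agent_connections", agentID).Err()
+}
+```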
+
+In addition to the existing `agentk -> kas` gRPC endpoint, `kas` exposes two new,
+separate gRPC endpoints for GitLab and for `kas -> kas` requests. Each endpoint
+is a separate network listener, making it easier to control network access to endpoints
+and allowing separate configuration for each endpoint.
+
+Databases, like PostgreSQL, aren't used because the data is transient, with no need
+to reliably persist it.
+
+### `GitLab : kas` external endpoint
+
+GitLab authenticates with `kas` using JWT and the same shared secret used by the
+`kas -> GitLab` communication. The JWT issuer should be `gitlab` and the audience
+should be `gitlab-kas`.
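+
+As an illustration, a minimal Go sketch of issuing such a token with the
+`github.com/golang-jwt/jwt/v4` package (an assumed library choice; the token
+lifetime shown is arbitrary):
+
+```go
+package kasauth
+
+import (
+    "time"
+
+    "github.com/golang-jwt/jwt/v4"
+)
+
+// newGitLabToKasToken signs a short-lived JWT with the shared secret, using the
+// issuer and audience values that kas expects from GitLab.
+func newGitLabToKasToken(sharedSecret []byte) (string, error) {
+    now := time.Now()
+    claims := jwt.RegisteredClaims{
+        Issuer:    "gitlab",
+        Audience:  jwt.ClaimStrings{"gitlab-kas"},
+        IssuedAt:  jwt.NewNumericDate(now),
+        ExpiresAt: jwt.NewNumericDate(now.Add(5 * time.Minute)),
+    }
+    return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(sharedSecret)
+}
+```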
+
+When accessed through this endpoint, `kas` plays the role of request router.
+
+If a request from GitLab comes but no connected agent can handle it, `kas` blocks
+and waits for a suitable agent to connect to it or to another `kas` instance. It
+stops waiting when the client disconnects, or when a long timeout elapses, such
+as the client's own timeout. `kas` is notified of new agent connections through a
+[pub-sub channel](https://redis.io/topics/pubsub) to avoid frequent polling.
+When a suitable agent connects, `kas` routes the request to it.
+
+### `kas : kas` internal endpoint
+
+This endpoint is an implementation detail, an internal API, and should not be used
+by any other system. It's protected by JWT using a secret, shared among all `kas`
+instances. No other system must have access to this secret.
+
+When accessed through this endpoint, `kas` uses the request itself to determine
+which `agentk` to send the request to. It prevents request cycles by only following
+the instructions in the request, rather than doing discovery. It's the responsibility
+of the `kas` receiving the request from the _external_ endpoint to retry and re-route
+requests. This method ensures a single central component for each request can determine
+how a request is routed, rather than distributing the decision across several `kas` instances.
+
+### API definitions
+
+```proto
+syntax = "proto3";
+
+import "google/protobuf/timestamp.proto";
+
+message KasAddress {
+ string ip = 1;
+ uint32 port = 2;
+}
+
+message ConnectedAgentInfo {
+ // Agent id.
+ int64 id = 1;
+ // Identifies a particular agentk->kas connection. Randomly generated when agent connects.
+ int64 connection_id = 2;
+ string version = 3;
+ string commit = 4;
+ // Pod namespace.
+ string pod_namespace = 5;
+ // Pod name.
+ string pod_name = 6;
+ // When the connection was established.
+ google.protobuf.Timestamp connected_at = 7;
+ KasAddress kas_address = 8;
+ // What else do we need?
+}
+
+message KasInstanceInfo {
+ string version = 1;
+ string commit = 2;
+ KasAddress address = 3;
+ // What else do we need?
+}
+
+message ConnectedAgentsForProjectRequest {
+ int64 project_id = 1;
+}
+
+message ConnectedAgentsForProjectResponse {
+  // There may be 0 or more agents with the same id, depending on the number of running Pods.
+ repeated ConnectedAgentInfo agents = 1;
+}
+
+message ConnectedAgentsByIdRequest {
+ int64 agent_id = 1;
+}
+
+message ConnectedAgentsByIdResponse {
+ repeated ConnectedAgentInfo agents = 1;
+}
+
+// API for use by GitLab.
+service KasApi {
+ // Connected agents for a particular configuration project.
+ rpc ConnectedAgentsForProject (ConnectedAgentsForProjectRequest) returns (ConnectedAgentsForProjectResponse) {
+ }
+ // Connected agents for a particular agent id.
+ rpc ConnectedAgentsById (ConnectedAgentsByIdRequest) returns (ConnectedAgentsByIdResponse) {
+ }
+ // Depends on the need, but here is the call from the example above.
+ rpc GetPods (GetPodsRequest) returns (GetPodsResponse) {
+ }
+}
+
+message Pod {
+ string namespace = 1;
+ string name = 2;
+}
+
+message GetPodsRequest {
+ int64 agent_id = 1;
+ int64 connection_id = 2;
+}
+
+message GetPodsResponse {
+ repeated Pod pods = 1;
+}
+
+// Internal API for use by kas for kas -> kas calls.
+service KasInternal {
+ // Depends on the need, but here is the call from the example above.
+ rpc GetPods (GetPodsRequest) returns (GetPodsResponse) {
+ }
+}
+```
diff --git a/doc/development/agent/user_stories.md b/doc/development/agent/user_stories.md
new file mode 100644
index 00000000000..2929573ffd3
--- /dev/null
+++ b/doc/development/agent/user_stories.md
@@ -0,0 +1,77 @@
+---
+stage: Configure
+group: Configure
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Kubernetes Agent user stories **(PREMIUM ONLY)**
+
+The [personas in action](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#user-personas)
+for the Kubernetes Agent are:
+
+- [Sasha, the Software Developer](https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/#sasha-software-developer).
+- [Allison, the Application Operator](https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/#allison-application-ops).
+- [Priyanka, the Platform Engineer](https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/#priyanka-platform-engineer).
+
+[Devon, the DevOps engineer](https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/#devon-devops-engineer)
+is intentionally excluded here, as DevOps is more of a role than a persona.
+
+There are various workflows to support, so some user stories might seem to contradict each other. They don't.
+
+## Software Developer user stories
+
+<!-- vale gitlab.FirstPerson = NO -->
+
+- As a Software Developer, I want to push my code, and move to the next development task,
+ to work on business applications.
+- As a Software Developer, I want to set necessary dependencies and resource requirements
+ together with my application code, so my code runs fine after deployment.
+
+<!-- vale gitlab.FirstPerson = YES -->
+
+## Application Operator user stories
+
+<!-- vale gitlab.FirstPerson = NO -->
+
+- As an Application Operator, I want to standardize the deployments used by my teams,
+ so I can support all teams with minimal effort.
+- As an Application Operator, I want to have a single place to define all the deployments,
+ so I can assure security fixes are applied everywhere.
+- As an Application Operator, I want to offer a set of predefined templates to
+ Software Developers, so they can get started quickly and can deploy to production
+ without my intervention, and I am not a bottleneck.
+- As an Application Operator, I want to know exactly what changes are being deployed,
+ so I can fulfill my SLAs.
+- As an Application Operator, I want deep insights into what versions of my applications
+ are running and want to be able to debug them, so I can fix operational issues.
+- As an Application Operator, I want application code to be automatically deployed
+ to staging environments when new versions are available.
+- As an Application Operator, I want to follow my preferred deployment strategy,
+ so I can move code into production in a reliable way.
+- As an Application Operator, I want to review all code before it's deployed into production,
+ so I can fulfill my SLAs.
+- As an Application Operator, I want to be notified before deployment when new code needs my attention,
+ so I can review it swiftly.
+
+<!-- vale gitlab.FirstPerson = YES -->
+
+## Platform Engineer user stories
+
+<!-- vale gitlab.FirstPerson = NO -->
+
+- As a Platform Engineer, I want to restrict customizations to preselected values
+ for Operators, so I can fulfill my SLAs.
+- As a Platform Engineer, I want to allow some level of customization to Operators,
+ so I don't become a bottleneck.
+- As a Platform Engineer, I want to define all deployments in a single place, so
+ I can assure security fixes are applied everywhere.
+- As a Platform Engineer, I want to define the infrastructure by code, so my
+ infrastructure management is testable, reproducible, traceable, and scalable.
+- As a Platform Engineer, I want to define various policies that applications must
+ follow, so that I can fulfill my SLAs.
+- As a Platform Engineer, I want approved tooling for log management and persistent storage,
+ so I can scale, secure, and manage them as needed.
+- As a Platform Engineer, I want to be alerted when my infrastructure differs from
+ its definition, so I can make sure that everything is configured as expected.
+
+<!-- vale gitlab.FirstPerson = YES -->
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 0e81a332d6c..832a89ecac1 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GraphQL API style guide
-This document outlines the style guide for GitLab's [GraphQL API](../api/graphql/index.md).
+This document outlines the style guide for the GitLab [GraphQL API](../api/graphql/index.md).
## How GitLab implements GraphQL
@@ -19,7 +19,7 @@ which is exposed as an API endpoint at `/api/graphql`.
## Deep Dive
In March 2019, Nick Thomas hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`)
-on GitLab's [GraphQL API](../api/graphql/index.md) to share his domain specific knowledge
+on the GitLab [GraphQL API](../api/graphql/index.md) to share his domain specific knowledge
with anyone who may work in this part of the codebase in the future. You can find the
[recording on YouTube](https://www.youtube.com/watch?v=-9L_1MWrjkg), and the slides on
[Google Slides](https://docs.google.com/presentation/d/1qOTxpkTdHIp1CRjuTvO-aXg0_rUtzE3ETfLUdnBB5uQ/edit)
@@ -44,7 +44,7 @@ add a `HTTP_PRIVATE_TOKEN` header.
## Global IDs
-GitLab's GraphQL API uses Global IDs (i.e: `"gid://gitlab/MyObject/123"`)
+The GitLab GraphQL API uses Global IDs (for example, `"gid://gitlab/MyObject/123"`)
and never database primary key IDs.
Global ID is [a convention](https://graphql.org/learn/global-object-identification/)
@@ -154,7 +154,7 @@ Further reading:
### Exposing Global IDs
-In keeping with GitLab's use of [Global IDs](#global-ids), always convert
+In keeping with the GitLab use of [Global IDs](#global-ids), always convert
database primary key IDs into Global IDs when you expose them.
All fields named `id` are
@@ -179,7 +179,7 @@ end
### Connection types
-TIP: **Tip:**
+NOTE:
For specifics on implementation, see [Pagination implementation](#pagination-implementation).
GraphQL uses [cursor based
@@ -268,8 +268,8 @@ query($project_path: ID!) {
}
```
-To ensure that we get consistent ordering, we will append an ordering on the primary
-key, in descending order. This is usually `id`, so basically we will add `order(id: :desc)`
+To ensure that we get consistent ordering, we append an ordering on the primary
+key, in descending order. This is usually `id`, so we add `order(id: :desc)`
to the end of the relation. A primary key _must_ be available on the underlying table.
#### Shortcut fields
@@ -310,12 +310,12 @@ class MergeRequestPermissionsType < BasePermissionType
abilities :admin_merge_request, :update_merge_request, :create_note
ability_field :resolve_note,
- description: 'Indicates the user can resolve discussions on the merge request'
+ description: 'Indicates the user can resolve discussions on the merge request.'
permission_field :push_to_source_branch, method: :can_push_to_source_branch?
end
```
-- **`permission_field`**: Will act the same as `graphql-ruby`'s
+- **`permission_field`**: Acts the same as `graphql-ruby`'s
`field` method but setting a default description and type and making
them non-nullable. These options can still be overridden by adding
them as arguments.
@@ -323,7 +323,7 @@ end
behaves the same way as `permission_field` and the same
arguments can be overridden.
- **`abilities`**: Allows exposing several abilities defined in our
- policies at once. The fields for these will all have be non-nullable
+ policies at once. The fields for these must all be non-nullable
booleans with a default description.
## Feature flags
@@ -331,7 +331,7 @@ end
Developers can add [feature flags](../development/feature_flags/index.md) to GraphQL
fields in the following ways:
-- Add the `feature_flag` property to a field. This will allow the field to be _hidden_
+- Add the `feature_flag` property to a field. This allows the field to be _hidden_
from the GraphQL schema when the flag is disabled.
- Toggle the return value when resolving the field.
@@ -339,7 +339,7 @@ You can refer to these guidelines to decide which approach to use:
- If your field is experimental, and its name or type is subject to
change, use the `feature_flag` property.
-- If your field is stable and its definition will not change, even after the flag is
+- If your field is stable and its definition doesn't change, even after the flag is
removed, toggle the return value of the field instead. Note that
[all fields should be nullable](#nullable-fields) anyway.
@@ -347,15 +347,15 @@ You can refer to these guidelines to decide which approach to use:
The `feature_flag` property allows you to toggle the field's
[visibility](https://graphql-ruby.org/authorization/visibility.html)
-within the GraphQL schema. This will remove the field from the schema
+within the GraphQL schema. This removes the field from the schema
when the flag is disabled.
A description is [appended](https://gitlab.com/gitlab-org/gitlab/-/blob/497b556/app/graphql/types/base_field.rb#L44-53)
to the field indicating that it is behind a feature flag.
-CAUTION: **Caution:**
-If a client queries for the field when the feature flag is disabled, the query will
-fail. Consider this when toggling the visibility of the feature on or off on
+WARNING:
+If a client queries for the field when the feature flag is disabled, the query
+fails. Consider this when toggling the visibility of the feature on or off on
production.
The `feature_flag` property does not allow the use of
@@ -369,7 +369,7 @@ Example:
```ruby
field :test_field, type: GraphQL::STRING_TYPE,
null: true,
- description: 'Some test field',
+ description: 'Some test field.',
feature_flag: :my_feature_flag
```
@@ -385,7 +385,7 @@ When applying a feature flag to toggle the value of a field, the
- State that the value of the field can be toggled by a feature flag.
- Name the feature flag.
-- State what the field will return when the feature flag is disabled (or
+- State what the field returns when the feature flag is disabled (or
enabled, if more appropriate).
Example:
@@ -394,7 +394,7 @@ Example:
field :foo, GraphQL::STRING_TYPE,
null: true,
description: 'Some test field. Will always return `null`' \
- 'if `my_feature_flag` feature flag is disabled'
+ 'if `my_feature_flag` feature flag is disabled.'
def foo
object.foo if Feature.enabled?(:my_feature_flag, object)
@@ -403,11 +403,11 @@ end
## Deprecating fields and enum values
-GitLab's GraphQL API is versionless, which means we maintain backwards
+The GitLab GraphQL API is versionless, which means we maintain backwards
compatibility with older versions of the API with every change. Rather
than removing a field or [enum value](#enums), we need to _deprecate_ it instead.
The deprecated parts of the schema can then be removed in a future release
-in accordance with [GitLab's deprecation process](../api/graphql/index.md#deprecation-process).
+in accordance with the [GitLab deprecation process](../api/graphql/index.md#deprecation-process).
Fields and enum values are deprecated using the `deprecated` property.
The value of the property is a `Hash` of:
@@ -420,12 +420,12 @@ Example:
```ruby
field :token, GraphQL::STRING_TYPE, null: true,
deprecated: { reason: 'Login via token has been removed', milestone: '10.0' },
- description: 'Token for login'
+ description: 'Token for login.'
```
The original `description` of the things being deprecated should be maintained,
-and should _not_ be updated to mention the deprecation. Instead, the `reason` will
-be appended to the `description`.
+and should _not_ be updated to mention the deprecation. Instead, the `reason`
+is appended to the `description`.
### Deprecation reason style guide
@@ -441,7 +441,7 @@ Example:
```ruby
field :designs, ::Types::DesignManagement::DesignCollectionType, null: true,
deprecated: { reason: 'Use `designCollection`', milestone: '10.0' },
- description: 'The designs associated with this issue',
+ description: 'The designs associated with this issue.',
```
```ruby
@@ -477,20 +477,20 @@ module Types
graphql_name 'TrafficLightState'
description 'State of a traffic light'
- value 'RED', description: 'Drivers must stop'
- value 'YELLOW', description: 'Drivers must stop when it is safe to'
- value 'GREEN', description: 'Drivers can start or keep driving'
+ value 'RED', description: 'Drivers must stop.'
+ value 'YELLOW', description: 'Drivers must stop when it is safe to.'
+ value 'GREEN', description: 'Drivers can start or keep driving.'
end
end
```
-If the enum will be used for a class property in Ruby that is not an uppercase string,
-you can provide a `value:` option that will adapt the uppercase value.
+If the enum is used for a class property in Ruby that is not an uppercase string,
+you can provide a `value:` option that adapts the uppercase value.
In the following example:
-- GraphQL inputs of `OPENED` will be converted to `'opened'`.
-- Ruby values of `'opened'` will be converted to `"OPENED"` in GraphQL responses.
+- GraphQL inputs of `OPENED` are converted to `'opened'`.
+- Ruby values of `'opened'` are converted to `"OPENED"` in GraphQL responses.
```ruby
module Types
@@ -498,8 +498,8 @@ module Types
graphql_name 'EpicState'
description 'State of a GitLab epic'
- value 'OPENED', value: 'opened', description: 'An open Epic'
- value 'CLOSED', value: 'closed', description: 'An closed Epic'
+ value 'OPENED', value: 'opened', description: 'An open Epic.'
+ value 'CLOSED', value: 'closed', description: 'A closed Epic.'
end
end
```
@@ -523,7 +523,7 @@ module Types
description 'Incident severity'
::IssuableSeverity.severities.keys.each do |severity|
- value severity.upcase, value: severity, description: "#{severity.titleize} severity"
+ value severity.upcase, value: severity, description: "#{severity.titleize} severity."
end
end
end
@@ -536,7 +536,7 @@ When data to be returned by GraphQL is stored as
GraphQL types whenever possible. Avoid using the `GraphQL::Types::JSON` type unless
the JSON data returned is _truly_ unstructured.
-If the structure of the JSON data varies, but will be one of a set of known possible
+If the structure of the JSON data varies, but is one of a set of known possible
structures, use a
[union](https://graphql-ruby.org/type_definitions/unions.html).
An example of the use of a union for this purpose is
@@ -562,15 +562,15 @@ We can use GraphQL types like this:
```ruby
module Types
class ChartType < BaseObject
- field :title, GraphQL::STRING_TYPE, null: true, description: 'Title of the chart'
- field :data, [Types::ChartDatumType], null: true, description: 'Data of the chart'
+ field :title, GraphQL::STRING_TYPE, null: true, description: 'Title of the chart.'
+ field :data, [Types::ChartDatumType], null: true, description: 'Data of the chart.'
end
end
module Types
class ChartDatumType < BaseObject
- field :x, GraphQL::INT_TYPE, null: true, description: 'X-axis value of the chart datum'
- field :y, GraphQL::INT_TYPE, null: true, description: 'Y-axis value of the chart datum'
+ field :x, GraphQL::INT_TYPE, null: true, description: 'X-axis value of the chart datum.'
+ field :y, GraphQL::INT_TYPE, null: true, description: 'Y-axis value of the chart datum.'
end
end
```
@@ -584,7 +584,7 @@ A description of a field or argument is given using the `description:`
keyword. For example:
```ruby
-field :id, GraphQL::ID_TYPE, description: 'ID of the resource'
+field :id, GraphQL::ID_TYPE, description: 'ID of the resource.'
```
Descriptions of fields and arguments are viewable to users through:
@@ -605,20 +605,20 @@ descriptions:
this field do?". Example: `'Indicates project has a Git repository'`.
- Always include the word `"timestamp"` when describing an argument or
field of type `Types::TimeType`. This lets the reader know that the
- format of the property will be `Time`, rather than just `Date`.
-- No `.` at end of strings.
+ format of the property is `Time`, rather than just `Date`.
+- Must end with a period (`.`).
Example:
```ruby
-field :id, GraphQL::ID_TYPE, description: 'ID of the Issue'
-field :confidential, GraphQL::BOOLEAN_TYPE, description: 'Indicates the issue is confidential'
-field :closed_at, Types::TimeType, description: 'Timestamp of when the issue was closed'
+field :id, GraphQL::ID_TYPE, description: 'ID of the issue.'
+field :confidential, GraphQL::BOOLEAN_TYPE, description: 'Indicates the issue is confidential.'
+field :closed_at, Types::TimeType, description: 'Timestamp of when the issue was closed.'
```
### `copy_field_description` helper
-Sometimes we want to ensure that two descriptions will always be identical.
+Sometimes we want to ensure that two descriptions are always identical.
For example, to keep a type field description the same as a mutation argument
when they both represent the same property.
@@ -641,13 +641,13 @@ abilities as in the Rails app.
If the:
- Currently authenticated user fails the authorization, the authorized
- resource will be returned as `null`.
-- Resource is part of a collection, the collection will be filtered to
+ resource is returned as `null`.
+- Resource is part of a collection, the collection is filtered to
exclude the objects that the user's authorization checks failed against.
Also see [authorizing resources in a mutation](#authorizing-resources).
-TIP: **Tip:**
+NOTE:
Try to load only what the currently authenticated user is allowed to
view with our existing finders first, without relying on authorization
to filter the records. This minimizes database queries and unnecessary
@@ -656,7 +656,7 @@ authorization checks of the loaded records.
### Type authorization
Authorize a type by passing an ability to the `authorize` method. All
-fields with the same type will be authorized by checking that the
+fields with the same type are authorized by checking that the
currently authenticated user has the required ability.
For example, the following authorization ensures that the currently
@@ -785,7 +785,7 @@ end
You should never re-use resolvers directly. Resolvers have a complex life-cycle, with
authorization, readiness and resolution orchestrated by the framework, and at
-each stage lazy values can be returned to take advantage of batching
+each stage [lazy values](#laziness) can be returned to take advantage of batching
opportunities. Never instantiate a resolver or a mutation in application code.
Instead, the units of code reuse are much the same as in the rest of the
@@ -803,6 +803,32 @@ overhead. If you are writing:
- A `Mutation`, feel free to lookup objects directly.
- A `Resolver` or methods on a `BaseObject`, then you want to allow for batching.
+### Error handling
+
+Resolvers may raise errors, which are converted to top-level errors as
+appropriate. All anticipated errors should be caught and transformed to an
+appropriate GraphQL error (see
+[`Gitlab::Graphql::Errors`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/graphql/errors.rb)).
+Any uncaught errors are suppressed and the client receives the message
+`Internal service error`.
+
+The one special case is permission errors. In the REST API we return
+`404 Not Found` for any resources that the user does not have permission to
+access. The equivalent behavior in GraphQL is for us to return `null` for
+all absent or unauthorized resources.
+Query resolvers **should not raise errors for unauthorized resources**.
+
+The rationale for this is that clients must not be able to distinguish between
+the absence of a record and the presence of one they do not have access to. To
+do so is a security vulnerability, since it leaks information we want to keep
+hidden.
+
+In most cases you don't need to worry about this; it is handled correctly by
+the resolver field authorization we declare with the `authorize` DSL calls. If
+you need to do something more custom, however, remember that if you encounter an
+object the `current_user` does not have access to when resolving a field, then
+the entire field should resolve to `null`.
+
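+A hypothetical sketch of how these rules might look inside a resolver's `resolve`
+method; the `project` helper, the `iid` argument, and the finder call are placeholders,
+and the error class is one of those defined in `Gitlab::Graphql::Errors` linked above:
+
+```ruby
+def resolve(iid:)
+  # Anticipated failure: transform it into a known GraphQL error type rather
+  # than letting it bubble up as an internal error.
+  raise ::Gitlab::Graphql::Errors::ArgumentError, 'iid cannot be blank' if iid.blank?
+
+  issue = project.issues.find_by(iid: iid)
+
+  # Permission check: return nil rather than raising, so the client cannot
+  # tell an absent record apart from one it is not allowed to see.
+  return unless issue && Ability.allowed?(current_user, :read_issue, issue)
+
+  issue
+end
+```
+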
### Deriving resolvers (`BaseResolver.single` and `BaseResolver.last`)
For some simple use cases, we can derive resolvers from others.
@@ -889,8 +915,8 @@ Then we can use these resolver on fields:
```ruby
# In PipelineType
-field :jobs, resolver: JobsResolver, description: 'All jobs'
-field :job, resolver: JobsResolver.single, description: 'A single job'
+field :jobs, resolver: JobsResolver, description: 'All jobs.'
+field :job, resolver: JobsResolver.single, description: 'A single job.'
```
### Correct use of `Resolver#ready?`
@@ -922,7 +948,7 @@ before calling `resolve`! An example can be seen in our [`GraphQLHelpers`](https
The full query is known in advance during execution, which means we can make use
of [lookahead](https://graphql-ruby.org/queries/lookahead.html) to optimize our
-queries, and batch load associations we know we will need. Consider adding
+queries, and batch load associations we know we need. Consider adding
lookahead support in your resolvers to avoid `N+1` performance issues.
To enable support for common lookahead use-cases (pre-loading associations when
@@ -965,7 +991,7 @@ to advertise the need for lookahead:
field :my_things, MyThingType.connection_type, null: true,
extras: [:lookahead], # Necessary
resolver: MyThingResolver,
- description: 'My things'
+ description: 'My things.'
```
For an example of real world use, please
@@ -996,7 +1022,7 @@ When using resolvers, they can and should serve as the SSoT for field metadata.
All field options (apart from the field name) can be declared on the resolver.
These include:
-- `type` (this is particularly important, and will soon be mandatory)
+- `type` (this is particularly important, and is planned to be mandatory)
- `extras`
- `description`
@@ -1034,7 +1060,7 @@ To find the parent object in your `Presenter` class:
field :computed_field, SomeType, null: true,
method: :my_computing_method,
extras: [:parent], # Necessary
- description: 'My field description'
+ description: 'My field description.'
field :resolver_field, resolver: SomeTypeResolver
@@ -1042,7 +1068,7 @@ To find the parent object in your `Presenter` class:
extras [:parent]
type SomeType, null: true
- description 'My field description'
+ description 'My field description.'
```
1. Declare your field's method in your Presenter class and have it accept the `parent` keyword argument.
@@ -1080,7 +1106,7 @@ are returned as the result of the mutation.
#### Update mutation granularity
-GitLab's service-oriented architecture means that most mutations call a Create, Delete, or Update
+The service-oriented architecture in GitLab means that most mutations call a Create, Delete, or Update
service, for example `UpdateMergeRequestService`.
For Update mutations, you might want to only update one aspect of an object, and thus only need a
_fine-grained_ mutation, for example `MergeRequest::SetWip`.
@@ -1161,10 +1187,10 @@ Example:
```ruby
argument :my_arg, GraphQL::STRING_TYPE,
required: true,
- description: "A description of the argument"
+ description: "A description of the argument."
```
-Each GraphQL `argument` defined will be passed to the `#resolve` method
+Each GraphQL `argument` defined is passed to the `#resolve` method
of a mutation as keyword arguments.
Example:
@@ -1175,7 +1201,7 @@ def resolve(my_arg:)
end
```
-`graphql-ruby` will automatically wrap up arguments into an
+`graphql-ruby` wraps up arguments into an
[input type](https://graphql.org/learn/schema/#input-types).
For example, the
@@ -1186,11 +1212,11 @@ defines these arguments (some
```ruby
argument :project_path, GraphQL::ID_TYPE,
required: true,
- description: "The project the merge request to mutate is in"
+ description: "The project the merge request to mutate is in."
argument :iid, GraphQL::STRING_TYPE,
required: true,
- description: "The iid of the merge request to mutate"
+ description: "The IID of the merge request to mutate."
argument :wip,
GraphQL::BOOLEAN_TYPE,
@@ -1207,7 +1233,7 @@ These arguments automatically generate an input type called
### Object identifier arguments
-In keeping with GitLab's use of [Global IDs](#global-ids), mutation
+In keeping with the GitLab use of [Global IDs](#global-ids), mutation
arguments should use Global IDs to identify an object and never database
primary key IDs.
@@ -1231,7 +1257,7 @@ single mutation when multiple are performed within a single request.
### The `resolve` method
The `resolve` method receives the mutation's arguments as keyword arguments.
-From here, we can call the service that will modify the resource.
+From here, we can call the service that modifies the resource.
The `resolve` method should then return a hash with the same field
names as defined on the mutation including an `errors` array. For example,
@@ -1242,7 +1268,7 @@ field:
field :merge_request,
Types::MergeRequestType,
null: true,
- description: "The merge request after mutation"
+ description: "The merge request after mutation."
```
This means that the hash returned from `resolve` in this mutation
@@ -1262,8 +1288,8 @@ should look like this:
### Mounting the mutation
To make the mutation available it must be defined on the mutation
-type that lives in `graphql/types/mutation_types`. The
-`mount_mutation` helper method will define a field based on the
+type that is stored in `graphql/types/mutation_types`. The
+`mount_mutation` helper method defines a field based on the
GraphQL-name of the mutation:
```ruby
@@ -1278,7 +1304,7 @@ module Types
end
```
-Will generate a field called `mergeRequestSetWip` that
+This generates a field called `mergeRequestSetWip` that causes
`Mutations::MergeRequests::SetWip` to be resolved.
### Authorizing resources
@@ -1301,13 +1327,13 @@ end
We can then call `authorize!` in the `resolve` method, passing in the resource we
want to validate the abilities for.
-Alternatively, we can add a `find_object` method that will load the
+Alternatively, we can add a `find_object` method that loads the
object on the mutation. This would allow you to use the
`authorized_find!` helper method.
When a user is not allowed to perform the action, or an object is not
found, we should raise a
-`Gitlab::Graphql::Errors::ResourceNotAvailable` error. Which will be
+`Gitlab::Graphql::Errors::ResourceNotAvailable` error, which is
correctly rendered to the clients.
### Errors in mutations
@@ -1418,8 +1444,8 @@ of errors should be treated as internal, and not shown to the user in specific
detail.
We need to inform the user when the mutation fails, but we do not need to
-tell them why, since they cannot have caused it, and nothing they can do will
-fix it, although we may offer to retry the mutation.
+tell them why, since they cannot have caused it, and nothing they can do
+fixes it, although we may offer to retry the mutation.
#### Categorizing errors
@@ -1483,7 +1509,7 @@ Sometimes a mutation or resolver may accept a number of optional
arguments, but we still want to validate that at least one of the optional
arguments is provided. In this situation, consider using the `#ready?`
method within your mutation or resolver to provide the validation. The
-`#ready?` method will be called before any work is done within the
+`#ready?` method is called before any work is done within the
`#resolve` method.
Example:
@@ -1504,7 +1530,7 @@ In the future this may be able to be done using `InputUnions` if
[this RFC](https://github.com/graphql/graphql-spec/blob/master/rfcs/InputUnion.md)
is merged.
-## GitLab's custom scalars
+## GitLab custom scalars
### `Types::TimeType`
@@ -1527,37 +1553,71 @@ and handles time inputs.
Example:
```ruby
-field :created_at, Types::TimeType, null: true, description: 'Timestamp of when the issue was created'
+field :created_at, Types::TimeType, null: true, description: 'Timestamp of when the issue was created.'
```
## Testing
-_full stack_ tests for a graphql query or mutation live in
+### Writing unit tests
+
+Before creating unit tests, review the following examples:
+
+- [`spec/graphql/resolvers/users_resolver_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/graphql/resolvers/users_resolver_spec.rb)
+- [`spec/graphql/mutations/issues/create_spec.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/graphql/mutations/issues/create_spec.rb)
+
+It's faster to test as much of the logic from your GraphQL queries and mutations
+as possible with unit tests, which are stored in `spec/graphql`. A sketch of such a
+unit test follows the list below.
+
+Use unit tests to verify that:
+
+- Types have the expected fields.
+- Resolvers and mutations apply authorizations and return expected data.
+- Edge cases are handled correctly.
+
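+For example, a type spec might look like the following; the field names and the use of
+the `have_graphql_fields` matcher from the GitLab test suite are illustrative only:
+
+```ruby
+require 'spec_helper'
+
+RSpec.describe GitlabSchema.types['Issue'] do
+  it 'has the expected fields' do
+    # `.at_least` allows the type to expose more fields than the ones listed here.
+    expect(described_class).to have_graphql_fields(:iid, :title, :description).at_least
+  end
+end
+```
+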
+### Writing integration tests
+
+Integration tests check the full stack for a GraphQL query or mutation and are stored in
`spec/requests/api/graphql`.
-When adding a query, the `a working graphql query` shared example can
-be used to test if the query renders valid results.
+For speed, you should test most logic in unit tests instead of integration tests.
+However, integration tests that check if data is returned verify the following
+additional items:
+
+- The mutation is actually queryable within the schema (was mounted in `MutationType`).
+- The data returned by a resolver or mutation correctly matches the
+ [return types](https://graphql-ruby.org/fields/introduction.html#field-return-type) of
+ the fields and resolves without errors.
+
+Integration tests can also verify the following items, because they invoke the
+full stack:
+
+- An argument or scalar's [`prepare`](#validating-arguments) applies correctly.
+- Logic in a resolver or mutation's [`#ready?` method](#correct-use-of-resolverready) applies correctly.
+- An [argument's `default_value`](https://graphql-ruby.org/fields/arguments.html) applies correctly.
+- Objects resolve performantly and there are no N+1 issues.
-Using the `GraphqlHelpers#all_graphql_fields_for`-helper, a query
-including all available fields can be constructed. This makes it easy
-to add a test rendering all possible fields for a query.
+When adding a query, you can use the `a working graphql query` shared example to test if the query
+renders valid results.
+
+You can construct a query including all available fields using the `GraphqlHelpers#all_graphql_fields_for`
+helper. This makes it easy to add a test rendering all possible fields for a query.
If you're adding a field to a query that supports pagination and sorting,
visit [Testing](graphql_guide/pagination.md#testing) for details.
-To test GraphQL mutation requests, `GraphqlHelpers` provides 2
+To test GraphQL mutation requests, `GraphqlHelpers` provides two
helpers: `graphql_mutation` which takes the name of the mutation, and
-a hash with the input for the mutation. This will return a struct with
+a hash with the input for the mutation. This returns a struct with
a mutation query, and prepared variables.
-This struct can then be passed to the `post_graphql_mutation` helper,
-that will post the request with the correct parameters, like a GraphQL
+You can then pass this struct to the `post_graphql_mutation` helper,
+which posts the request with the correct parameters, like a GraphQL
client would do.
-To access the response of a mutation, the `graphql_mutation_response`
-helper is available.
+To access the response of a mutation, you can use the `graphql_mutation_response`
+helper.
-Using these helpers, we can build specs like this:
+Using these helpers, you can build specs like this:
```ruby
let(:mutation) do
@@ -1629,13 +1689,13 @@ end
- Mimic the folder structure of `app/graphql/types`:
For example, tests for fields on `Types::Ci::PipelineType`
- in `app/graphql/types/ci/pipeline_type.rb` should live in
+ in `app/graphql/types/ci/pipeline_type.rb` should be stored in
`spec/requests/api/graphql/ci/pipeline_spec.rb` regardless of the query being
used to fetch the pipeline data.
## Notes about Query flow and GraphQL infrastructure
-GitLab's GraphQL infrastructure can be found in `lib/gitlab/graphql`.
+The GitLab GraphQL infrastructure can be found in `lib/gitlab/graphql`.
[Instrumentation](https://graphql-ruby.org/queries/instrumentation.html) is functionality
that wraps around a query being executed. It is implemented as a module that uses the `Instrumentation` class.
@@ -1702,3 +1762,19 @@ For guidance, see the [GraphQL API](documentation/graphql_styleguide.md) page.
## Include a changelog entry
All client-facing changes **must** include a [changelog entry](changelog.md).
+
+## Laziness
+
+One important technique unique to GraphQL for managing performance is
+using **lazy** values. Lazy values represent the promise of a result,
+allowing their action to be run later, which enables batching of queries in
+different parts of the query tree. The main example of lazy values in our code is
+the [GraphQL BatchLoader](graphql_guide/batchloader.md).
+
+To manage lazy values directly, read `Gitlab::Graphql::Lazy`, and in
+particular `Gitlab::Graphql::Laziness`. This contains `#force` and
+`#delay`, which help implement the basic operations of creation and
+elimination of laziness, where needed.
+
+For dealing with lazy values without forcing them, use
+`Gitlab::Graphql::Lazy.with_value`.
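+
+The following is an illustrative sketch only, assuming the `force`, `delay`, and
+`with_value` helpers behave as described above; `issue` and the `User` lookup are
+placeholders:
+
+```ruby
+include ::Gitlab::Graphql::Laziness
+
+# Create a lazy value: the lookup runs later, so several of them can be batched.
+lazy_author = delay { User.find_by(id: issue.author_id) }
+
+# Derive a new lazy value without forcing the original.
+lazy_name = ::Gitlab::Graphql::Lazy.with_value(lazy_author) { |user| user&.name }
+
+# Eliminate laziness when the concrete value is finally needed.
+author_name = force(lazy_name)
+```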
diff --git a/doc/development/api_styleguide.md b/doc/development/api_styleguide.md
index d21975a43d2..b2c93f16770 100644
--- a/doc/development/api_styleguide.md
+++ b/doc/development/api_styleguide.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# API style guide
@@ -34,7 +34,7 @@ for a good example):
- `desc` for the method summary. You should pass it a block for additional
details such as:
- The GitLab version when the endpoint was added. If it is behind a feature flag, mention that instead: _This feature is gated by the :feature\_flag\_symbol feature flag._
- - If the endpoint is deprecated, and if so, when will it be removed
+ - Whether the endpoint is deprecated and, if so, its planned removal date
- `params` for the method parameters. This acts as description,
[validation, and coercion of the parameters](https://github.com/ruby-grape/grape#parameter-validation-and-coercion)
@@ -72,7 +72,7 @@ parent namespaces.
– <https://github.com/ruby-grape/grape#include-parent-namespaces>
-In most cases you will want to exclude parameters from the parent namespaces:
+In most cases you should exclude parameters from the parent namespaces:
```ruby
declared(params, include_parent_namespaces: false)
@@ -93,8 +93,8 @@ User.create(params) # imagine the user submitted `admin=1`... :)
User.create(declared(params, include_parent_namespaces: false).to_h)
```
-NOTE: **Note:**
-`declared(params)` return a `Hashie::Mash` object, on which you will have to
+NOTE:
+`declared(params)` returns a `Hashie::Mash` object, on which you must
call `.to_h`.
But we can use `params[key]` directly when we access single elements.
@@ -109,7 +109,7 @@ Model.create(foo: params[:foo])
## Array types
With Grape v1.3+, Array types must be defined with a `coerce_with`
-block, or parameters will fail to validate when passed a string from an
+block, or the parameters fail to validate when passed a string from an
API request. See the [Grape upgrading
documentation](https://github.com/ruby-grape/grape/blob/master/UPGRADING.md#ensure-that-array-types-have-explicit-coercions)
for more details.
@@ -140,7 +140,7 @@ before do
end
```
-With this change, a request to PUT `/test?user_ids` will cause Grape to
+With this change, a request to PUT `/test?user_ids` causes Grape to
pass `params` to be `{ user_ids: [] }`.
There is [an open issue in the Grape tracker](https://github.com/ruby-grape/grape/issues/2068)
@@ -148,7 +148,7 @@ to make this easier.
## Using HTTP status helpers
-For non-200 HTTP responses, use the provided helpers in `lib/api/helpers.rb` to ensure correct behavior (`not_found!`, `no_content!` etc.). These will `throw` inside Grape and abort the execution of your endpoint.
+For non-200 HTTP responses, use the provided helpers in `lib/api/helpers.rb` to ensure correct behavior (`not_found!`, `no_content!` etc.). These `throw` inside Grape and abort the execution of your endpoint.
For `DELETE` requests, you should also generally use the `destroy_conditionally!` helper which by default returns a `204 No Content` response on success, or a `412 Precondition Failed` response if the given `If-Unmodified-Since` header is out of range. This helper calls `#destroy` on the passed resource, but you can also implement a custom deletion method by passing a block.
@@ -249,7 +249,7 @@ In order to avoid N+1 problems that are common when returning collections
of records in an API endpoint, we should use eager loading.
A standard way to do this within the API is for models to implement a
-scope called `with_api_entity_associations` that will preload the
+scope called `with_api_entity_associations` that preloads the
associations and data returned in the API. An example of this scope can
be seen in
[the `Issue` model](https://gitlab.com/gitlab-org/gitlab/blob/2fedc47b97837ea08c3016cf2fb773a0300a4a25/app%2Fmodels%2Fissue.rb#L62).
@@ -259,7 +259,7 @@ In situations where the same model has multiple entities in the API
discretion with applying this scope. It may be that you optimize for the
most basic entity, with successive entities building upon that scope.
-The `with_api_entity_associations` scope will also [automatically preload
+The `with_api_entity_associations` scope also [automatically preloads
data](https://gitlab.com/gitlab-org/gitlab/blob/19f74903240e209736c7668132e6a5a735954e7c/app%2Fmodels%2Ftodo.rb#L34)
for `Todo` _targets_ when returned in the [to-dos API](../api/todos.md).
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index 41fcf5301ad..c661ff3f617 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Application limits development
@@ -12,7 +12,7 @@ limits to GitLab.
## Documentation
First of all, you have to gather information and decide which are the different
-limits that will be set for the different GitLab tiers. You also need to
+limits that are set for the different GitLab tiers. You also need to
coordinate with others to [document](../administration/instance_limits.md)
and communicate those limits.
@@ -63,7 +63,7 @@ It's recommended to create two separate migration script files.
end
```
- Some plans exist only on GitLab.com. This will be a no-op for plans
+ Some plans exist only on GitLab.com. This is a no-op for plans
that do not exist.
### Plan limits validation
diff --git a/doc/development/application_secrets.md b/doc/development/application_secrets.md
index abc5ff7b985..92b18f5ad78 100644
--- a/doc/development/application_secrets.md
+++ b/doc/development/application_secrets.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Application secrets
@@ -16,20 +16,21 @@ This page is a development guide for application secrets.
| `otp_key_base` | The base key for One Time Passwords, described in [User management](../raketasks/user_management.md#rotate-two-factor-authentication-encryption-key) |
|`db_key_base` | The base key to encrypt the data for `attr_encrypted` columns |
|`openid_connect_signing_key` | The signing key for OpenID Connect |
+| `encrypted_settings_key_base` | The base key to encrypt settings files with |
## Where the secrets are stored
|Installation type |Location |
|--- |--- |
|Omnibus |[`/etc/gitlab/gitlab-secrets.json`](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration) |
-|Cloud Native GitLab Charts |[Kubernets Secrets](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/f65c3d37fc8cf09a7987544680413552fb666aac/doc/installation/secrets.md#gitlab-rails-secret)|
+|Cloud Native GitLab Charts |[Kubernetes Secrets](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/f65c3d37fc8cf09a7987544680413552fb666aac/doc/installation/secrets.md#gitlab-rails-secret)|
|Source |`<path-to-gitlab-rails>/config/secrets.yml` (Automatically generated by [01_secret_token.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/initializers/01_secret_token.rb)) |
## Warning: Before you add a new secret to application secrets
Before you add a new secret to [`config/initializers/01_secret_token.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/initializers/01_secret_token.rb),
-make sure you also update Omnibus GitLab or updates will fail. Omnibus is responsible for writing the `secrets.yml` file.
-If Omnibus doesn't know about a secret, Rails will attempt to write to the file, but this will fail because Rails doesn't have write access.
+make sure you also update Omnibus GitLab or updates fail. Omnibus is responsible for writing the `secrets.yml` file.
+If Omnibus doesn't know about a secret, Rails attempts to write to the file, but this fails because Rails doesn't have write access.
The same rules apply to Cloud Native GitLab charts, you must update the charts at first.
In case you need the secret to have same value on each node (which is usually the case) you need to make sure it's configured for all
GitLab.com environments prior to changing this file.
@@ -43,5 +44,5 @@ GitLab.com environments prior to changing this file.
## Further iteration
-We might deprecate/remove this automatic secret generation '01_secret_token.rb' in the future.
-Please see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/222690) for more information.
+We may either deprecate or remove this automatic secret generation `01_secret_token.rb` in the future.
+Please see [issue 222690](https://gitlab.com/gitlab-org/gitlab/-/issues/222690) for more information.
diff --git a/doc/development/approval_rules.md b/doc/development/approval_rules.md
index d190f2b7c63..542bce1cb97 100644
--- a/doc/development/approval_rules.md
+++ b/doc/development/approval_rules.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Source Code
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Approval Rules **(STARTER)**
@@ -18,7 +18,7 @@ can change often. The code should explain those things better. The components
mentioned here are the major parts of the application for the approval rules
feature to work.
-NOTE: **Note:**
+NOTE:
This is a living document and should be updated accordingly when parts
of the codebase touched in this document are changed or removed, or when new components
are added.
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index a1097ad4ed6..f8ab97de848 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab architecture overview
@@ -94,7 +94,7 @@ The simplest way to ensure this, is to add support for your feature or service t
### Simplified component overview
This is a simplified architecture diagram that can be used to
-understand GitLab's architecture.
+understand the GitLab architecture.
A complete architecture diagram is available in our
[component diagram](#component-diagram) below.
@@ -236,7 +236,7 @@ Table description links:
| [Certificate Management](#certificate-management) | TLS Settings, Let's Encrypt | βœ… | βœ… | βš™ | βœ… | βš™ | βš™ | CE & EE |
| [Consul](#consul) | Database node discovery, failover | βš™ | ❌ | ❌ | βœ… | ❌ | ❌ | EE Only |
| [Database Migrations](#database-migrations) | Database migrations | βœ… | βœ… | βœ… | βœ… | βš™ | βœ… | CE & EE |
-| [Elasticsearch](#elasticsearch) | Improved search within GitLab | – | – | – | ❌ | – | – | EE Only |
+| [Elasticsearch](#elasticsearch) | Improved search within GitLab | – | – | – | βœ… | – | – | EE Only |
| [Gitaly](#gitaly) | Git RPC service for handling all Git calls made by GitLab | βœ… | βœ… | βœ… | βœ… | βš™ | βœ… | CE & EE |
| [GitLab Exporter](#gitlab-exporter) | Generates a variety of GitLab metrics | βœ… | βœ… | βœ… | βœ… | ❌ | ❌ | CE & EE |
| [GitLab Geo Node](#gitlab-geo) | Geographically distributed GitLab nodes | βš™ | βš™ | ❌ | βœ… | ❌ | βš™ | EE Only |
@@ -282,8 +282,8 @@ When deployed, GitLab should be considered the amalgamation of the below process
GitLab can be considered to have two layers from a process perspective:
-- **Monitoring**: Anything from this layer is not required to deliver GitLab the application, but will allow administrators more insight into their infrastructure and what the service as a whole is doing.
-- **Core**: Any process that is vital for the delivery of GitLab as a platform. If any of these processes halt there will be a GitLab outage. For the Core layer, you can further divide into:
+- **Monitoring**: Anything from this layer is not required to deliver GitLab the application, but allows administrators more insight into their infrastructure and what the service as a whole is doing.
+- **Core**: Any process that is vital for the delivery of GitLab as a platform. If any of these processes halt, a GitLab outage results. For the Core layer, you can further divide into:
- **Processors**: These processes are responsible for actually performing operations and presenting the service.
- **Data**: These services store/expose structured data for the GitLab service.
@@ -297,7 +297,7 @@ GitLab can be considered to have two layers from a process perspective:
- Process: `alertmanager`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
-[Alert manager](https://prometheus.io/docs/alerting/latest/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or Opsgenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue #45740](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45740) about what we will be alerting on.
+[Alert manager](https://prometheus.io/docs/alerting/latest/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or Opsgenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue #45740](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45740) about what we alert on.
#### Certificate management
@@ -340,7 +340,7 @@ Consul is a tool for service discovery and configuration. Consul is distributed,
- [Source](../integration/elasticsearch.md)
- [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/elasticsearch.md)
- Layer: Core Service (Data)
-- GitLab.com: [Get Advanced Search working on GitLab.com](https://gitlab.com/groups/gitlab-org/-/epics/153) epic.
+- GitLab.com: [Get Advanced Search working on GitLab.com (Closed)](https://gitlab.com/groups/gitlab-org/-/epics/153) epic.
Elasticsearch is a distributed RESTful search engine built for the cloud.
@@ -434,7 +434,7 @@ GitLab CI/CD is the open-source continuous integration service included with Git
#### GitLab Shell
-- [Project page](https://gitlab.com/gitlab-org/gitlab-shell/blob/master/README.md)
+- [Project page](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/README.md)
- Configuration:
- [Omnibus](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template)
- [Charts](https://docs.gitlab.com/charts/charts/gitlab/gitlab-shell/)
@@ -600,7 +600,7 @@ GitLab packages the popular Database to provide storage for Application meta dat
- Process: `postgres-exporter`
- GitLab.com: [Monitoring of GitLab.com](https://about.gitlab.com/handbook/engineering/monitoring/)
-[`postgres_exporter`](https://github.com/wrouesnel/postgres_exporter) is the community provided Prometheus exporter that will deliver data about PostgreSQL to Prometheus for use in Grafana Dashboards.
+[`postgres_exporter`](https://github.com/wrouesnel/postgres_exporter) is the community provided Prometheus exporter that delivers data about PostgreSQL to Prometheus for use in Grafana Dashboards.
#### Prometheus
@@ -656,10 +656,10 @@ Redis is packaged to provide a place to store:
The registry is what users use to store their own Docker images. The bundled
registry uses NGINX as a load balancer and GitLab as an authentication manager.
-Whenever a client requests to pull or push an image from the registry, it will
-return a `401` response along with a header detailing where to get an
-authentication token, in this case the GitLab instance. The client will then
-request a pull or push auth token from GitLab and retry the original request
+Whenever a client requests to pull or push an image from the registry, the registry
+returns a `401` response along with a header detailing where to get an
+authentication token, in this case the GitLab instance. The client then
+requests a pull or push auth token from GitLab and retries the original request
to the registry. Learn more about [token authentication](https://docs.docker.com/registry/spec/auth/token/).
An external registry can also be configured to use GitLab as an auth endpoint.
@@ -710,7 +710,7 @@ disabled by default.
- Process: `puma`
- GitLab.com: [Puma](../user/gitlab_com/index.md#puma)
-[Puma](https://puma.io/) is a Ruby application server that is used to run the core Rails Application that provides the user facing features in GitLab. Often process output you will see this as `bundle` or `config.ru` depending on the GitLab version.
+[Puma](https://puma.io/) is a Ruby application server that is used to run the core Rails Application that provides the user facing features in GitLab. Often this displays in process output as `bundle` or `config.ru` depending on the GitLab version.
#### Unicorn
@@ -727,7 +727,7 @@ disabled by default.
- Process: `unicorn`
- GitLab.com: [Unicorn](../user/gitlab_com/index.md#unicorn)
-[Unicorn](https://yhbt.net/unicorn/) is a Ruby application server that is used to run the core Rails Application that provides the user facing features in GitLab. Often process output you will see this as `bundle` or `config.ru` depending on the GitLab version.
+[Unicorn](https://yhbt.net/unicorn/) is a Ruby application server that is used to run the core Rails Application that provides the user facing features in GitLab. Often this displays in process output as `bundle` or `config.ru` depending on the GitLab version.
#### LDAP Authentication
@@ -792,16 +792,16 @@ It's important to understand the distinction as some processes are used in both
### GitLab Web HTTP request cycle
-When making a request to an HTTP Endpoint (think `/users/sign_in`) the request will take the following path through the GitLab Service:
+When making a request to an HTTP Endpoint (think `/users/sign_in`) the request takes the following path through the GitLab Service:
- NGINX - Acts as our first line reverse proxy.
- GitLab Workhorse - This determines if it needs to go to the Rails application or somewhere else to reduce load on Puma.
-- Puma - Since this is a web request, and it needs to access the application it will go to Puma.
+- Puma - Since this is a web request, and it needs to access the application, it routes to Puma.
- PostgreSQL/Gitaly/Redis - Depending on the type of request, it may hit these services to store or retrieve data.
### GitLab Git request cycle
-Below we describe the different paths that HTTP vs. SSH Git requests will take. There is some overlap with the Web Request Cycle but also some differences.
+Below we describe the different paths that HTTP vs. SSH Git requests take. There is some overlap with the Web Request Cycle but also some differences.
### Web request (80/443)
diff --git a/doc/development/auto_devops.md b/doc/development/auto_devops.md
index bf259e47cb1..c457573b87a 100644
--- a/doc/development/auto_devops.md
+++ b/doc/development/auto_devops.md
@@ -1,7 +1,7 @@
---
stage: Configure
group: Configure
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Auto DevOps development guide
diff --git a/doc/development/background_migrations.md b/doc/development/background_migrations.md
index 86a2aed5ea1..3ef5bf382b8 100644
--- a/doc/development/background_migrations.md
+++ b/doc/development/background_migrations.md
@@ -36,7 +36,7 @@ Some examples where background migrations can be useful:
- Populating one column based on JSON stored in another column.
- Migrating data that depends on the output of external services (e.g. an API).
-NOTE: **Note:**
+NOTE:
If the background migration is part of an important upgrade, make sure it's announced
in the release post. Discuss with your Project Manager if you're not sure the migration falls
into this category.
@@ -144,7 +144,7 @@ once.
## Cleaning Up
-NOTE: **Note:**
+NOTE:
Cleaning up any remaining background migrations _must_ be done in either a major
or minor release, you _must not_ do this in a patch release.
diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md
index e99915a24d0..421ee5d0c84 100644
--- a/doc/development/build_test_package.md
+++ b/doc/development/build_test_package.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Building a package for testing
@@ -13,7 +13,7 @@ pipeline that can be used to trigger a pipeline in the Omnibus GitLab repository
that will create:
- A deb package for Ubuntu 16.04, available as a build artifact, and
-- A Docker image, which is pushed to [Omnibus GitLab's container
+- A Docker image, which is pushed to the [Omnibus GitLab container
registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
(images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the
commit which triggered the pipeline).
diff --git a/doc/development/bulk_import.md b/doc/development/bulk_import.md
new file mode 100644
index 00000000000..40e4af923ea
--- /dev/null
+++ b/doc/development/bulk_import.md
@@ -0,0 +1,53 @@
+---
+stage: Manage
+group: Import
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# GitLab Group Migration
+
+[Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2771) in GitLab 13.7.
+
+WARNING:
+This feature is [under construction](https://gitlab.com/groups/gitlab-org/-/epics/2771) and its API/Architecture might change in the future.
+
+GitLab Group Migration is the evolution of Project and Group Import functionality. The
+goal is to give the user an easier way to migrate a whole Group, including
+Projects, from one GitLab instance to another.
+
+## Design decisions
+
+### Overview
+
+The following architectural diagram illustrates how the Group Migration
+works with a set of [ETL](#etl) pipelines that leverage the current [GitLab APIs](#api).
+
+![Simplified Component Overview](img/bulk_imports_overview_v13_7.png)
+
+### [ETL](https://www.ibm.com/cloud/learn/etl)
+
+<!-- Direct quote from the IBM URL link -->
+
+> ETL, for extract, transform and load, is a data integration process that
+> combines data from multiple data sources into a single, consistent data store
+> that is loaded into a data warehouse or other target system.
+
+Using an ETL architecture makes the code more explicit and easier to follow, test, and extend. The
+idea is to have one ETL pipeline for each relation to be imported.
+
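+To make the idea concrete, here is a minimal, hypothetical sketch of what a single-relation
+pipeline could look like. The class, method, and attribute names below are illustrative only
+and are not the actual `BulkImports` implementation:
+
+```ruby
+# Hypothetical sketch of an ETL pipeline for one relation (labels, for example).
+# The real BulkImports pipelines differ; this only illustrates the extract/transform/load split.
+class LabelsPipeline
+  def initialize(source_client, destination_group)
+    @client = source_client    # illustrative client wrapping calls to the source instance API
+    @group = destination_group # group being populated on the destination instance
+  end
+
+  def run
+    extract.map { |row| transform(row) }.each { |attrs| load(attrs) }
+  end
+
+  private
+
+  # Extract: fetch the raw rows from the source GitLab instance.
+  def extract
+    @client.get("/groups/#{@group.source_id}/labels")
+  end
+
+  # Transform: map the API payload to attributes the destination understands.
+  def transform(row)
+    { title: row['name'], color: row['color'] }
+  end
+
+  # Load: persist the relation on the destination group.
+  def load(attrs)
+    @group.labels.create!(attrs)
+  end
+end
+```
+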
+### API
+
+The current [Project](../user/project/settings/import_export.md) and [Group](../user/group/settings/import_export.md) Import features are file based, so they require an export
+step to generate the file to be imported.
+
+GitLab Group Migration leverages the [GitLab API](../api/README.md) to speed up the migration.
+
+Because we're on the road to [GraphQL](../api/README.md#road-to-graphql),
+GitLab Group Migration contributes to expanding the GraphQL API coverage, which benefits both GitLab
+and its users.
+
+### Namespace
+
+The migration process starts with the creation of a [`BulkImport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/bulk_import.rb)
+record to keep track of the migration. From there, all the code related to
+GitLab Group Migration can be found under the new `BulkImports` namespace in all the application layers.
diff --git a/doc/development/cached_queries.md b/doc/development/cached_queries.md
index 812e5f88754..4f82d721164 100644
--- a/doc/development/cached_queries.md
+++ b/doc/development/cached_queries.md
@@ -1,80 +1,101 @@
-# Cached queries guidelines
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
-Rails provides an [SQL query cache](https://guides.rubyonrails.org/caching_with_rails.html#sql-caching),
-used to cache the results of database queries for the duration of the request.
+# Cached queries guidelines
-If Rails encounters the same query again for that request,
-it will use the cached result set as opposed to running the query against the database again.
+Rails provides an [SQL query cache](https://guides.rubyonrails.org/caching_with_rails.html#sql-caching)
+which is used to cache the results of database queries for the duration of a request.
+When Rails encounters the same query again within the same request, it uses the cached
+result set instead of running the query against the database again.
-The query results are only cached for the duration of that single request, it does not persist across multiple requests.
+The query results are only cached for the duration of that single request, and
+don't persist across multiple requests.
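+
+As a quick illustration (assuming a typical Active Record model such as `User`), you can
+reproduce the same behavior outside of a request with an explicit query cache block:
+
+```ruby
+# Inside a web request Rails enables the SQL query cache automatically.
+# In a console you can opt in explicitly to observe the behavior:
+ActiveRecord::Base.cache do
+  User.first # executes the SQL query against the database
+  User.first # identical query: the result set is served from the cache (logged as CACHE)
+end
+```
+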
## Why cached queries are considered bad
-The cached queries help with reducing DB load, but they still:
+Cached queries help by reducing the load on the database, but they still:
- Consume memory.
-- Require as to re-instantiate each `ActiveRecord` object.
-- Require as to re-instantiate each relation of the object.
-- Make us spend additional CPU-cycles to look into a list of cached queries.
-
-The Cached SQL queries are cheaper, but they are not cheap at all from `memory` perspective.
-They could mask [N+1 query problem](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations),
-so we should threat them the same way we threat regular N+1 queries.
+- Require Rails to re-instantiate each `ActiveRecord` object.
+- Require Rails to re-instantiate each relation of the object.
+- Make us spend additional CPU cycles to look into a list of cached queries.
+
+Although cached queries are cheaper from a database perspective, they are potentially
+more expensive from a memory perspective. They could mask
+[N+1 query problems](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations),
+so you should treat them the same way you treat regular N+1 queries.
-In case of N+1 queries, masked with cached queries, we are executing the same query N times.
-It will not hit the database N times, it will return the cached results instead.
-This is still expensive since we need to re-initialize objects each time, and this is CPU/Memory expensive.
-Instead, we should use the same in-memory objects, if possible.
+In cases of N+1 queries masked by cached queries, the same query is executed N times.
+It doesn't hit the database N times but instead returns the cached results N times.
+This is still expensive because you need to re-initialize objects each time at a
+greater expense to the CPU and memory resources. Instead, you should use the same
+in-memory objects whenever possible.
-When we introduce a new feature, we should avoid N+1 problems,
-minimize the [query count](merge_request_performance_guidelines.md#query-counts), and pay special attention that [cached
-queries](merge_request_performance_guidelines.md#cached-queries) are not masking N+1 problems.
+When you introduce a new feature, you should:
-## How to detect
+- Avoid N+1 queries (see the preloading sketch after this list).
+- Minimize the [query count](merge_request_performance_guidelines.md#query-counts).
+- Pay special attention to ensure
+ [cached queries](merge_request_performance_guidelines.md#cached-queries) are not
+ masking N+1 problems.
+
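+The following is a small sketch of the most common way to avoid an N+1: preloading the
+association up front instead of loading it lazily inside a loop (the models shown are only an example):
+
+```ruby
+# N+1: one query for the projects, plus one query per project for its namespace.
+Project.limit(50).each { |project| puts project.namespace.name }
+
+# Preloaded: two queries in total, regardless of the number of projects.
+Project.includes(:namespace).limit(50).each { |project| puts project.namespace.name }
+```
+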
+## How to detect cached queries
### Detect potential offenders by using Kibana
-On GitLab.com, we are logging entries with the number of executed cached queries in the
-`pubsub-redis-inf-gprd*` index with the [`db_cached_count`](https://log.gprd.gitlab.net/goto/77d18d80ad84c5df1bf1da5c2cd35b82).
-We can filter endpoints that have a large number of executed cached queries. For example, if we encounter an endpoint
-that has 100+ `db_cached_count`, this could indicate that there is an N+1 problem masked with cached queries.
-We should probably investigate this endpoint further, to check if we are executing duplicated cached queries.
+GitLab.com logs entries with the number of executed cached queries in the
+`pubsub-redis-inf-gprd*` index as
+[`db_cached_count`](https://log.gprd.gitlab.net/goto/77d18d80ad84c5df1bf1da5c2cd35b82).
+You can filter by endpoints that have a large number of executed cached queries. For
+example, an endpoint with a `db_cached_count` greater than 100 can indicate an N+1 problem which
+is masked by cached queries. You should investigate this endpoint further to determine
+if it is indeed executing duplicated cached queries.
+
+For more Kibana visualizations related to cached queries, read
+[issue #259007, 'Provide metrics that would help us to detect the potential N+1 CACHED SQL calls'](https://gitlab.com/gitlab-org/gitlab/-/issues/259007).
-For more cached queries Kibana visualizations see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/259007).
+### Inspect suspicious endpoints using the Performance Bar
-### Inspect suspicious endpoint using Performance Bar
+When building features, use the
+[performance bar](../administration/monitoring/performance/performance_bar.md)
+to view the list of database queries, including cached queries. The
+performance bar shows a warning when the number of total executed and cached queries is
+greater than 100.
-When building features, you could use the [performance bar](../administration/monitoring/performance/performance_bar.md)
-to list database queries, which will include cached queries as well. The performance bar will show a warning
-when threshold of total executed queries (including cached ones) has exceeded 100 queries.
+To learn more about the statistics available to you, read the
+[Performance Bar documentation](../administration/monitoring/performance/performance_bar.md).
## What to look for
-Using [Kibana](cached_queries.md#detect-potential-offenders-by-using-kibana), you can look for a large number
-of executed cached queries. End-points with large number of `db_cached_count` could indicate that there
-are probably a lot of duplicated cached queries, which often indicates a masked N+1 problem.
+Using [Kibana](#detect-potential-offenders-by-using-kibana), you can look for a large number
+of executed cached queries. Endpoints with a large `db_cached_count` could suggest a large number
+of duplicated cached queries, which often indicates a masked N+1 problem.
-When you investigate specific endpoint, you could use
-the [performance bar](cached_queries.md#inspect-suspicious-endpoint-using-performance-bar).
-If you see a lot of similar queries, this often indicates an N+1 query issue (or a similar kind of query batching problem).
-If you see same cached query executed multiple times, this often indicates a masked N+1 query problem.
+When you investigate a specific endpoint, use
+the [performance bar](#inspect-suspicious-endpoints-using-the-performance-bar)
+to identify similar and cached queries, which may also indicate an N+1 query issue
+(or a similar kind of query batching problem).
-For example, let's say you wanted to debug `GroupMembers` page.
+### An example
-In the left corner of the performance bar you could see **Database queries** showing the total number of database queries
+For example, let's debug the "Group Members" page. In the left corner of the
+performance bar, **Database queries** shows the total number of database queries
and the number of executed cached queries:
![Performance Bar Database Queries](img/performance_bar_members_page.png)
-We can see that there are 55 cached queries. By clicking on the number, a modal window with more details is shown.
-Cached queries are marked with the `cached` label, so they are easy to spot. We can see that there are multiple duplicated
-cached queries:
+The page included 55 cached queries. Clicking the number displays a modal window
+with more details about the queries. Cached queries are marked with the `cached` label
+below the query. You can see multiple duplicate cached queries in this modal window:
![Performance Bar Cached Queries Modal](img/performance_bar_cached_queries.png)
-If we click on `...` for one of them, it will expand the actual stack trace:
+Click **...** to expand the actual stack trace:
-```shell
+```ruby
[
"app/models/group.rb:305:in `has_owner?'",
"ee/app/views/shared/members/ee/_license_badge.html.haml:1",
@@ -99,24 +120,30 @@ If we click on `...` for one of them, it will expand the actual stack trace:
]
```
-The stack trace, shows us that we obviously have an N+1 problem, since we are repeatably executing for each group member:
+The stack trace shows an N+1 problem, because the code repeatedly executes
+`group.has_owner?(current_user)` for each group member. To solve this issue,
+move the repeated line of code outside of the loop, passing the result to each rendered member instead:
-```ruby
-group.has_owner?(current_user)
-```
+```erb
+- current_user_is_group_owner = @group && @group.has_owner?(current_user)
-This is easily solvable by extracting this check, above the loop.
+= render partial: 'shared/members/member',
+ collection: @members, as: :member,
+ locals: { membership_source: @group,
+ group: @group,
+ current_user_is_group_owner: current_user_is_group_owner }
+```
-After [the fix](https://gitlab.com/gitlab-org/gitlab/-/issues/231468), we now have:
+After [fixing the cached query](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/44626/diffs#27c2761d66e496495be07d0925697f7e62b5bd14), the performance bar now shows only
+6 cached queries:
![Performance Bar Fixed Cached Queries](img/performance_bar_fixed_cached_queries.png)
## How to measure the impact of the change
-We can use the [memory profiler](performance.md#using-memory-profiler) to profile our code.
-For the previous example, we could wrap the profiler around the `Groups::GroupMembersController#index` action.
-
-We had:
+Use the [memory profiler](performance.md#using-memory-profiler) to profile your code.
+For [this example](#an-example), wrap the profiler around the `Groups::GroupMembersController#index` action (a sketch of one way to do this follows the statistics below). Before the fix, the application had
+the following statistics:
- Total allocated: 7133601 bytes (84858 objects)
- Total retained: 757595 bytes (6070 objects)
@@ -124,7 +151,8 @@ We had:
- `db_cached_count`: 55
- `db_duration`: 303ms
-After the fix, we can see that we have reduced the allocated memory as well as the number of cached queries and improved execution time:
+The fix reduced the allocated memory and the number of cached queries. These
+factors help improve the overall execution time:
- Total allocated: 5313899 bytes (65290 objects), 1810KB (25%) less
- Total retained: 685593 bytes (5278 objects), 72KB (9%) less
@@ -132,7 +160,7 @@ After the fix, we can see that we have reduced the allocated memory as well as t
- `db_cached_count`: 6 (89% less)
- `db_duration`: 162ms (87% faster)
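+A minimal sketch of one way to wrap that action, calling the `memory_profiler` gem directly
+from a Rails console (the group path is hypothetical; follow the
+[memory profiler](performance.md#using-memory-profiler) documentation for the supported workflow):
+
+```ruby
+require 'memory_profiler'
+
+# `app` is the ActionDispatch::Integration::Session available in `rails console`.
+report = MemoryProfiler.report do
+  app.get('/groups/my-group/-/group_members') # drive a single request through the index action
+end
+
+report.pretty_print(to_file: 'group_members_memory_profile.txt')
+```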
-## See also
+## For more information
- [Metrics that would help us detect the potential N+1 Cached SQL calls](https://gitlab.com/gitlab-org/gitlab/-/issues/259007)
- [Merge Request performance guidelines for cached queries](merge_request_performance_guidelines.md#cached-queries)
diff --git a/doc/development/changelog.md b/doc/development/changelog.md
index b8e879c1826..894ae5a1893 100644
--- a/doc/development/changelog.md
+++ b/doc/development/changelog.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Changelog entries
@@ -47,7 +47,7 @@ the `author` field. GitLab team members **should not**.
- Any user-facing change **must** have a changelog entry. This includes both visual changes (regardless of how minor), and changes to the rendered DOM which impact how a screen reader may announce the content.
- Any client-facing change to our REST and GraphQL APIs **must** have a changelog entry.
- Performance improvements **should** have a changelog entry.
-- Changes that need to be documented in the Product Analytics [Event Dictionary](https://about.gitlab.com/handbook/product/product-analytics-guide#event-dictionary)
+- Changes that need to be documented in the Product Analytics [Event Dictionary](https://about.gitlab.com/handbook/product/product-analytics-guide/#event-dictionary)
also require a changelog entry.
- _Any_ contribution from a community member, no matter how small, **may** have
a changelog entry regardless of these guidelines if the contributor wants one.
@@ -55,7 +55,7 @@ the `author` field. GitLab team members **should not**.
- Any docs-only changes **should not** have a changelog entry.
- Any change behind a disabled feature flag **should not** have a changelog entry.
- Any change behind an enabled feature flag **should** have a changelog entry.
-- Any change that adds new usage data metrics and changes that needs to be documented in Product Analytics [Event Dictionary](https://about.gitlab.com/handbook/product/product-analytics-guide#event-dictionary) **should** have a changelog entry.
+- Any change that adds new usage data metrics and changes that needs to be documented in Product Analytics [Event Dictionary](https://about.gitlab.com/handbook/product/product-analytics-guide/#event-dictionary) **should** have a changelog entry.
- A change that adds snowplow events **should** have a changelog entry -
- A change that [removes a feature flag](feature_flags/development.md) **should** have a changelog entry -
only if the feature flag did not default to true already.
@@ -118,7 +118,7 @@ Its simplest usage is to provide the value for `title`:
bin/changelog 'Hey DZ, I added a feature to GitLab!'
```
-If you want to generate a changelog entry for GitLab EE, you will need to pass
+If you want to generate a changelog entry for GitLab EE, you must pass
the `--ee` option:
```plaintext
@@ -144,10 +144,10 @@ At this point the script would ask you to select the category of the change (map
```
The entry filename is based on the name of the current Git branch. If you run
-the command above on a branch called `feature/hey-dz`, it will generate a
+the command above on a branch called `feature/hey-dz`, it generates a
`changelogs/unreleased/feature-hey-dz.yml` file.
-The command will output the path of the generated file and its contents:
+The command outputs the path of the generated file and its contents:
```plaintext
create changelogs/unreleased/my-feature.yml
@@ -175,7 +175,7 @@ type:
You can pass the **`--amend`** argument to automatically stage the generated
file and amend it to the previous commit.
-If you use **`--amend`** and don't provide a title, it will automatically use
+If you use **`--amend`** and don't provide a title, it uses
the "subject" of the previous commit, which is the first line of the commit
message:
diff --git a/doc/development/chaos_endpoints.md b/doc/development/chaos_endpoints.md
index 63218af857d..9104c01c980 100644
--- a/doc/development/chaos_endpoints.md
+++ b/doc/development/chaos_endpoints.md
@@ -1,14 +1,14 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Generating chaos in a test GitLab instance
As [Werner Vogels](https://twitter.com/Werner), the CTO at Amazon Web Services, famously put it, **Everything fails, all the time**.
-As a developer, it's as important to consider the failure modes in which your software will operate as much as normal operation. Doing so can mean the difference between a minor hiccup leading to a scattering of `500` errors experienced by a tiny fraction of users and a full site outage that affects all users for an extended period.
+As a developer, it's as important to consider the failure modes in which your software may operate as it is to consider normal operation. Doing so can mean the difference between a minor hiccup leading to a scattering of `500` errors experienced by a tiny fraction of users, and a full site outage that affects all users for an extended period.
To paraphrase [Tolstoy](https://en.wikipedia.org/wiki/Anna_Karenina_principle), _all happy servers are alike, but all failing servers are failing in their own way_. Luckily, there are ways we can attempt to simulate these failure modes, and the chaos endpoints are tools for assisting in this process.
@@ -24,7 +24,7 @@ Currently, there are four endpoints for simulating the following conditions:
For obvious reasons, these endpoints are not enabled by default on `production`.
They are enabled by default on **development** environments.
-DANGER: **Warning:**
+WARNING:
It is required that you secure access to the chaos endpoints using a secret token.
You should not enable them in production unless you absolutely know what you're doing.
@@ -40,17 +40,17 @@ Replace `secret` with your own secret token.
## Invoking chaos
-Once you have enabled the chaos endpoints and restarted the application, you can start testing using the endpoints.
+After you have enabled the chaos endpoints and restarted the application, you can start testing using the endpoints.
-By default, when invoking a chaos endpoint, the web worker process which receives the request will handle it. This means, for example, that if the Kill
-operation is invoked, the Puma or Unicorn worker process handling the request will be killed. To test these operations in Sidekiq, the `async` parameter on
-each endpoint can be set to `true`. This will run the chaos process in a Sidekiq worker.
+By default, when invoking a chaos endpoint, the web worker process which receives the request handles it. This means, for example, that if the Kill
+operation is invoked, the Puma or Unicorn worker process handling the request is killed. To test these operations in Sidekiq, the `async` parameter on
+each endpoint can be set to `true`. This runs the chaos process in a Sidekiq worker.
## Memory leaks
To simulate a memory leak in your application, use the `/-/chaos/leakmem` endpoint.
-The memory is not retained after the request finishes. After the request has completed, the Ruby garbage collector will attempt to recover the memory.
+The memory is not retained after the request finishes. After the request has completed, the Ruby garbage collector attempts to recover the memory.
```plaintext
GET /-/chaos/leakmem
@@ -66,8 +66,8 @@ GET /-/chaos/leakmem?memory_mb=1024&duration_s=50&async=true
| `async` | boolean | no | Set to true to leak memory in a Sidekiq background worker process |
```shell
-curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10 --header 'X-Chaos-Secret: secret'
-curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10&token=secret
+curl "http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10" --header 'X-Chaos-Secret: secret'
+curl "http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10&token=secret"
```
## CPU spin
@@ -85,12 +85,12 @@ GET /-/chaos/cpu_spin?duration_s=50&async=true
| Attribute | Type | Required | Description |
| ------------ | ------- | -------- | --------------------------------------------------------------------- |
-| `duration_s` | integer | no | Duration, in seconds, that the core will be used. Defaults to 30s |
+| `duration_s` | integer | no | Duration, in seconds, that the core is used. Defaults to 30s |
| `async` | boolean | no | Set to true to consume CPU in a Sidekiq background worker process |
```shell
-curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60 --header 'X-Chaos-Secret: secret'
-curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60&token=secret
+curl "http://localhost:3000/-/chaos/cpu_spin?duration_s=60" --header 'X-Chaos-Secret: secret'
+curl "http://localhost:3000/-/chaos/cpu_spin?duration_s=60&token=secret"
```
## DB spin
@@ -110,19 +110,19 @@ GET /-/chaos/db_spin?duration_s=50&async=true
| Attribute | Type | Required | Description |
| ------------ | ------- | -------- | --------------------------------------------------------------------------- |
| `interval_s` | float | no | Interval, in seconds, for every DB request. Defaults to 1s |
-| `duration_s` | integer | no | Duration, in seconds, that the core will be used. Defaults to 30s |
+| `duration_s` | integer | no | Duration, in seconds, that the core is used. Defaults to 30s |
| `async` | boolean | no | Set to true to perform the operation in a Sidekiq background worker process |
```shell
-curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60 --header 'X-Chaos-Secret: secret'
-curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60&token=secret
+curl "http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60" --header 'X-Chaos-Secret: secret'
+curl "http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60&token=secret"
```
## Sleep
-This endpoint is similar to the CPU Spin endpoint but simulates off-processor activity, such as network calls to backend services. It will sleep for a given duration_s.
+This endpoint is similar to the CPU Spin endpoint but simulates off-processor activity, such as network calls to backend services. It sleeps for a given `duration_s`.
-As with the CPU Spin endpoint, this may lead to your request timing out if duration_s exceeds the configured limit.
+As with the CPU Spin endpoint, this may lead to your request timing out if `duration_s` exceeds the configured limit.
```plaintext
GET /-/chaos/sleep
@@ -132,17 +132,17 @@ GET /-/chaos/sleep?duration_s=50&async=true
| Attribute | Type | Required | Description |
| ------------ | ------- | -------- | ---------------------------------------------------------------------- |
-| `duration_s` | integer | no | Duration, in seconds, that the request will sleep for. Defaults to 30s |
+| `duration_s` | integer | no | Duration, in seconds, that the request sleeps for. Defaults to 30s |
| `async` | boolean | no | Set to true to sleep in a Sidekiq background worker process |
```shell
-curl http://localhost:3000/-/chaos/sleep?duration_s=60 --header 'X-Chaos-Secret: secret'
-curl http://localhost:3000/-/chaos/sleep?duration_s=60&token=secret
+curl "http://localhost:3000/-/chaos/sleep?duration_s=60" --header 'X-Chaos-Secret: secret'
+curl "http://localhost:3000/-/chaos/sleep?duration_s=60&token=secret"
```
## Kill
-This endpoint will simulate the unexpected death of a worker process using a `kill` signal.
+This endpoint simulates the unexpected death of a worker process using a `kill` signal.
Because this endpoint uses the `KILL` signal, the worker isn't given an
opportunity to cleanup or shutdown.
@@ -157,6 +157,6 @@ GET /-/chaos/kill?async=true
| `async` | boolean | no | Set to true to kill a Sidekiq background worker process |
```shell
-curl http://localhost:3000/-/chaos/kill --header 'X-Chaos-Secret: secret'
-curl http://localhost:3000/-/chaos/kill?token=secret
+curl "http://localhost:3000/-/chaos/kill" --header 'X-Chaos-Secret: secret'
+curl "http://localhost:3000/-/chaos/kill?token=secret"
```
diff --git a/doc/development/chatops_on_gitlabcom.md b/doc/development/chatops_on_gitlabcom.md
index c64766af589..a2a0005f7cb 100644
--- a/doc/development/chatops_on_gitlabcom.md
+++ b/doc/development/chatops_on_gitlabcom.md
@@ -1,31 +1,66 @@
---
stage: Configure
group: Configure
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Chatops on GitLab.com
+# ChatOps on GitLab.com
ChatOps on GitLab.com allows GitLab team members to run various automation tasks on GitLab.com using Slack.
## Requesting access
-GitLab team-members may need access to Chatops on GitLab.com for administration
+GitLab team members may need access to ChatOps on GitLab.com for administration
tasks such as:
- Configuring feature flags.
- Running `EXPLAIN` queries against the GitLab.com production replica.
- Get deployment status of all of our environments or for a specific commit: `/chatops run auto_deploy status [commit_sha]`
-To request access to Chatops on GitLab.com:
+To request access to ChatOps on GitLab.com:
-1. Log into <https://ops.gitlab.net/users/sign_in> **using the same username** as for GitLab.com (you may have to rename it).
- 1. You could also use the "Sign in with" Google button to sign in, with your GitLab.com email address.
-1. Ask one of your team members to add you to the `chatops` project in Ops. They can do it by running `/chatops run member add <username> gitlab-com/chatops --ops` command in the `#chat-ops-test` Slack channel.
-1. If you had to change your username for GitLab.com on the first step, make sure [to reflect this information](https://gitlab.com/gitlab-com/www-gitlab-com#adding-yourself-to-the-team-page) on [the team page](https://about.gitlab.com/company/team/).
+1. Sign in to [Internal GitLab for Operations](https://ops.gitlab.net/users/sign_in)
+ with one of the following methods:
+
+ - The same username you use on GitLab.com. You may have to choose a different
+ username later.
+ - The **Sign in with Google** button, to sign in with your GitLab.com email address.
+
+1. Confirm that your username in [Internal GitLab for Operations](https://ops.gitlab.net/)
+ is the same as your username in [GitLab.com](https://gitlab.com/). If the usernames
+ don't match, update the username at [Internal GitLab for Operations](https://ops.gitlab.net/).
+
+1. Comment in your onboarding issue, and tag your onboarding buddy and your manager.
+ Request they add you to the `ops` ChatOps project by running this command
+ in the `#chat-ops-test` Slack channel, replacing `<username>` with your username:
+ `/chatops run member add <username> gitlab-com/chatops --ops`
+
+ <!-- vale gitlab.FirstPerson = NO -->
+
+ > Hi `__BUDDY_HANDLE__` and `__MANAGER_HANDLE__`, could you please add me to
+ > the ChatOps project in Ops by running this command:
+ > `/chatops run member add <username> gitlab-com/chatops --ops` in the
+ > `#chat-ops-test` Slack channel? Thanks in advance.
+
+ <!-- vale gitlab.FirstPerson = YES -->
+
+1. Ensure you've set up two-factor authentication.
+1. After you're added to the ChatOps project, run this command to check your user
+ status and ensure you can execute commands in the `#chat-ops-test` Slack channel:
+
+ ```plaintext
+ /chatops run user find <username>
+ ```
+
+ The bot guides you through the process of allowing your user to execute
+ commands in the `#chat-ops-test` Slack channel.
+
+1. If you had to change your username for GitLab.com on the first step, make sure
+ [to reflect this information](https://gitlab.com/gitlab-com/www-gitlab-com#adding-yourself-to-the-team-page)
+ on [the team page](https://about.gitlab.com/company/team/).
## See also
-- [Chatops Usage](../ci/chatops/README.md)
+- [ChatOps Usage](../ci/chatops/README.md)
- [Understanding EXPLAIN plans](understanding_explain_plans.md)
- [Feature Groups](feature_flags/development.md#feature-groups)
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
index 30ccc52ec5e..eede1d691a9 100644
--- a/doc/development/cicd/index.md
+++ b/doc/development/cicd/index.md
@@ -1,7 +1,7 @@
---
stage: Verify
group: Continuous Integration
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: index, concepts, howto
---
@@ -32,8 +32,8 @@ On the left side we have the events that can trigger a pipeline based on various
- When GitHub integration is used with [external pull requests](../../ci/ci_cd_for_external_repos/index.md#pipelines-for-external-pull-requests).
- When an upstream pipeline contains a [bridge job](../../ci/yaml/README.md#trigger) which triggers a downstream pipeline.
-Triggering any of these events will invoke the [`CreatePipelineService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/create_pipeline_service.rb)
-which takes as input event data and the user triggering it, then will attempt to create a pipeline.
+Triggering any of these events invokes the [`CreatePipelineService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/create_pipeline_service.rb),
+which takes the event data and the triggering user as input, and then attempts to create a pipeline.
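+As a rough sketch of that entry point (the exact parameters and return value can differ
+between GitLab versions, so treat this as illustrative rather than a reference):
+
+```ruby
+# Illustrative only: a push-like event handed to the pipeline creation service.
+Ci::CreatePipelineService
+  .new(project, current_user, ref: 'master')
+  .execute(:push)
+```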
The `CreatePipelineService` relies heavily on the [`YAML Processor`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/yaml_processor.rb)
component, which is responsible for taking in a YAML blob as input and returns the abstract data structure of a
@@ -65,20 +65,20 @@ the `Runner API Gateway`.
We can register, delete, and verify runners, which also causes read/write queries to the database. After a runner is connected,
it keeps asking for the next job to execute. This invokes the [`RegisterJobService`](https://gitlab.com/gitlab-org/gitlab/blob/master/app/services/ci/register_job_service.rb)
-which will pick the next job and assign it to the runner. At this point the job will transition to a
+which picks the next job and assigns it to the runner. At this point the job transitions to a
`running` state, which again triggers `ProcessPipelineService` due to the status change.
For more details read [Job scheduling](#job-scheduling)).
While a job is being executed, the runner sends logs back to the server as well any possible artifacts
that need to be stored. Also, a job may depend on artifacts from previous jobs in order to run. In this
-case the runner will download them using a dedicated API endpoint.
+case the runner downloads them using a dedicated API endpoint.
Artifacts are stored in object storage, while metadata is kept in the database. An important example of artifacts
are reports (JUnit, SAST, DAST, etc.) which are parsed and rendered in the merge request.
Job status transitions are not all automated. A user may run [manual jobs](../../ci/yaml/README.md#whenmanual), cancel a pipeline, retry
specific failed jobs or the entire pipeline. Anything that
-causes a job to change status will trigger `ProcessPipelineService`, as it's responsible for
+causes a job to change status triggers `ProcessPipelineService`, as it's responsible for
tracking the status of the entire pipeline.
A special type of job is the [bridge job](../../ci/yaml/README.md#trigger) which is executed server-side
@@ -90,7 +90,7 @@ from the `CreatePipelineService` every time a downstream pipeline is triggered.
When a Pipeline is created all its jobs are created at once for all stages, with an initial state of `created`. This makes it possible to visualize the full content of a pipeline.
-A job with the `created` state won't be seen by the runner yet. To make it possible to assign a job to a runner, the job must transition first into the `pending` state, which can happen if:
+A job with the `created` state isn't seen by the runner yet. To make it possible to assign a job to a runner, the job must transition first into the `pending` state, which can happen if:
1. The job is created in the very first stage of the pipeline.
1. The job required a manual start and it has been triggered.
@@ -99,7 +99,7 @@ A job with the `created` state won't be seen by the runner yet. To make it possi
When the runner is connected, it requests the next `pending` job to run by polling the server continuously.
-NOTE: **Note:**
+NOTE:
API endpoints used by the runner to interact with GitLab are defined in [`lib/api/ci/runner.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/api/ci/runner.rb)
After the server receives the request it selects a `pending` job based on the [`Ci::RegisterJobService` algorithm](#ciregisterjobservice), then assigns and sends the job to the runner.
@@ -134,8 +134,8 @@ There are 3 top level queries that this service uses to gather the majority of t
This list of jobs is then filtered further by matching tags between job and runner tags.
-NOTE: **Note:**
-If a job contains tags, the runner will not pick the job if it does not match **all** the tags.
+NOTE:
+If a job contains tags, the runner doesn't pick the job unless it matches **all** of the job's tags (see the sketch below).
The runner may have more tags than defined for the job, but not vice-versa.
Finally if the runner can only pick jobs that are tagged, all untagged jobs are filtered out.
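+To make the matching rule concrete, here is a small, hypothetical sketch of the subset check
+(this is not the actual `Ci::RegisterJobService` code):
+
+```ruby
+# A runner picks a tagged job only when the job's tags are a subset of the runner's tags.
+def runner_matches_job?(runner_tags, job_tags, run_untagged: true)
+  return run_untagged if job_tags.empty? # untagged jobs need a runner that accepts them
+
+  (job_tags - runner_tags).empty?        # the runner must cover *all* of the job's tags
+end
+
+runner_matches_job?(%w[docker linux ruby], %w[docker linux]) # => true
+runner_matches_job?(%w[docker], %w[docker linux])            # => false
+```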
diff --git a/doc/development/cicd/templates.md b/doc/development/cicd/templates.md
index e4c3e622ede..1ab569ba0df 100644
--- a/doc/development/cicd/templates.md
+++ b/doc/development/cicd/templates.md
@@ -1,7 +1,7 @@
---
stage: Release
-group: Progressive Delivery
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+group: Release
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: index, concepts, howto
---
diff --git a/doc/development/code_comments.md b/doc/development/code_comments.md
index d9ab719d18a..b1db781b9ee 100644
--- a/doc/development/code_comments.md
+++ b/doc/development/code_comments.md
@@ -1,14 +1,14 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Code comments
Whenever you add comment to the code that is expected to be addressed at any time
in future, please create a technical debt issue for it. Then put a link to it
-to the code comment you've created. This will allow other developers to quickly
+to the code comment you've created. This allows other developers to quickly
check if a comment is still relevant and what needs to be done to address it.
Examples:
diff --git a/doc/development/code_intelligence/index.md b/doc/development/code_intelligence/index.md
index 24abf57e9d6..c5673f6eee2 100644
--- a/doc/development/code_intelligence/index.md
+++ b/doc/development/code_intelligence/index.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Source Code
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Code Intelligence
@@ -10,7 +10,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
This document describes the design behind [Code Intelligence](../../user/project/code_intelligence.md).
-GitLab's built-in Code Intelligence is powered by
+The built-in Code Intelligence in GitLab is powered by
[LSIF](https://lsif.dev) and comes down to generating an LSIF document for a
project in a CI job, processing the data, uploading it as a CI artifact and
displaying this information for the files in the project.
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 63a47240435..00f4cf90481 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Code Review Guidelines
@@ -31,7 +31,7 @@ If you need some guidance (for example, it's your first merge request), feel fre
one of the [Merge request coaches](https://about.gitlab.com/company/team/).
If you need assistance with security scans or comments, feel free to include the
-Security Team (`@gitlab-com/gl-security`) in the review.
+Application Security Team (`@gitlab-com/gl-security/appsec`) in the review.
Depending on the areas your merge request touches, it must be **approved** by one
or more [maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#maintainer):
@@ -40,7 +40,7 @@ For approvals, we use the approval functionality found in the merge request
widget. Reviewers can add their approval by [approving additionally](../user/project/merge_requests/merge_request_approvals.md#adding-or-removing-an-approval).
Getting your merge request **merged** also requires a maintainer. If it requires
-more than one approval, the last maintainer to review and approve it will also merge it.
+more than one approval, the last maintainer to review and approve merges it.
### Domain experts
@@ -69,7 +69,7 @@ It picks reviewers and maintainers from the list at the
[engineering projects](https://about.gitlab.com/handbook/engineering/projects/)
page, with these behaviors:
-1. It will not pick people whose [GitLab status](../user/profile/index.md#current-status)
+1. It doesn't pick people whose [GitLab status](../user/profile/index.md#current-status)
contains the string 'OOO', or the emoji is `:palm_tree:` or `:beach:`.
1. [Trainee maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#trainee-maintainer)
are three times as likely to be picked as other reviewers.
@@ -110,8 +110,8 @@ with [domain expertise](#domain-experts).
1. If your merge request includes a new dependency or a filesystem change, it must be
**approved by a [Distribution team member](https://about.gitlab.com/company/team/)**. See how to work with the [Distribution team](https://about.gitlab.com/handbook/engineering/development/enablement/distribution/#how-to-work-with-distribution) for more details.
1. If your merge request includes documentation changes, it must be **approved
- by a [Technical writer](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers)**, based on
- the appropriate [product category](https://about.gitlab.com/handbook/product/product-categories/).
+ by a [Technical writer](https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments)**, based on
+ the appropriate [product category](https://about.gitlab.com/handbook/product/categories/).
1. If your merge request includes end-to-end **and** non-end-to-end changes (*3*), it must be **approved
by a [Software Engineer in Test](https://about.gitlab.com/handbook/engineering/quality/#individual-contributors)**.
1. If your merge request only includes end-to-end changes (*3*) **or** if the MR author is a [Software Engineer in Test](https://about.gitlab.com/handbook/engineering/quality/#individual-contributors), it must be **approved by a [Quality maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_maintainers_qa)**
@@ -156,9 +156,9 @@ up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
-If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion, that's
-usually a pretty good sign that it does, since without it the required level of
-confidence in their solution will not have been reached.
+If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion,
+that usually indicates it does. Without it, it's unlikely they have the required level of confidence in their
+solution.
Before the review, the author is requested to submit comments on the merge
request diff alerting the reviewer to anything important as well as for anything
@@ -197,18 +197,18 @@ however, if one isn't available or you think the merge request doesn't need a re
Maintainers are responsible for the overall health, quality, and consistency of
the GitLab codebase, across domains and product areas.
-Consequently, their reviews will focus primarily on things like overall
+Consequently, their reviews focus primarily on things like overall
architecture, code organization, separation of concerns, tests, DRYness,
consistency, and readability.
-Since a maintainer's job only depends on their knowledge of the overall GitLab
+Because a maintainer's job only depends on their knowledge of the overall GitLab
codebase, and not that of any specific domain, they can review, approve, and merge
merge requests from any team and in any product area.
-Maintainers will do their best to also review the specifics of the chosen solution
+Maintainers do their best to also review the specifics of the chosen solution
before merging, but as they are not necessarily [domain experts](#domain-experts), they may be poorly
placed to do so without an unreasonable investment of time. In those cases, they
-will defer to the judgment of the author and earlier reviewers, in favor of focusing on their primary responsibilities.
+defer to the judgment of the author and earlier reviewers, in favor of focusing on their primary responsibilities.
If a maintainer feels that an MR is substantial enough that it warrants a review from a [domain expert](#domain-experts),
and it is unclear whether a domain expert have been involved in the reviews to date,
@@ -259,8 +259,8 @@ Instead these should be sent to the [Release Manager](https://about.gitlab.com/c
understand" or "Alternative solution:" comments. Post a follow-up comment
summarizing one-on-one discussion.
- If you ask a question to a specific person, always start the comment by
- mentioning them; this will ensure they see it if their notification level is
- set to "mentioned" and other people will understand they don't have to respond.
+ mentioning them; this ensures they see it if their notification level is
+ set to "mentioned" and other people understand they don't have to respond.
### Having your merge request reviewed
@@ -272,8 +272,10 @@ first time.
of your shiny new branch, read through the entire diff. Does it make sense?
Did you include something unrelated to the overall purpose of the changes? Did
you forget to remove any debugging code?
+<!-- vale gitlab.FutureTense = NO -->
- Be grateful for the reviewer's suggestions. ("Good call. I'll make that
change.")
+<!-- vale gitlab.FutureTense = YES -->
- Don't take it personally. The review is of the code, not of you.
- Explain why the code exists. ("It's like that because of these reasons. Would
it be more clear if I rename this class/file/method/variable?")
@@ -361,7 +363,7 @@ your own suggestions to the merge request. Note that:
- **Before applying suggestions**, edit the merge request to make sure
[squash and
merge](../user/project/merge_requests/squash_and_merge.md#squash-and-merge)
+ is enabled; otherwise, the pipeline's Danger job fails.
+ is enabled, otherwise, the pipeline's Danger job fails.
- If a merge request does not have squash and merge enabled, and it
has more than one commit, then see the note below about rewriting
commit history.
@@ -390,10 +392,10 @@ When ready to merge:
- When you set the MR to "Merge When Pipeline Succeeds", you should take over
subsequent revisions for anything that would be spotted after that.
-Thanks to **Pipeline for Merged Results**, authors won't have to rebase their
-branch as frequently anymore (only when there are conflicts) since the Merge
-Results Pipeline will already incorporate the latest changes from `master`.
-This results in faster review/merge cycles since maintainers don't have to ask
+Thanks to **Pipeline for Merged Results**, authors no longer have to rebase their
+branch as frequently (only when there are conflicts) because the Merge
+Results Pipeline already incorporates the latest changes from `master`.
+This results in faster review/merge cycles because maintainers don't have to ask
for a final rebase: instead, they only have to start a MR pipeline and set MWPS.
This step brings us very close to the actual Merge Trains feature by testing the
Merge Results against the latest `master` at the time of the pipeline creation.
@@ -451,7 +453,7 @@ Enterprise Edition instance. This has some implications:
1. Reversible.
1. Performant at the scale of GitLab.com - ask a maintainer to test the
migration on the staging environment if you aren't sure.
- 1. Categorised correctly:
+ 1. Categorized correctly:
- Regular migrations run before the new code is running on the instance.
- [Post-deployment migrations](post_deployment_migrations.md) run _after_
the new code is deployed, when the instance is configured to do that.
@@ -459,14 +461,14 @@ Enterprise Edition instance. This has some implications:
should only be done for migrations that would take an extreme amount of
time at GitLab.com scale.
1. **Sidekiq workers** [cannot change in a backwards-incompatible way](sidekiq_style_guide.md#sidekiq-compatibility-across-updates):
- 1. Sidekiq queues are not drained before a deploy happens, so there will be
+ 1. Sidekiq queues are not drained before a deploy happens, so there are
workers in the queue from the previous version of GitLab.
1. If you need to change a method signature, try to do so across two releases,
and accept both the old and new arguments in the first of those.
1. Similarly, if you need to remove a worker, stop it from being scheduled in
- one release, then remove it in the next. This will allow existing jobs to
+ one release, then remove it in the next. This allows existing jobs to
execute.
- 1. Don't forget, not every instance will upgrade to every intermediate version
+ 1. Don't forget, not every instance is upgraded to every intermediate version
(some people may go from X.1.0 to X.10.0, or even try bigger upgrades!), so
try to be liberal in accepting the old format if it is cheap to do so.
1. **Cached values** may persist across releases. If you are changing the type a
@@ -478,12 +480,12 @@ Enterprise Edition instance. This has some implications:
1. Try to avoid that, and add to `ApplicationSetting` instead.
1. Ensure that it is also
[added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml.html#adding-a-new-setting-to-gitlab-yml).
-1. **Filesystem access** can be slow, so try to avoid
+1. **File system access** can be slow, so try to avoid
[shared files](shared_files.md) when an alternative solution is available.
### Review turnaround time
-Since [unblocking others is always a top priority](https://about.gitlab.com/handbook/values/#global-optimization),
+Because [unblocking others is always a top priority](https://about.gitlab.com/handbook/values/#global-optimization),
reviewers are expected to review assigned merge requests in a timely manner,
even when this may negatively impact their other tasks and priorities.
@@ -496,15 +498,15 @@ To ensure swift feedback to ready-to-review code, we maintain a `Review-response
> - review-response SLO = (time when first review response is provided) - (time MR is assigned to reviewer) < 2 business days
-If you don't think you'll be able to review a merge request within the `Review-response` SLO
+If you don't think you can review a merge request in the `Review-response` SLO
time frame, let the author know as soon as possible and try to help them find
-another reviewer or maintainer who will be able to, so that they can be unblocked
+another reviewer or maintainer who is able to, so that they can be unblocked
and get on with their work quickly.
If you think you are at capacity and are unable to accept any more reviews until
some have been completed, communicate this through your GitLab status by setting
the `:red_circle:` emoji and mentioning that you are at capacity in the status
-text. This will guide contributors to pick a different reviewer, helping us to
+text. This guides contributors to pick a different reviewer, helping us to
meet the SLO.
Of course, if you are out of office and have
@@ -522,13 +524,13 @@ A merge request may benefit from being considered a customer critical priority b
Properties of customer critical merge requests:
-- The [Senior Director of Development](https://about.gitlab.com/job-families/engineering/engineering-management/#senior-director-engineering) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the DRI for deciding if a merge request will be customer critical.
-- The DRI will assign the `customer-critical-merge-request` label to the merge request.
+- The [Senior Director of Development](https://about.gitlab.com/job-families/engineering/engineering-management/#senior-director-engineering) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the DRI for deciding if a merge request is customer critical.
+- The DRI assigns the `customer-critical-merge-request` label to the merge request.
- It is required that the reviewer(s) and maintainer(s) involved with a customer critical merge request are engaged as soon as this decision is made.
- It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
- It is required to adhere to GitLab [values](https://about.gitlab.com/handbook/values/) and processes when working on customer critical merge requests, taking particular note of family and friends first/work second, definition of done, iteration, and release when it's ready.
- Customer critical merge requests are required to not reduce security, introduce data-loss risk, reduce availability, nor break existing functionality per the process for [prioritizing technical decisions](https://about.gitlab.com/handbook/engineering/#prioritizing-technical-decisions.md).
-- On customer critical requests, it is _recommended_ that those involved _consider_ coordinating synchronously (Zoom, Slack) in addition to asynchronously (merge requests comments) if they believe this will reduce elapsed time to merge even though this _may_ sacrifice [efficiency](https://about.gitlab.com/company/culture/all-remote/asynchronous/#evaluating-efficiency.md).
+- On customer critical requests, it is _recommended_ that those involved _consider_ coordinating synchronously (Zoom, Slack) in addition to asynchronously (merge request comments) if they believe this may reduce the elapsed time to merge even though this _may_ sacrifice [efficiency](https://about.gitlab.com/company/culture/all-remote/asynchronous/#evaluating-efficiency.md).
- After a customer critical merge request is merged, a retrospective must be completed with the intention of reducing the frequency of future customer critical merge requests.
## Examples
diff --git a/doc/development/contributing/community_roles.md b/doc/development/contributing/community_roles.md
index d880361e3aa..5419992517b 100644
--- a/doc/development/contributing/community_roles.md
+++ b/doc/development/contributing/community_roles.md
@@ -1,7 +1,7 @@
---
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Community members & roles
@@ -10,7 +10,7 @@ GitLab community members and their privileges/responsibilities.
| Roles | Responsibilities | Requirements |
|-------|------------------|--------------|
-| Maintainer | Accepts merge requests on several GitLab projects | Added to the [team page](https://about.gitlab.com/company/team/). An expert on code reviews and knows the product/code base |
+| Maintainer | Accepts merge requests on several GitLab projects | Added to the [team page](https://about.gitlab.com/company/team/). An expert on code reviews and knows the product/codebase |
| Reviewer | Performs code reviews on MRs | Added to the [team page](https://about.gitlab.com/company/team/) |
| Developer |Has access to GitLab internal infrastructure & issues (e.g. HR-related) | GitLab employee or a Core Team member (with an NDA) |
| Contributor | Can make contributions to all GitLab public projects | Have a GitLab.com account |
diff --git a/doc/development/contributing/design.md b/doc/development/contributing/design.md
index 891c764f07f..c1dd5ff4c0b 100644
--- a/doc/development/contributing/design.md
+++ b/doc/development/contributing/design.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Implement design & UI elements
diff --git a/doc/development/contributing/index.md b/doc/development/contributing/index.md
index 17431195c3d..329303558b0 100644
--- a/doc/development/contributing/index.md
+++ b/doc/development/contributing/index.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Contribute to GitLab
@@ -37,7 +37,7 @@ Report suspected security vulnerabilities in private to
`support@gitlab.com`, also see the
[disclosure section on the GitLab.com website](https://about.gitlab.com/security/disclosure/).
-DANGER: **Warning:**
+WARNING:
Do **NOT** create publicly viewable issues for suspected security vulnerabilities.
## Code of conduct
@@ -144,10 +144,10 @@ Keep the following in mind when submitting merge requests:
- When reviewers are reading through a merge request they may request guidance from other
reviewers.
-- If the code quality is found to not meet GitLab’s standards, the merge request reviewer will
+- If the code quality is found to not meet GitLab standards, the merge request reviewer will
provide guidance and refer the author to our:
- [Documentation](../documentation/styleguide/index.md) style guide.
- - Code style guides.
+ - [Code style guides](style_guides.md).
- Sometimes style guides will be followed but the code will lack structural integrity, or the
reviewer will have reservations about the code’s overall quality. When there is a reservation,
the reviewer will inform the author and provide some guidance.
diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md
index 99650c24661..da38d1e73b4 100644
--- a/doc/development/contributing/issue_workflow.md
+++ b/doc/development/contributing/issue_workflow.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Issues workflow
@@ -92,7 +92,7 @@ The GitLab handbook documents [when something is a bug](https://about.gitlab.com
### Stage labels
-Stage labels specify which [stage](https://about.gitlab.com/handbook/product/product-categories/#hierarchy) the issue belongs to.
+Stage labels specify which [stage](https://about.gitlab.com/handbook/product/categories/#hierarchy) the issue belongs to.
#### Naming and color convention
@@ -136,7 +136,7 @@ The current group labels can be found by [searching the labels list for `group::
These labels are [scoped labels](../../user/project/labels.md#scoped-labels)
and thus are mutually exclusive.
-You can find the groups listed in the [Product Stages, Groups, and Categories](https://about.gitlab.com/handbook/product/product-categories/) page.
+You can find the groups listed in the [Product Stages, Groups, and Categories](https://about.gitlab.com/handbook/product/categories/) page.
We use the term group to map down product requirements from our product stages.
As a team needs some way to collect the work their members are planning to be assigned to, we use the `~group::` labels to do so.
@@ -155,7 +155,7 @@ Please read [Stage and Group labels in Throughput](https://about.gitlab.com/hand
### Category labels
From the handbook's
-[Product stages, groups, and categories](https://about.gitlab.com/handbook/product/product-categories/#hierarchy)
+[Product stages, groups, and categories](https://about.gitlab.com/handbook/product/categories/#hierarchy)
page:
> Categories are high-level capabilities that may be a standalone product at
@@ -187,7 +187,7 @@ in <https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/categories.yml
### Feature labels
From the handbook's
-[Product stages, groups, and categories](https://about.gitlab.com/handbook/product/product-categories/#hierarchy)
+[Product stages, groups, and categories](https://about.gitlab.com/handbook/product/categories/#hierarchy)
page:
> Features: Small, discrete functionalities. e.g. Issue weights. Some common
@@ -311,7 +311,7 @@ We automatically add the ~"Accepting merge requests" label to issues
that match the [triage policy](https://about.gitlab.com/handbook/engineering/quality/triage-operations/#accepting-merge-requests).
We recommend that people who have never contributed to any open source project
-look for issues labeled `~"Accepting merge requests"` with a [weight of 1](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight&weight=1) or the `~"Good for 1st time contributors"` [label](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=Good%20for%201st%20time%20contributors&assignee_id=None) attached to it.
+look for issues labeled `~"Accepting merge requests"` with a [weight of 1](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight&weight=1) or the `~"Good for new contributors"` [label](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=good%20for%20new%20contributors&assignee_id=None) attached to it.
More experienced contributors are very welcome to tackle
[any of them](https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None).
@@ -410,8 +410,8 @@ in the regression issue as fixes are addressed.
## Technical and UX debt
-In order to track things that can be improved in GitLab's codebase,
-we use the ~"technical debt" label in [GitLab's issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
+In order to track things that can be improved in the GitLab codebase,
+we use the ~"technical debt" label in the [GitLab issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
For missed user experience requirements, we use the ~"UX debt" label.
These labels should be added to issues that describe things that can be improved,
diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md
index d5ffff7bfc8..7859af4e88b 100644
--- a/doc/development/contributing/merge_request_workflow.md
+++ b/doc/development/contributing/merge_request_workflow.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Merge requests workflow
@@ -219,7 +219,7 @@ the contribution acceptance criteria below:
instructions for help if the "license-finder" test fails with a
`Dependencies that need approval` error. Also, make the reviewer aware of the new
library and explain why you need it.
-1. The merge request meets GitLab's [definition of done](#definition-of-done), below.
+1. The merge request meets the GitLab [definition of done](#definition-of-done), below.
## Definition of done
@@ -241,7 +241,7 @@ requirements.
1. Reviewed by relevant reviewers and all concerns are addressed for Availability, Regressions, Security. Documentation reviews should take place as soon as possible, but they should not block a merge request.
1. Merged by a project maintainer.
1. Create an issue in the [infrastructure issue tracker](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues) to inform the Infrastructure department when your contribution is changing default settings or introduces a new setting, if relevant.
-1. Confirmed to be working in the [Canary stage](https://about.gitlab.com/handbook/engineering/#canary-testing) or on GitLab.com once the contribution is deployed.
+1. Confirmed to be working in the [Canary stage](https://about.gitlab.com/handbook/engineering/#canary-testing) with no new [Sentry](https://about.gitlab.com/handbook/engineering/#sentry) errors, or on GitLab.com once the contribution is deployed.
1. Added to the [release post](https://about.gitlab.com/handbook/marketing/blog/release-posts/),
if relevant.
1. Added to [the website](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/features.yml), if relevant.
diff --git a/doc/development/contributing/style_guides.md b/doc/development/contributing/style_guides.md
index fd3fe239110..bfaee407cb8 100644
--- a/doc/development/contributing/style_guides.md
+++ b/doc/development/contributing/style_guides.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Style guides
@@ -39,6 +39,9 @@ go get github.com/Arkweid/lefthook
## Or with Rubygems
gem install lefthook
+### You may need to run the following if you're using rbenv
+rbenv rehash
+
# 3. Install the Git hooks
lefthook install -f
```
@@ -94,6 +97,32 @@ To that end, we encourage creation of new RuboCop rules in the codebase.
When creating a new cop that could be applied to multiple applications, we encourage you
to add it to our [GitLab Styles](https://gitlab.com/gitlab-org/gitlab-styles) gem.
+### Resolving RuboCop exceptions
+
+When the number of RuboCop exceptions exceeds the default [`exclude-limit` of 15](https://docs.rubocop.org/rubocop/1.2/usage/basic_usage.html#command-line-flags),
+we may want to resolve exceptions over multiple commits. To minimize confusion,
+we should track our progress through the exception list.
+
+When the auto-generated `.rubocop_todo.yml` exception list for a particular Cop
+affects more than 15 files, we should add the exception list to
+a different file, `.rubocop_manual_todo.yml`.
+
+This ensures that our list isn't mistakenly removed by another auto-generation of
+the `.rubocop_todo.yml`. It also gives us greater visibility into the exceptions
+that are currently being resolved.
+
+One way to generate the initial list is to run the todo auto-generation
+with `exclude-limit` set to a high number.
+
+```shell
+bundle exec rubocop --auto-gen-config --auto-gen-only-exclude --exclude-limit=10000
+```
+
+You can then move the exclusion list for the Cop being actively resolved from the freshly
+generated `.rubocop_todo.yml` into `.rubocop_manual_todo.yml`. In this scenario, do not commit the
+auto-generated changes to `.rubocop_todo.yml`, because an `exclude-limit` higher than 15 makes
+`.rubocop_todo.yml` hard to parse.
+
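+As an illustration only (the cop name and file paths below are made up), an entry moved into
+`.rubocop_manual_todo.yml` keeps the same structure it had in `.rubocop_todo.yml`:
+
+```yaml
+# Illustrative entry only: the cop name and paths are placeholders, not real offenses.
+Rails/SomeCop:
+  Exclude:
+    - 'app/models/example_one.rb'
+    - 'app/models/example_two.rb'
+```
+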
## Database migrations
See the dedicated [Database Migrations Style Guide](../migration_style_guide.md).
diff --git a/doc/development/creating_enums.md b/doc/development/creating_enums.md
index fbf35171ecb..d0f04c30c7b 100644
--- a/doc/development/creating_enums.md
+++ b/doc/development/creating_enums.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Creating enums
diff --git a/doc/development/cycle_analytics.md b/doc/development/cycle_analytics.md
index 3947a012bd5..1619f3dcb10 100644
--- a/doc/development/cycle_analytics.md
+++ b/doc/development/cycle_analytics.md
@@ -3,3 +3,6 @@ redirect_to: 'value_stream_analytics.md'
---
This document was moved to [another location](value_stream_analytics.md)
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index 4feec5c093a..59b31437161 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Danger bot
@@ -10,7 +10,7 @@ The GitLab CI/CD pipeline includes a `danger-review` job that uses [Danger](http
to perform a variety of automated checks on the code under test.
Danger is a gem that runs in the CI environment, like any other analysis tool.
-What sets it apart from, e.g., RuboCop, is that it's designed to allow you to
+What sets it apart from a tool like RuboCop is that it's designed to allow you to
easily write arbitrary code to test properties of your code or changes. To this
end, it provides a set of common helpers and access to information about what
has actually changed in your environment, then simply runs your code!
@@ -32,7 +32,7 @@ from the start of the merge request.
### Disadvantages
-- It's not obvious Danger will update the old comment, thus you need to
+- It's not obvious that Danger updates the old comment, so you need to
pay attention to whether it has been updated.
## Run Danger locally
@@ -46,15 +46,14 @@ bin/rake danger_local
## Operation
On startup, Danger reads a [`Dangerfile`](https://gitlab.com/gitlab-org/gitlab/blob/master/Dangerfile)
-from the project root. GitLab's Danger code is decomposed into a set of helpers
+from the project root. Danger code in GitLab is decomposed into a set of helpers
and plugins, all within the [`danger/`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/danger/)
-subdirectory, so ours just tells Danger to load it all. Danger will then run
+subdirectory, so ours just tells Danger to load it all. Danger then runs
each plugin against the merge request, collecting the output from each. A plugin
may output notifications, warnings, or errors, all of which are copied to the
-CI job's log. If an error happens, the CI job (and so the entire pipeline) will
-be failed.
+CI job's log. If an error happens, the CI job (and so the entire pipeline) fails.
-On merge requests, Danger will also copy the output to a comment on the MR
+On merge requests, Danger also copies the output to a comment on the MR
itself, increasing visibility.
## Development guidelines
@@ -67,7 +66,7 @@ continue to apply. However, there are a few things that deserve special emphasis
Danger is a powerful and flexible tool, but not always the most appropriate
way to solve a given problem or workflow.
-First, be aware of GitLab's [commitment to dogfooding](https://about.gitlab.com/handbook/engineering/#dogfooding).
+First, be aware of the GitLab [commitment to dogfooding](https://about.gitlab.com/handbook/engineering/#dogfooding).
The code we write for Danger is GitLab-specific, and it **may not** be the most
appropriate place to implement functionality that addresses a need we encounter.
Our users, customers, and even our own satellite projects, such as [Gitaly](https://gitlab.com/gitlab-org/gitaly),
@@ -75,17 +74,17 @@ often face similar challenges, after all. Think about how you could fulfill the
same need while ensuring everyone can benefit from the work, and do that instead
if you can.
-If a standard tool (e.g. `rubocop`) exists for a task, it is better to use it
-directly, rather than calling it via Danger. Running and debugging the results
-of those tools locally is easier if Danger isn't involved, and unless you're
-using some Danger-specific functionality, there's no benefit to including it in
-the Danger run.
+If a standard tool (for example, `rubocop`) exists for a task, it's better to
+use it directly, rather than calling it by using Danger. Running and debugging
+the results of those tools locally is easier if Danger isn't involved, and
+unless you're using some Danger-specific functionality, there's no benefit to
+including it in the Danger run.
Danger is well-suited to prototyping and rapidly iterating on solutions, so if
what we want to build is unclear, a solution in Danger can be thought of as a
trial run to gather information about a product area. If you're doing this, make
sure the problem you're trying to solve, and the outcomes of that prototyping,
-are captured in an issue or epic as you go along. This will help us to address
+are captured in an issue or epic as you go along. This helps us to address
the need as part of the product in a future version of GitLab!
### Implementation details
@@ -110,16 +109,17 @@ At present, we do this by putting the code in a module in `lib/gitlab/danger/...
and including it in the matching `danger/plugins/...` file. Specs can then be
added in `spec/lib/gitlab/danger/...`.
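A minimal sketch of that layout (the file, module, and method names here are hypothetical):

```ruby
# lib/gitlab/danger/simple_check.rb -- hypothetical helper, kept in plain Ruby so it can be unit tested with RSpec.
module Gitlab
  module Danger
    module SimpleCheck
      def large_change?(changed_lines)
        changed_lines > 500
      end
    end
  end
end

# danger/plugins/simple_check.rb -- hypothetical plugin wrapper that includes the helper so Danger can use it.
module Danger
  class SimpleCheck < Plugin
    include ::Gitlab::Danger::SimpleCheck
  end
end
```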
-You'll only know if your `Dangerfile` works by pushing the branch that contains
-it to GitLab. This can be quite frustrating, as it significantly increases the
-cycle time when developing a new task, or trying to debug something in an
-existing one. If you've followed the guidelines above, most of your code can
-be exercised locally in RSpec, minimizing the number of cycles you need to go
-through in CI. However, you can speed these cycles up somewhat by emptying the
+To determine if your `Dangerfile` works, push the branch that contains it to
+GitLab. This can be quite frustrating, as it significantly increases the cycle
+time when developing a new task, or trying to debug something in an existing
+one. If you've followed the guidelines above, most of your code can be exercised
+locally in RSpec, minimizing the number of cycles you need to go through in CI.
+However, you can speed these cycles up somewhat by emptying the
`.gitlab/ci/rails.gitlab-ci.yml` file in your merge request. Just don't forget
to revert the change before merging!
-To enable the Dangerfile on another existing GitLab project, run the following extra steps, based on [this procedure](https://danger.systems/guides/getting_started.html#creating-a-bot-account-for-danger-to-use):
+To enable the Dangerfile on another existing GitLab project, run the following
+extra steps, based on [this procedure](https://danger.systems/guides/getting_started.html#creating-a-bot-account-for-danger-to-use):
1. Add `@gitlab-bot` to the project as a `reporter`.
1. Add the `@gitlab-bot`'s `GITLAB_API_PRIVATE_TOKEN` value as a value for a new CI/CD
@@ -156,10 +156,10 @@ at GitLab so far:
To work around this, you can add an [environment
variable](../ci/variables/README.md) called
`DANGER_GITLAB_API_TOKEN` with a personal API token to your
- fork. That way the danger comments will be made from CI using that
+ fork. That way the danger comments are made from CI using that
API token instead.
Making the variable
- [masked](../ci/variables/README.md#mask-a-custom-variable) will make sure
+ [masked](../ci/variables/README.md#mask-a-custom-variable) makes sure
it doesn't show up in the job logs. The variable cannot be
[protected](../ci/variables/README.md#protect-a-custom-variable),
as it needs to be present for all feature branches.
diff --git a/doc/development/database/add_foreign_key_to_existing_column.md b/doc/development/database/add_foreign_key_to_existing_column.md
index 85411ff9aa7..a5d40d455d8 100644
--- a/doc/development/database/add_foreign_key_to_existing_column.md
+++ b/doc/development/database/add_foreign_key_to_existing_column.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Adding foreign key constraint to an existing column
@@ -79,7 +79,7 @@ class AddNotValidForeignKeyToEmailsUser < ActiveRecord::Migration[5.2]
end
```
-CAUTION: **Caution:**
+WARNING:
Avoid using the `add_foreign_key` constraint more than once per migration file, unless the source and target tables are identical.
#### Data migration to fix existing records
@@ -119,7 +119,7 @@ end
Validating the foreign key will scan the whole table and make sure that each relation is correct.
-NOTE: **Note:**
+NOTE:
When using [background migrations](../background_migrations.md), foreign key validation should happen in the next GitLab release.
Migration file for validating the foreign key:
diff --git a/doc/development/database/constraint_naming_convention.md b/doc/development/database/constraint_naming_convention.md
index 63a2d607ac5..debf74d3b40 100644
--- a/doc/development/database/constraint_naming_convention.md
+++ b/doc/development/database/constraint_naming_convention.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Constraints naming conventions
@@ -17,7 +17,7 @@ Please note that the intent is not to retroactively change names in existing dat
| **Foreign Key** | `fk_<table name>_<column name>[_and_<column name>]*_<foreign table name>` | | `fk_projects_group_id_groups` |
| **Index** | `index_<table name>_on_<column name>[_and_<column name>]*[_and_<column name in partial clause>]*` | | `index_repositories_on_group_id` |
| **Unique Constraint** | `unique_<table name>_<column name>[_and_<column name>]*` | | `unique_projects_group_id_and_name` |
-| **Check Constraint** | `check_<table name>_<column name>[_and_<column name>]*[_<suffix>]?` | The optional suffix should denote the type of validation, such as `length` and `enum`. It can also be used to desambiguate multiple `CHECK` constraints on the same column. | `check_projects_name_length`<br />`check_projects_type_enum`<br />`check_projects_admin1_id_and_admin2_id_differ` |
+| **Check Constraint** | `check_<table name>_<column name>[_and_<column name>]*[_<suffix>]?` | The optional suffix should denote the type of validation, such as `length` and `enum`. It can also be used to disambiguate multiple `CHECK` constraints on the same column. | `check_projects_name_length`<br />`check_projects_type_enum`<br />`check_projects_admin1_id_and_admin2_id_differ` |
| **Exclusion Constraint** | `excl_<table name>_<column name>[_and_<column name>]*_[_<suffix>]?` | The optional suffix should denote the type of exclusion being performed. | `excl_reservations_start_at_end_at_no_overlap` |
## Observations
diff --git a/doc/development/database/database_reviewer_guidelines.md b/doc/development/database/database_reviewer_guidelines.md
index 3345df8b46b..26083183d6d 100644
--- a/doc/development/database/database_reviewer_guidelines.md
+++ b/doc/development/database/database_reviewer_guidelines.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database Reviewer Guidelines
diff --git a/doc/development/database/index.md b/doc/development/database/index.md
index 19159c6c0ff..367ef455898 100644
--- a/doc/development/database/index.md
+++ b/doc/development/database/index.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database guides
@@ -59,6 +59,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- [Client-side connection-pool](client_side_connection_pool.md)
- [Updating multiple values](setting_multiple_values.md)
- [Constraints naming conventions](constraint_naming_convention.md)
+- [Query performance guidelines](../query_performance.md)
## Case studies
diff --git a/doc/development/database/maintenance_operations.md b/doc/development/database/maintenance_operations.md
index c84ec31471e..9e7a35531ca 100644
--- a/doc/development/database/maintenance_operations.md
+++ b/doc/development/database/maintenance_operations.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Maintenance operations
diff --git a/doc/development/database/not_null_constraints.md b/doc/development/database/not_null_constraints.md
index 96271863d94..01d5b7af7f1 100644
--- a/doc/development/database/not_null_constraints.md
+++ b/doc/development/database/not_null_constraints.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# `NOT NULL` constraints
@@ -66,7 +66,7 @@ different releases:
- Add a post-deployment migration to add the `NOT NULL` constraint with `validate: false`.
- Add a post-deployment migration to fix the existing records.
- NOTE: **Note:**
+ NOTE:
Depending on the size of the table, a background migration for cleanup could be required in the next release.
See the [`NOT NULL` constraints on large tables](not_null_constraints.md#not-null-constraints-on-large-tables) section for more information.
@@ -94,7 +94,7 @@ that all epics should have a user-generated description.
After checking our production database, we know that there are `epics` with `NULL` descriptions,
so we can not add and validate the constraint in one step.
-NOTE: **Note:**
+NOTE:
Even if we did not have any epic with a `NULL` description, another instance of GitLab could have
such records, so we would follow the same process either way.
diff --git a/doc/development/database/setting_multiple_values.md b/doc/development/database/setting_multiple_values.md
index c354247a9f8..54870380047 100644
--- a/doc/development/database/setting_multiple_values.md
+++ b/doc/development/database/setting_multiple_values.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Setting Multiple Values
diff --git a/doc/development/database/strings_and_the_text_data_type.md b/doc/development/database/strings_and_the_text_data_type.md
index fe8cfa5cd22..8b839e929c7 100644
--- a/doc/development/database/strings_and_the_text_data_type.md
+++ b/doc/development/database/strings_and_the_text_data_type.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Strings and the Text data type
@@ -17,7 +17,7 @@ The `text` data type can not be defined with a limit, so `add_text_limit` is enf
adding a [check constraint](https://www.postgresql.org/docs/11/ddl-constraints.html) on the
column and then validating it at a followup step.
-## Background info
+## Background information
The reason we always want to use `text` instead of `string` is that `string` columns have the
disadvantage that if you want to update their limit, you have to run an `ALTER TABLE ...` command.
@@ -133,7 +133,7 @@ Adding text limits to existing database columns requires multiple steps split in
- Add a post-deployment migration to add the limit to the text column with `validate: false`.
- Add a post-deployment migration to fix the existing records.
- NOTE: **Note:**
+ NOTE:
Depending on the size of the table, a background migration for cleanup could be required in the next release.
See [text limit constraints on large tables](strings_and_the_text_data_type.md#text-limit-constraints-on-large-tables) for more information.
@@ -154,7 +154,7 @@ other processes that try to access it while running the update.
Also, after checking our production database, we know that there are `issues` with more characters in
their title than the 1024 character limit, so we can not add and validate the constraint in one step.
-NOTE: **Note:**
+NOTE:
Even if we did not have any record with a title larger than the provided limit, another
instance of GitLab could have such records, so we would follow the same process either way.
@@ -256,7 +256,7 @@ end
To keep this guide short, we skipped the definition of the background migration and only
provided a high level example of the post-deployment migration that is used to schedule the batches.
-You can find more info on the guide about [background migrations](../background_migrations.md)
+You can find more information on the guide about [background migrations](../background_migrations.md)
#### Validate the text limit (next release)
diff --git a/doc/development/database/table_partitioning.md b/doc/development/database/table_partitioning.md
index 30d0b0a2f5b..358b9bb42b0 100644
--- a/doc/development/database/table_partitioning.md
+++ b/doc/development/database/table_partitioning.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database table partitioning
@@ -98,7 +98,7 @@ CREATE TABLE audit_events (
PARTITION BY RANGE(created_at);
```
-NOTE: **Note:**
+NOTE:
The primary key of a partitioned table must include the partition key as
part of the primary key definition.
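For illustration, a minimal sketch that satisfies this constraint, with the column list
simplified from the `audit_events` example above:

```sql
-- Sketch only: the partition key (created_at) is included in the primary key.
CREATE TABLE audit_events (
    id serial NOT NULL,
    created_at timestamptz NOT NULL,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
```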
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index ebd57df498e..f4558189000 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Troubleshooting and Debugging Database
diff --git a/doc/development/database_query_comments.md b/doc/development/database_query_comments.md
index ccaaadef4f4..8a5abad3815 100644
--- a/doc/development/database_query_comments.md
+++ b/doc/development/database_query_comments.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database query comments with Marginalia
diff --git a/doc/development/database_review.md b/doc/development/database_review.md
index d1ec32af464..f0c265df9ab 100644
--- a/doc/development/database_review.md
+++ b/doc/development/database_review.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database Review Guidelines
@@ -36,9 +36,22 @@ point out specific queries for review and there are no obviously
complex queries, it is enough to concentrate on reviewing the
migration only.
-It is preferable to review queries in SQL form and generally accepted
-to ask the author to translate any ActiveRecord queries in SQL form
-for review.
+### Required
+
+The following artifacts are required prior to submitting for a ~database review.
+If your merge request description does not include these items, the review will be reassigned back to the author.
+
+If new migrations are introduced, in the MR **you are required to provide**:
+
+- The output of both migrating and rolling back for all migrations (see the example below)
+
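+For example, one way to capture this output for a single migration (a sketch; the `VERSION`
+value is illustrative and the exact commands may differ in your setup):
+
+```shell
+# Run and then roll back one specific migration, capturing the output for the MR description.
+bundle exec rails db:migrate:up VERSION=20201201000000
+bundle exec rails db:migrate:down VERSION=20201201000000
+```
+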
+If new queries have been introduced or existing queries have been updated, **you are required to provide**:
+
+- [Query plans](#query-plans) for each raw SQL query included in the merge request along with the link to the query plan following each raw SQL snippet.
+- [Raw SQL](#raw-sql) for all changed or added queries (as translated from ActiveRecord queries).
+ - In case of updating an existing query, the raw SQL of both the old and the new version of the query should be provided together with their query plans.
+
+Refer to [Preparation when adding or modifying queries](#preparation-when-adding-or-modifying-queries) for how to provide this information.
### Roles and process
@@ -47,9 +60,11 @@ A Merge Request **author**'s role is to:
- Decide whether a database review is needed.
- If database review is needed, add the ~database label.
- [Prepare the merge request for a database review](#how-to-prepare-the-merge-request-for-a-database-review).
+- Provide the [required](#required) artifacts prior to submitting the MR.
A database **reviewer**'s role is to:
+- Ensure the [required](#required) artifacts are provided and in the proper format. If they are not, reassign the merge request back to the author.
- Perform a first-pass review on the MR and suggest improvements to the author.
- Once satisfied, relabel the MR with ~"database::reviewed", approve it, and
reassign MR to the database **maintainer** suggested by Reviewer
@@ -104,23 +119,43 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
#### Preparation when adding or modifying queries
+##### Raw SQL
+
- Write the raw SQL in the MR description. Preferably formatted
nicely with [pgFormatter](https://sqlformat.darold.net) or
[paste.depesz.com](https://paste.depesz.com) and using regular quotes
(e.g. `"projects"."id"`) and avoiding smart quotes (e.g. `“projects”.“id”`).
-- Include the output of `EXPLAIN (ANALYZE, BUFFERS)` of the relevant
- queries in the description. If the output is too long, wrap it in
- `<details>` blocks, paste it in a GitLab Snippet, or provide the
- link to the plan at: [explain.depesz.com](https://explain.depesz.com).
+- For queries that are generated dynamically by using parameters, there should be one raw SQL query for each variation.
+
+ For example, a finder for issues that may take an optional filter on projects as a parameter
+ should include both the simple query over issues and the version that joins issues
+ and projects and applies the filter (see the sketch after this list).
+
+ Some finders or other methods can generate a very large number of permutations.
+ There is no need to exhaustively add all the possible generated queries; just add the one with
+ all the parameters included and one for each type of query generated.
+
+ For example, if joins or a `GROUP BY` clause are optional, the versions without the `GROUP BY` clause
+ and with fewer joins should also be included, while keeping the appropriate filters for the remaining tables.
+
+- If a query is always used with a limit and an offset, those should always be
+ included with the maximum allowed limit and a non-zero offset.
+
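+For illustration, the two variations for such an issues finder could look like the following
+sketch (table and column names are simplified):
+
+```sql
+-- Variation 1: the simple query over issues (sketch; the filter is simplified).
+SELECT "issues".* FROM "issues" WHERE "issues"."state_id" = 1;
+
+-- Variation 2: the same finder with the optional project filter applied.
+SELECT "issues".* FROM "issues"
+INNER JOIN "projects" ON "projects"."id" = "issues"."project_id"
+WHERE "issues"."state_id" = 1 AND "projects"."namespace_id" = 9970;
+```
+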
+##### Query Plans
+
+- Provide the query plan for each raw SQL query included in the merge request, along with a link to the query plan following each raw SQL snippet.
+- Provide the link to the plan at: [explain.depesz.com](https://explain.depesz.com). Paste both the plan and the query used in the form.
- When providing query plans, make sure it hits enough data:
- You can use a GitLab production replica to test your queries on a large scale,
through the `#database-lab` Slack channel or through [chatops](understanding_explain_plans.md#chatops).
- Usually, the `gitlab-org` namespace (`namespace_id = 9970`) and the
`gitlab-org/gitlab-foss` (`project_id = 13083`) or the `gitlab-org/gitlab` (`project_id = 278964`)
projects provide enough data to serve as a good example.
-- For query changes, it is best to provide the SQL query along with a
- plan _before_ and _after_ the change. This helps to spot differences
- quickly.
+ - That means that no query plan should return 0 records, or fewer records than the provided limit (if a limit is included). If a query is used in batching, a proper example batch with adequate included results should be identified and provided.
+ - If your queries belong to a new feature in GitLab.com and thus don't return data in production, it's suggested to analyze the query and provide the plan from a local environment.
+ - For more information on how to find the number of actually returned records, see [Understanding EXPLAIN plans](understanding_explain_plans.md).
+- For query changes, it is best to provide both the SQL queries along with the
+ plan _before_ and _after_ the change. This helps spot differences quickly.
- Include data that shows the performance improvement, preferably in
the form of a benchmark.
@@ -158,8 +193,8 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
- Maintainer: After the merge request is merged, notify Release Managers about it on `#f_upcoming_release` Slack channel.
- Check consistency with `db/structure.sql` and that migrations are [reversible](migration_style_guide.md#reversibility)
- Check that the relevant version files under `db/schema_migrations` were added or removed.
- - Check queries timing (If any): Queries executed in a migration
- need to fit comfortably within `15s` - preferably much less than that - on GitLab.com.
+ - Check queries timing (if any): in a single transaction, the cumulative query time executed in a migration
+ needs to fit comfortably within `15s` - preferably much less than that - on GitLab.com.
- For column removals, make sure the column has been [ignored in a previous release](what_requires_downtime.md#dropping-columns)
- Check [background migrations](background_migrations.md):
- Establish a time estimate for execution on GitLab.com. For historical purposes,
@@ -190,7 +225,7 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
- For given queries, review parameters regarding data distribution
- [Check query plans](understanding_explain_plans.md) and suggest improvements
to queries (changing the query, schema or adding indexes and similar)
- - General guideline is for queries to come in below 100ms execution time
+ - The general guideline is for queries to come in below [100ms execution time](query_performance.md#timing-guidelines-for-queries)
- Avoid N+1 problems and minimize the [query count](merge_request_performance_guidelines.md#query-counts).
### Timing guidelines for migrations
@@ -199,11 +234,11 @@ In general, migrations for a single deploy shouldn't take longer than
1 hour for GitLab.com. The following guidelines are not hard rules, they were
estimated to keep migration timing to a minimum.
-NOTE: **Note:**
+NOTE:
Keep in mind that all runtimes should be measured against GitLab.com.
| Migration Type | Execution Time Recommended | Notes |
|----|----|---|
| Regular migrations on `db/migrate` | `3 minutes` | A valid exception is index creation, as this can take a long time. |
| Post migrations on `db/post_migrate` | `10 minutes` | |
-| Background migrations | --- | Since these are suitable for larger tables, it's not possible to set a precise timing guideline, however, any single query must stay below `1 second` execution time with cold caches. |
+| Background migrations | --- | Since these are suitable for larger tables, it's not possible to set a precise timing guideline; however, any single query must stay below [`1 second` execution time](query_performance.md#timing-guidelines-for-queries) with cold caches. |
diff --git a/doc/development/db_dump.md b/doc/development/db_dump.md
index b16cd4b08d1..505305c860a 100644
--- a/doc/development/db_dump.md
+++ b/doc/development/db_dump.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Importing a database dump into a staging environment
diff --git a/doc/development/deleting_migrations.md b/doc/development/deleting_migrations.md
index df36715fa6a..f159f679c32 100644
--- a/doc/development/deleting_migrations.md
+++ b/doc/development/deleting_migrations.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Delete existing migrations
diff --git a/doc/development/deprecation_guidelines/index.md b/doc/development/deprecation_guidelines/index.md
index 5cc408bb57b..c2fea3c6053 100644
--- a/doc/development/deprecation_guidelines/index.md
+++ b/doc/development/deprecation_guidelines/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Deprecation guidelines
@@ -15,7 +15,7 @@ It's important to understand the difference between **deprecation** and
**removal**:
**Deprecation** is the process of flagging/marking/announcing that a feature
-will be removed in a future version of GitLab.
+is scheduled for removal in a future version of GitLab.
**Removal** is the process of actually removing a feature that was previously
deprecated.
diff --git a/doc/development/diffs.md b/doc/development/diffs.md
index ece7bb9e9ee..fba8eda0408 100644
--- a/doc/development/diffs.md
+++ b/doc/development/diffs.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Working with diffs
-Currently we rely on different sources to present diffs, these include:
+We rely on different sources to present diffs. These include:
- Gitaly service
- Database (through `merge_request_diff_files`)
@@ -14,7 +14,14 @@ Currently we rely on different sources to present diffs, these include:
## Deep Dive
-In January 2019, Oswaldo Ferreira hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`) on GitLab's Diffs and Commenting on Diffs functionality to share his domain specific knowledge with anyone who may work in this part of the code base in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=K6G3gMcFyek), and the slides on [Google Slides](https://docs.google.com/presentation/d/1bGutFH2AT3bxOPZuLMGl1ANWHqFnrxwQwjiwAZkF-TU/edit) and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/b5ad2f336e0afcfe0f99db0af0ccc71a/). Everything covered in this deep dive was accurate as of GitLab 11.7, and while specific details may have changed since then, it should still serve as a good introduction.
+In January 2019, Oswaldo Ferreira hosted a Deep Dive (GitLab team members only:
+`https://gitlab.com/gitlab-org/create-stage/issues/1`) on GitLab Diffs and Commenting on Diffs
+functionality to share his domain specific knowledge with anyone who may work in this part of the
+codebase in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=K6G3gMcFyek),
+and the slides on [Google Slides](https://docs.google.com/presentation/d/1bGutFH2AT3bxOPZuLMGl1ANWHqFnrxwQwjiwAZkF-TU/edit)
+and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/b5ad2f336e0afcfe0f99db0af0ccc71a/).
+Everything covered in this deep dive was accurate as of GitLab 11.7, and while specific details may
+have changed since then, it should still serve as a good introduction.
## Architecture overview
@@ -56,8 +63,7 @@ of hitting the repository every time we need the diff of the file, we:
## Diff limits
As explained above, we limit single diff files and the size of the whole diff. There are scenarios where we collapse the diff file,
-and cases where the diff file is not presented at all, and the user is guided to the Blob view. Here we'll go into details about
-these limits.
+and cases where the diff file is not presented at all, and the user is guided to the Blob view.
### Diff collection limits
@@ -67,40 +73,40 @@ Limits that act onto all diff files collection. Files number, lines number and f
Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_files] = 100
```
-File diffs will be collapsed (but be expandable) if 100 files have already been rendered.
+File diffs are collapsed (but are expandable) if 100 files have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:safe_max_lines] = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
```
-File diffs will be collapsed (but be expandable) if 5000 lines have already been rendered.
+File diffs are collapsed (but are expandable) if 5000 lines have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:safe_max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:safe_max_files] * 5.kilobytes = 500.kilobytes
```
-File diffs will be collapsed (but be expandable) if 500 kilobytes have already been rendered.
+File diffs are collapsed (but are expandable) if 500 kilobytes have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_files] = Commit::DIFF_HARD_LIMIT_FILES = 1000
```
-No more files will be rendered at all if 1000 files have already been rendered.
+No more files are rendered at all if 1000 files have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_lines] = Commit::DIFF_HARD_LIMIT_LINES = 50000
```
-No more files will be rendered at all if 50,000 lines have already been rendered.
+No more files are rendered at all if 50,000 lines have already been rendered.
```ruby
Gitlab::Git::DiffCollection.collection_limits[:max_bytes] = Gitlab::Git::DiffCollection.collection_limits[:max_files] * 5.kilobytes = 5000.kilobytes
```
-No more files will be rendered at all if 5 megabytes have already been rendered.
+No more files are rendered at all if 5 megabytes have already been rendered.
-All collection limit parameters are currently sent and applied on Gitaly. That is, once the limit is surpassed,
-Gitaly will only return the safe amount of data to be persisted on `merge_request_diff_files`.
+All collection limit parameters are sent and applied on Gitaly. That is, after the limit is surpassed,
+Gitaly only returns the safe amount of data to be persisted on `merge_request_diff_files`.
### Individual diff file limits
@@ -110,24 +116,24 @@ Limits that act onto each diff file of a collection. Files number, lines number
Diff patches are collapsed when surpassing 10% of the value set in `ApplicationSettings#diff_max_patch_bytes`.
That is, it's equivalent to 10kb if the maximum allowed value is 100kb.
-The diff will still be persisted and expandable if the patch size doesn't
+The diff is persisted and expandable if the patch size doesn't
surpass `ApplicationSettings#diff_max_patch_bytes`.
Although this nomenclature (Collapsing) is also used on Gitaly, this limit is only used on GitLab (hardcoded - not sent to Gitaly).
-Gitaly will only return `Diff.Collapsed` (RPC) when surpassing collection limits.
+Gitaly only returns `Diff.Collapsed` (RPC) when surpassing collection limits.
#### Not expandable patches (too large)
The patch is not rendered if it's larger than `ApplicationSettings#diff_max_patch_bytes`.
-Users will see a `This source diff could not be displayed because it is too large` message.
+Users see a `This source diff could not be displayed because it is too large` message.
```ruby
Commit::DIFF_SAFE_LINES = Gitlab::Git::DiffCollection::DEFAULT_LIMITS[:max_lines] = 5000
```
-File diff will be suppressed (technically different from collapsed, but behaves the same, and is expandable) if it has more than 5000 lines.
+The file diff is suppressed (technically different from collapsed, but it behaves the same and is expandable) if it has more than 5000 lines.
-This limit is currently hardcoded and only applied on GitLab.
+This limit is hardcoded and only applied on GitLab.
## Viewers
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index 6d277f9ae99..9228609aae9 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -1,7 +1,7 @@
---
stage: Monitor
group: Health
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Distributed Tracing - development guidelines
@@ -49,7 +49,7 @@ all subsystems at GitLab:
- Correlation IDs should never be used to pass context (for example, a username or an IP address).
- Correlation IDs should never be _parsed_, or manipulated in other ways (for example, split).
-The [LabKit library](https://gitlab.com/gitlab-org/labkit) provides a standardized interface for working with GitLab's
+The [LabKit library](https://gitlab.com/gitlab-org/labkit) provides a standardized interface for working with GitLab
correlation IDs in the Go programming language. LabKit can be used as a
reference implementation for developers working with tracing and correlation IDs
on non-Go GitLab subsystems.
@@ -59,7 +59,7 @@ on non-Go GitLab subsystems.
GitLab uses the `GITLAB_TRACING` environment variable to configure distributed tracing. The same
configuration is used for all components (e.g., Workhorse, Rails, etc).
-When `GITLAB_TRACING` is not set, the application will not be instrumented, meaning that there is
+When `GITLAB_TRACING` is not set, the application isn't instrumented, meaning that there is
no overhead at all.
To enable `GITLAB_TRACING`, a valid _"configuration-string"_ value should be set, with a URL-like
@@ -94,8 +94,8 @@ by typing `p` `b` in the browser window.
Once the performance bar is enabled, click on the **Trace** link in the performance bar to go to
the Jaeger UI.
-The Jaeger search UI will return a query for the `Correlation-ID` of the current request. Normally,
-this search should return a single trace result. Clicking this result will show the detail of the
+The Jaeger search UI returns a query for the `Correlation-ID` of the current request. Normally,
+this search should return a single trace result. Clicking this result shows the detail of the
trace in a hierarchical time-line.
![Jaeger Search UI](img/distributed_tracing_jaeger_ui.png)
@@ -154,7 +154,7 @@ This should start the process with the default listening ports.
### 2. Configure the `GITLAB_TRACING` environment variable
-Once you have Jaeger running, you'll need to configure the `GITLAB_TRACING` variable with the
+Once you have Jaeger running, configure the `GITLAB_TRACING` variable with the
appropriate configuration string.
**TL;DR:** If you are running everything on the same host, use the following value:
@@ -178,9 +178,9 @@ This configuration string uses the Jaeger driver `opentracing://jaeger` with the
| `udp_endpoint` | `localhost:6831` | This is the default. Configures Jaeger to send trace information to the UDP listener on port `6831` using compact thrift protocol. Note that we've experienced some issues with the [Jaeger Client for Ruby](https://github.com/salemove/jaeger-client-ruby) when using this protocol. |
| `sampler` | `probabalistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabalistic` sampler to randomly sample _1%_ of traces. |
-| `service_name` | `api` | Override the service name used by the Jaeger backend. This parameter will take precedence over the application-supplied value. |
+| `service_name` | `api` | Override the service name used by the Jaeger backend. This parameter takes precedence over the application-supplied value. |
-NOTE: **Note:**
+NOTE:
The same `GITLAB_TRACING` value should be configured in the environment
variables for all GitLab processes, including Workhorse, Gitaly, Rails, and Sidekiq.
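For illustration only, a configuration string that combines the Jaeger driver with the
parameters documented in the table above could look like the following sketch; adjust the
values for your environment:

```shell
# Illustrative value only; the parameter values are the defaults documented above.
export GITLAB_TRACING="opentracing://jaeger?udp_endpoint=localhost:6831&sampler=probabalistic&sampler_param=0.01"
```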
@@ -189,7 +189,7 @@ variables for all GitLab processes, including Workhorse, Gitaly, Rails, and Side
After the `GITLAB_TRACING` environment variable is exported to all GitLab services, start the
application.
-When `GITLAB_TRACING` is configured properly, the application will log this on startup:
+When `GITLAB_TRACING` is configured properly, the application logs this on startup:
```shell
13:41:53 gitlab-workhorse.1 | 2019/02/12 13:41:53 Tracing enabled
@@ -198,7 +198,7 @@ When `GITLAB_TRACING` is configured properly, the application will log this on s
...
```
-If `GITLAB_TRACING` is not configured correctly, this will also be logged:
+If `GITLAB_TRACING` is not configured correctly, this issue is logged:
```shell
13:43:45 gitaly.1 | 2019/02/12 13:43:45 skipping tracing configuration step: tracer: unable to load driver mytracer
@@ -215,6 +215,6 @@ not set.
By default, the Jaeger search UI is available at <http://localhost:16686/search>.
-TIP: **Tip:**
-Don't forget that you will need to generate traces by using the application before
+NOTE:
+Don't forget that you must generate traces by using the application before
they appear in the Jaeger UI.
diff --git a/doc/development/doc_styleguide.md b/doc/development/doc_styleguide.md
index 4a9f2a626f7..1595a841ade 100644
--- a/doc/development/doc_styleguide.md
+++ b/doc/development/doc_styleguide.md
@@ -3,3 +3,6 @@ redirect_to: 'documentation/styleguide.md'
---
This document was moved to [another location](documentation/styleguide.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/documentation/feature-change-workflow.md b/doc/development/documentation/feature-change-workflow.md
index 004d8833e63..78e5510ffca 100644
--- a/doc/development/documentation/feature-change-workflow.md
+++ b/doc/development/documentation/feature-change-workflow.md
@@ -3,3 +3,6 @@ redirect_to: 'workflow.md'
---
This document was moved to [another location](workflow.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/documentation/feature_flags.md b/doc/development/documentation/feature_flags.md
index 5bc3e1d59e2..59298c5345f 100644
--- a/doc/development/documentation/feature_flags.md
+++ b/doc/development/documentation/feature_flags.md
@@ -30,7 +30,7 @@ See how to document them below, according to the state of the flag:
- [Features that can be enabled or disabled for a single project](#features-enabled-by-project).
- [Features with the feature flag removed](#features-with-flag-removed).
-The [`**(CORE ONLY)**`](styleguide/index.md#product-badges) badge or equivalent for
+The [`**(CORE ONLY)**`](styleguide/index.md#product-tier-badges) badge or equivalent for
the feature's tier should be added to the line and heading that refer to
enabling or disabling feature flags, because Admin access is required to do so.
This indicates that it cannot be done by regular users of GitLab.com.
@@ -65,7 +65,7 @@ be enabled for a single project, and is not ready for production use:
> - It's not recommended for production use.
> - To use it in GitLab self-managed instances, ask a GitLab administrator to [enable it](#anchor-to-section). **(CORE ONLY)**
-CAUTION: **Warning:**
+WARNING:
This feature might not be available to you. Check the **version history** note above for details.
(...Regular content goes here...)
@@ -124,7 +124,7 @@ use:
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
-CAUTION: **Warning:**
+WARNING:
This feature might not be available to you. Check the **version history** note above for details.
(...Regular content goes here...)
@@ -180,7 +180,7 @@ cannot be enabled for a single project, and is ready for production use:
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
-CAUTION: **Warning:**
+WARNING:
This feature might not be available to you. Check the **version history** note above for details.
(...Regular content goes here...)
@@ -253,7 +253,7 @@ be enabled by project, and is ready for production use:
> - It's recommended for production use.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#anchor-to-section). **(CORE ONLY)**
-CAUTION: **Warning:**
+WARNING:
This feature might not be available to you. Check the **version history** note above for details.
(...Regular content goes here...)
diff --git a/doc/development/documentation/improvement-workflow.md b/doc/development/documentation/improvement-workflow.md
index 004d8833e63..78e5510ffca 100644
--- a/doc/development/documentation/improvement-workflow.md
+++ b/doc/development/documentation/improvement-workflow.md
@@ -3,3 +3,6 @@ redirect_to: 'workflow.md'
---
This document was moved to [another location](workflow.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index 69a8ff10878..5fb5e9b433a 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -1,13 +1,13 @@
---
stage: none
group: Documentation Guidelines
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: Learn how to contribute to GitLab Documentation.
---
# GitLab Documentation guidelines
-GitLab's documentation is [intended as the single source of truth (SSOT)](https://about.gitlab.com/handbook/documentation/) for information about how to configure, use, and troubleshoot GitLab. The documentation contains use cases and usage instructions for every GitLab feature, organized by product area and subject. This includes topics and workflows that span multiple GitLab features, and the use of GitLab with other applications.
+The GitLab documentation is [intended as the single source of truth (SSOT)](https://about.gitlab.com/handbook/documentation/) for information about how to configure, use, and troubleshoot GitLab. The documentation contains use cases and usage instructions for every GitLab feature, organized by product area and subject. This includes topics and workflows that span multiple GitLab features, and the use of GitLab with other applications.
In addition to this page, the following resources can help you craft and contribute to documentation:
@@ -38,8 +38,8 @@ Documentation issues and merge requests are part of their respective repositorie
The [CI pipeline for the main GitLab project](../pipelines.md) is configured to automatically
run only the jobs that match the type of contribution. If your contribution contains
-**only** documentation changes, then only documentation-related jobs will be run, and
-the pipeline will complete much faster than a code contribution.
+**only** documentation changes, then only documentation-related jobs run, and
+the pipeline completes much faster than a code contribution.
If you are submitting documentation-only changes to Runner, Omnibus, or Charts,
the fast pipeline is not determined automatically. Instead, create branches for
@@ -82,7 +82,7 @@ All values are treated as strings and are only used for the
Each page should ideally have metadata related to the stage and group it
belongs to, as well as an information block as described below:
-- `stage`: The [Stage](https://about.gitlab.com/handbook/product/product-categories/#devops-stages)
+- `stage`: The [Stage](https://about.gitlab.com/handbook/product/categories/#devops-stages)
to which the majority of the page's content belongs.
- `group`: The [Group](https://about.gitlab.com/company/team/structure/#product-groups)
to which the majority of the page's content belongs.
@@ -93,7 +93,7 @@ belongs to, as well as an information block as described below:
```plaintext
To determine the technical writer assigned to the Stage/Group
associated with this page, see
- https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+ https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
```
For example, the following metadata would be at the beginning of a product
@@ -104,7 +104,7 @@ feature:
---
stage: Monitor
group: APM
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
```
@@ -152,7 +152,7 @@ comments: false
Each page can have additional, optional metadata (set in the
[default.html](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/fc3577921343173d589dfa43d837b4307e4e620f/layouts/default.html#L30-52)
-Nanoc layout), which will be displayed at the top of the page if defined:
+Nanoc layout), which is displayed at the top of the page if defined:
- `reading_time`: If you want to add an indication of the approximate reading
time of a page, you can set `reading_time` to `true`. This uses a simple
@@ -161,78 +161,82 @@ Nanoc layout), which will be displayed at the top of the page if defined:
## Move or rename a page
-Moving or renaming a document is the same as changing its location. This process
-requires specific steps to ensure that visitors can find the new
-documentation page, whether they're using `/help` from a GitLab instance or by
-visiting <https://docs.gitlab.com>.
-
-Be sure to assign a technical writer to a page move or rename MR. Technical
+Moving or renaming a document is the same as changing its location.
+Be sure to assign a technical writer to any MR that renames or moves a page. Technical
Writers can help with any questions and can review your change.
-To change a document's location, don't remove the old document, but instead
-replace all of its content with the following:
+When moving or renaming a page, you must redirect browsers to the new page. This
+ensures users find the new page, and have the opportunity to update their bookmarks.
-```markdown
----
-redirect_to: '../path/to/file/index.md'
----
+There are two types of redirects:
-This document was moved to [another location](../path/to/file/index.md).
-```
+- Redirect files added into the docs themselves, for users who view the docs in `/help`
+ on self-managed instances. For example, [`/help` on GitLab.com](https://gitlab.com/help).
+- Redirects in a [`_redirects`](../../user/project/pages/redirects.md) file, for users
+ who view the docs on <https://docs.gitlab.com>.
-Replace `../path/to/file/index.md` with the relative path to the old document.
+To add a redirect (a combined example follows these steps):
-The `redirect_to` variable supports both full and relative URLs; for example:
+1. In an MR in one of the internal docs projects (`gitlab`, `gitlab-runner`, `omnibus-gitlab`
+ or `charts`):
+ 1. Move or rename the doc, but do not delete the old doc.
+ 1. In the old doc, add the redirect code for `/help`. Use the following template exactly,
+ and only change the links and date. Use relative paths and `.md` for a redirect
+ to another docs page. Use the full URL to redirect to a different project or site:
-- `https://docs.gitlab.com/ee/path/to/file.html`
-- `../path/to/file.html`
-- `path/to/file.md`
+ ```markdown
+ ---
+ redirect_to: '../path/to/file/index.md'
+ ---
-The redirect works for <https://docs.gitlab.com>, and any `*.md` paths are
-changed to `*.html`. The description line following the `redirect_to` code
-informs the visitor that the document changed location if the redirect process
-doesn't complete successfully.
+ This document was moved to [another location](../path/to/file/index.md).
-For example, if you move `doc/workflow/lfs/index.md` to
-`doc/administration/lfs.md`, then the steps would be:
+ <!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
+ <!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
+ ```
-1. Copy `doc/workflow/lfs/index.md` to `doc/administration/lfs.md`
-1. Replace the contents of `doc/workflow/lfs/index.md` with:
+ Redirect files linking to docs in any of the 4 internal docs projects can be
+ removed after 3 months. Redirect files linking to external sites can be removed
+ after 1 year.
- ```markdown
- ---
- redirect_to: '../../administration/lfs.md'
- ---
+ 1. If the document being moved has any Disqus comments on it, follow the steps
+ described in [Redirections for pages with Disqus comments](#redirections-for-pages-with-disqus-comments).
+ 1. Assign the MR to a technical writer for review and merge.
+1. If the redirect is to one of the 4 internal docs projects (not an external URL),
+ create an MR in [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs):
+ 1. Update [`_redirects`](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/master/content/_redirects)
+ with one redirect entry for each renamed or moved file. This code works for
+ <https://docs.gitlab.com> links only:
- This document was moved to [another location](../../administration/lfs.md).
- ```
+ ```plaintext
+ /ee/path/to/old_file.html /ee/path/to/new_file 302 # To be removed after YYYY-MM-DD
+ ```
-1. Find and replace any occurrences of the old location with the new one.
- A quick way to find them is to use `git grep` on the repository you changed
- the file from:
+ The path must start with the internal project directory `/ee` for `gitlab`,
+ `/gitlab-runner`, `/omnibus-gitlab`, or `/charts`, and must end with `.html`.
- ```shell
- git grep -n "workflow/lfs/lfs_administration"
- git grep -n "lfs/lfs_administration"
- ```
+ `_redirects` entries can be removed after one year.
-1. If the document being moved has any Disqus comments on it, follow the steps
- described in [Redirections for pages with Disqus comments](#redirections-for-pages-with-disqus-comments).
+1. Search for links to the old file. You must find and update all of them:
-Things to note:
+ - In <https://gitlab.com/gitlab-com/www-gitlab-com>, search for full URLs:
+ `grep -r "docs.gitlab.com/ee/path/to/file.html" .`
+ - In <https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/content/_data>,
+ search the navigation bar configuration files for the path with `.html`:
+ `grep -r "path/to/file.html" .`
+ - In any of the 4 internal projects. This includes searching for links in the docs
+ and codebase. Search for all variations, including full URL and just the path.
+ For example, on macOS, go to the root directory of the `gitlab` project and run:
-- Since we also use inline documentation, except for the documentation itself,
- the document might also be referenced in the views of GitLab (`app/`) which will
- render when visiting `/help`, and sometimes in the testing suite (`spec/`).
- You must search these paths for references to the doc and update them as well.
-- The above `git grep` command will search recursively in the directory you run
- it in for `workflow/lfs/lfs_administration` and `lfs/lfs_administration`
- and will print the file and the line where this file is mentioned.
- You may ask why the two greps. Since [we use relative paths to link to
- documentation](styleguide/index.md#links), sometimes it might be useful to search a path deeper.
-- The `*.md` extension is not used when a document is linked to GitLab's
- built-in help page, which is why we omit it in `git grep`.
-- Use the checklist on the "Change documentation location" MR description template.
+ ```shell
+ grep -r "docs.gitlab.com/ee/path/to/file.html" .
+ grep -r "path/to/file.html" .
+ grep -r "path/to/file.md" .
+ grep -r "path/to/file" .
+ ```
+
+ You may need to try variations of relative links as well, such as `../path/to/file`
+ or even `../file` to find every case.
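To tie the steps above together, here is a minimal sketch of the `/help` redirect portion of the process. The paths (`doc/foo/old_page.md` renamed to `doc/foo/new_page.md`) and the deletion date are hypothetical placeholders:

```shell
# Hypothetical rename: doc/foo/old_page.md becomes doc/foo/new_page.md.
git mv doc/foo/old_page.md doc/foo/new_page.md

# Recreate the old path as a redirect file, following the template above.
# The date is a placeholder: use 3 months out for redirects to internal docs,
# or 1 year for redirects to external sites.
cat > doc/foo/old_page.md << 'EOF'
---
redirect_to: 'new_page.md'
---

This document was moved to [another location](new_page.md).

<!-- This redirect file can be deleted after 2021-06-01. -->
<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
EOF

git add doc/foo/old_page.md
```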
### Redirections for pages with Disqus comments
@@ -249,7 +253,7 @@ available under `https://docs.gitlab.com/my-old-location/README.html` to a new l
`https://docs.gitlab.com/my-new-location/index.html`.
Into the **new document** front matter, we add the following information. You must
-include the file name in the `disqus_identifier` URL, even if it's `index.html` or `README.html`.
+include the filename in the `disqus_identifier` URL, even if it's `index.html` or `README.html`.
```yaml
---
@@ -267,7 +271,7 @@ Before getting started, make sure you read the introductory section
- Label the MR `Documentation` (can only be done by people with `developer` access, for example, GitLab team members)
- Assign the correct milestone per note below (can only be done by people with `developer` access, for example, GitLab team members)
-Documentation will be merged if it is an improvement on existing content,
+Documentation is merged if it is an improvement on existing content,
represents a good-faith effort to follow the template and style standards,
and is believed to be accurate.
@@ -285,16 +289,16 @@ Every GitLab instance includes the documentation, which is available at `/help`
(`https://gitlab.example.com/help`). For example, <https://gitlab.com/help>.
The documentation available online on <https://docs.gitlab.com> is deployed every four hours from the `master` branch of GitLab, Omnibus, and Runner. Therefore,
-after a merge request gets merged, it will be available online on the same day.
-However, it will be shipped (and available on `/help`) within the milestone assigned
+after a merge request gets merged, it is available online on the same day.
+However, it's shipped (and available on `/help`) within the milestone assigned
to the MR.
For example, let's say your merge request has a milestone set to 11.3, which
-will be released on 2018-09-22. If it gets merged on 2018-09-15, it will be
+has a release date of 2018-09-22. If it gets merged on 2018-09-15, it is
available online on 2018-09-15, but, as the feature freeze date has passed, if
the MR does not have a `~"Pick into 11.3"` label, the milestone has to be changed
-to 11.4 and it will be shipped with all GitLab packages only on 2018-10-22,
-with GitLab 11.4. Meaning, it will only be available under `/help` from GitLab
+to 11.4 and it ships with all GitLab packages only on 2018-10-22,
+with GitLab 11.4. This means it's available only in `/help` from GitLab
11.4 onward, but available on <https://docs.gitlab.com/> on the same day it was merged.
### Linking to `/help`
@@ -365,7 +369,7 @@ You can combine one or more of the following:
### GitLab `/help` tests
Several [RSpec tests](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/features/help_pages_spec.rb)
-are run to ensure GitLab documentation renders and works correctly. In particular, that [main docs landing page](../../README.md) will work correctly from `/help`.
+are run to ensure GitLab documentation renders and works correctly. In particular, they ensure that the [main docs landing page](../../README.md) works correctly from `/help`.
For example, [GitLab.com's `/help`](https://gitlab.com/help).
## Docs site architecture
@@ -381,7 +385,7 @@ on how the left-side navigation menu is built and updated.
## Previewing the changes live
-NOTE: **Note:**
+NOTE:
To preview your changes to documentation locally, follow this
[development guide](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/README.md#development-when-contributing-to-gitlab-documentation) or [these instructions for GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/gitlab_docs.md).
@@ -392,20 +396,20 @@ The live preview is currently enabled for the following projects:
If your merge request has docs changes, you can use the manual `review-docs-deploy` job
to deploy the docs review app for your merge request.
-You will need at least Maintainer permissions to be able to run it.
+You need at least Maintainer permissions to be able to run it.
![Manual trigger a docs build](img/manual_build_docs.png)
You must push a branch to those repositories, as it doesn't work for forks.
-The `review-docs-deploy*` job will:
+The `review-docs-deploy*` job:
-1. Create a new branch in the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs)
+1. Creates a new branch in the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs)
project named after the scheme: `docs-preview-$DOCS_GITLAB_REPO_SUFFIX-$CI_MERGE_REQUEST_IID`,
where `DOCS_GITLAB_REPO_SUFFIX` is the suffix for each product, e.g., `ee` for
EE, `omnibus` for Omnibus GitLab, etc., and `CI_MERGE_REQUEST_IID` is the ID
of the respective merge request.
-1. Trigger a cross project pipeline and build the docs site with your changes.
+1. Triggers a cross-project pipeline and builds the docs site with your changes.
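For example, for a hypothetical merge request with IID `12345` in the EE project (where `DOCS_GITLAB_REPO_SUFFIX` is `ee`), the job would create a branch named:

```plaintext
docs-preview-ee-12345
```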
In case the review app URL returns 404, this means that either the site is not
yet deployed, or something went wrong with the remote pipeline. Give it a few
@@ -414,11 +418,11 @@ remote pipeline from the link in the merge request's job output.
If the pipeline failed or got stuck, drop a line in the `#docs` chat channel.
Make sure that you always delete the branch of the merge request you were
-working on. If you don't, the remote docs branch won't be removed either,
-and the server where the Review Apps are hosted will eventually be out of
+working on. If you don't, the remote docs branch isn't removed either,
+and the server where the Review Apps are hosted can eventually run out of
disk space.
-TIP: **Tip:**
+NOTE:
Someone with no merge rights to the GitLab projects (think of forks from
contributors) cannot run the manual job. In that case, you can
ask someone from the GitLab team who has the permissions to do that for you.
@@ -449,7 +453,7 @@ If you want to know the in-depth details, here's what's really happening:
- The number of the merge request is added so that you can know by the
`gitlab-docs` branch name the merge request it originated from.
1. The remote branch is then created if it doesn't exist (meaning you can
- re-run the manual job as many times as you want and this step will be skipped).
+ re-run the manual job as many times as you want and this step is skipped).
1. A new cross-project pipeline is triggered in the docs project.
1. The preview URL is shown both at the job output and in the merge request
widget. You also get the link to the remote pipeline.
@@ -494,7 +498,7 @@ To run the tool on an existing screenshot generator, take the following steps:
1. Set up the [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/gitlab_docs.md).
1. Navigate to the subdirectory with your cloned GitLab repository, typically `gdk/gitlab`.
1. Make sure that your GDK database is fully migrated: `bin/rake db:migrate RAILS_ENV=development`.
-1. Install pngquant, see the tool website for more info: [`pngquant`](https://pngquant.org/)
+1. Install pngquant. See the tool website for more information: [`pngquant`](https://pngquant.org/)
1. Run `scripts/docs_screenshots.rb spec/docs_screenshots/<name_of_screenshot_generator>.rb <milestone-version>` (see the example after this list).
1. Identify the location of the screenshots, based on the `gitlab/doc` location defined by the `it` parameter in your script.
1. Commit the newly created screenshots.
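For illustration, a complete run against the live example generator mentioned below could look like the following sketch; the GDK path and the `13.7` milestone version are placeholders:

```shell
# Placeholders: adjust the GDK path and the milestone version to your setup.
cd ~/gdk/gitlab
bin/rake db:migrate RAILS_ENV=development
scripts/docs_screenshots.rb spec/docs_screenshots/container_registry_docs.rb 13.7
```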
@@ -537,8 +541,12 @@ To have the screenshot focuses few more steps are needed:
- **wait for the content**: `expect(screenshot_area).to have_content 'Expiration interval'`
- **set the crop area**: `set_crop_data(screenshot_area, 20)`
-In particular `set_crop_data` accepts as arguments: a `DOM` element and a padding, the padding will be added around the element enlarging the screenshot area.
+In particular, `set_crop_data` accepts as arguments: a `DOM` element and a
+padding. The padding is added around the element, enlarging the screenshot area.
#### Live example
Please use `spec/docs_screenshots/container_registry_docs.rb` as a guide and as an example to create your own scripts.
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/documentation/restful_api_styleguide.md b/doc/development/documentation/restful_api_styleguide.md
index 13c6140114f..c8c48e71b48 100644
--- a/doc/development/documentation/restful_api_styleguide.md
+++ b/doc/development/documentation/restful_api_styleguide.md
@@ -3,7 +3,7 @@ type: reference, dev
stage: none
group: Development
info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
-description: "Writing styles, markup, formatting, and other standards for GitLab's RESTful APIs."
+description: "Writing styles, markup, formatting, and other standards for the GitLab RESTful APIs."
---
# RESTful API
@@ -107,10 +107,10 @@ Rendered example:
## cURL Examples
-The following sections include a set of [cURL](https://curl.haxx.se) examples
+The following sections include a set of [cURL](https://curl.se/) examples
you can use in the API documentation.
-CAUTION: **Caution:**
+WARNING:
Do not use information for real users, URLs, or tokens. For documentation, refer to our
relevant style guide sections on [Fake user information](styleguide/index.md#fake-user-information),
[Fake URLs](styleguide/index.md#fake-urls), and [Fake tokens](styleguide/index.md#fake-tokens).
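For illustration, a cURL example that follows these conventions could look like the following sketch, which uses the fake `gitlab.example.com` domain and a placeholder token rather than real values:

```shell
# Placeholder token and fake domain only; never include real credentials.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects"
```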
diff --git a/doc/development/documentation/site_architecture/deployment_process.md b/doc/development/documentation/site_architecture/deployment_process.md
index f101a669968..d92f58e5501 100644
--- a/doc/development/documentation/site_architecture/deployment_process.md
+++ b/doc/development/documentation/site_architecture/deployment_process.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Documentation deployment process
@@ -36,7 +36,7 @@ and tag all tooling images locally:
```
For each image, there's a manual job under the `images` stage in
-[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/.gitlab-ci.yml) which can be invoked at will.
+[`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/.gitlab-ci.yml) which can be invoked at any time.
## Update an old Docker image with new upstream docs content
@@ -46,12 +46,12 @@ for the version in question.
## Porting new website changes to old versions
-CAUTION: **Warning:**
+WARNING:
Porting changes to older branches can have unintended effects as we're constantly
changing the backend of the website. Use only when you know what you're doing
and make sure to test locally.
-The website will keep changing and being improved. In order to consolidate
+The website keeps changing and being improved. In order to consolidate
those changes to the stable branches, we'd need to pick certain changes
from time to time.
diff --git a/doc/development/documentation/site_architecture/global_nav.md b/doc/development/documentation/site_architecture/global_nav.md
index 21b0c4b6b43..fcf4662502f 100644
--- a/doc/development/documentation/site_architecture/global_nav.md
+++ b/doc/development/documentation/site_architecture/global_nav.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: "Learn how GitLab docs' global navigation works and how to add new items."
---
@@ -101,11 +101,11 @@ The global nav has 3 components:
The available sections are described in the table below:
-| Section | Description |
-| ------------- | ------------------------------------------ |
-| User | Documentation for the GitLab's user UI. |
-| Administrator | Documentation for the GitLab's Admin Area. |
-| Contributor | Documentation for developing GitLab. |
+| Section | Description |
+| ------------- | ------------------------------------ |
+| User | Documentation for the GitLab UI. |
+| Administrator | Documentation for the Admin Area. |
+| Contributor | Documentation for developing GitLab. |
The majority of the links available on the nav were added according to the UI.
The match is not perfect, as for some UI nav items the documentation doesn't
@@ -250,7 +250,7 @@ below the doc link:
```
All nav links are clickable. If the higher-level link does not have a link
-of its own, it must link to its first sub-item link, mimicking GitLab's navigation.
+of its own, it must link to its first sub-item link, mimicking the navigation in GitLab.
This must be avoided so that we don't have duplicated links nor two `.active` links
at the same time.
@@ -292,7 +292,7 @@ and the following syntax rules.
an "information" icon on the nav to make the user aware that the feature is
EE-only.
-CAUTION: **Caution:**
+WARNING:
All links present on the data file must end in `.html`, not `.md`. Do not
start any relative link with a forward slash `/`.
diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md
index 3772746e25b..be25a083948 100644
--- a/doc/development/documentation/site_architecture/index.md
+++ b/doc/development/documentation/site_architecture/index.md
@@ -1,8 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
-description: "Learn how GitLab's documentation website is architectured."
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Documentation site architecture
@@ -14,8 +13,8 @@ static site generator.
## Architecture
-While the source of the documentation content is stored in GitLab's respective product
-repositories, the source that is used to build the documentation
+While the source of the documentation content is stored in the repositories for
+each GitLab product, the source that is used to build the documentation
site _from that content_ is located at <https://gitlab.com/gitlab-org/gitlab-docs>.
The following diagram illustrates the relationship between the repositories
@@ -46,7 +45,7 @@ from where content is sourced, the `gitlab-docs` project, and the published outp
H -- symlink --> G
```
-You will not find any GitLab docs content in the `gitlab-docs` repository.
+GitLab docs content isn't kept in the `gitlab-docs` repository.
All documentation files are hosted in the respective repository of each
product, and all together are pulled to generate the docs website:
@@ -55,14 +54,14 @@ product, and all together are pulled to generate the docs website:
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/tree/master/docs)
- [GitLab Chart](https://gitlab.com/charts/gitlab/tree/master/doc)
-NOTE: **Note:**
+NOTE:
In September 2019, we [moved towards a single codebase](https://gitlab.com/gitlab-org/gitlab/-/issues/2952),
as such the docs for CE and EE are now identical. For historical reasons and
in order not to break any existing links throughout the internet, we still
maintain the CE docs (`https://docs.gitlab.com/ce/`), although it is hidden
from the website, and is now a symlink to the EE docs. When
[Pages supports redirects](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/24),
-we will be able to remove this completely.
+we can remove this completely.
## Assets
@@ -114,7 +113,7 @@ located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs
If you need to rebuild the Docker images immediately (must have maintainer level permissions):
-CAUTION: **Caution:**
+WARNING:
If you change the dockerfile configuration and rebuild the images, you can break the master
pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with
a different name first and test it to ensure you do not break the pipelines.
@@ -179,7 +178,7 @@ we reference the array with a symbol (`:versions`).
Whenever the custom CSS and JavaScript files under `content/assets/` change,
make sure to bump their version in the front matter. This method guarantees that
-your changes will take effect by clearing the cache of previous files.
+your changes take effect by clearing the cache of previous files.
Always use Nanoc's way of including those files, do not hardcode them in the
layouts. For example use:
@@ -196,7 +195,7 @@ The links pointing to the files should be similar to:
<%= @items['/path/to/assets/file.*'].path %>
```
-Nanoc will then build and render those links correctly according with what's
+Nanoc then builds and renders those links correctly according with what's
defined in [`Rules`](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/Rules).
## Linking to source files
@@ -236,7 +235,7 @@ If you’re a GitLab team member, find credentials for the Algolia dashboard
in the shared [GitLab 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
To receive weekly reports of the search usage, search the Google doc with
title `Email, Slack, and GitLab Groups and Aliases`, search for `docsearch`,
-and add a comment with your email. You'll be added to the alias that gets the weekly
+and add a comment with your email to be added to the alias that gets the weekly
reports.
## Monthly release process (versions)
diff --git a/doc/development/documentation/site_architecture/release_process.md b/doc/development/documentation/site_architecture/release_process.md
index 547adc89a08..ba5363dbb71 100644
--- a/doc/development/documentation/site_architecture/release_process.md
+++ b/doc/development/documentation/site_architecture/release_process.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab Docs monthly release process
@@ -26,7 +26,7 @@ To add a new charts version:
version mapping. Note that only the `major.minor` version is needed.
1. Create a new merge request and merge it.
-TIP: **Tip:**
+NOTE:
It can be handy to create the future mappings since they are pretty much known.
In that case, when a new GitLab version is released, you don't have to repeat
this first step.
@@ -44,14 +44,14 @@ this needs to happen when the stable branches for all products have been created
```
A new `Dockerfile.12.0` should have been created and `.gitlab-ci.yml` should
- have the branches variables updated into a new branch. They will be automatically
+ have the branch variables updated in a new branch. They are automatically
committed.
1. Push the newly created branch, but **don't create a merge request**.
- Once you push, the `image:docker-singe` job will create a new Docker image
+ After you push, the `image:docs-single` job creates a new Docker image
tagged with the branch name you created in the first step. In the end, the
- image will be uploaded in the [Container Registry](https://gitlab.com/gitlab-org/gitlab-docs/container_registry)
- and it will be listed under the `registry` environment folder at
+ image is uploaded to the [Container Registry](https://gitlab.com/gitlab-org/gitlab-docs/container_registry)
+ and it is listed under the `registry` environment folder at
`https://gitlab.com/gitlab-org/gitlab-docs/-/environments/folders/registry` (must
have developer access).
@@ -66,7 +66,7 @@ Visit `http://localhost:4000/12.0/` to see if everything works correctly.
## 3. Create the release merge request
-NOTE: **Note:**
+NOTE:
To be [automated](https://gitlab.com/gitlab-org/gitlab-docs/-/issues/750).
Now it's time to create the monthly release merge request that adds the new
@@ -114,20 +114,20 @@ version and rotates the old one:
The versions dropdown is in a way "hardcoded". When the site is built, it looks
at the contents of `content/_data/versions.yaml` and based on that, the dropdown
-is populated. So, older branches will have different content, which means the
-dropdown will list one or more releases behind. Remember that the new changes of
+is populated. Older branches have different content, which means the
+dropdown list is one or more releases behind. Remember that the new changes of
the dropdown are included in the unmerged `release-X-Y` branch.
The content of `content/_data/versions.yaml` needs to change for all online
versions (stable branches `X.Y` of the `gitlab-docs` project):
-1. Run the Rake task that will create all the respective merge requests needed to
- update the dropdowns and will be set to automatically be merged when their
+1. Run the Rake task that creates all the respective merge requests needed to
+ update the dropdowns. Set these to be merged automatically when their
pipelines succeed:
- NOTE: **Note:**
+ NOTE:
The `release-X-Y` branch needs to be present locally,
- and you need to have switched to it, otherwise the Rake task will fail.
+ and you need to have switched to it; otherwise, the Rake task fails.
```shell
git checkout release-X-Y
@@ -138,13 +138,13 @@ versions (stable branches `X.Y` of the `gitlab-docs` project):
to check that their pipelines pass, and once all are merged, proceed to the
following and final step.
-TIP: **Tip:**
+NOTE:
In case a pipeline fails, see [troubleshooting](#troubleshooting).
## 5. Merge the release merge request
The dropdown merge requests should have now been merged into their respective
-version (stable `X.Y` branch), which will trigger another pipeline. At this point,
+version (stable `X.Y` branch), which triggers another pipeline. At this point,
you need to only babysit the pipelines and make sure they don't fail:
1. Check the [pipelines page](https://gitlab.com/gitlab-org/gitlab-docs/pipelines)
@@ -152,9 +152,9 @@ you need to only babysit the pipelines and make sure they don't fail:
1. After all the pipelines of the online versions succeed, merge the release merge request.
1. Finally, run the
[`Build docker images weekly` pipeline](https://gitlab.com/gitlab-org/gitlab-docs/pipeline_schedules)
- that will build the `:latest` and `:archives` Docker images.
+ that builds the `:latest` and `:archives` Docker images.
-Once the scheduled pipeline succeeds, the docs site will be deployed with all
+Once the scheduled pipeline succeeds, the docs site is deployed with all
new versions online.
## Troubleshooting
@@ -163,7 +163,7 @@ Releasing a new version is a long process that involves many moving parts.
### `test_internal_links_and_anchors` failing on dropdown merge requests
-DANGER: **Deprecated:**
+WARNING:
We now pin versions in the `.gitlab-ci.yml` of the respective branch,
so the steps below are deprecated.
diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md
index 3cc7f49f05e..fd0c29f0fc1 100644
--- a/doc/development/documentation/structure.md
+++ b/doc/development/documentation/structure.md
@@ -1,7 +1,7 @@
---
stage: none
group: Style Guide
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: What to include in GitLab documentation pages.
---
@@ -9,7 +9,7 @@ description: What to include in GitLab documentation pages.
Use these standards to contribute content to the GitLab documentation.
-Before getting started, familiarize yourself with [GitLab's Documentation guidelines](index.md)
+Before getting started, familiarize yourself with [Documentation guidelines for GitLab](index.md)
and the [Documentation Style Guide](styleguide/index.md).
## Components of a documentation page
@@ -39,7 +39,7 @@ pre-deployment and post-deployment tasks.
## Template for new docs
-Follow the [folder structure and file name guidelines](styleguide/index.md#folder-structure-overview)
+Follow the [folder structure and filename guidelines](styleguide/index.md#folder-structure-overview)
and create a new topic by using this template:
```markdown
@@ -53,7 +53,7 @@ in Google Search snippets. It may help to write the page intro first, and then r
stage: Add the stage name here
group: Add the group name here
info: To determine the technical writer assigned to the Stage/Group associated with this page,
-see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Feature or Use Case Name **[TIER]** (1)
diff --git a/doc/development/documentation/styleguide/img/tier_badge.png b/doc/development/documentation/styleguide/img/tier_badge.png
new file mode 100644
index 00000000000..674d869da9b
--- /dev/null
+++ b/doc/development/documentation/styleguide/img/tier_badge.png
Binary files differ
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index 41e38266a58..971652f76d3 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -1,16 +1,15 @@
---
stage: none
group: Style Guide
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: 'Writing styles, markup, formatting, and other standards for GitLab Documentation.'
---
# Documentation Style Guide
-This document defines the standards for GitLab's documentation content and
-files.
+This document defines the standards for GitLab documentation.
-For broader information about the documentation, see the [Documentation guidelines](index.md).
+For broader information about the documentation, see the [Documentation guidelines](../index.md).
For guidelines specific to text in the GitLab interface, see the Pajamas [Content](https://design.gitlab.com/content/error-messages/) section.
@@ -48,20 +47,20 @@ documentation.
### All information
Include problem-solving actions that may address rare cases or be considered
-_risky_, so long as proper context is provided in the form of fully detailed
+_risky_, but provide proper context through fully detailed
warnings and caveats. This kind of content should be included as it could be
helpful to others and, when properly explained, its benefits outweigh the risks.
If you think you have found an exception to this rule, contact the
Technical Writing team.
-We will add all troubleshooting information to the documentation, no matter how
+GitLab adds all troubleshooting information to the documentation, no matter how
unlikely a user is to encounter a situation. For the [Troubleshooting sections](#troubleshooting),
people in GitLab Support can merge additions themselves.
### All media types
Include any media types/sources if the content is relevant to readers. You can
-freely include or link presentations, diagrams, videos, and so on; no matter who
+freely include or link presentations, diagrams, and videos. No matter who
it was originally composed for, if it is helpful to any of our audiences, we can
include it.
@@ -84,25 +83,25 @@ different types. For example, [Divio recommends](https://www.divio.com/blog/docu
At GitLab, we have so many product changes in our monthly releases that we can't
afford to continuously update multiple types of information. If we have multiple
-types, the information will become outdated. Therefore, we have a
+types, the information becomes outdated. Therefore, we have a
[single template](../structure.md) for documentation.
-We currently do not distinguish specific document types, although we are open to
+GitLab documentation does not distinguish specific document types. We are open to
reconsidering this policy after the documentation has reached a future stage of
maturity and quality. If you are reading this, then despite our continuous
improvement efforts, that point hasn't been reached.
### Link instead of summarize
-There is a temptation to summarize the information on another page. This will
-cause the information to live in two places. Instead, link to the single source
+There is a temptation to summarize the information on another page, which
+causes the information to live in two places. Instead, link to the single source
of truth and explain why it is important to consume the information.
### Organize by topic, not by type
-Beyond top-level audience-type folders (for example, `administration`), we
-organize content by topic, not by type, so it can be located in the
-single-source-of-truth (SSOT) section for the subject matter.
+We organize content by topic, not by type, so it can be located in the
+single-source-of-truth (SSOT) section for the subject matter. Top-level audience-type
+folders, like `administration`, are exceptions.
For example, do not create groupings of similar media types. For example:
@@ -117,38 +116,37 @@ cross-link between any related content.
### Docs-first methodology
-We employ a _documentation-first methodology_ to help ensure the documentation
-remains a complete and trusted resource, and to make communicating about the use
+We employ a _documentation-first methodology_. This method ensures the documentation
+remains a complete and trusted resource, and makes communicating about the use
of GitLab more efficient.
- If the answer to a question exists in documentation, share the link to the
documentation instead of rephrasing the information.
-- When you encounter new information not available in GitLab’s documentation (for
+- When you encounter new information not available in GitLab documentation (for
example, when working on a support case or testing a feature), your first step
should be to create a merge request (MR) to add this information to the
documentation. You can then share the MR to communicate this information.
-New information that would be useful toward the future usage or troubleshooting
-of GitLab should not be written directly in a forum or other messaging system,
-but added to a documentation MR and then referenced, as described above. Note
+New information about the future usage or troubleshooting
+of GitLab should not be written directly in a forum or other messaging system.
+Instead, add it to a documentation merge request, then reference it. Note
that among any other documentation changes, you can either:
- Add a [Troubleshooting section](#troubleshooting) to a doc if none exists.
- Un-comment and use the placeholder Troubleshooting section included as part of
our [documentation template](../structure.md#template-for-new-docs), if present.
-The more we reflexively add useful information to the documentation, the more
-(and more successfully) the documentation will be used to efficiently accomplish
-tasks and solve problems.
+The more we reflexively add information to the documentation, the more
+the documentation helps others efficiently accomplish tasks and solve problems.
If you have questions when considering, authoring, or editing documentation, ask
-the Technical Writing team on Slack in `#docs` or in GitLab by mentioning the
-writer for the applicable [DevOps stage](https://about.gitlab.com/handbook/product/product-categories/#devops-stages).
+the Technical Writing team. They're available on Slack in `#docs` or in GitLab by mentioning the
+writer for the applicable [DevOps stage](https://about.gitlab.com/handbook/product/categories/#devops-stages).
Otherwise, forge ahead with your best effort. It does not need to be perfect;
the team is happy to review and improve upon your content. Review the
[Documentation guidelines](index.md) before you begin your first documentation MR.
-Having a knowledge base in any form that's separate from the documentation would
+Maintaining a knowledge base separate from the documentation would
be against the documentation-first methodology, because the content would overlap with
the documentation.
@@ -161,11 +159,11 @@ Markdown rendering engine. For a complete Kramdown reference, see the
[GitLab Markdown Kramdown Guide](https://about.gitlab.com/handbook/markdown-guide/).
The [`gitlab-kramdown`](https://gitlab.com/gitlab-org/gitlab_kramdown) Ruby gem
-will support all [GFM markup](../../../user/markdown.md) in the future, which is
-all markup supported for display in the GitLab application itself. For now, use
-regular Markdown markup, following the rules in the linked style guide.
+plans to support all [GitLab Flavored Markdown](../../../user/markdown.md) in the future, which is
+all Markdown supported for display in the GitLab application itself. For now, use
+regular Markdown, following the rules in the linked style guide.
-Note that Kramdown-specific markup (for example, `{:.class}`) doesn't render
+Kramdown-specific markup (for example, `{:.class}`) doesn't render
properly on GitLab instances under [`/help`](../index.md#gitlab-help).
### HTML in Markdown
@@ -184,7 +182,7 @@ GitLab ensures that the Markdown used across all documentation is consistent, as
well as easy to review and maintain, by [testing documentation changes](../testing.md)
with [markdownlint](../testing.md#markdownlint). This lint test fails when any
document has an issue with Markdown formatting that may cause the page to render
-incorrectly within GitLab. It will also fail when a document is using
+incorrectly in GitLab. It also fails when a document has
non-standard Markdown (which may render correctly, but is not the current
standard for GitLab documentation).
@@ -194,7 +192,7 @@ A rule that could cause confusion is `MD044/proper-names`, as it might not be
immediately clear what caused markdownlint to fail, or how to correct the
failure. This rule checks a list of known words, listed in the `.markdownlint.json`
file in each project, to verify proper use of capitalization and backticks.
-Words in backticks will be ignored by markdownlint.
+Words in backticks are ignored by markdownlint.
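For illustration, you can run the lint locally before pushing changes. This sketch assumes `markdownlint-cli` is installed and that you run it from the root of the project:

```shell
# Assumes markdownlint-cli is installed; lints a single page against the
# project's .markdownlint.json configuration.
markdownlint --config .markdownlint.json doc/development/documentation/index.md
```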
In general, product names should follow the exact capitalization of the official
names of the products, protocols, and so on. See [`.markdownlint.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.markdownlint.json)
@@ -211,7 +209,7 @@ included in backticks. For example:
- "Change the `needs` keyword in your `.gitlab.yml`..."
- `needs` is a parameter, and `.gitlab.yml` is a file, so both need backticks.
- Additionally, `.gitlab.yml` will fail markdownlint without backticks as it
+ Additionally, `.gitlab.yml` without backticks fails markdownlint because it
does not have capital G or L.
- "Run `git clone` to clone a Git repository..."
- `git clone` is a command, so it must be lowercase, while Git is the product,
@@ -241,23 +239,23 @@ to update.
Put files for a specific product area into the related folder:
-| Directory | What belongs here |
-|:----------------------|:----------------------------------------------------------------------------------------------------------------------------------------|
-| `doc/user/` | User related documentation. Anything that can be done within the GitLab user interface goes here, including usage of the `/admin` interface. |
-| `doc/administration/` | Documentation that requires the user to have access to the server where GitLab is installed. The admin settings that can be accessed by using GitLab's interface exist under `doc/user/admin_area/`. |
-| `doc/api/` | API related documentation. |
+| Directory | What belongs here |
+|:----------------------|:------------------|
+| `doc/user/` | User related documentation. Anything that can be done in the GitLab user interface goes here, including usage of the `/admin` interface. |
+| `doc/administration/` | Documentation that requires the user to have access to the server where GitLab is installed. Administrator settings in the GitLab user interface are under `doc/user/admin_area/`. |
+| `doc/api/` | API-related documentation. |
| `doc/development/` | Documentation related to the development of GitLab, whether contributing code or documentation. Related process and style guides should go here. |
-| `doc/legal/` | Legal documents about contributing to GitLab. |
-| `doc/install/` | Contains instructions for installing GitLab. |
-| `doc/update/` | Contains instructions for updating GitLab. |
-| `doc/topics/` | Indexes per topic (`doc/topics/topic_name/index.md`): all resources for that topic. |
+| `doc/legal/` | Legal documents about contributing to GitLab. |
+| `doc/install/` | Contains instructions for installing GitLab. |
+| `doc/update/` | Contains instructions for updating GitLab. |
+| `doc/topics/` | Indexes per topic (`doc/topics/topic_name/index.md`): all resources for that topic. |
### Work with directories and files
Refer to the following items when working with directories and files:
1. When you create a new directory, always start with an `index.md` file.
- Don't use another file name and _do not_ create `README.md` files.
+ Don't use another filename and _do not_ create `README.md` files.
1. _Do not_ use special characters and spaces, or capital letters in file
names, directory names, branch names, and anything that generates a path.
1. When creating or renaming a file or directory and it has more than one word
@@ -277,24 +275,24 @@ Refer to the following items when working with directories and files:
Every page you would navigate under `/profile` should have its own document,
for example, `account.md`, `applications.md`, or `emails.md`.
- `doc/user/dashboard/` should contain all dashboard related documentation.
- - `doc/user/admin_area/` should contain all admin related documentation
- describing what can be achieved by accessing GitLab's admin interface
- (_not to be confused with `doc/administration` where server access is
- required_).
+ - `doc/user/admin_area/` should contain all administrator-related
+ documentation describing what can be achieved by accessing the GitLab
+ administrator interface (not to be confused with `doc/administration` where
+ server access is required).
- Every category under `/admin/application_settings/` should have its
own document located at `doc/user/admin_area/settings/`. For example,
the **Visibility and Access Controls** category should have a document
located at `doc/user/admin_area/settings/visibility_and_access_controls.md`.
1. The `doc/topics/` directory holds topic-related technical content. Create
`doc/topics/topic_name/subtopic_name/index.md` when subtopics become necessary.
- General user- and admin- related documentation, should be placed accordingly.
+ General user and administrator documentation should be placed accordingly.
1. The `/university/` directory is *deprecated* and the majority of its documentation
has been moved.
If you're unsure where to place a document or a content addition, this shouldn't
stop you from authoring and contributing. Use your best judgment, and then ask
-the reviewer of your MR to confirm your decision, or ask a technical writer at
-any stage in the process. The technical writing team will review all
+the reviewer of your MR to confirm your decision. You can also ask a technical writer at
+any stage in the process. The technical writing team reviews all
documentation changes, regardless, and can move content if there is a better
place for it.
@@ -305,9 +303,9 @@ Do not include the same information in multiple places.
### References across documents
-- Give each folder an `index.md` page that introduces the topic, introduces the
- pages within, and links to the pages within (including to the index pages of
- any next-level subpaths).
+- Give each folder an `index.md` page that introduces the topic, and introduces
+  and links to its child pages, including the index pages of any next-level
+  sub-paths (see the example after this list).
- To ensure discoverability, ensure each new or renamed doc is linked from its
higher-level index page and other related pages.
- When making reference to other GitLab products and features, link to their
@@ -315,7 +313,7 @@ Do not include the same information in multiple places.
- When making reference to third-party products or technologies, link out to
their external sites, documentation, and resources.
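+
+For example, a folder's `index.md` might introduce the topic and link to its
+child pages like this (the page and file names are illustrative):
+
+```markdown
+# Merge requests
+
+Learn how to create, review, and merge changes.
+
+- [Create a merge request](creating_merge_requests.md)
+- [Review a merge request](reviews/index.md)
+```
+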
-### Structure within documents
+### Structure in documents
- Include any and all applicable subsections as described on the
[structure and template](../structure.md) page.
@@ -331,11 +329,12 @@ GitLab documentation should be clear and easy to understand.
- Write in US English with US grammar. (Tested in [`British.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/British.yml).)
- Use [inclusive language](#inclusive-language).
-### Point of view
+### Trademark
-In most cases, it’s appropriate to use the second-person (you, yours) point of
-view, because it’s friendly and easy to understand. (Tested in
-[`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml).)
+Only use the GitLab name and trademarks in accordance with
+[GitLab Brand Guidelines](https://about.gitlab.com/handbook/marketing/inbound-marketing/digital-experience/brand-guidelines/#trademark).
+
+Don't use the possessive form of the word GitLab (`GitLab's`).
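+
+For example, a possessive phrase can usually be rewritten around the trademark
+(the sentence is invented for illustration):
+
+<!-- vale off -->
+
+```markdown
+<!-- Instead of: -->
+GitLab's merge request workflow is flexible.
+
+<!-- Use: -->
+The GitLab merge request workflow is flexible.
+```
+
+<!-- vale on -->
+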
### Capitalization
@@ -353,7 +352,7 @@ item, use the same capitalization that's displayed in the user interface.
Standards for this content are listed in the [Pajamas Design System Content section](https://design.gitlab.com/content/punctuation/)
and typically match what's called for in this Documentation Style Guide.
-If you think there's a mistake in the way the user interface text is styled,
+If you think the user interface text contains style mistakes,
create an issue or an MR to propose a change to the user interface text.
#### Feature names
@@ -524,78 +523,35 @@ You can use the following fake tokens as examples:
| Health check token | `Tu7BgjR9qeZTEyRzGG2P` |
| Request profile token | `7VgpS4Ax5utVD2esNstz` |
-### Language to avoid
-
-When creating documentation, limit or avoid the use of the following verb
-tenses, words, and phrases:
-
-- Avoid jargon when possible, and when not possible, define the term or
- [link to a definition](#links-to-external-documentation).
-- Avoid uncommon words when a more-common alternative is possible, ensuring that
- content is accessible to more readers.
-- Don't write in the first person singular.
- (Tested in [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml).)
- - Instead of _I_ or _me_, use _we_, _you_, _us_, or _one_.
- - When possible, stay user focused by writing in the second person (_you_ or
- the imperative).
-- Don't overuse "that". In many cases, you can remove "that" from a sentence
- and improve readability.
-- Avoid use of the future tense:
- - Instead of "after you execute this command, GitLab will display the
- result", use "after you execute this command, GitLab displays the result".
- - Only use the future tense to convey when the action or result will actually
- occur at a future time.
-- Don't use slashes to clump different words together or as a replacement for
- the word "or":
- - Instead of "and/or," consider using "or," or use another sensible
- construction.
- - Other examples include "clone/fetch," author/assignee," and
- "namespace/repository name." Break apart any such instances in an
- appropriate way.
- - Exceptions to this rule include commonly accepted technical terms, such as
- CI/CD and TCP/IP.
-<!-- vale gitlab.LatinTerms = NO -->
-- We discourage the use of Latin abbreviations and terms, such as _e.g._,
- _i.e._, _etc._, or _via_, as even native users of English can misunderstand
- those terms. (Tested in [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml).)
- - Instead of _i.e._, use _that is_.
- - Instead of _via_, use _through_.
- - Instead of _e.g._, use _for example_, _such as_, _for instance_, or _like_.
- - Instead of _etc._, either use _and so on_ or consider editing it out, since
- it can be vague.
-<!-- vale gitlab.LatinTerms = YES -->
-- Avoid using the word *currently* when talking about the product or its
- features. The documentation describes the product as it is, and not as it
- will be at some indeterminate point in the future.
-- Avoid the using the word *scalability* with increasing GitLab's performance
- for additional users. Using the words *scale* or *scaling* in other cases is
- acceptable, but references to increasing GitLab's performance for additional
- users should direct readers to the GitLab
- [reference architectures](../../../administration/reference_architectures/index.md)
- page.
-- Avoid all forms of the phrases *high availability* and *HA*, and instead
- direct readers to the GitLab [reference architectures](../../../administration/reference_architectures/index.md)
- for information about configuring GitLab to have the performance needed for
- additional users over time.
-- Don't use profanity or obscenities. Doing so may negatively affect other
- users and contributors, which is contrary to GitLab's value of
- [Diversity, Inclusion, and Belonging](https://about.gitlab.com/handbook/values/#diversity-inclusion).
-- Avoid the use of [racially-insensitive terminology or phrases](https://www.marketplace.org/2020/06/17/tech-companies-update-language-to-avoid-offensive-terms/). For example:
- - Use *primary* and *secondary* for database and server relationships.
- - Use *allowlist* and *denylist* to describe access control lists.
-- Avoid the word _please_. For details, see the [Microsoft style guide](https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/p/please).
-- Avoid words like _easily_, _simply_, _handy_, and _useful._ If the user
- doesn't find the process to be these things, we lose their trust.
-
-### Word usage clarifications
-
-- Don't use "may" and "might" interchangeably:
- - Use "might" to indicate the probability of something occurring. "If you
- skip this step, the import process might fail."
- - Use "may" to indicate giving permission for someone to do something, or
- consider using "can" instead. "You may select either option on this
- screen." Or, "You can select either option on this screen."
-
+### Usage list
+<!-- vale off -->
+
+| Usage | Guidance |
+|-----------------------|-----|
+| admin, admin area | Use **administration**, **administrator**, **administer**, or **Admin Area** instead. |
+| and/or | Use **or** instead, or another sensible construction. |
+| currently | Do not use when talking about the product or its features. The documentation describes the product as it is today. |
+| easily | Do not use. If the user doesn't find the process to be these things, we lose their trust. |
+| e.g. | Do not use Latin abbreviations. Use **for example**, **such as**, **for instance**, or **like** instead. ([Vale](../testing.md#vale) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml)) |
+| future tense | When possible, use present tense instead. For example, use `after you execute this command, GitLab displays the result` instead of `after you execute this command, GitLab will display the result`. ([Vale](../testing.md#vale) rule: [`FutureTense.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FutureTense.yml)) |
+| handy | Do not use. If the user doesn't find the process to be these things, we lose their trust. |
+| high availability, HA | Do not use. Instead, direct readers to the GitLab [reference architectures](../../../administration/reference_architectures/index.md) for information about configuring GitLab to handle greater numbers of users. |
+| I | Do not use first-person singular. Use **you**, **we**, or **us** instead. ([Vale](../testing.md#vale) rule: [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml)) |
+| i.e. | Do not use Latin abbreviations. Use **that is** instead. ([Vale](../testing.md#vale) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml)) |
+| jargon | Do not use. Define the term or [link to a definition](#links-to-external-documentation). |
+| may, might | **Might** means something has the probability of occurring. **May** gives permission to do something. Consider **can** instead of **may**. |
+| me, myself, mine | Do not use first-person singular. Use **you**, **we**, or **us** instead. ([Vale](../testing.md#vale) rule: [`FirstPerson.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/FirstPerson.yml)) |
+| please | Do not use. For details, see the [Microsoft style guide](https://docs.microsoft.com/en-us/style-guide/a-z-word-list-term-collections/p/please). |
+| profanity | Do not use. Doing so may negatively affect other users and contributors, which is contrary to the GitLab value of [Diversity, Inclusion, and Belonging](https://about.gitlab.com/handbook/values/#diversity-inclusion). |
+| scalability | Do not use when talking about increasing GitLab performance for additional users. The words scale or scaling are sometimes acceptable, but references to increasing GitLab performance for additional users should direct readers to the GitLab [reference architectures](../../../administration/reference_architectures/index.md) page. |
+| simply | Do not use. If the user doesn't find the process to be these things, we lose their trust. |
+| slashes | Instead of **and/or**, use **or** or another sensible construction. This rule also applies to other slashes, like **follow/unfollow**. Some exceptions (like **CI/CD**) are allowed. |
+| that | Do not use. Example: `the file that you save` can be `the file you save`. |
+| useful | Do not use. If the user doesn't find the process to be these things, we lose their trust. |
+| utilize | Do not use. Use **use** instead. It's more succinct and easier for non-native English speakers to understand. |
+| via | Do not use Latin abbreviations. Use **with**, **through**, or **by using** instead. ([Vale](../testing.md#vale) rule: [`LatinTerms.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/gitlab/LatinTerms.yml)) |
+
+<!-- vale on -->
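+
+For example, a sentence that combines several of these terms might be rewritten
+as follows (the sentence is invented for illustration):
+
+<!-- vale off -->
+
+```markdown
+<!-- Instead of: -->
+Please note that you can easily view audit events via the admin area.
+
+<!-- Use: -->
+You can view audit events in the Admin Area.
+```
+
+<!-- vale on -->
+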
### Contractions
Contractions are encouraged, and can create a friendly and informal tone,
@@ -604,11 +560,13 @@ especially in tutorials, instructional documentation, and
Some contractions, however, should be avoided:
+- Do not use [the word GitLab in a contraction](#trademark).
+
- Do not use contractions with a proper noun and a verb. For example:
| Do | Don't |
|------------------------------------------|-----------------------------------------|
- | GitLab is creating X. | GitLab's creating X. |
+ | Canada is establishing X. | Canada's establishing X. |
- Do not use contractions when you need to emphasize a negative. For example:
@@ -674,8 +632,8 @@ Follow these guidelines for punctuation:
### Placeholder text
-Often in examples, a writer will provide a command or configuration that
-uses values specific to the reader.
+You might want to provide a command or configuration that
+uses specific values.
In these cases, use [`<` and `>`](https://en.wikipedia.org/wiki/Usage_message#Pattern)
to call out where a reader must replace text with their own value.
@@ -691,12 +649,23 @@ cp <your_source_directory> <your_destination_directory>
Use the HTML `<kbd>` tag when referring to keystroke presses. For example:
```plaintext
-To stop the command, press <kbd>Ctrl</kbd>+<kbd>C</kbd>.
+To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
```
When the docs are generated, the output is:
-To stop the command, press <kbd>Ctrl</kbd>+<kbd>C</kbd>.
+To stop the command, press <kbd>Control</kbd>+<kbd>C</kbd>.
+
+### Spaces between words
+
+Use only standard spaces between words. The search engine for the documentation
+website doesn't split words separated with
+[non-breaking spaces](https://en.wikipedia.org/wiki/Non-breaking_space) when
+indexing, and fails to create expected individual search terms. Tests that search
+for certain words separated by regular spaces can't find words separated by
+non-breaking spaces.
+
+Tested in [`lint-doc.sh`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/lint-doc.sh).
## Lists
@@ -733,7 +702,7 @@ This is a list of available features:
- Use dashes (`-`) for unordered lists instead of asterisks (`*`).
- Prefix `1.` to every item in an ordered list. When rendered, the list items
- will appear with sequential numbering.
+ display with sequential numbering.
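+
+For example, every item uses the `1.` prefix and still renders as 1, 2, 3:
+
+```markdown
+1. First step
+1. Second step
+1. Third step
+```
+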
### Punctuation
@@ -793,6 +762,8 @@ Items nested in lists should always align with the first character of the list
item. In unordered lists (using `-`), this means two spaces for each level of
indentation:
+<!-- vale off -->
+
````markdown
- Unordered list item 1
@@ -806,7 +777,7 @@ indentation:
- Unordered list item 3
```plaintext
- a codeblock that will next inside list item 3
+ a code block that nests inside list item 3
```
- Unordered list item 4
@@ -814,8 +785,12 @@ indentation:
![an image that will nest inside list item 4](image.png)
````
+<!-- vale on -->
+
For ordered lists, use three spaces for each level of indentation:
+<!-- vale off -->
+
````markdown
1. Ordered list item 1
@@ -829,7 +804,7 @@ For ordered lists, use three spaces for each level of indentation:
1. Ordered list item 3
```plaintext
- a codeblock that will next inside list item 3
+ a code block that nests inside list item 3
```
1. Ordered list item 4
@@ -837,6 +812,8 @@ For ordered lists, use three spaces for each level of indentation:
![an image that will nest inside list item 4](image.png)
````
+<!-- vale on -->
+
You can nest full lists inside other lists using the same rules as above. If you
want to mix types, that's also possible, if you don't mix items at the same
level:
@@ -864,7 +841,7 @@ that's best described by a matrix, tables are the best choice.
### Creation guidelines
-Due to accessibility and scannability requirements, tables should not have any
+To keep tables accessible and scannable, they should not have any
empty cells. If there is no otherwise meaningful value for a cell, consider entering
*N/A* (for 'not applicable') or *none*.
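+
+For example, instead of leaving a cell blank (the column and row values are
+illustrative):
+
+```markdown
+| Feature   | Self-managed | GitLab.com |
+|-----------|--------------|------------|
+| Example A | Yes          | N/A        |
+```
+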
@@ -886,9 +863,9 @@ Consider installing a plugin or extension in your editor for formatting tables:
### Feature tables
-When creating tables of lists of features (such as whether or not features are
-available to certain roles on the [Permissions](../../../user/permissions.md#project-members-permissions)
-page), use the following phrases (based on the SVG icons):
+When creating tables of lists of features (such as the features
+available to each role on the [Permissions](../../../user/permissions.md#project-members-permissions)
+page), use the following phrases:
| Option | Markdown | Displayed result |
|--------|--------------------------|------------------------|
@@ -901,8 +878,8 @@ Valid for Markdown content only, not for front matter entries:
- Standard quotes: double quotes (`"`). Example: "This is wrapped in double
quotes".
-- Quote within a quote: double quotes (`"`) wrap single quotes (`'`). Example:
- "I am 'quoting' something within a quote".
+- Quote inside a quote: double quotes (`"`) wrap single quotes (`'`). Example:
+ "This sentence 'quotes' something in a quote".
For other punctuation rules, refer to the
[GitLab UX guide](https://design.gitlab.com/content/punctuation/).
@@ -910,7 +887,7 @@ For other punctuation rules, refer to the
## Headings
- Add _only one H1_ in each document, by adding `#` at the beginning of
- it (when using Markdown). The `h1` will be the document `<title>`.
+  it (when using Markdown). The `h1` becomes the document `<title>`. See the outline example after this list.
- Start with an `h2` (`##`), and respect the order `h2` > `h3` > `h4` > `h5` > `h6`.
Never skip the hierarchy level, such as `h2` > `h4`
- Avoid putting numbers in headings. Numbers shift, hence documentation anchor
@@ -922,16 +899,16 @@ For other punctuation rules, refer to the
- When possible, avoid including words that might change in the future. Changing
a heading changes its anchor URL, which affects other linked pages.
- When introducing a new document, be careful for the headings to be
- grammatically and syntactically correct. Mention an [assigned technical writer (TW)](https://about.gitlab.com/handbook/product/product-categories/)
+ grammatically and syntactically correct. Mention an [assigned technical writer (TW)](https://about.gitlab.com/handbook/product/categories/)
for review.
This is to ensure that no document with wrong heading is going live without an
audit, thus preventing dead links and redirection issues when corrected.
- Leave exactly one blank line before and after a heading.
- Do not use links in headings.
-- Add the corresponding [product badge](#product-badges) according to the tier the
+- Add the corresponding [product badge](#product-tier-badges) according to the tier the
feature belongs.
- Our documentation site search engine prioritizes words used in headings and
- subheadings. Make you subheading titles clear, descriptive, and complete to help
+ subheadings. Make your subheading titles clear, descriptive, and complete to help
users find the right example, as shown in the section on [heading titles](#heading-titles).
- See [Capitalization](#capitalization) for guidelines on capitalizing headings.
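+
+A minimal outline that follows these rules might look like:
+
+```markdown
+# Page title
+
+## First topic
+
+### Subtopic
+
+## Second topic
+```
+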
@@ -943,54 +920,52 @@ search engine optimization (SEO), use the imperative, where possible.
| Do | Don't |
|:--------------------------------------|:------------------------------------------------------------|
| Configure GDK | Configuring GDK |
-| GitLab Release and Maintenance Policy | This section covers GitLab's Release and Maintenance Policy |
+| GitLab Release and Maintenance Policy | This section covers the GitLab Release and Maintenance Policy |
| Backport to older releases | Backporting to older releases |
| GitLab Pages examples | Examples |
For guidelines on capitalizing headings, see the section on [capitalization](#capitalization).
-NOTE: **Note:**
-If you change an existing title, be careful. These changes might affect not
-only [links](#anchor-links) within the page, but might also affect links to the
-GitLab documentation from both the GitLab application and external sites.
+NOTE:
+If you change an existing title, be careful. In-page [anchor links](#anchor-links),
+links in the GitLab application, and links from external sites can break.
### Anchor links
Headings generate anchor links when rendered. `## This is an example` generates
the anchor `#this-is-an-example`.
-NOTE: **Note:**
+NOTE:
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/39717) in
-GitLab 13.4, [product badges](#product-badges) used in headings aren't included
-in the generated anchor links. For example, when you link to
+GitLab 13.4, [product badges](#product-tier-badges) used in headings aren't
+included in the generated anchor links. For example, when you link to
`## This is an example **(CORE)**`, use the anchor `#this-is-an-example`.
Keep in mind that the GitLab user interface links to many documentation pages
-and anchor links to take the user to the right spot. Therefore, when you change
+and anchor links to take the user to the right spot. When you change
a heading, search `doc/*`, `app/views/*`, and `ee/app/views/*` for the old
-anchor to make sure you're not breaking an anchor linked from other
-documentation nor from the GitLab user interface. If you find the old anchor, be
-sure to replace it with the new one.
+anchor. If you do not fix these links, the [`ui-docs-lint` job](../testing.md#ui-link-tests)
+in your merge request fails.
Important:
- Avoid crosslinking documentation to headings unless you need to link to a
- specific section of the document. This will avoid breaking anchors in the
+ specific section of the document. This avoids breaking anchors in the
future in case the heading is changed.
-- If possible, avoid changing headings since they're not only linked internally.
+- If possible, avoid changing headings, because they're not only linked internally.
There are various links to GitLab documentation on the internet, such as
tutorials, presentations, StackOverflow posts, and other sources.
- Do not link to `h1` headings.
-Note that, with Kramdown, it is possible to add a custom ID to an HTML element
-with Markdown markup, but they _do not_ work in GitLab's `/help`. Therefore,
-do not use this option until further notice.
+With Kramdown, it's possible to add a custom ID to an HTML element with
+Markdown markup, but custom IDs don't work in `/help`. Because of this, don't
+use this option.
## Links
Links are important in GitLab documentation. They allow you to [link instead of
summarizing](#link-instead-of-summarize) to help preserve a [single source of truth](#why-a-single-source-of-truth)
-within GitLab documentation.
+in GitLab documentation.
We include guidance for links in the following categories:
@@ -1004,7 +979,7 @@ We include guidance for links in the following categories:
for authoritative sources.
- When to use [links requiring permissions](#links-requiring-permissions).
- How to set up a [link to a video](#link-to-video).
-- How to [include links with version text](#text-for-documentation-requiring-version-text).
+- How to [include links with version text](#where-to-put-version-text).
- How to [link to specific lines of code](#link-to-specific-lines-of-code)
### Basic link criteria
@@ -1018,13 +993,13 @@ We include guidance for links in the following categories:
### Links to internal documentation
-NOTE: **Note:**
+NOTE:
_Internal_ refers to documentation in the same project. When linking to
documentation in separate projects (for example, linking to Omnibus documentation
from GitLab documentation), you must use absolute URLs.
Do not use absolute URLs like `https://docs.gitlab.com/ee/index.html` to
-crosslink to other documentation within the same project. Use relative links to
+cross-link to other documentation in the same project. Use relative links to
the file, like `../index.md`. (These are converted to HTML when the site is
rendered.)
@@ -1033,7 +1008,7 @@ Relative linking enables crosslinks to work:
- in Review Apps, local previews, and `/help`.
- when working on the documentation locally, so you can verify that they work as
early as possible in the process.
-- within the GitLab user interface when browsing doc files in their respective
+- in the GitLab user interface when browsing doc files in their respective
repositories. For example, the links displayed at
`https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/README.md`.
@@ -1053,7 +1028,7 @@ To link to internal documentation:
Do: `../../geo/replication/troubleshooting.md`
-- Always add the file name `file.md` at the end of the link with the `.md`
+- Always add the filename `file.md` at the end of the link with the `.md`
extension, not `.html`.
Don't:
@@ -1068,7 +1043,7 @@ To link to internal documentation:
- `../../issues/tags.md`
- `../../issues/tags.md#stages`
-NOTE: **Note:**
+NOTE:
Using the Markdown extension is necessary for the [`/help`](../index.md#gitlab-help)
section of GitLab.
@@ -1109,7 +1084,7 @@ While many of these sources to avoid can help you learn skills and or features,
they can become obsolete quickly. Nobody is obliged to maintain any of these
sites. Therefore, we should avoid using them as reference literature.
-NOTE: **Note:**
+NOTE:
Non-authoritative sources are acceptable only if there is no equivalent
authoritative source. Even then, focus on non-authoritative sources that are
extensively cited or peer-reviewed.
@@ -1122,7 +1097,7 @@ Don't link directly to:
- Project features that require [special permissions](../../../user/permissions.md)
to view.
-These will fail for:
+These fail for:
- Those without sufficient permissions.
- Automated link checkers.
@@ -1143,16 +1118,16 @@ For more information, see the [confidential issue](../../../user/project/issues/
### Link to specific lines of code
-When linking to specific lines within a file, link to a commit instead of to the
-branch. Lines of code change through time, therefore, linking to a line by using
+When linking to specific lines in a file, link to a commit instead of to the
+branch. Lines of code change over time. Linking to a line by using
the commit link ensures the user lands on the line you're referring to. The
-**Permalink** button, which is available when viewing a file within a project,
-makes it easy to generate a link to the most recent commit of the given file.
+**Permalink** button, displayed when viewing a file in a project,
+provides a link to the most recent commit of that file.
- _Do_: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/11f17c56d8b7f0b752562d78a4298a3a95b5ce66/.gitlab/issue_templates/Feature%20proposal.md#L3)`
- _Don't_: `[link to line 3](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Feature%20proposal.md#L3).`
-If that linked expression is no longer in that line of the file due to additional
+If that linked expression has changed line numbers due to additional
commits, you can still search the file for that query. In this case, update the
document to ensure it links to the most recent version of the file.
@@ -1206,9 +1181,9 @@ When you take screenshots:
### Save the image
-- Save the image with a lowercase file name that's descriptive of the feature
+- Save the image with a lowercase filename that's descriptive of the feature
or concept in the image. If the image is of the GitLab interface, append the
- GitLab version to the file name, based on the following format:
+ GitLab version to the filename, based on the following format:
`image_name_vX_Y.png`. For example, for a screenshot taken from the pipelines
page of GitLab 11.1, a valid name is `pipelines_v11_1.png`. If you're adding an
illustration that doesn't include parts of the user interface, add the release
@@ -1218,10 +1193,10 @@ When you take screenshots:
the `.md` document that you're working on is located.
- Consider using PNG images instead of JPEG.
- [Compress all PNG images](#compress-images).
-- Compress gifs with <https://ezgif.com/optimize> or similar tool.
+- Compress GIFs with <https://ezgif.com/optimize> or similar tool.
- Images should be used (only when necessary) to _illustrate_ the description
of a process, not to _replace_ it.
-- Max image size: 100KB (gifs included).
+- Max image size: 100KB (GIFs included).
- See also how to link and embed [videos](#videos) to illustrate the
documentation.
@@ -1268,12 +1243,14 @@ request.
## Videos
-Adding GitLab's existing YouTube video tutorials to the documentation is highly
+Adding GitLab YouTube video tutorials to the documentation is highly
encouraged, unless the video is outdated. Videos should not replace
documentation, but complement or illustrate it. If content in a video is
-fundamental to a feature and its key use cases, but this is not adequately
-covered in the documentation, add this detail to the documentation text or
-create an issue to review the video and do so.
+fundamental to a feature and its key use cases, but isn't adequately
+covered in the documentation, you should:
+
+- Add this detail to the documentation text.
+- Create an issue to review the video and update the page.
Do not upload videos to the product repositories. [Link](#link-to-video) or
[embed](#embed-videos) them instead.
@@ -1297,11 +1274,11 @@ You can link any up-to-date video that's useful to the GitLab user.
The [GitLab documentation site](https://docs.gitlab.com) supports embedded
videos.
-You can only embed videos from [GitLab's official YouTube account](https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg).
+You can embed videos from [the official YouTube account for GitLab](https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg) only.
For videos from other sources, [link](#link-to-video) them instead.
-In most cases, it is better to [link to video](#link-to-video) instead, because
-an embed takes up a lot of space on the page and can be distracting to readers.
+In most cases, [link to a video](#link-to-video), because
+embedded videos take up a lot of space on the page and can be distracting to readers.
To embed a video:
@@ -1341,9 +1318,9 @@ This is how it renders on the GitLab documentation site:
> - The `figure` tag is required for semantic SEO and the `video_container`
class is necessary to make sure the video is responsive and displays on
different mobile devices.
-> - The `<div class="video-fallback">` is a fallback necessary for GitLab's
-`/help`, as GitLab's Markdown processor does not support iframes. It's hidden on
-the documentation site, but will be displayed on GitLab's `/help`.
+> - The `<div class="video-fallback">` is a fallback necessary for
+`/help`, because the GitLab Markdown processor doesn't support iframes. It's
+hidden on the documentation site, but is displayed by `/help`.
## Code blocks
@@ -1361,10 +1338,12 @@ the documentation site, but will be displayed on GitLab's `/help`.
and leave a blank line between the command and the output.
- When providing a command without output, don't prefix the shell command with `$`.
- If you need to include triple backticks inside a code block, use four backticks
- for the codeblock fences instead of three.
+ for the code block fences instead of three.
- For regular fenced code blocks, always use a highlighting class corresponding to
the language for better readability. Examples:
+ <!-- vale off -->
+
````markdown
```ruby
Ruby code
@@ -1383,6 +1362,8 @@ the documentation site, but will be displayed on GitLab's `/help`.
```
````
+ <!-- vale on -->
+
Syntax highlighting is required for fenced code blocks added to the GitLab
documentation. Refer to the following table for the most common language classes,
or check the [complete list](https://github.com/rouge-ruby/rouge/wiki/List-of-supported-languages-and-lexers)
@@ -1398,7 +1379,7 @@ of available language classes:
| `graphql` | |
| `haml` | |
| `html` | |
-| `ini` | For some simple config files that are not in TOML format. |
+| `ini` | For some simple configuration files that are not in TOML format. |
| `javascript` | Alias `js`. |
| `json` | |
| `markdown` | Alias: `md`. |
@@ -1406,7 +1387,7 @@ of available language classes:
| `nginx` | |
| `perl` | |
| `php` | |
-| `plaintext` | Examples with no defined language, such as output from shell commands or API calls. If a codeblock has no language, it defaults to `plaintext`. Alias: `text`. |
+| `plaintext` | Examples with no defined language, such as output from shell commands or API calls. If a code block has no language, it defaults to `plaintext`. Alias: `text`.|
| `prometheus` | Prometheus configuration examples. |
| `python` | |
| `ruby` | Alias: `rb`. |
@@ -1436,7 +1417,7 @@ Usage examples:
Example: `**{tanuki}**` renders as: **{tanuki}**.
- Icon with custom size: `**{icon-name, size}**`
- Available sizes (in px): 8, 10, 12, 14, 16, 18, 24, 32, 48, and 72
+ Available sizes (in pixels): 8, 10, 12, 14, 16, 18, 24, 32, 48, and 72
Example: `**{tanuki, 24}**` renders as: **{tanuki, 24}**.
- Icon with custom size and class: `**{icon-name, size, class-name}**`.
@@ -1467,14 +1448,14 @@ interface:
| Section | Description |
|:-------------------------|:----------------------------------------------------------------------------------------------------------------------------|
| **{overview}** Overview | View your GitLab Dashboard, and administer projects, users, groups, jobs, runners, and Gitaly servers. |
-| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit logs. |
+| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit events. |
| **{messages}** Messages | Send and manage broadcast messages for your users. |
```
| Section | Description |
|:-------------------------|:----------------------------------------------------------------------------------------------------------------------------|
| **{overview}** Overview | View your GitLab Dashboard, and administer projects, users, groups, jobs, runners, and Gitaly servers. |
-| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit logs. |
+| **{monitor}** Monitoring | View GitLab system information, and information on background jobs, logs, health checks, requests profiles, and audit events. |
| **{messages}** Messages | Send and manage broadcast messages for your users. |
Use an icon when you find yourself having to describe an interface element. For
@@ -1485,102 +1466,72 @@ example:
## Alert boxes
-When you need to call special attention to particular sentences, use the
-following markup to create highlighted alert boxes.
+Use alert boxes to call attention to information.
-Alert boxes work for one paragraph only. Multiple paragraphs, lists, and headers
-won't render correctly. For multiple lines, use [blockquotes](#blockquotes)
-instead.
+Alert boxes are generated when the words `NOTE:` or `WARNING:` are followed by a
+line break. For example:
-Alert boxes render only on the GitLab documentation site (<https://docs.gitlab.com>).
-In the GitLab product help, alert boxes appear as plain Markdown text.
+```markdown
+NOTE:
+This is something to note.
+```
-These alert boxes are used in the GitLab documentation. These aren't strict
-guidelines, but for consistency you should try to use these values:
+To display an alert box for multiple paragraphs, lists, or headers, use
+[blockquotes](#blockquotes) instead.
-| Color | Markup | Default keyword | Alternative keywords |
-|--------|------------|-----------------|----------------------------------------------------------------------|
-| Blue | `NOTE:` | `**Note:**` | |
-| Yellow | `CAUTION:` | `**Caution:**` | `**Warning:**`, `**Important:**` |
-| Red | `DANGER:` | `**Danger:**` | `**Warning:**`, `**Important:**`, `**Deprecated:**`, `**Required:**` |
-| Green | `TIP:` | `**Tip:**` | |
+Alert boxes render only on the GitLab documentation site (<https://docs.gitlab.com>).
+In the GitLab product help, alert boxes appear as plain text.
### Note
-Notes indicate additional information that's of special use to the reader.
-Notes are most effective when used _sparingly_.
+Use notes sparingly. Too many notes can make topics difficult to scan.
-Try to avoid them. Too many notes can impact the scannability of a topic and
-create an overly busy page.
+Instead of adding a note:
-Instead of adding a note, try one of these alternatives:
+- Re-write the sentence as part of a paragraph.
+- Put the information into its own paragraph.
+- Put the content under a new subheading.
-- Re-write the sentence as part of the most-relevant paragraph.
-- Put the information into its own standalone paragraph.
-- Put the content under a new subheading that introduces the topic, which makes
- it more visible.
-
-If you must use a note, use the following formatting:
+If you must use a note, use this format:
```markdown
-NOTE: **Note:**
+NOTE:
This is something to note.
```
-How it renders on the GitLab documentation site:
+It renders on the GitLab documentation site as:
-NOTE: **Note:**
+NOTE:
This is something to note.
-### Tip
-
-```markdown
-TIP: **Tip:**
-This is a tip.
-```
-
-How it renders on the GitLab documentation site:
-
-TIP: **Tip:**
-This is a tip.
-
-### Caution
-
-```markdown
-CAUTION: **Caution:**
-This is something to be cautious about.
-```
-
-How it renders on the GitLab documentation site:
-
-CAUTION: **Caution:**
-This is something to be cautious about.
+### Warning
-### Danger
+Use a warning to indicate deprecated features, or to provide a warning about
+procedures that have the potential for data loss.
```markdown
-DANGER: **Warning:**
-This is a breaking change, a bug, or something very important to note.
+WARNING:
+This is something to be warned about.
```
-How it renders on the GitLab documentation site:
+It renders on the GitLab documentation site as:
-DANGER: **Warning:**
-This is a breaking change, a bug, or something very important to note.
+WARNING:
+This is something to be warned about.
## Blockquotes
-For highlighting a text within a blue blockquote, use this format:
+To highlight text inside a blockquote, use this format:
```markdown
> This is a blockquote.
```
-which renders on the [GitLab documentation site](https://docs.gitlab.com) as:
+It renders on the GitLab documentation site as:
> This is a blockquote.
-If the text spans across multiple lines it's OK to split the line.
+If the text spans multiple lines, you can split them.
For multiple paragraphs, use the symbol `>` before every line:
@@ -1593,7 +1544,7 @@ For multiple paragraphs, use the symbol `>` before every line:
> - Second item in the list
```
-Which renders to:
+It renders on the GitLab documentation site as:
> This is the first paragraph.
>
@@ -1610,7 +1561,7 @@ documentation authors on agreed styles and usage of terms.
### Merge requests (MRs)
Merge requests allow you to exchange changes you made to source code and
-collaborate with other people on the same project. You'll see this term used in
+collaborate with other people on the same project. This term is used in
the following ways:
- Use lowercase _merge requests_ regardless of whether referring to the feature
@@ -1654,36 +1605,43 @@ elements:
|:------------|:--------------------------------|:----------------------|
| _go to_ | making a browser go to location | "navigate to", "open" |
-## GitLab versions and tiers
+## GitLab versions
-Tagged and released versions of GitLab documentation are available:
+GitLab product documentation pages (not including [Contributor and Development](../../README.md)
+pages in the `/development` directory) can include version information to help
+users be aware of recent improvements or additions.
-- In the [documentation archives](https://docs.gitlab.com/archives/).
-- At the `/help` URL for any GitLab installation.
+The GitLab Technical Writing team determines which versions of
+documentation to display on this site based on the GitLab
+[Statement of Support](https://about.gitlab.com/support/statement-of-support.html#we-support-the-current-major-version-and-the-two-previous-major-versions).
-The version introducing a new feature is added to the top of the topic in the
-documentation to provide a link back to how the feature was developed.
+### View older GitLab documentation versions
-TIP: **Tip:**
-Whenever you have documentation related to the `gitlab.rb` file, you're working
-with a self-managed installation. The section or page is therefore likely to
-apply only to self-managed instances. If so, the relevant "`TIER` ONLY"
-[Product badge](#product-badges) should be included at the highest applicable
-heading level.
+Older versions of GitLab might no longer have documentation available from `docs.gitlab.com`.
+If documentation for your version is no longer available there, you can still view a
+tagged and released set of documentation for your installed version:
-### Text for documentation requiring version text
+- In the [documentation archives](https://docs.gitlab.com/archives/).
+- At the `/help` URL of your GitLab instance.
+- In the documentation repository based on the respective branch (for example,
+ the [13.2 branch](https://gitlab.com/gitlab-org/gitlab/-/tree/13-2-stable-ee/doc)).
-When a feature is new or updated, you can add version information as a bulleted
-item in the **Version history**, or as an inline reference with related text.
+### Where to put version text
+
+When a feature is added or updated, you can include its version information
+either as a **Version history** item or as an inline text reference.
+
+Version text shouldn't include information about the tier in which the feature
+is available. This information is provided by the [product badge](#product-tier-badges)
+displayed for the page or feature.
#### Version text in the **Version History**
-If all content in a section is related, add version text following the header for
-the section. Each entry should be on a single line. To render correctly, it must be on its
-own line and surrounded by blank lines.
+If all content in a section is related, add version text following the header
+for the section. The version information must be surrounded by blank lines, and
+each entry should be on its own line.
-Features should declare the GitLab version that introduced a feature in a blockquote
-following the header:
+Add the version history information as a blockquote:
```markdown
## Feature name
@@ -1693,24 +1651,16 @@ following the header:
This feature does something.
```
-Whenever possible, version text should have a link to the _completed_ issue,
-merge request, or epic that introduced the feature. An issue is preferred over
-a merge request, and a merge request is preferred over an epic. For example:
+Whenever possible, version text should have a link to the completed issue, merge
+request, or epic that introduced the feature. An issue is preferred over a merge
+request, and a merge request is preferred over an epic. For example:
```markdown
> [Introduced](<link-to-issue>) in GitLab 11.3.
```
-If the feature is only available in GitLab Enterprise Edition, mention
-the [paid tier](https://about.gitlab.com/handbook/marketing/strategic-marketing/tiers/)
-the feature is available in:
-
-```markdown
-> [Introduced](<link-to-issue>) in [GitLab Starter](https://about.gitlab.com/pricing/) 11.3.
-```
-
-If listing information for multiple version as a feature evolves, add the
-information to a block-quoted bullet list. For example:
+If you're adding information about new features or changes in a release, update
+the blockquote to use a bulleted list:
```markdown
> - [Introduced](<link-to-issue>) in GitLab 11.3.
@@ -1720,9 +1670,8 @@ information to a block-quoted bullet list. For example:
If a feature is moved to another tier:
```markdown
-> - [Introduced](<link-to-issue>) in [GitLab Premium](https://about.gitlab.com/pricing/) 11.5.
-> - [Moved](<link-to-issue>) to [GitLab Starter](https://about.gitlab.com/pricing/) in 11.8.
-> - [Moved](<link-to-issue>) to GitLab Core in 12.0.
+> - [Moved](<link-to-issue>) from [GitLab Premium](https://about.gitlab.com/pricing/) to [GitLab Starter](https://about.gitlab.com/pricing/) in 11.8.
+> - [Moved](<link-to-issue>) from [GitLab Starter](https://about.gitlab.com/pricing/) to GitLab Core in 12.0.
```
If a feature is deprecated, include a link to a replacement (when available):
@@ -1731,127 +1680,123 @@ If a feature is deprecated, include a link to a replacement (when available):
> - [Deprecated](<link-to-issue>) in GitLab 11.3. Replaced by [meaningful text](<link-to-appropriate-documentation>).
```
-You can also describe the replacement in surrounding text, if available.
-
-If the deprecation is not obvious in existing text, you may want to include a
-warning such as:
+You can also describe the replacement in surrounding text, if available. If the
+deprecation isn't obvious in existing text, you might want to include a warning:
```markdown
-DANGER: **Deprecated:**
-This feature was [deprecated](link-to-issue) in GitLab 12.3
-and replaced by [Feature name](link-to-feature-documentation).
+WARNING:
+This feature was [deprecated](link-to-issue) in GitLab 12.3 and replaced by
+[Feature name](link-to-feature-documentation).
```
+In the first major GitLab version after the feature was deprecated, be sure to
+remove information about that deprecated feature.
+
#### Inline version text
If you're adding content to an existing topic, you can add version information
inline with the existing text.
In this case, add `([introduced/deprecated](<link-to-issue>) in GitLab X.X)`.
-If applicable, include the paid tier: `([introduced/deprecated](<link-to-issue>) in [GitLab Premium](https://about.gitlab.com/pricing/) 12.4)`
Including the issue link is encouraged, but isn't a requirement. For example:
```markdown
-The voting strategy (in GitLab 13.4 and later) requires
-the primary and secondary voters to agree.
+The voting strategy in GitLab 13.4 and later requires the primary and secondary
+voters to agree.
```
#### End-of-life for features or products
-Whenever a feature or product enters the end-of-life process, indicate its
-status by using the `Danger` [alert](#alert-boxes) with the `**Important**`
-keyword directly below the feature or product's header (which can include H1
-page titles). Link to the deprecation and removal issues, if possible.
+When a feature or product enters its end-of-life, indicate its status by
+creating a [warning alert](#alert-boxes) directly following its relevant header.
+If possible, link to its deprecation and removal issues.
For example:
```markdown
-DANGER: **Important:**
+WARNING:
This feature is in its end-of-life process. It is [deprecated](link-to-issue)
for use in GitLab X.X, and is planned for [removal](link-to-issue) in GitLab X.X.
```
+After the feature or product is officially deprecated and removed, remove
+its information from the GitLab documentation.
+
### Versions in the past or future
When describing functionality available in past or future versions, use:
-- _Earlier_, and not _older_ or _before_.
-- _Later_, and not _newer_ or _after_.
+- Earlier, and not older or before.
+- Later, and not newer or after.
For example:
-- Available in GitLab 12.3 and earlier.
+- Available in GitLab 13.1 and earlier.
- Available in GitLab 12.4 and later.
-- In GitLab 11.4 and earlier, ...
-- In GitLab 10.6 and later, ...
+- In GitLab 12.2 and earlier, ...
+- In GitLab 11.6 and later, ...
-### Importance of referencing GitLab versions and tiers
+### Removing versions after each major release
-Mentioning GitLab versions and tiers is important to all users and contributors
-to quickly have access to the issue or merge request that introduced the change.
-Also, they can know what features they have in their GitLab
-instance and version, given that the note has some key information.
+Whenever a major GitLab release occurs, we remove all version references
+to now-unsupported versions of GitLab. This includes removing
+specific instructions for users of unsupported GitLab versions. For example,
+if GitLab versions 11.x and later are supported, remove special
+instructions for users of GitLab 10.
-`[Introduced](link-to-issue) in [GitLab Premium](https://about.gitlab.com/pricing/) 12.7`
-links to the issue that introduced the feature, says which GitLab tier it
-belongs to, says the GitLab version that it became available in, and links to
-the pricing page in case the user wants to upgrade to a paid tier to use that
-feature.
+To view historical information about a feature, review GitLab
+[release posts](https://about.gitlab.com/releases/), or search for the issue or
+merge request where the work was done.
-For example, if you're a regular user and you're looking at the documentation
-for a feature you haven't used before, you can immediately see if that feature
-is available to you or not. Alternatively, if you've been using a certain
-feature for a long time and it changed in some way, it's important to be able to
-determine when it changed and what's new in that feature.
+## Products and features
-This is even more important as we don't have a perfect process for shipping
-documentation. Unfortunately, we still see features without documentation, and
-documentation without features. So, for now, we cannot rely 100% on the
-documentation site versions.
+Refer to the information in this section when describing products and features
+in the GitLab product documentation.
-Over time, version text will reference a progressively older version of GitLab.
-In cases where version text refers to versions of GitLab four or more major
-versions back, you can consider removing the text if it's irrelevant or confusing.
+### Avoid line breaks in names
-For example, if the current major version is 12.x, version text referencing
-versions of GitLab 8.x and older are candidates for removal if necessary for
-clearer or cleaner documentation.
+If a feature or product name contains spaces, don't split the name with a line break.
+A name split across lines is harder to search for, and causes problems when the name changes.
-## Products and features
+### Product tier badges
-Refer to the information in this section when describing products and features
-within the GitLab product documentation.
+Tier badges are displayed as orange text next to a heading. For example:
-### Avoid line breaks in names
+![Tier badge](img/tier_badge.png)
-When entering a product or feature name that includes a space (such as
-GitLab Community Edition) or even other companies' products (such as
-Amazon Web Services), be sure to not split the product or feature name across
-lines with an inserted line break. Splitting product or feature names across
-lines makes searching for these items more difficult, and can cause problems if
-names change.
+You must assign a tier badge:
-For example, the following Markdown content is _not_ formatted correctly:
+- To [all H1 topic headings](#product-tier-badges-on-headings).
+- To topic headings that don't apply to the same tier as the H1.
+- To [sections of a topic](#product-tier-badges-on-other-content),
+ if they apply to a tier other than what applies to the H1.
-```markdown
-When entering a product or feature name that includes a space (such as GitLab
-Community Edition), don't split the product or feature name across lines.
-```
+#### Product tier badges on headings
-Instead, it should appear similar to the following:
+To add a tier badge to a heading, add the relevant [tier badge](#available-product-tier-badges)
+after the heading text. For example:
```markdown
-When entering a product or feature name that includes a space (such as
-GitLab Community Edition), don't split the product or feature name across lines.
+# Heading title **(CORE)**
```
-### Product badges
+#### Product tier badges on other content
-When a feature is available in paid tiers, add the corresponding tier to the
-header or other page element according to the feature's availability:
+In paragraphs, list names, and table cells, an information icon displays when you
+add a tier badge. More verbose information displays when a user points to the icon:
+
+- `**(STARTER)**` displays as **(STARTER)**
+- `**(STARTER ONLY)**` displays as **(STARTER ONLY)**
+- `**(SILVER ONLY)**` displays as **(SILVER ONLY)**
+
+The `**(STARTER)**` markup generates a `span` element to trigger the
+badges and tooltips (`<span class="badge-trigger starter">`). When the keyword
+_only_ is added, the corresponding GitLab.com badge isn't displayed.
-| Tier in which feature is available | Tier markup |
+#### Available product tier badges
+
+| Tier in which feature is available | Tier badge |
|:-----------------------------------------------------------------------|:----------------------|
| GitLab Core and GitLab.com Free, and their higher tiers | `**(CORE)**` |
| GitLab Starter and GitLab.com Bronze, and their higher tiers | `**(STARTER)**` |
@@ -1866,32 +1811,10 @@ header or other page element according to the feature's availability:
| _Only_ GitLab.com Silver and higher tiers (no self-managed instances) | `**(SILVER ONLY)**` |
| _Only_ GitLab.com Gold (no self-managed instances) | `**(GOLD ONLY)**` |
-For clarity, all page title headers (H1s) must be have a tier markup for the
-lowest tier that has information on the documentation page.
-
-If sections of a page apply to higher tier levels, they can be separately
-labeled with their own tier markup.
-
-#### Product badge display behavior
-
-When using the tier markup with headers, the documentation page will display the
-full tier badge with the header line.
-
-You can also use the tier markup with paragraphs, list items, and table cells.
-For these cases, the tier mention will be represented by an orange info icon
-**{information}** that will display the tiers when visitors point to the icon.
-For example:
-
-- `**(STARTER)**` displays as **(STARTER)**
-- `**(STARTER ONLY)**` displays as **(STARTER ONLY)**
-- `**(SILVER ONLY)**` displays as **(SILVER ONLY)**
-
-#### How it works
-
-Introduced by [!244](https://gitlab.com/gitlab-org/gitlab-docs/-/merge_requests/244),
-the special markup `**(STARTER)**` will generate a `span` element to trigger the
-badges and tooltips (`<span class="badge-trigger starter">`). When the keyword
-_only_ is added, the corresponding GitLab.com badge will not be displayed.
+Topics that mention the `gitlab.rb` file are referring to
+self-managed instances of GitLab. To prevent confusion, include the relevant `TIER ONLY`
+tier badge on the highest applicable heading level on
+the page.
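+
+For example, a page that documents a `gitlab.rb` setting available to all
+self-managed tiers might start with (the title is illustrative):
+
+```markdown
+# Example feature **(CORE ONLY)**
+```
+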
## Specific sections
@@ -1900,45 +1823,42 @@ sections are outlined in this section.
### GitLab restart
-There are many cases that a restart/reconfigure of GitLab is required. To avoid
-duplication, link to the special document that can be found in
-[`doc/administration/restart_gitlab.md`](../../../administration/restart_gitlab.md).
-Usually the text will read like:
+When a restart or reconfigure of GitLab is required, avoid duplication by linking
+to [`doc/administration/restart_gitlab.md`](../../../administration/restart_gitlab.md)
+with text like this, replacing 'reconfigure' with 'restart' as needed:
```markdown
Save the file and [reconfigure GitLab](../../../administration/restart_gitlab.md)
for the changes to take effect.
```
-If the document you are editing resides in a place other than the GitLab CE/EE
-`doc/` directory, instead of the relative link, use the full path:
-`https://docs.gitlab.com/ee/administration/restart_gitlab.html`. Replace
-`reconfigure` with `restart` where appropriate.
+If the document resides outside of the `doc/` directory, use the full path
+instead of the relative link:
+`https://docs.gitlab.com/ee/administration/restart_gitlab.html`.
### Installation guide
-**Ruby:**
In [step 2 of the installation guide](../../../install/installation.md#2-ruby),
-we install Ruby from source. Whenever there is a new version that needs to
-be updated, remember to change it throughout the codeblock and also replace
-the sha256sum (it can be found in the [downloads page](https://www.ruby-lang.org/en/downloads/)
-of the Ruby website).
+we install Ruby from source. To update the guide for a new Ruby version:
-### Configuration documentation for source and Omnibus installations
+- Change the version throughout the code block.
+- Replace the sha256sum. It's available on the
+ [downloads page](https://www.ruby-lang.org/en/downloads/) of the Ruby website.
-GitLab currently officially supports two installation methods: installations
-from source and Omnibus packages installations.
-
-Whenever there's a setting that's configurable for both installation methods,
-the preference is to document it in the CE documentation to avoid duplication.
+### Configuration documentation for source and Omnibus installations
-Configuration settings include:
+GitLab supports two installation methods: installations from source, and Omnibus
+packages. Possible configuration settings include:
- Settings that touch configuration files in `config/`.
-- NGINX settings and settings in `lib/support/` in general.
+- NGINX settings.
+- Other settings in `lib/support/`.
-When you document a list of steps, it may entail editing the configuration file
-and reconfiguring or restarting GitLab. In that case, use these styles:
+Configuration procedures can require users to edit configuration files, reconfigure
+GitLab, or restart GitLab. Use these styles to document these steps, replacing
+`PATH/TO` with the appropriate path:
+
+<!-- vale off -->
````markdown
**For Omnibus installations**
@@ -1949,7 +1869,7 @@ and reconfiguring or restarting GitLab. In that case, use these styles:
external_url "https://gitlab.example.com"
```
-1. Save the file and [reconfigure](path/to/administration/restart_gitlab.md#omnibus-gitlab-reconfigure)
+1. Save the file and [reconfigure](PATH/TO/administration/restart_gitlab.md#omnibus-gitlab-reconfigure)
GitLab for the changes to take effect.
---
@@ -1963,25 +1883,25 @@ and reconfiguring or restarting GitLab. In that case, use these styles:
host: "gitlab.example.com"
```
-1. Save the file and [restart](path/to/administration/restart_gitlab.md#installations-from-source)
+1. Save the file and [restart](PATH/TO/administration/restart_gitlab.md#installations-from-source)
GitLab for the changes to take effect.
````
+<!-- vale on -->
+
In this case:
-- Before each step list the installation method is declared in bold.
-- Three dashes (`---`) are used to create a horizontal line and separate the two
- methods.
-- The code blocks are indented one or more spaces under the list item to render
- correctly.
-- Different highlighting languages are used for each config in the code block.
-- The [GitLab Restart](#gitlab-restart) section is used to explain a required
+- Bold the installation method's name.
+- Separate the methods with three dashes (`---`) to create a horizontal line.
+- Indent the code blocks to line up with the list item they belong to.
+- Use the appropriate syntax highlighting for each code block.
+- Use the [GitLab Restart](#gitlab-restart) section to explain any required
restart or reconfigure of GitLab.
### Troubleshooting
-For troubleshooting sections, you should provide as much context as possible so
-users can identify the problem they are facing and resolve it on their own. You
+For troubleshooting sections, provide as much context as possible so
+users can identify their problem and resolve it on their own. You
can facilitate this by making sure the troubleshooting content addresses:
1. The problem the user needs to solve.
@@ -1989,7 +1909,7 @@ can facilitate this by making sure the troubleshooting content addresses:
1. Steps the user can take towards resolution of the problem.
If the contents of each category can be summarized in one line and a list of
-steps aren't required, consider setting up a [table](#tables) with headers of
+steps isn't required, consider setting up a [table](#tables). Create headers of
_Problem_ \| _Cause_ \| _Solution_ (or _Workaround_ if the fix is temporary), or
_Error message_ \| _Solution_.
diff --git a/doc/development/documentation/testing.md b/doc/development/documentation/testing.md
index 043da38d207..d2e3f473532 100644
--- a/doc/development/documentation/testing.md
+++ b/doc/development/documentation/testing.md
@@ -1,40 +1,46 @@
---
stage: none
group: Documentation Guidelines
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: Learn how to contribute to GitLab Documentation.
---
# Documentation testing
-We treat documentation as code, and so use tests in our CI pipeline to maintain the
-standards and quality of the docs. The current tests, which run in CI jobs when a
-merge request with new or changed docs is submitted, are:
-
-- [`docs lint`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L41):
- Runs several tests on the content of the docs themselves:
- - [`lint-doc.sh` script](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/lint-doc.sh)
- runs the following checks and linters:
- - All cURL examples use the long flags (ex: `--header`, not `-H`).
- - The `CHANGELOG.md` does not contain duplicate versions.
- - No files in `doc/` are executable.
- - No new `README.md` was added.
- - [markdownlint](#markdownlint).
- - [Vale](#vale).
- - Nanoc tests:
- - [`internal_links`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L58)
- checks that all internal links (ex: `[link](../index.md)`) are valid.
- - [`internal_anchors`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L60)
- checks that all internal anchors (ex: `[link](../index.md#internal_anchor)`)
- are valid.
- - [`ui-docs-links lint`](https://gitlab.com/gitlab-org/gitlab/-/blob/0b562014f7b71f98540e682c8d662275f0011f2f/.gitlab/ci/docs.gitlab-ci.yml#L62)
- checks that all links to docs from UI elements (`app/views` files, for example)
- are linking to valid docs and anchors.
+GitLab documentation is stored in projects with code and treated like code. Therefore, we use
+processes similar to those used for code to maintain standards and quality of documentation.
+
+We have tests:
+
+- To lint the words and structure of the documentation.
+- To check the validity of internal links within the documentation suite.
+- To check the validity of links from UI elements, such as files in `app/views`.
+
+For the specifics of each test run in our CI/CD pipelines, see the configuration for those tests
+in the relevant projects:
+
+- <https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml>
+- <https://gitlab.com/gitlab-org/gitlab-runner/-/blob/master/.gitlab/ci/docs.gitlab-ci.yml>
+- <https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/master/gitlab-ci-config/gitlab-com.yml>
+- <https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/.gitlab-ci.yml>
## Run tests locally
-Apart from [previewing your changes locally](index.md#previewing-the-changes-live), you can also run all lint checks
-and Nanoc tests locally.
+Similar to [previewing your changes locally](index.md#previewing-the-changes-live), you can also
+run these tests on your local computer. This has the advantages of:
+
+- Speeding up the feedback loop. You can find out about any problems with the changes in your branch
+  without waiting for a CI/CD pipeline to run.
+- Lowering costs. Running tests locally is cheaper than running tests on the cloud
+ infrastructure GitLab uses.
+
+To run tests locally, it's important to:
+
+- [Install the tools](#install-linters), and [keep them up to date](#update-linters).
+- Run [linters](#lint-checks), [documentation link tests](#documentation-link-tests), and
+  [UI link tests](#ui-link-tests) the same way they are run in CI/CD pipelines. It's important to use the
+  same configuration we use in CI/CD pipelines, which can be different from the default configuration
+  of the tool.
### Lint checks
@@ -66,15 +72,15 @@ The output should be similar to:
This requires you to either:
-- Have the required lint tools installed on your machine.
+- Have the [required lint tools installed](#local-linters) on your computer.
- Have a working Docker installation, in which case an image with these tools pre-installed is used.
-### Nanoc tests
+### Documentation link tests
-To execute Nanoc tests locally:
+To execute documentation link tests locally:
1. Navigate to the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) directory.
-1. Run:
+1. Run the following commands:
```shell
# Check for broken internal links
@@ -85,7 +91,7 @@ To execute Nanoc tests locally:
bundle exec nanoc check internal_anchors
```
-### `ui-docs-links` test
+### UI link tests
The `ui-docs-links lint` job uses `haml-lint` to test that all links to docs from
UI elements (`app/views` files, for example) are linking to valid docs and anchors.
@@ -100,9 +106,9 @@ To run the `ui-docs-links` test locally:
```
If you receive an error the first time you run this test, run `bundle install`, which
-installs GitLab's dependencies, and try again.
+installs the dependencies for GitLab, and try again.
-If you don't want to install all of GitLab's dependencies to test the links, you can:
+If you don't want to install all of the dependencies to test the links, you can:
1. Open the `gitlab` directory in a terminal window.
1. Install `haml-lint`:
@@ -186,27 +192,34 @@ You can use Vale:
- [In a Git hook](#configure-pre-push-hooks). Vale only reports errors in the Git hook (the same
configuration as the CI/CD pipelines), and does not report suggestions or warnings.
-### Install linters
+#### Vale result types
-At a minimum, install [markdownlint](#markdownlint) and [Vale](#vale) to match the checks run in
-build pipelines:
+Vale returns three types of results: `suggestion`, `warning`, and `error`:
-1. Install `markdownlint-cli`, using either:
+- **Suggestion**-level results are writing tips and aren't displayed in CI
+ job output. Suggestions don't break CI.
+- **Warning**-level results are [Style Guide](styleguide/index.md) violations, aren't displayed in CI
+ job output, and should contain clear explanations of how to resolve the warning.
+ Warnings may be technical debt, or can be future error-level test items
+ (after the Technical Writing team completes its cleanup). Warnings don't break CI.
+- **Error**-level results are Style Guide violations, and should contain clear explanations
+  of how to resolve the error. Errors break CI and are displayed in CI job output.
- - `npm`:
+### Install linters
- ```shell
- npm install -g markdownlint-cli
- ```
+At a minimum, install [markdownlint](#markdownlint) and [Vale](#vale) to match the checks run in
+build pipelines:
- - `yarn`:
+1. Install `markdownlint-cli`:
- ```shell
- yarn global add markdownlint-cli
- ```
+ ```shell
+ yarn global add markdownlint-cli
+ ```
- We recommend installing the version of `markdownlint-cli` currently used in the documentation
- linting [Docker image](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/master/.gitlab-ci.yml#L420).
+ We recommend installing the version of `markdownlint-cli`
+ [used](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/master/.gitlab-ci.yml#L447) when building
+  the `image:docs-lint-markdown` Docker image.
1. Install [`vale`](https://github.com/errata-ai/vale/releases). For example, to install using
`brew` for macOS, run:
@@ -215,14 +228,29 @@ build pipelines:
brew install vale
```
- We recommend installing the version of Vale currently used in the documentation linting
- [Docker image](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/master/.gitlab-ci.yml#L419).
+These tools can be [integrated with your code editor](#configure-editors).
+
+### Update linters
+
+It's important to use linter versions that are the same as, or newer than, those run in
+CI/CD. This provides access to new features and possible bug fixes.
-In addition to using markdownlint and Vale at the command line, these tools can be
-[integrated with your code editor](#configure-editors).
+To match the versions of `markdownlint-cli` and `vale` used in the GitLab projects, refer to the
+[versions used](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/master/.gitlab-ci.yml#L447)
+when building the `image:docs-lint-markdown` Docker image containing these tools for CI/CD.
+
+| Tool | Version | Command | Additional information |
+|--------------------|----------|-------------------------------------------|------------------------|
+| `markdownlint-cli` | Latest | `yarn global add markdownlint-cli` | n/a |
+| `markdownlint-cli` | Specific | `yarn global add markdownlint-cli@0.23.2` | The `@` indicates a specific version, and this example updates the tool to version `0.23.2`. |
+| Vale | Latest | `brew update && brew upgrade vale` | This command is for macOS only. |
+| Vale | Specific | n/a | Not possible using `brew`, but can be [directly downloaded](https://github.com/errata-ai/vale/releases). |
### Configure editors
+Using linters in your editor is more convenient than having to run the commands from the
+command line.
+
To configure markdownlint within your editor, install one of the following as appropriate:
- [Sublime Text](https://packagecontrol.io/packages/SublimeLinter-contrib-markdownlint)
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index 9bc80116d25..e809ca84707 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -1,13 +1,13 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Documentation process
The process for creating and maintaining GitLab product documentation allows
-anyone to contribute a merge request or create an issue for GitLab's
+anyone to contribute a merge request or create an issue for GitLab
documentation.
Documentation updates relating to new features or feature enhancements must
@@ -60,9 +60,9 @@ To update GitLab documentation:
- The [Structure and template](structure.md) page.
- The [Style Guide](styleguide/index.md).
- The [Markdown Guide](https://about.gitlab.com/handbook/markdown-guide/).
-1. Follow GitLab's [Merge Request Guidelines](../contributing/merge_request_workflow.md#merge-request-guidelines).
+1. Follow the [Merge Request Guidelines](../contributing/merge_request_workflow.md#merge-request-guidelines).
-TIP: **Tip:**
+NOTE:
Work in a fork if you do not have Developer access to the GitLab project.
Request help from the Technical Writing team if you:
@@ -79,7 +79,7 @@ To request help:
- If urgent help is required, directly assign the Technical Writer in the issue or in the merge request.
- If non-urgent help is required, ping the Technical Writer in the issue or merge request.
-If you are a member of GitLab's Slack workspace, you can request help in `#docs`.
+If you are a member of the GitLab Slack workspace, you can request help in `#docs`.
### Reviewing and merging
@@ -146,7 +146,7 @@ Remember:
advance of a milestone release and for larger documentation changes.
- You can request a post-merge Technical Writer review of documentation if it's important to get the
code with which it ships merged as soon as possible. In this case, the author of the original MR
- will address the feedback provided by the Technical Writer in a follow-up MR.
+ can address the feedback provided by the Technical Writer in a follow-up MR.
- The Technical Writer can also help decide that documentation can be merged without Technical
writer review, with the review to occur soon after merge.
@@ -154,8 +154,8 @@ Remember:
Ensure the following if skipping an initial Technical Writer review:
-- That [product badges](styleguide/index.md#product-badges) are applied.
-- That the GitLab [version](styleguide/index.md#text-for-documentation-requiring-version-text) that
+- That [product badges](styleguide/index.md#product-tier-badges) are applied.
+- That the GitLab [version](styleguide/index.md#gitlab-versions) that
introduced the feature has been included.
- That changes to headings don't affect in-app hyperlinks.
- Specific [user permissions](../../user/permissions.md) are documented.
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 26a1e9ec3aa..5be601187ca 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Guidelines for implementing Enterprise Edition features
@@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- **Write the code and the tests.**: As with any code, EE features should have
good test coverage to prevent regressions.
- **Write documentation.**: Add documentation to the `doc/` directory. Describe
- the feature and include screenshots, if applicable. Indicate [what editions](documentation/styleguide/index.md#product-badges)
+ the feature and include screenshots, if applicable. Indicate [what editions](documentation/styleguide/index.md#product-tier-badges)
the feature applies to.
- **Submit a MR to the `www-gitlab-com` project.**: Add the new feature to the
[EE features list](https://about.gitlab.com/features/).
@@ -122,10 +122,10 @@ This is also not just applied to models. Here's a list of other examples:
To test an `EE` namespaced module that extends a CE class with EE features,
create the spec file as you normally would in the `ee/spec` directory, including the second `ee/` subdirectory.
-For example, an extension `ee/app/models/ee/user.rb` would have its tests in `ee/app/models/ee/user_spec.rb`.
+For example, an extension `ee/app/models/ee/user.rb` would have its tests in `ee/spec/models/ee/user_spec.rb`.
In the `RSpec.describe` call, use the CE class name where the EE module would be used.
-For example, in `ee/app/models/ee/user_spec.rb`, the test would start with:
+For example, in `ee/spec/models/ee/user_spec.rb`, the test would start with:
```ruby
RSpec.describe User do
@@ -143,7 +143,7 @@ There are a few gotchas with it:
- you should always [`extend ::Gitlab::Utils::Override`](utilities.md#override) and use `override` to
guard the "overrider" method to ensure that if the method gets renamed in
- CE, the EE override won't be silently forgotten.
+ CE, the EE override isn't silently forgotten.
- when the "overrider" would add a line in the middle of the CE
implementation, you should refactor the CE method and split it in
smaller methods. Or create a "hook" method that is empty in CE,
@@ -284,7 +284,7 @@ wrap it in a self-descriptive method and use that method.
For example, in GitLab-FOSS, the only user created by the system is `User.ghost`
but in EE there are several types of bot-users that aren't really users. It would
be incorrect to override the implementation of `User#ghost?`, so instead we add
-a method `#internal?` to `app/models/user.rb`. The implementation will be:
+a method `#internal?` to `app/models/user.rb`. The implementation:
```ruby
def internal?
@@ -303,13 +303,13 @@ end
### Code in `config/routes`
-When we add `draw :admin` in `config/routes.rb`, the application will try to
+When we add `draw :admin` in `config/routes.rb`, the application tries to
load the file located in `config/routes/admin.rb`, and also tries to load the
file located in `ee/config/routes/admin.rb`.
In EE, it should at least load one file, at most two files. If it cannot find
-any files, an error will be raised. In CE, since we don't know if there will
-be an EE route, it will not raise any errors even if it cannot find anything.
+any files, an error is raised. In CE, since we don't know if an
+EE route exists, it doesn't raise any errors even if it cannot find anything.
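+
+For illustration only (the resource names below are hypothetical), a CE route
+file and the EE file loaded alongside it could look like this:
+
+```ruby
+# config/routes/admin.rb (CE)
+namespace :admin do
+  resources :users
+end
+
+# ee/config/routes/admin.rb (EE, loaded in addition to the CE file)
+namespace :admin do
+  resources :audit_logs, only: [:index]
+end
+```
+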
This means if we want to extend a particular CE route file, just add the same
file located in `ee/config/routes`. If we want to add an EE only route, we
@@ -467,7 +467,7 @@ end
#### Using `render_if_exists`
Instead of using regular `render`, we should use `render_if_exists`, which
-will not render anything if it cannot find the specific partial. We use this
+doesn't render anything if it cannot find the specific partial. We use this
so that we could put `render_if_exists` in CE, keeping code the same between
CE and EE.
@@ -482,7 +482,7 @@ The disadvantage of this:
##### Caveats
The `render_if_exists` view path argument must be relative to `app/views/` and `ee/app/views`.
-Resolving an EE template path that is relative to the CE view path will not work.
+Resolving an EE template path that is relative to the CE view path doesn't work.
```haml
- # app/views/projects/index.html.haml
@@ -577,7 +577,7 @@ We can define `params` and use `use` in another `params` definition to
include parameters defined in EE. However, we need to define the "interface" first
in CE in order for EE to override it. We don't have to do this in other places
due to `prepend_if_ee`, but Grape is complex internally and we couldn't easily
-do that, so we'll follow regular object-oriented practices that we define the
+do that, so we follow regular object-oriented practices and define the
interface first here.
For example, suppose we have a few more optional parameters for EE. We can move the
@@ -738,7 +738,7 @@ end
It's very hard to extend this in an EE module, and this is simply storing
some meta-data for a particular route. Given that, we could simply leave the
-EE `route_setting` in CE as it won't hurt and we are just not going to use
+EE `route_setting` in CE as it doesn't hurt and we don't use
those meta-data in CE.
We could revisit this policy when we're using `route_setting` more and whether
@@ -1039,7 +1039,7 @@ export default {
`import MyComponent from 'ee_else_ce/path/my_component'.vue`
-- this way the correct component will be included for either the ce or ee implementation
+- this way the correct component is included for either the CE or EE implementation
**For EE components that need different results for the same computed values, we can pass in props to the CE wrapper as seen in the example.**
@@ -1053,7 +1053,7 @@ export default {
For regular JS files, the approach is similar.
-1. We will keep using the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) helper, this means that EE only code should be inside the `ee/` folder.
+1. We keep using the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) helper; this means that EE-only code should be inside the `ee/` folder.
1. An EE file should be created with the EE only code, and it should extend the CE counterpart.
1. For code inside functions that can't be extended, the code should be moved into a new file and we should use `ee_else_ce` helper:
diff --git a/doc/development/elasticsearch.md b/doc/development/elasticsearch.md
index cde221aaf16..1c92601dde9 100644
--- a/doc/development/elasticsearch.md
+++ b/doc/development/elasticsearch.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Global Search
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Elasticsearch knowledge **(STARTER ONLY)**
@@ -13,9 +13,9 @@ the [Elasticsearch integration documentation](../integration/elasticsearch.md#en
## Deep Dive
-In June 2019, Mario de la Ossa hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`) on GitLab's [Elasticsearch integration](../integration/elasticsearch.md) to share his domain specific knowledge with anyone who may work in this part of the code base in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=vrvl-tN2EaA), and the slides on [Google Slides](https://docs.google.com/presentation/d/1H-pCzI_LNrgrL5pJAIQgvLX8Ji0-jIKOg1QeJQzChug/edit) and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/c5aa32b6b07476fa8b597004899ec538/Elasticsearch_Deep_Dive.pdf). Everything covered in this deep dive was accurate as of GitLab 12.0, and while specific details may have changed since then, it should still serve as a good introduction.
+In June 2019, Mario de la Ossa hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`) on the GitLab [Elasticsearch integration](../integration/elasticsearch.md) to share his domain specific knowledge with anyone who may work in this part of the codebase in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=vrvl-tN2EaA), and the slides on [Google Slides](https://docs.google.com/presentation/d/1H-pCzI_LNrgrL5pJAIQgvLX8Ji0-jIKOg1QeJQzChug/edit) and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/c5aa32b6b07476fa8b597004899ec538/Elasticsearch_Deep_Dive.pdf). Everything covered in this deep dive was accurate as of GitLab 12.0, and while specific details may have changed since then, it should still serve as a good introduction.
-In August 2020, a second Deep Dive was hosted, focusing on [GitLab's specific architecture for multi-indices support](#zero-downtime-reindexing-with-multiple-indices). The [recording on YouTube](https://www.youtube.com/watch?v=0WdPR9oB2fg) and the [slides](https://lulalala.gitlab.io/gitlab-elasticsearch-deepdive) are available. Everything covered in this deep dive was accurate as of GitLab 13.3.
+In August 2020, a second Deep Dive was hosted, focusing on [GitLab-specific architecture for multi-indices support](#zero-downtime-reindexing-with-multiple-indices). The [recording on YouTube](https://www.youtube.com/watch?v=0WdPR9oB2fg) and the [slides](https://lulalala.gitlab.io/gitlab-elasticsearch-deepdive/) are available. Everything covered in this deep dive was accurate as of GitLab 13.3.
## Supported Versions
@@ -68,7 +68,7 @@ The `whitespace` tokenizer was selected in order to have more control over how t
Please see the `code` filter for an explanation on how tokens are split.
-NOTE: **Note:**
+NOTE:
Currently the [Elasticsearch code_analyzer doesn't account for all code cases](../integration/elasticsearch.md#known-issues).
#### `code_search_analyzer`
@@ -119,7 +119,7 @@ Patterns:
- `'"((?:\\"|[^"]|\\")*)"'`: captures terms inside quotes, removing the quotes
- `"'((?:\\'|[^']|\\')*)'"`: same as above, for single-quotes
- `'\.([^.]+)(?=\.|\s|\Z)'`: separate terms with periods in-between
-- `'([\p{L}_.-]+)'`: some common chars in file names to keep the whole filename intact (eg. `my_file-Γ±ame.txt`)
+- `'([\p{L}_.-]+)'`: some common chars in file names to keep the whole filename intact (for example `my_file-Γ±ame.txt`)
- `'([\p{L}\d_]+)'`: letters, numbers and underscores are the most common tokens in programming. Always capture them greedily regardless of context.
## Gotchas
@@ -129,7 +129,7 @@ Patterns:
## Zero downtime reindexing with multiple indices
-NOTE: **Note:**
+NOTE:
This is not applicable yet as multiple indices functionality is not fully implemented.
Currently GitLab can only handle a single version of setting. Any setting/schema changes would require reindexing everything from scratch. Since reindexing can take a long time, this can cause search functionality downtime.
@@ -168,12 +168,12 @@ The global configurations per version are now in the `Elastic::(Version)::Config
### Creating new version of schema
-NOTE: **Note:**
+NOTE:
This is not applicable yet as multiple indices functionality is not fully implemented.
Folders like `ee/lib/elastic/v12p1` contain snapshots of search logic from different versions. To keep a continuous Git history, the latest version lives under `ee/lib/elastic/latest`, but its classes are aliased under an actual version (e.g. `ee/lib/elastic/v12p3`). When referencing these classes, never use the `Latest` namespace directly, but use the actual version (e.g. `V12p3`).
-The version name basically follows GitLab's release version. If setting is changed in 12.3, we will create a new namespace called `V12p3` (p stands for "point"). Raise an issue if there is a need to name a version differently.
+The version name follows the GitLab release version. If a setting is changed in 12.3, we create a new namespace called `V12p3` (p stands for "point"). Raise an issue if there is a need to name a version differently.
If the current version is `v12p1`, and we need to create a new version for `v12p3`, the steps are as follows:
@@ -188,11 +188,11 @@ If the current version is `v12p1`, and we need to create a new version for `v12p
> This functionality was introduced by [#234046](https://gitlab.com/gitlab-org/gitlab/-/issues/234046).
-NOTE: **Note:**
+NOTE:
This is only supported for indices created with GitLab 13.0 or greater.
Migrations are stored in the [`ee/elastic/migrate/`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/elastic/migrate) folder with `YYYYMMDDHHMMSS_migration_name.rb`
-file name format, which is similar to Rails database migrations:
+filename format, which is similar to Rails database migrations:
```ruby
# frozen_string_literal: true
@@ -216,6 +216,29 @@ cron worker sequentially.
Any update to the Elastic index mappings should be replicated in [`Elastic::Latest::Config`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/elastic/latest/config.rb).
+### Migration options supported by the [`Elastic::MigrationWorker`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/workers/elastic/migration_worker.rb)
+
+- `batched!` - Allow the migration to run in batches. If set, the [`Elastic::MigrationWorker`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/workers/elastic/migration_worker.rb)
+re-enqueues itself with a delay, which is set using the `throttle_delay` option described below. The batching
+must be handled within the `migrate` method; this setting controls only the re-enqueuing.
+
+- `throttle_delay` - Sets the wait time between batch runs. This time should be set high enough to allow each migration batch
+enough time to finish. Additionally, the time should be less than 30 minutes, since that is how often the
+[`Elastic::MigrationWorker`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/workers/elastic/migration_worker.rb)
+cron worker runs. The default value is 5 minutes.
+
+```ruby
+# frozen_string_literal: true
+
+class BatchedMigrationName < Elastic::Migration
+ # Declares a migration should be run in batches
+ batched!
+ throttle_delay 10.minutes
+
+ # ...
+end
+```
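+
+Building on the example above, the following rough sketch shows how a batched
+`migrate` method might process one slice of work per run; the query helpers and
+the `completed?` predicate here are illustrative assumptions, not a prescribed API:
+
+```ruby
+# frozen_string_literal: true
+
+class BatchedMigrationName < Elastic::Migration
+  batched!
+  throttle_delay 10.minutes
+
+  BATCH_SIZE = 1_000
+
+  def migrate
+    # Illustrative: process at most one batch per run. The worker re-enqueues
+    # this migration after `throttle_delay` until no work remains.
+    ids = ids_missing_new_field(limit: BATCH_SIZE) # hypothetical helper
+    return if ids.empty?
+
+    update_documents(ids) # hypothetical helper
+  end
+
+  def completed?
+    ids_missing_new_field(limit: 1).empty?
+  end
+end
+```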
+
## Performance Monitoring
### Prometheus
diff --git a/doc/development/emails.md b/doc/development/emails.md
index 4b43bff0e02..0b1f3c5b74c 100644
--- a/doc/development/emails.md
+++ b/doc/development/emails.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Dealing with email in development
@@ -83,7 +83,7 @@ See the [Rails guides](https://guides.rubyonrails.org/action_mailer_basics.html#
expunge_deleted: false
```
- As mentioned, the part after `+` is ignored, and this will end up in the mailbox for `gitlab-incoming@gmail.com`.
+ As mentioned, the part after `+` is ignored, and this message is sent to the mailbox for `gitlab-incoming@gmail.com`.
1. Run this command in the GitLab root directory to launch `mail_room`:
diff --git a/doc/development/event_tracking/backend.md b/doc/development/event_tracking/backend.md
index 79ea80a52ea..24e83ffc524 100644
--- a/doc/development/event_tracking/backend.md
+++ b/doc/development/event_tracking/backend.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/index.md'
---
This document was moved to [another location](../product_analytics/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/event_tracking/frontend.md b/doc/development/event_tracking/frontend.md
index 79ea80a52ea..24e83ffc524 100644
--- a/doc/development/event_tracking/frontend.md
+++ b/doc/development/event_tracking/frontend.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/index.md'
---
This document was moved to [another location](../product_analytics/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/event_tracking/index.md b/doc/development/event_tracking/index.md
index 79ea80a52ea..24e83ffc524 100644
--- a/doc/development/event_tracking/index.md
+++ b/doc/development/event_tracking/index.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/index.md'
---
This document was moved to [another location](../product_analytics/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/experiment_guide/index.md b/doc/development/experiment_guide/index.md
index 3b2f1d21463..35cd55b199c 100644
--- a/doc/development/experiment_guide/index.md
+++ b/doc/development/experiment_guide/index.md
@@ -1,22 +1,22 @@
---
stage: Growth
group: Activation
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Experiment Guide
-Experiments can be conducted by any GitLab team, most often the teams from the [Growth Sub-department](https://about.gitlab.com/handbook/engineering/development/growth/). Experiments are not tied to releases because they will primarily target GitLab.com.
+Experiments can be conducted by any GitLab team, most often the teams from the [Growth Sub-department](https://about.gitlab.com/handbook/engineering/development/growth/). Experiments are not tied to releases because they primarily target GitLab.com.
-Experiments will be run as an A/B test and will be behind a feature flag to turn the test on or off. Based on the data the experiment generates, the team will decide if the experiment had a positive impact and will be the new default or rolled back.
+Experiments are run as an A/B test and are behind a feature flag to turn the test on or off. Based on the data the experiment generates, the team decides if the experiment had a positive impact and should be made the new default or rolled back.
## Experiment tracking issue
Each experiment should have an [Experiment tracking](https://gitlab.com/groups/gitlab-org/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=growth%20experiment&search=%22Experiment+tracking%22) issue to track the experiment from roll-out through to cleanup/removal. Immediately after an experiment is deployed, the due date of the issue should be set (this depends on the experiment but can be up to a few weeks in the future).
After the deadline, the issue needs to be resolved and either:
-- It was successful and the experiment will be the new default.
-- It was not successful and all code related to the experiment will be removed.
+- It was successful and the experiment becomes the new default.
+- It was not successful and all code related to the experiment is removed.
In either case, an outcome of the experiment should be posted to the issue with the reasoning for the decision.
@@ -25,7 +25,7 @@ In either case, an outcome of the experiment should be posted to the issue with
Experiments' code quality can fail our standards for several reasons. These
reasons can include not being added to the codebase for a long time, or because
of fast iteration to retrieve data. However, having the experiment run (or not
-run) shouldn't impact GitLab's availability. To avoid or identify issues,
+run) shouldn't impact GitLab availability. To avoid or identify issues,
experiments are initially deployed to a small number of users. Regardless,
experiments still need tests.
@@ -49,7 +49,6 @@ addressed.
},
# Add your experiment here:
signup_flow: {
- environment: ::Gitlab.dev_env_or_com?, # Target environment, defaults to enabled for development and GitLab.com
tracking_category: 'Growth::Activation::Experiment::SignUpFlow' # Used for providing the category when setting up tracking data
}
}.freeze
@@ -57,59 +56,90 @@ addressed.
1. Use the experiment in the code.
+ Experiments can be performed on a `subject`. The `subject` that gets provided needs to respond to `to_global_id` or `to_s`.
+    The resulting string is bucketed and assigned to either the control or the experimental group. It's therefore necessary to always provide the same `subject` so that the experiment provides the same experience.
+
- Use this standard for the experiment in a controller:
- ```ruby
- class RegistrationController < ApplicationController
+ Experiment run for a user:
+
+ ```ruby
+ class ProjectController < ApplicationController
def show
# experiment_enabled?(:experiment_key) is also available in views and helpers
+ if experiment_enabled?(:signup_flow, subject: current_user)
+ # render the experiment
+ else
+ # render the original version
+ end
+ end
+ end
+ ```
+
+ or experiment run for a namespace:
+
+ ```ruby
+ if experiment_enabled?(:signup_flow, subject: namespace)
+ # experiment code
+ else
+ # control code
+ end
+ ```
+
+ When no subject is given, it falls back to a cookie that gets set and is consistent until
+ the cookie gets deleted.
+
+ ```ruby
+ class RegistrationController < ApplicationController
+ def show
+ # falls back to a cookie
if experiment_enabled?(:signup_flow)
# render the experiment
else
# render the original version
end
end
- end
- ```
+ end
+ ```
- Make the experiment available to the frontend in a controller:
- ```ruby
- before_action do
- push_frontend_experiment(:signup_flow)
- end
- ```
+ ```ruby
+ before_action do
+ push_frontend_experiment(:signup_flow, subject: current_user)
+ end
+ ```
- The above will check whether the experiment is enabled and push the result to the frontend.
+ The above checks whether the experiment is enabled and pushes the result to the frontend.
- You can check the state of the feature flag in JavaScript:
+ You can check the state of the feature flag in JavaScript:
- ```javascript
- import { isExperimentEnabled } from '~/experimentation';
+ ```javascript
+ import { isExperimentEnabled } from '~/experimentation';
- if ( isExperimentEnabled('signupFlow') ) {
- // ...
- }
- ```
+ if ( isExperimentEnabled('signupFlow') ) {
+ // ...
+ }
+ ```
- It is also possible to run an experiment outside of the controller scope, for example in a worker:
- ```ruby
- class SomeWorker
- def perform
- # Check if the experiment is enabled at all (the percentage_of_time_value > 0)
- return unless Gitlab::Experimentation.enabled?(:experiment_key)
-
- # Since we cannot access cookies in a worker, we need to bucket models based on a unique, unchanging attribute instead.
- # Use the following method to check if the experiment is enabled for a certain attribute, for example a username or email address:
- if Gitlab::Experimentation.enabled_for_attribute?(:experiment_key, some_attribute)
- # execute experimental code
- else
- # execute control code
- end
- end
- end
- ```
+ ```ruby
+ class SomeWorker
+ def perform
+ # Check if the experiment is active at all (the percentage_of_time_value > 0)
+ return unless Gitlab::Experimentation.active?(:experiment_key)
+
+ # Since we cannot access cookies in a worker, we need to bucket models based on a unique, unchanging attribute instead.
+      # It is therefore necessary to always provide the same subject.
+ if Gitlab::Experimentation.in_experiment_group?(:experiment_key, subject: user)
+ # execute experimental code
+ else
+ # execute control code
+ end
+ end
+ end
+ ```
### Implement the tracking events
@@ -123,7 +153,7 @@ The framework provides the following helper method that is available in controll
```ruby
before_action do
- track_experiment_event(:signup_flow, 'action', 'value')
+ track_experiment_event(:signup_flow, 'action', 'value', subject: current_user)
end
```
@@ -133,7 +163,7 @@ Which can be tested as follows:
context 'when the experiment is active and the user is in the experimental group' do
before do
stub_experiment(signup_flow: true)
- stub_experiment_for_user(signup_flow: true)
+ stub_experiment_for_subject(signup_flow: true)
end
it 'tracks an event', :snowplow do
@@ -156,8 +186,8 @@ The framework provides the following helper method that is available in controll
```ruby
before_action do
- push_frontend_experiment(:signup_flow)
- frontend_experimentation_tracking_data(:signup_flow, 'action', 'value')
+ push_frontend_experiment(:signup_flow, subject: current_user)
+ frontend_experimentation_tracking_data(:signup_flow, 'action', 'value', subject: current_user)
end
```
@@ -233,6 +263,50 @@ describe('event tracking', () => {
});
```
+### Record experiment user
+
+In addition to the anonymous tracking of events, we can also record which users have participated in which experiments and whether they were given the control experience or the experimental experience.
+
+The `record_experiment_user` helper method is available to all controllers, and it enables you to record these experiment participants (the current user) and which experience they were given:
+
+```ruby
+before_action do
+ record_experiment_user(:signup_flow)
+end
+```
+
+Subsequent calls to this method for the same experiment and the same user have no effect unless the user gets enrolled into a different experience. This happens when we roll out the experimental experience to a greater percentage of users.
+
+Note that this data is completely separate from the [events tracking data](#implement-the-tracking-events). They are not linked together in any way.
+
+#### Add context
+
+You can add arbitrary context data in a hash which gets stored as part of the experiment user record.
+This data can then be used by data analytics dashboards.
+
+```ruby
+before_action do
+ record_experiment_user(:signup_flow, foo: 42)
+end
+```
+
+### Record experiment conversion event
+
+Along with the tracking of backend and frontend events and the [recording of experiment participants](#record-experiment-user), we can also record when a user performs the desired conversion event action. For example:
+
+- **Experimental experience:** Show an in-product nudge to see if it causes more people to sign up for trials.
+- **Conversion event:** The user starts a trial.
+
+The `record_experiment_conversion_event` helper method is available to all controllers. It enables us to record the conversion event for the current user, regardless of whether they are in the control or experimental group:
+
+```ruby
+before_action do
+ record_experiment_conversion_event(:signup_flow)
+end
+```
+
+Note that the use of this method requires that we have first [recorded the user as being part of the experiment](#record-experiment-user).
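+
+For example (the controller names here are hypothetical), enrollment and the
+conversion event are typically recorded in different controllers:
+
+```ruby
+# Enroll the user where the experiment renders.
+class RegistrationsController < ApplicationController
+  before_action { record_experiment_user(:signup_flow) }
+end
+
+# Record the conversion where the desired action happens.
+class TrialsController < ApplicationController
+  before_action { record_experiment_conversion_event(:signup_flow) }
+end
+```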
+
### Enable the experiment
After all merge requests have been merged, use [`chatops`](../../ci/chatops/README.md) in the
@@ -250,6 +324,19 @@ For visibility, please also share any commands run against production in the `#s
/chatops run feature delete signup_flow_experiment_percentage
```
+### Manually force the current user to be in the experiment group
+
+You can force the application to put your current user in the experiment group. To do so,
+add a query string parameter to the path where the experiment runs. If you do so,
+the experiment works only for this request, and doesn't work after following links or submitting forms.
+
+For example, to forcibly enable the `EXPERIMENT_KEY` experiment, add `force_experiment=EXPERIMENT_KEY`
+to the URL:
+
+```shell
+https://gitlab.com/<EXPERIMENT_ENTRY_URL>?force_experiment=<EXPERIMENT_KEY>
+```
+
### Testing and test helpers
#### RSpec
@@ -264,7 +351,7 @@ context 'when the experiment is active' do
context 'when the user is in the experimental group' do
before do
- stub_experiment_for_user(signup_flow: true)
+ stub_experiment_for_subject(signup_flow: true)
end
it { is_expected.to do_experimental_thing }
@@ -272,7 +359,7 @@ context 'when the experiment is active' do
context 'when the user is in the control group' do
before do
- stub_experiment_for_user(signup_flow: false)
+ stub_experiment_for_subject(signup_flow: false)
end
it { is_expected.to do_control_thing }
diff --git a/doc/development/export_csv.md b/doc/development/export_csv.md
new file mode 100644
index 00000000000..99b6062736d
--- /dev/null
+++ b/doc/development/export_csv.md
@@ -0,0 +1,21 @@
+---
+stage: Manage
+group: Import
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Export to CSV
+
+This document lists the different implementations of CSV export in the GitLab codebase.
+
+| Export type | How it works | Advantages | Disadvantages | Existing examples |
+|---|---|---|---|---|
+| Streaming | - Query and yield data in batches to a response stream.<br>- Download starts immediately. | - Report available immediately. | - No progress indicator.<br>- Requires a reliable connection. | [Export Audit Event Log](../administration/audit_events.md#export-to-csv) |
+| Downloading | - Query and write data in batches to a temporary file.<br>- Loads the file into memory.<br>- Sends the file to the client. | - Report available immediately. | - A large amount of data might cause a request timeout.<br>- Memory intensive.<br>- Request expires when the user navigates to a different page. | [Export Chain of Custody Report](../user/compliance/compliance_dashboard/#chain-of-custody-report) |
+| As email attachment | - Asynchronously process the query with a background job.<br>- Email uses the export as an attachment. | - Asynchronous processing. | - Requires users to use a different app (email) to download the CSV.<br>- Email providers may limit attachment size. | - [Export Issues](../user/project/issues/csv_export.md)<br>- [Export Merge Requests](../user/project/merge_requests/csv_export.md) |
+| As downloadable link in email (*) | - Asynchronously process the query with a background job.<br>- Email uses an export link. | - Asynchronous processing.<br>- Bypasses email provider attachment size limit. | - Requires users to use a different app (email).<br>- Requires additional storage and cleanup. | [Export User Permissions](https://gitlab.com/gitlab-org/gitlab/-/issues/1772) |
+| Polling (non-persistent state) | - Asynchronously processes the query with a background job.<br>- Frontend (FE) polls every few seconds to check if the CSV file is ready. | - Asynchronous processing.<br>- Automatically downloads to the local machine on completion.<br>- In-app solution. | - Non-persistable request: the request expires when the user navigates to a different page.<br>- API is processed for each polling request. | [Export Vulnerabilities](../user/application_security/security_dashboard/#export-vulnerabilities) |
+| Polling (persistent state) (*) | - Asynchronously processes the query with a background job.<br>- Backend (BE) maintains the export state.<br>- FE polls every few seconds to check status.<br>- FE shows 'Download link' when the export is ready.<br>- User can download or regenerate a new report. | - Asynchronous processing.<br>- No database calls made during the polling requests (HTTP 304 status is returned until the export status changes).<br>- Does not require the user to stay on the page until the export is complete.<br>- In-app solution.<br>- Can be expanded into a generic CSV feature (such as dashboard / CSV API). | - Requires maintaining export states in the DB.<br>- Does not automatically download the CSV export to the local machine; requires users to click the 'Download' button. | [Export Merge Commits Report](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/43055) |
+
+NOTE:
+Export types marked with an asterisk (*) are currently a work in progress.
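+
+As a minimal sketch of the streaming approach in the table above (the controller,
+model, and column names are hypothetical), a Rails action can stream CSV rows in
+batches so the download starts immediately:
+
+```ruby
+require 'csv'
+
+class AuditEventsController < ApplicationController
+  def index
+    headers['Content-Type'] = 'text/csv'
+    headers['Content-Disposition'] = 'attachment; filename="audit_events.csv"'
+
+    # Yield rows to the response body in batches: the report is available
+    # immediately and memory use stays flat, but there is no progress indicator.
+    self.response_body = Enumerator.new do |rows|
+      rows << CSV.generate_line(%w[id author_id action created_at])
+
+      AuditEvent.in_batches(of: 1_000) do |batch|
+        batch.each do |event|
+          rows << CSV.generate_line([event.id, event.author_id, event.action, event.created_at])
+        end
+      end
+    end
+  end
+end
+```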
diff --git a/doc/development/fe_guide/accessibility.md b/doc/development/fe_guide/accessibility.md
index 8cb61019556..92730e8139f 100644
--- a/doc/development/fe_guide/accessibility.md
+++ b/doc/development/fe_guide/accessibility.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Accessibility & Readability
diff --git a/doc/development/fe_guide/architecture.md b/doc/development/fe_guide/architecture.md
index 4b45fb97c76..964837dc5f7 100644
--- a/doc/development/fe_guide/architecture.md
+++ b/doc/development/fe_guide/architecture.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Architecture
diff --git a/doc/development/fe_guide/axios.md b/doc/development/fe_guide/axios.md
index 856b03e4b47..cf5a4970c04 100644
--- a/doc/development/fe_guide/axios.md
+++ b/doc/development/fe_guide/axios.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Axios
diff --git a/doc/development/fe_guide/dependencies.md b/doc/development/fe_guide/dependencies.md
index bbe4cdc285d..0ec10399ae0 100644
--- a/doc/development/fe_guide/dependencies.md
+++ b/doc/development/fe_guide/dependencies.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend dependencies
@@ -32,7 +32,7 @@ updated using renovate are:
We discourage installing some dependencies in [GitLab repository](https://gitlab.com/gitlab-org/gitlab)
because they can create conflicts in the dependency tree. Blocked dependencies are declared in the
-`blockDependencies` property of GitLab’s [`package.json` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/package.json).
+`blockDependencies` property of the GitLab [`package.json` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/package.json).
## Dependency notes
diff --git a/doc/development/fe_guide/design_patterns.md b/doc/development/fe_guide/design_patterns.md
index ed4d91f34bd..784612682f8 100644
--- a/doc/development/fe_guide/design_patterns.md
+++ b/doc/development/fe_guide/design_patterns.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Design Patterns
diff --git a/doc/development/fe_guide/development_process.md b/doc/development/fe_guide/development_process.md
index 915dab06ac2..d122459f51c 100644
--- a/doc/development/fe_guide/development_process.md
+++ b/doc/development/fe_guide/development_process.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend Development Process
@@ -85,7 +85,7 @@ With the purpose of being [respectful of others' time](https://about.gitlab.com/
### Share your work early
1. Before writing code, ensure your vision of the architecture is aligned with
- GitLab's architecture.
+ GitLab architecture.
1. Add a diagram to the issue and ask a frontend maintainer in the Slack channel `#frontend_maintainers` about it.
![Diagram of Issue Boards Architecture](img/boards_diagram.png)
diff --git a/doc/development/fe_guide/droplab/droplab.md b/doc/development/fe_guide/droplab/droplab.md
index fe0f07b3019..8f1ecc115fe 100644
--- a/doc/development/fe_guide/droplab/droplab.md
+++ b/doc/development/fe_guide/droplab/droplab.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# DropLab
@@ -22,7 +22,7 @@ The value is irrelevant.
The DropLab class has no side effects, so you must always call `.init` when the
DOM is ready. `DropLab.prototype.init` takes the same arguments as `DropLab.prototype.addHook`.
-If you don't provide any arguments, it will globally query and instantiate all
+If you don't provide any arguments, it globally queries and instantiates all
DropLab-compatible dropdowns.
```html
@@ -103,7 +103,7 @@ droplab.addHook(trigger, list);
### Dynamic data
-Adding `data-dynamic` to your dropdown element will enable dynamic list
+Adding `data-dynamic` to your dropdown element enables dynamic list
rendering.
You can template a list item using the keys of the data object provided. Use the
@@ -111,7 +111,7 @@ handlebars syntax `{{ value }}` to HTML escape the value. Use the `<%= value %>`
syntax to interpolate the value. Use the `<%= value %>` syntax to evaluate the
value.
-Passing an array of objects to `DropLab.prototype.addData` will render that data
+Passing an array of objects to `DropLab.prototype.addData` renders that data
for all `data-dynamic` dropdown lists tracked by that DropLab instance.
```html
@@ -227,14 +227,14 @@ provides some potentially useful data.
Plugins are objects that are registered to be executed when a hook is added (when
a DropLab trigger and dropdown are instantiated).
-If no modules API is detected, the library will fall back as it does with
-`window.DropLab` and will add `window.DropLab.plugins.PluginName`.
+If no modules API is detected, the library falls back as it does with
+`window.DropLab` and adds `window.DropLab.plugins.PluginName`.
### Usage
To use plugins, you can pass them in an array as the third argument of
`DropLab.prototype.init` or `DropLab.prototype.addHook`. Some plugins require
-configuration values; the config object can be passed as the fourth argument.
+configuration values; the configuration object can be passed as the fourth argument.
```html
<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a>
diff --git a/doc/development/fe_guide/droplab/plugins/ajax.md b/doc/development/fe_guide/droplab/plugins/ajax.md
index 3ac876ad062..f12f8f260c7 100644
--- a/doc/development/fe_guide/droplab/plugins/ajax.md
+++ b/doc/development/fe_guide/droplab/plugins/ajax.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Ajax plugin
diff --git a/doc/development/fe_guide/droplab/plugins/filter.md b/doc/development/fe_guide/droplab/plugins/filter.md
index 9ab4946d21d..79f10cdb6c1 100644
--- a/doc/development/fe_guide/droplab/plugins/filter.md
+++ b/doc/development/fe_guide/droplab/plugins/filter.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Filter plugin
@@ -49,7 +49,7 @@ droplab.addData('trigger', [{
In the previous code, the input string is compared against the `test` key of the
passed data objects.
-Optionally you can set `filterFunction` to a function. This function will be
+Optionally, you can set `filterFunction` to a function. This function is then
used instead of `Filter`'s built-in string search. `filterFunction` is passed
two arguments: the first is one of the data objects, and the second is the
current input value.
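+
+For example, a minimal `filterFunction` that matches case-insensitively against the `test` key mentioned above (illustrative only):
+
+```javascript
+// Return `true` to keep an item; this replaces Filter's built-in string search.
+const filterFunction = (datum, inputValue) =>
+  datum.test.toLowerCase().includes(inputValue.toLowerCase());
+```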
diff --git a/doc/development/fe_guide/droplab/plugins/index.md b/doc/development/fe_guide/droplab/plugins/index.md
index d44670bfb3c..c7a2865ca83 100644
--- a/doc/development/fe_guide/droplab/plugins/index.md
+++ b/doc/development/fe_guide/droplab/plugins/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: A list of DropLab plugins.
---
diff --git a/doc/development/fe_guide/droplab/plugins/input_setter.md b/doc/development/fe_guide/droplab/plugins/input_setter.md
index ab8a5cebd08..a3c073520cb 100644
--- a/doc/development/fe_guide/droplab/plugins/input_setter.md
+++ b/doc/development/fe_guide/droplab/plugins/input_setter.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# InputSetter plugin
@@ -67,6 +67,6 @@ element's `data-selected-id` to `1`.
Optionally, you can set `inputAttribute` to a string that's the name of an
attribute on your `input` element that you want to update. If you don't provide
-an `inputAttribute`, `InputSetter` will update the `value` of the `input`
+an `inputAttribute`, `InputSetter` updates the `value` of the `input`
element if it's an `INPUT` element, or the `textContent` of the `input` element
if it isn't an `INPUT` element.
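+
+For example, an illustrative configuration entry that updates the `data-selected-id` attribute instead of the `value` (the `input` element and the other keys are assumptions based on the examples above):
+
+```javascript
+const inputSetterConfig = [
+  {
+    input: document.getElementById('input'), // the element InputSetter should update
+    valueAttribute: 'data-id',               // assumed: the attribute to read the value from
+    inputAttribute: 'data-selected-id',      // update this attribute instead of `value`
+  },
+];
+```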
diff --git a/doc/development/fe_guide/editor_lite.md b/doc/development/fe_guide/editor_lite.md
index eb5852d258d..465d64ff63c 100644
--- a/doc/development/fe_guide/editor_lite.md
+++ b/doc/development/fe_guide/editor_lite.md
@@ -1,7 +1,7 @@
---
-stage: none
-group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+stage: Create
+group: Editor
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Editor Lite
@@ -58,7 +58,7 @@ The editor follows the same public API as [provided by Monaco editor](https://mi
| Function | Arguments | Description
| ----- | ----- | ----- |
| `updateModelLanguage` | `path`: String | Updates the instance's syntax highlighting to follow the extension of the passed `path`. Available only on _instance_ level|
-| `use` | Array of objects | Array of **extensions** to apply to the instance. Accepts only the array of _objects_, which means that the extensions' ES6 modules should be fetched and resolved in your views/components before being passed to `use`. This prop is available on _instance_ (applies extension to this particular instance) and _global edtor_ (applies the same extension to all instances) levels. |
+| `use` | Array of objects | Array of **extensions** to apply to the instance. Accepts only the array of _objects_, which means that the extensions' ES6 modules should be fetched and resolved in your views/components before being passed to `use`. This prop is available on _instance_ (applies extension to this particular instance) and _global editor_ (applies the same extension to all instances) levels. |
| Monaco Editor options | See [documentation](https://microsoft.github.io/monaco-editor/api/interfaces/monaco.editor.istandalonecodeeditor.html) | Default Monaco editor options |
## Tips
@@ -68,7 +68,7 @@ The editor follows the same public API as [provided by Monaco editor](https://mi
Editor Lite comes with the loading state built-in, making spinners and loaders rarely needed in HTML. To benefit from the built-in loading state, set the `data-editor-loading` property on the HTML element that is supposed to contain the editor. Editor Lite will show the loader automatically while it's bootstrapping.
![Editor Lite: loading state](img/editor_lite_loading.png)
-1. Update syntax highlighting if the file name changes.
+1. Update syntax highlighting if the filename changes.
```javascript
// fileNameEl here is the HTML input element that contains the file name
diff --git a/doc/development/fe_guide/emojis.md b/doc/development/fe_guide/emojis.md
index 9b1abaa14ae..7151e2ffeee 100644
--- a/doc/development/fe_guide/emojis.md
+++ b/doc/development/fe_guide/emojis.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Emojis
diff --git a/doc/development/fe_guide/event_tracking.md b/doc/development/fe_guide/event_tracking.md
index 79ea80a52ea..24e83ffc524 100644
--- a/doc/development/fe_guide/event_tracking.md
+++ b/doc/development/fe_guide/event_tracking.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/index.md'
---
This document was moved to [another location](../product_analytics/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/fe_guide/frontend_faq.md b/doc/development/fe_guide/frontend_faq.md
index 22a48c8f9f9..9612f604b56 100644
--- a/doc/development/fe_guide/frontend_faq.md
+++ b/doc/development/fe_guide/frontend_faq.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend FAQ
@@ -82,13 +82,13 @@ follow up issue and attach it to the component implementation epic found within
### 4. My submit form button becomes disabled after submitting
-If you are using a submit button inside a form and you attach an `onSubmit` event listener on the form element, [this piece of code](https://gitlab.com/gitlab-org/gitlab/blob/794c247a910e2759ce9b401356432a38a4535d49/app/assets/javascripts/main.js#L225) will add a `disabled` class selector to the submit button when the form is submitted.
+If you are using a submit button inside a form and you attach an `onSubmit` event listener on the form element, [this piece of code](https://gitlab.com/gitlab-org/gitlab/blob/794c247a910e2759ce9b401356432a38a4535d49/app/assets/javascripts/main.js#L225) adds a `disabled` class selector to the submit button when the form is submitted.
To avoid this behavior, add the class `js-no-auto-disable` to the button.
### 5. Should I use a full URL (i.e. `gon.gitlab_url`) or a full path (i.e. `gon.relative_url_root`) when referencing backend endpoints?
-It's preferred to use a **full path** over a **full URL** because the URL will use the hostname configured with
-GitLab which may not match the request. This will cause [CORS issues like this Web IDE one](https://gitlab.com/gitlab-org/gitlab/-/issues/36810).
+It's preferred to use a **full path** over a **full URL** because the URL uses the hostname configured with
+GitLab which may not match the request. This causes [CORS issues like this Web IDE one](https://gitlab.com/gitlab-org/gitlab/-/issues/36810).
Example:
@@ -161,8 +161,9 @@ Sometimes it's necessary to test locally what the frontend production build woul
1. Open `gitlab.yaml` located in your `gitlab` installation folder, scroll down to the `webpack` section and change `dev_server` to `enabled: false`.
1. Run `yarn webpack-prod && gdk restart rails-web`.
-The production build takes a few minutes to be completed; any code change at this point will be
+The production build takes a few minutes to complete; any code changes at this point are
displayed only after executing item 3 above again.
+
To return to the normal development mode:
1. Open `gitlab.yaml` located in your `gitlab` installation folder, scroll down to the `webpack` section and change back `dev_server` to `enabled: true`.
@@ -181,7 +182,7 @@ we're using that our target browsers don't support. You don't need to add `core-
polyfills manually.
GitLab adds non-`core-js` polyfills for extending browser features (such as
-GitLab's SVG polyfill), which allow us to reference SVGs by using `<use xlink:href>`.
+the GitLab SVG polyfill), which allow us to reference SVGs by using `<use xlink:href>`.
Be sure to add these polyfills to `app/assets/javascripts/commons/polyfills.js`.
To see what polyfills are being used:
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index cae2435e4ff..b1896863af9 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -143,7 +143,7 @@ More about fragments:
## Global IDs
-GitLab's GraphQL API expresses `id` fields as Global IDs rather than the PostgreSQL
+The GitLab GraphQL API expresses `id` fields as Global IDs rather than the PostgreSQL
primary key `id`. Global ID is [a convention](https://graphql.org/learn/global-object-identification/)
used for caching and fetching in client-side libraries.
@@ -187,7 +187,7 @@ As shown in the code example by using `produce`, we can perform any kind of dire
`draftState`. Besides, `immer` guarantees that a new state which includes the changes to `draftState` will be generated.
Finally, to verify whether the immutable cache update is working properly, we need to change
-`assumeImmutableResults` to `true` in the `default client config` (see [Apollo Client](#apollo-client) for more info).
+`assumeImmutableResults` to `true` in the default client configuration (see [Apollo Client](#apollo-client) for more information).
If everything is working properly `assumeImmutableResults` should remain set to `true`.
@@ -411,7 +411,7 @@ handleClick() {
### Working with pagination
-GitLab's GraphQL API uses [Relay-style cursor pagination](https://www.apollographql.com/docs/react/data/pagination/#cursor-based)
+The GitLab GraphQL API uses [Relay-style cursor pagination](https://www.apollographql.com/docs/react/pagination/overview/#cursor-based)
for connection types. This means a "cursor" is used to keep track of where in the data
set the next items should be fetched from. [GraphQL Ruby Connection Concepts](https://graphql-ruby.org/pagination/connection_concepts.html)
is a good overview and introduction to connections.
@@ -439,9 +439,11 @@ parameter, indicating a starting or ending point of our pagination. They should
followed with `first` or `last` parameter respectively to indicate _how many_ items
we want to fetch after or before a given endpoint.
-For example, here we're fetching 10 designs after a cursor:
+For example, here we're fetching 10 designs after a cursor (let us call this `projectQuery`):
```graphql
+#import "~/graphql_shared/fragments/pageInfo.fragment.graphql"
+
query {
project(fullPath: "root/my-project") {
id
@@ -453,6 +455,9 @@ query {
id
}
}
+ pageInfo {
+ ...PageInfo
+ }
}
}
}
@@ -460,21 +465,31 @@ query {
}
```
+Note that we are using the [`pageInfo.fragment.graphql`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/graphql_shared/fragments/pageInfo.fragment.graphql) fragment to populate the `pageInfo` field.
+
#### Using `fetchMore` method in components
+This approach makes sense with user-handled pagination (for example, when the user scrolls to fetch more data or explicitly clicks a "Next Page" button).
+When we need to fetch all the data initially, it is recommended to use [a (non-smart) query](#using-a-recursive-query-in-components) instead.
+
When making an initial fetch, we usually want to start a pagination from the beginning.
In this case, we can either:
- Skip passing a cursor.
- Pass `null` explicitly to `after`.
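+
+For example, either of these (purely illustrative) variable objects starts pagination from the beginning:
+
+```javascript
+const variables = { first: 10 };                      // skip the cursor entirely
+const variablesWithNull = { first: 10, after: null }; // or pass `null` explicitly
+```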
-After data is fetched, we should save a `pageInfo` object. Let's assume we're storing
-it to Vue component `data`:
+After data is fetched, we can use the `update` hook as an opportunity [to customize
+the data that is set in the Vue component property](https://apollo.vuejs.org/api/smart-query.html#options), getting hold of the `pageInfo` object among other data.
+
+In the `result` hook, we can inspect the `pageInfo` object to see if we need to fetch
+the next page. Note that we also keep a `requestCount` to ensure that the application
+does not keep requesting the next page indefinitely:
```javascript
data() {
return {
pageInfo: null,
+ requestCount: 0,
}
},
apollo: {
@@ -482,13 +497,29 @@ apollo: {
query: projectQuery,
variables() {
return {
- // rest of design variables
- ...
+ // ... The rest of the design variables
first: 10,
};
},
- result(res) {
- this.pageInfo = res.data?.project?.issue?.designCollection?.designs?.pageInfo;
+ update(data) {
+ const { id = null, issue = {} } = data.project || {};
+ const { edges = [], pageInfo } = issue.designCollection?.designs || {};
+
+ return {
+ id,
+ edges,
+ pageInfo,
+ };
+ },
+ result() {
+ const { pageInfo } = this.designs;
+
+ // Increment the request count with each new result
+ this.requestCount += 1;
+ // Only fetch next page if we have more requests and there is a next page to fetch
+ if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
+ this.fetchNextPage(pageInfo.endCursor);
+ }
},
},
},
@@ -497,34 +528,154 @@ apollo: {
When we want to move to the next page, we use an Apollo `fetchMore` method, passing a
new cursor (and, optionally, new variables) there. In the `updateQuery` hook, we have
to return a result we want to see in the Apollo cache after fetching the next page.
+[Immer's `produce`](#immutability-and-cache-updates) function can help us with immutability here:
```javascript
-fetchNextPage() {
- // as a first step, we're checking if we have more pages to move forward
- if (this.pageInfo?.hasNextPage) {
- this.$apollo.queries.designs.fetchMore({
- variables: {
- // rest of design variables
- ...
- first: 10,
- after: this.pageInfo?.endCursor,
- },
- updateQuery(previousResult, { fetchMoreResult }) {
- // here we can implement the logic of adding new designs to fetched one (for example, if we use infinite scroll)
- // or replacing old result with the new one if we use numbered pages
+fetchNextPage(endCursor) {
+ this.$apollo.queries.designs.fetchMore({
+ variables: {
+ // ... The rest of the design variables
+ first: 10,
+ after: endCursor,
+ },
+ updateQuery(previousResult, { fetchMoreResult }) {
+ // Here we can implement the logic of adding new designs to existing ones
+ // (for example, if we use infinite scroll) or replacing old result
+ // with the new one if we use numbered pages
+
+ const { designs: previousDesigns } = previousResult.project.issue.designCollection;
+          const { designs: newDesigns } = fetchMoreResult.project.issue.designCollection;
+
+ return produce(previousResult, draftData => {
+ // `produce` gives us a working copy, `draftData`, that we can modify
+ // as we please and from it will produce the next immutable result for us
+ draftData.project.issue.designCollection.designs = [...previousDesigns, ...newDesigns];
+ });
+ },
+ });
+}
+```
- const newDesigns = fetchMoreResult.project.issue.designCollection.designs;
- previousResult.project.issue.designCollection.designs.push(...newDesigns)
+#### Using a recursive query in components
- return previousResult;
- },
- });
+When it is necessary to fetch all paginated data initially, an Apollo query can do the trick for us.
+If we need to fetch the next page based on user interactions, it is recommended to use a [`smartQuery`](https://apollo.vuejs.org/api/smart-query.html) along with the [`fetchMore` hook](#using-fetchmore-method-in-components).
+
+When the query resolves, we can update the component data and inspect the `pageInfo` object
+to see if we need to fetch the next page, that is, call the method recursively.
+
+Note that we also keep a `requestCount` to ensure that the application does not keep
+requesting the next page indefinitely.
+
+```javascript
+data() {
+ return {
+ requestCount: 0,
+ isLoading: false,
+ designs: {
+ edges: [],
+ pageInfo: null,
+ },
+ }
+},
+created() {
+ this.fetchDesigns();
+},
+methods: {
+ handleError(error) {
+ this.isLoading = false;
+ // Do something with `error`
+ },
+ fetchDesigns(endCursor) {
+ this.isLoading = true;
+
+ return this.$apollo
+ .query({
+ query: projectQuery,
+        variables: {
+          // ... The rest of the design variables
+          first: 10,
+          endCursor,
+        },
+ })
+ .then(({ data }) => {
+ const { id = null, issue = {} } = data.project || {};
+ const { edges = [], pageInfo } = issue.designCollection?.designs || {};
+
+ // Update data
+ this.designs = {
+ id,
+            edges: [...this.designs.edges, ...edges],
+            pageInfo,
+ };
+
+ // Increment the request count with each new result
+ this.requestCount += 1;
+ // Only fetch next page if we have more requests and there is a next page to fetch
+ if (this.requestCount < MAX_REQUEST_COUNT && pageInfo?.hasNextPage) {
+ this.fetchDesigns(pageInfo.endCursor);
+ } else {
+ this.isLoading = false;
+ }
+ })
+ .catch(this.handleError);
+ },
+},
+```
+
+#### Pagination and optimistic updates
+
+When Apollo caches paginated data client-side, it includes `pageInfo` variables in the cache key.
+If you wanted to optimistically update that data, you'd have to provide `pageInfo` variables
+when interacting with the cache via [`.readQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.readQuery)
+or [`.writeQuery()`](https://www.apollographql.com/docs/react/v2/api/apollo-client/#ApolloClient.writeQuery).
+This can be tedious and counter-intuitive.
+
+To make it easier to deal with cached paginated queries, Apollo provides the `@connection` directive.
+The directive accepts a `key` parameter that is used as a static key when caching the data.
+You can then retrieve the data without providing any pagination-specific variables.
+
+Here's an example of a query using the `@connection` directive:
+
+```graphql
+#import "~/graphql_shared/fragments/pageInfo.fragment.graphql"
+
+query DastSiteProfiles($fullPath: ID!, $after: String, $before: String, $first: Int, $last: Int) {
+ project(fullPath: $fullPath) {
+ siteProfiles: dastSiteProfiles(after: $after, before: $before, first: $first, last: $last)
+ @connection(key: "dastSiteProfiles") {
+ pageInfo {
+ ...PageInfo
+ }
+ edges {
+ cursor
+ node {
+ id
+ # ...
+ }
+ }
+ }
}
}
```
-Please note we don't have to save `pageInfo` one more time; `fetchMore` triggers a query
-`result` hook as well.
+In this example, Apollo stores the data under the stable `dastSiteProfiles` cache key.
+
+To retrieve that data from the cache, you'd then only need to provide the `$fullPath` variable,
+omitting pagination-specific variables like `after` or `before`:
+
+```javascript
+const data = store.readQuery({
+ query: dastSiteProfilesQuery,
+ variables: {
+ fullPath: 'namespace/project',
+ },
+});
+```
+
+Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/v2/caching/cache-interaction/#the-connection-directive).
### Managing performance
@@ -561,7 +712,7 @@ it('tests apollo component', () => {
const vm = shallowMount(App);
vm.setData({
- ...mock data
+ ...mockData
});
});
```
@@ -633,7 +784,7 @@ function createComponent(props = {}) {
ApolloMutation,
},
mocks: {
- $apollo:
+ $apollo,
}
});
}
@@ -666,34 +817,51 @@ it('calls mutation on submitting form ', () => {
To test the logic of Apollo cache updates, we might want to mock an Apollo Client in our unit tests. We use the [`mock-apollo-client`](https://www.npmjs.com/package/mock-apollo-client) library to mock the Apollo client, and the [`createMockApollo` helper](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/frontend/helpers/mock_apollo_helper.js) we created on top of it.
-To separate tests with mocked client from 'usual' unit tests, it's recommended to create an additional component factory. This way we only create Apollo Client instance when it's necessary:
-
-```javascript
-function createComponent() {...}
-
-function createComponentWithApollo() {...}
-```
+To separate tests with a mocked client from 'usual' unit tests, it's recommended to create an additional factory and pass the created `mockApollo` as an option to the `createComponent` factory. This way we only create an Apollo Client instance when it's necessary.
-Then we need to inject `VueApollo` to Vue local instance (`localVue.use()` can also be called within `createComponentWithApollo()`)
+We need to inject `VueApollo` into the local Vue instance and, similarly, it is recommended to call `localVue.use()` within `createMockApolloProvider()` so that it is only loaded when necessary.
```javascript
import VueApollo from 'vue-apollo';
import { createLocalVue } from '@vue/test-utils';
const localVue = createLocalVue();
-localVue.use(VueApollo);
+
+function createMockApolloProvider() {
+ localVue.use(VueApollo);
+
+ return createMockApollo(requestHandlers);
+}
+
+function createComponent(options = {}) {
+ const { mockApollo } = options;
+ ...
+ return shallowMount(..., {
+ localVue,
+ apolloProvider: mockApollo,
+ ...
+ });
+}
```
-After this, on the global `describe`, we should create a variable for `fakeApollo`:
+After this, you can control whether you need a variable for `mockApollo` and assign it in the appropriate `describe` block:
```javascript
-describe('Some component with Apollo mock', () => {
+describe('Some component', () => {
let wrapper;
- let fakeApollo
-})
+
+ describe('with Apollo mock', () => {
+ let mockApollo;
+
+ beforeEach(() => {
+ mockApollo = createMockApolloProvider();
+ wrapper = createComponent({ mockApollo });
+ });
+ });
+});
```
-Within component factory, we need to define an array of _handlers_ for every query or mutation:
+Within the `createMockApolloProvider` factory, we need to define an array of _handlers_ for every query or mutation:
```javascript
import getDesignListQuery from '~/design_management/graphql/queries/get_design_list.query.graphql';
@@ -702,13 +870,16 @@ import moveDesignMutation from '~/design_management/graphql/mutations/move_desig
describe('Some component with Apollo mock', () => {
let wrapper;
- let fakeApollo;
+ let mockApollo;
+
+ function createMockApolloProvider() {
+ Vue.use(VueApollo);
- function createComponentWithApollo() {
const requestHandlers = [
[getDesignListQuery, jest.fn().mockResolvedValue(designListQueryResponse)],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
];
+ ...
}
})
```
@@ -718,23 +889,38 @@ After this, we need to create a mock Apollo Client instance using a helper:
```javascript
import createMockApollo from 'jest/helpers/mock_apollo_helper';
-describe('Some component with Apollo mock', () => {
+describe('Some component', () => {
let wrapper;
- let fakeApollo;
- function createComponentWithApollo() {
+ function createMockApolloProvider() {
+ Vue.use(VueApollo);
+
const requestHandlers = [
[getDesignListQuery, jest.fn().mockResolvedValue(designListQueryResponse)],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
];
- fakeApollo = createMockApollo(requestHandlers);
- wrapper = shallowMount(Index, {
+ return createMockApollo(requestHandlers);
+ }
+
+ function createComponent(options = {}) {
+ const { mockApollo } = options;
+
+ return shallowMount(Index, {
localVue,
- apolloProvider: fakeApollo,
+ apolloProvider: mockApollo,
});
}
-})
+
+ describe('with Apollo mock', () => {
+ let mockApollo;
+
+ beforeEach(() => {
+ mockApollo = createMockApolloProvider();
+ wrapper = createComponent({ mockApollo });
+ });
+ });
+});
```
When mocking resolved values, ensure the structure of the response is the same
@@ -744,13 +930,15 @@ When testing queries, please keep in mind they are promises, so they need to be
```javascript
it('renders a loading state', () => {
- createComponentWithApollo();
+ const mockApollo = createMockApolloProvider();
+ const wrapper = createComponent({ mockApollo });
expect(wrapper.find(LoadingSpinner).exists()).toBe(true)
});
it('renders designs list', async () => {
- createComponentWithApollo();
+ const mockApollo = createMockApolloProvider();
+ const wrapper = createComponent({ mockApollo });
jest.runOnlyPendingTimers();
await wrapper.vm.$nextTick();
@@ -762,7 +950,7 @@ it('renders designs list', async () => {
If we need to test a query error, we need to mock a rejected value as request handler:
```javascript
-function createComponentWithApollo() {
+function createMockApolloProvider() {
...
const requestHandlers = [
    [getDesignListQuery, jest.fn().mockRejectedValue(new Error('GraphQL error'))],
@@ -772,7 +960,7 @@ function createComponentWithApollo() {
...
it('renders error if query fails', async () => {
- createComponent()
+ const wrapper = createComponent();
jest.runOnlyPendingTimers();
await wrapper.vm.$nextTick();
@@ -786,9 +974,11 @@ Request handlers can also be passed to component factory as a parameter.
Mutations could be tested the same way with a few additional `nextTick`s to get the updated result:
```javascript
-function createComponentWithApollo({
+function createMockApolloProvider({
moveHandler = jest.fn().mockResolvedValue(moveDesignMutationResponse),
}) {
+ Vue.use(VueApollo);
+
moveDesignHandler = moveHandler;
const requestHandlers = [
@@ -797,15 +987,21 @@ function createComponentWithApollo({
[moveDesignMutation, moveDesignHandler],
];
- fakeApollo = createMockApollo(requestHandlers);
- wrapper = shallowMount(Index, {
+ return createMockApollo(requestHandlers);
+}
+
+function createComponent(options = {}) {
+ const { mockApollo } = options;
+
+ return shallowMount(Index, {
localVue,
- apolloProvider: fakeApollo,
+ apolloProvider: mockApollo,
});
}
...
it('calls a mutation with correct parameters and reorders designs', async () => {
- createComponentWithApollo({});
+ const mockApollo = createMockApolloProvider({});
+ const wrapper = createComponent({ mockApollo });
wrapper.find(VueDraggable).vm.$emit('change', {
moved: {
@@ -828,14 +1024,100 @@ it('calls a mutation with correct parameters and reorders designs', async () =>
#### Testing `@client` queries
-If your application contains `@client` queries, most probably you will have an Apollo Client warning saying that you have a local query but no resolvers are defined. In order to fix it, you need to pass resolvers to the mocked client with a second parameter (bare minimum is an empty object):
+##### Using mock resolvers
+
+If your application contains `@client` queries, you get
+the following Apollo Client warning when passing only handlers:
+
+```shell
+Unexpected call of console.warn() with:
+
+Warning: mock-apollo-client - The query is entirely client-side (using @client directives) and resolvers have been configured. The request handler will not be called.
+```
+
+To fix this, you should define mock `resolvers` instead of
+mock `handlers`. For example, given the following `@client` query:
+
+```graphql
+query getBlobContent($path: String, $ref: String!) {
+ blobContent(path: $path, ref: $ref) @client {
+ rawData
+ }
+}
+```
+
+And its actual client-side resolvers:
```javascript
-import createMockApollo from 'jest/helpers/mock_apollo_helper';
-...
-fakeApollo = createMockApollo(requestHandlers, {});
+import Api from '~/api';
+
+export const resolvers = {
+ Query: {
+ blobContent(_, { path, ref }) {
+ return {
+ __typename: 'BlobContent',
+ rawData: Api.getRawFile(path, { ref }).then(({ data }) => {
+ return data;
+ }),
+ };
+ },
+ },
+};
+
+export default resolvers;
+```
+
+We can use a **mock resolver** that returns data with the
+same shape, while mocking the result with a mock function:
+
+```javascript
+let mockApollo;
+let mockBlobContentData; // mock function, jest.fn();
+
+const mockResolvers = {
+ Query: {
+ blobContent() {
+ return {
+ __typename: 'BlobContent',
+ rawData: mockBlobContentData(), // the mock function can resolve mock data
+ };
+ },
+ },
+};
+
+const createComponentWithApollo = ({ props = {} } = {}) => {
+ mockApollo = createMockApollo([], mockResolvers); // resolvers are the second parameter
+
+ wrapper = shallowMount(MyComponent, {
+ localVue,
+    propsData: props,
+ apolloProvider: mockApollo,
+ // ...
+ })
+};
+
```
+After that, you can resolve or reject the value as needed.
+
+```javascript
+beforeEach(() => {
+ mockBlobContentData = jest.fn();
+});
+
+it('shows data', async () => {
+  mockBlobContentData.mockResolvedValue(mockCiYml); // you may resolve or reject to mock the result
+
+ createComponentWithApollo();
+
+ await waitForPromises(); // wait on the resolver mock to execute
+
+ expect(findContent().text()).toBe(mockCiYml);
+});
+```
+
+##### Using `cache.writeQuery`
+
Sometimes we want to test a `result` hook of the local query. In order to have it triggered, we need to populate a cache with correct data to be fetched with this query:
```javascript
@@ -849,14 +1131,16 @@ query fetchLocalUser {
```javascript
import fetchLocalUserQuery from '~/design_management/graphql/queries/fetch_local_user.query.graphql';
-function createComponentWithApollo() {
+function createMockApolloProvider() {
+ Vue.use(VueApollo);
+
const requestHandlers = [
[getDesignListQuery, jest.fn().mockResolvedValue(designListQueryResponse)],
[permissionsQuery, jest.fn().mockResolvedValue(permissionsQueryResponse)],
];
- fakeApollo = createMockApollo(requestHandlers, {});
- fakeApollo.clients.defaultClient.cache.writeQuery({
+ const mockApollo = createMockApollo(requestHandlers, {});
+ mockApollo.clients.defaultClient.cache.writeQuery({
query: fetchLocalUserQuery,
data: {
fetchLocalUser: {
@@ -864,18 +1148,111 @@ function createComponentWithApollo() {
name: 'Test',
},
},
- })
+ });
+
+ return mockApollo;
+}
+
+function createComponent(options = {}) {
+ const { mockApollo } = options;
- wrapper = shallowMount(Index, {
+ return shallowMount(Index, {
localVue,
- apolloProvider: fakeApollo,
+ apolloProvider: mockApollo,
+ });
+}
+```
+
+Sometimes it is necessary to control what the local resolver returns and inspect how it is called by the component. This can be done by mocking your local resolver:
+
+```javascript
+import fetchLocalUserQuery from '~/design_management/graphql/queries/fetch_local_user.query.graphql';
+
+function createMockApolloProvider(options = {}) {
+ Vue.use(VueApollo);
+ const { fetchLocalUserSpy } = options;
+
+ const mockApollo = createMockApollo([], {
+ Query: {
+ fetchLocalUser: fetchLocalUserSpy,
+ },
+ });
+
+ // Necessary for local resolvers to be activated
+ mockApollo.clients.defaultClient.cache.writeQuery({
+ query: fetchLocalUserQuery,
+ data: {},
});
+
+ return mockApollo;
}
```
+In the test you can then control what the spy is supposed to do and inspect the component after the request has returned:
+
+```javascript
+describe('My Index test with `createMockApollo`', () => {
+ let wrapper;
+ let fetchLocalUserSpy;
+
+ afterEach(() => {
+ wrapper.destroy();
+ wrapper = null;
+ fetchLocalUserSpy = null;
+ });
+
+ describe('when loading', () => {
+ beforeEach(() => {
+ const mockApollo = createMockApolloProvider();
+ wrapper = createComponent({ mockApollo });
+ });
+
+ it('displays the loader', () => {
+      // Assert that the loader is present
+ });
+ });
+
+ describe('with data', () => {
+ beforeEach(async () => {
+ fetchLocalUserSpy = jest.fn().mockResolvedValue(localUserQueryResponse);
+      const mockApollo = createMockApolloProvider({ fetchLocalUserSpy });
+ wrapper = createComponent({ mockApollo });
+ await waitForPromises();
+ });
+
+ it('should fetch data once', () => {
+ expect(fetchLocalUserSpy).toHaveBeenCalledTimes(1);
+ });
+
+ it('displays data', () => {
+      // Assert that data is present
+ });
+ });
+
+ describe('with error', () => {
+ const error = 'Error!';
+
+ beforeEach(async () => {
+ fetchLocalUserSpy = jest.fn().mockRejectedValueOnce(error);
+      const mockApollo = createMockApolloProvider({ fetchLocalUserSpy });
+ wrapper = createComponent({ mockApollo });
+ await waitForPromises();
+ });
+
+ it('should fetch data once', () => {
+ expect(fetchLocalUserSpy).toHaveBeenCalledTimes(1);
+ });
+
+ it('displays the error', () => {
+      // Assert that the error is displayed
+ });
+ });
+});
+```
+
## Handling errors
-GitLab's GraphQL mutations currently have two distinct error modes: [Top-level](#top-level-errors) and [errors-as-data](#errors-as-data).
+The GitLab GraphQL mutations currently have two distinct error modes: [Top-level](#top-level-errors) and [errors-as-data](#errors-as-data).
When using a GraphQL mutation, we must consider handling **both of these error modes** to ensure that the user receives the appropriate feedback when an error occurs.
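+
+As a purely illustrative sketch, both modes could be handled in one place. The mutation, its payload shape, and the `showErrorMessage` helper below are assumptions, not the documented GitLab pattern; see the linked sections for the actual guidance.
+
+```javascript
+import updateThingMutation from './update_thing.mutation.graphql'; // hypothetical mutation
+
+export function submitUpdate(apolloClient, input, showErrorMessage) {
+  return apolloClient
+    .mutate({ mutation: updateThingMutation, variables: { input } })
+    .then(({ data }) => {
+      // Errors-as-data: the request succeeded, but the payload carries
+      // user-facing validation errors in an `errors` array.
+      const errors = data.updateThing.errors || [];
+      if (errors.length > 0) {
+        showErrorMessage(errors.join('. '));
+      }
+    })
+    .catch((error) => {
+      // Top-level errors: network failures or errors returned at the top
+      // level of the GraphQL response reject the promise.
+      showErrorMessage(error.message);
+    });
+}
+```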
diff --git a/doc/development/fe_guide/icons.md b/doc/development/fe_guide/icons.md
index 994a3aa36fc..1468e886220 100644
--- a/doc/development/fe_guide/icons.md
+++ b/doc/development/fe_guide/icons.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Icons and SVG Illustrations
@@ -22,7 +22,7 @@ Our goal is to replace one by one all inline SVG Icons (as those currently bloat
### Usage in HAML/Rails
-To use a sprite Icon in HAML or Rails we use a specific helper function :
+To use a sprite Icon in HAML or Rails we use a specific helper function:
```ruby
sprite_icon(icon_name, size: nil, css_class: '')
@@ -31,7 +31,7 @@ sprite_icon(icon_name, size: nil, css_class: '')
- **icon_name**: Use the icon_name for the SVG sprite in the list of
([GitLab SVGs](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)**: Use one of the following sizes : 16, 24, 32, 48, 72 (this
- will be translated into a `s16` class)
+ is translated into a `s16` class)
- **css_class (optional)**: If you want to add additional CSS classes.
**Example**
@@ -90,7 +90,7 @@ Please use the following function inside JS to render an icon:
### Usage in HAML/Rails
-DANGER: **Warning:**
+WARNING:
Do not use the `spinner` or `icon('spinner spin')` rails helpers to insert
loading icons. These helpers rely on the Font Awesome icon library which is
deprecated.
diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md
index 9f98876bc8e..84c1623f8c0 100644
--- a/doc/development/fe_guide/index.md
+++ b/doc/development/fe_guide/index.md
@@ -1,13 +1,13 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend Development Guidelines
This document describes various guidelines to ensure consistency and quality
-across GitLab's frontend team.
+across the GitLab frontend team.
## Overview
@@ -24,7 +24,7 @@ Working with our frontend assets requires Node (v10.13.0 or greater) and Yarn
For our currently-supported browsers, see our [requirements](../../install/requirements.md#supported-web-browsers).
Use [BrowserStack](https://www.browserstack.com/) to test with our supported browsers.
-Sign in to BrowserStack with the credentials saved in the **Engineering** vault of GitLab's
+Sign in to BrowserStack with the credentials saved in the **Engineering** vault of the GitLab
[shared 1Password account](https://about.gitlab.com/handbook/security/#1password-guide).
## Initiatives
@@ -41,7 +41,7 @@ How we [plan and execute](development_process.md) the work on the frontend.
## Architecture
-How we go about [making fundamental design decisions](architecture.md) in GitLab's frontend team
+How we go about [making fundamental design decisions](architecture.md) in the GitLab frontend team
or make changes to our frontend development guidelines.
## Testing
@@ -56,7 +56,7 @@ Reusable components with technical and usage guidelines can be found in our
## Design Patterns
-Common JavaScript [design patterns](design_patterns.md) in GitLab's codebase.
+Common JavaScript [design patterns](design_patterns.md) in the GitLab codebase.
## Vue.js Best Practices
diff --git a/doc/development/fe_guide/keyboard_shortcuts.md b/doc/development/fe_guide/keyboard_shortcuts.md
index 227b5cfd22c..e50e9ec65df 100644
--- a/doc/development/fe_guide/keyboard_shortcuts.md
+++ b/doc/development/fe_guide/keyboard_shortcuts.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Implementing keyboard shortcuts
diff --git a/doc/development/fe_guide/performance.md b/doc/development/fe_guide/performance.md
index de9a9f5cb14..7825c89b7cf 100644
--- a/doc/development/fe_guide/performance.md
+++ b/doc/development/fe_guide/performance.md
@@ -1,11 +1,205 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Performance
+Performance is an essential part of any modern application and one of its main areas of concern.
+
+## User Timing API
+
+The [User Timing API](https://developer.mozilla.org/en-US/docs/Web/API/User_Timing_API) is a web API
+[available in all modern browsers](https://caniuse.com/?search=User%20timing). It allows measuring
+custom times and durations in your applications by placing special marks in your
+code. You can use the User Timing API in GitLab to measure any timing, regardless of the framework,
+including Rails, Vue, or vanilla JavaScript environments. For consistency and
+convenience of adoption, GitLab offers several ways to enable custom user timing metrics in
+your code.
+
+The User Timing API introduces two important paradigms: `mark` and `measure`.
+
+**Mark** is a timestamp on the performance timeline. For example,
+`performance.mark('my-component-start');` makes the browser record the time at
+which this code runs. You can then obtain information about this mark by querying
+the global performance object. For example, in your DevTools console:
+
+```javascript
+performance.getEntriesByName('my-component-start')
+```
+
+**Measure** is the duration between either:
+
+- Two marks
+- The start of navigation and a mark
+- The start of navigation and the moment the measurement is taken
+
+It takes several arguments, of which the measurement's name is the only required one. Examples:
+
+- Duration between the start and end marks:
+
+ ```javascript
+ performance.measure('My component', 'my-component-start', 'my-component-end')
+ ```
+
+- Duration between a mark and the moment the measurement is taken. The end mark is omitted in
+ this case.
+
+ ```javascript
+ performance.measure('My component', 'my-component-start')
+ ```
+
+- Duration between [the navigation start](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin)
+ and the moment the actual measurement is taken.
+
+ ```javascript
+ performance.measure('My component')
+ ```
+
+- Duration between [the navigation start](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin)
+ and a mark. You cannot omit the start mark in this case but you can set it to `undefined`.
+
+ ```javascript
+ performance.measure('My component', undefined, 'my-component-end')
+ ```
+
+To query a particular `measure`, you can use the same API as for `mark`:
+
+```javascript
+performance.getEntriesByName('My component')
+```
+
+You can also query for all captured marks and measurements:
+
+```javascript
+performance.getEntriesByType('mark');
+performance.getEntriesByType('measure');
+```
+
+Using `getEntriesByName()` or `getEntriesByType()` returns an array of [`PerformanceMeasure`
+objects](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceMeasure), which contain
+information about the measurement's start time and duration.
+
+### User Timing API utility
+
+You can use the `performanceMarkAndMeasure` utility anywhere in GitLab, as it's not tied to any
+particular environment.
+
+`performanceMarkAndMeasure` takes an object as an argument, where:
+
+| Attribute | Type | Required | Description |
+|:------------|:---------|:---------|:----------------------|
+| `mark` | `String` | no | The name for the mark to set. Used for retrieving the mark later. If not specified, the mark is not set. |
+| `measures` | `Array` | no | The list of the measurements to take at this point. |
+
+Each entry in the `measures` array is an object with the following attributes:
+
+| Attribute | Type | Required | Description |
+|:------------|:---------|:---------|:----------------------|
+| `name` | `String` | yes | The name for the measurement. Used for retrieving the measurement later. Must be specified for every measure object, otherwise JavaScript fails. |
+| `start` | `String` | no | The name of a mark **from** which the measurement should be taken. |
+| `end` | `String` | no | The name of a mark **to** which the measurement should be taken. |
+
+Example:
+
+```javascript
+import { performanceMarkAndMeasure } from '~/performance/utils';
+...
+performanceMarkAndMeasure({
+ mark: MR_DIFFS_MARK_DIFF_FILES_END,
+ measures: [
+ {
+ name: MR_DIFFS_MEASURE_DIFF_FILES_DONE,
+ start: MR_DIFFS_MARK_DIFF_FILES_START,
+ end: MR_DIFFS_MARK_DIFF_FILES_END,
+ },
+ ],
+});
+```
+
+### Vue performance plugin
+
+The plugin captures and measures the performance of the specified Vue components automatically
+leveraging the Vue lifecycle and the User Timing API.
+
+To use the Vue performance plugin:
+
+1. Import the plugin:
+
+ ```javascript
+ import PerformancePlugin from '~/performance/vue_performance_plugin';
+ ```
+
+1. Use it before initializing your Vue application:
+
+ ```javascript
+ Vue.use(PerformancePlugin, {
+ components: [
+ 'IdeTreeList',
+ 'FileTree',
+ 'RepoEditor',
+ ]
+ });
+ ```
+
+The plugin accepts a list of the components whose performance should be measured. The components
+are specified by their `name` option.
+
+You might need to explicitly set this option on the components you want to measure, as
+most components in the codebase don't have this option set:
+
+```javascript
+export default {
+  name: 'IdeTreeList',
+  components: {
+    // ...
+  },
+  // ...
+};
+```
+
+The plugin captures and stores the following:
+
+- The start **mark** for when the component has been initialized (in `beforeCreate()` hook)
+- The end **mark** of the component when it has been rendered (next animation frame after `nextTick`
+ in `mounted()` hook). In most cases, this event does not wait for all sub-components to be
+  bootstrapped. To measure the sub-components, you should include them in the
+ plugin options.
+- **Measure** duration between the two marks above.
+
+### Access stored measurements
+
+To access stored measurements, you can use either:
+
+- **Performance bar**. If you have it enabled (`P` + `B` key-combo), you can see the metrics
+ output in your DevTools console.
+- **"Performance" tab** of the DevTools. You can get the measurements (not the marks, though) in
+ this tab when profiling performance.
+- **DevTools console**. As mentioned above, you can query for the entries:
+
+ ```javascript
+ performance.getEntriesByType('mark');
+ performance.getEntriesByType('measure');
+ ```
+
+### Naming convention
+
+All the marks and measures should be instantiated with the constants from
+`app/assets/javascripts/performance/constants.js`. When you're ready to add a new mark or
+measurement label, you can follow this pattern.
+
+NOTE:
+This pattern is a recommendation and not a hard rule.
+
+```javascript
+app-*-start // for a start 'mark'
+app-*-end // for an end 'mark'
+app-* // for 'measure'
+```
+
+For example, `webide-init-editor-start`, `mr-diffs-mark-file-tree-end`, and so on. We do this to
+help identify marks and measures coming from different apps on the same page.
+
## Best Practices
### Realtime Components
@@ -18,29 +212,29 @@ When writing code for realtime features we have to keep a couple of things in mi
Thus, we must strike a balance between sending requests and the feeling of realtime.
Use the following rules when creating realtime solutions.
-1. The server will tell you how much to poll by sending `Poll-Interval` in the header.
- Use that as your polling interval. This way it is [easy for system administrators to change the
- polling rate](../../administration/polling.md).
+1. The server tells you how often to poll by sending `Poll-Interval` in the header.
+   Use that as your polling interval, as shown in the sketch after this list. This enables
+   system administrators to change the [polling rate](../../administration/polling.md).
A `Poll-Interval: -1` means you should disable polling, and this must be implemented.
1. A response with HTTP status different from 2XX should disable polling as well.
1. Use a common library for polling.
1. Poll on active tabs only. Please use [Visibility](https://github.com/ai/visibilityjs).
-1. Use regular polling intervals, do not use backoff polling, or jitter, as the interval will be
+1. Use regular polling intervals, do not use backoff polling, or jitter, as the interval is
controlled by the server.
-1. The backend code will most likely be using etags. You do not and should not check for status
- `304 Not Modified`. The browser will transform it for you.
+1. The backend code is likely to be using etags. You do not and should not check for status
+ `304 Not Modified`. The browser transforms it for you.
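+
+A minimal sketch of these rules is shown below. The endpoint and the `renderNotes` callback are hypothetical, and in real code you should prefer the shared polling utility over hand-rolling this:
+
+```javascript
+import axios from 'axios';
+
+// Illustrative only: poll an assumed endpoint, honoring `Poll-Interval`.
+function poll(endpoint, renderNotes) {
+  axios.get(endpoint).then(({ data, headers }) => {
+    renderNotes(data);
+
+    const interval = parseInt(headers['poll-interval'], 10);
+
+    // `Poll-Interval: -1` (or a missing header) disables polling.
+    // Non-2XX responses reject the promise, so polling stops there as well.
+    if (Number.isFinite(interval) && interval > 0) {
+      setTimeout(() => poll(endpoint, renderNotes), interval);
+    }
+  });
+}
+```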
### Lazy Loading Images
To improve the time to first render we are using lazy loading for images. This works by setting
the actual image source on the `data-src` attribute. After the HTML is rendered and JavaScript is loaded,
-the value of `data-src` will be moved to `src` automatically if the image is in the current viewport.
+the value of `data-src` is moved to `src` automatically if the image is in the current viewport.
- Prepare images in HTML for lazy loading by renaming the `src` attribute to `data-src` AND adding the class `lazy`.
-- If you are using the Rails `image_tag` helper, all images will be lazy-loaded by default unless `lazy: false` is provided.
+- If you are using the Rails `image_tag` helper, all images are lazy-loaded by default unless `lazy: false` is provided.
If you are asynchronously adding content which contains lazy images then you need to call the function
-`gl.lazyLoader.searchLazyImages()` which will search for lazy images and load them if needed.
+`gl.lazyLoader.searchLazyImages()`, which searches for lazy images and loads them if needed.
But in general it should be handled automatically through a `MutationObserver` in the lazy loading function.
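+
+If you do need to trigger it manually, a minimal sketch (the selector and markup are illustrative assumptions):
+
+```javascript
+// After injecting markup that contains `.lazy` images with a `data-src`
+// attribute, ask the lazy loader to pick them up.
+const container = document.querySelector('.js-async-content');
+container.innerHTML = '<img class="lazy" data-src="/uploads/example.png" alt="Example">';
+
+if (window.gl && window.gl.lazyLoader) {
+  window.gl.lazyLoader.searchLazyImages();
+}
+```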
### Animations
@@ -49,14 +243,14 @@ Only animate `opacity` & `transform` properties. Other properties (such as `top`
Layout to be recalculated, which is much more expensive. For details on this, see "Styles that Affect Layout" in
[High Performance Animations](https://www.html5rocks.com/en/tutorials/speed/high-performance-animations/).
-If you _do_ need to change layout (e.g. a sidebar that pushes main content over), prefer [FLIP](https://aerotwist.com/blog/flip-your-animations/) to change expensive
+If you _do_ need to change layout (for example, a sidebar that pushes main content over), prefer [FLIP](https://aerotwist.com/blog/flip-your-animations/) to change expensive
properties once, and handle the actual animation with transforms.
## Reducing Asset Footprint
### Universal code
-Code that is contained within `main.js` and `commons/index.js` are loaded and
+Code that is contained in `main.js` and `commons/index.js` is loaded and
run on _all_ pages. **DO NOT ADD** anything to these files unless it is truly
needed _everywhere_. These bundles include ubiquitous libraries like `vue`,
`axios`, and `jQuery`, as well as code for the main navigation and sidebar.
@@ -66,26 +260,26 @@ code footprint.
### Page-specific JavaScript
Webpack has been configured to automatically generate entry point bundles based
-on the file structure within `app/assets/javascripts/pages/*`. The directories
-within the `pages` directory correspond to Rails controllers and actions. These
-auto-generated bundles will be automatically included on the corresponding
+on the file structure in `app/assets/javascripts/pages/*`. The directories
+in the `pages` directory correspond to Rails controllers and actions. These
+auto-generated bundles are automatically included on the corresponding
pages.
For example, if you were to visit <https://gitlab.com/gitlab-org/gitlab/-/issues>,
you would be accessing the `app/controllers/projects/issues_controller.rb`
controller with the `index` action. If a corresponding file exists at
-`pages/projects/issues/index/index.js`, it will be compiled into a webpack
+`pages/projects/issues/index/index.js`, it is compiled into a webpack
bundle and included on the page.
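+
+Such an entry point usually only imports and initializes the page code. For illustration (the module name is a hypothetical example):
+
+```javascript
+// Hypothetical app/assets/javascripts/pages/projects/issues/index/index.js
+import initIssuesPage from './issues_page';
+
+initIssuesPage();
+```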
Previously, GitLab encouraged the use of
-`content_for :page_specific_javascripts` within HAML files, along with
+`content_for :page_specific_javascripts` in HAML files, along with
manually generated webpack bundles. However under this new system you should
not ever need to manually add an entry point to the `webpack.config.js` file.
-TIP: **Tip:**
+NOTE:
If you are unsure what controller and action corresponds to a given page, you
-can find this out by inspecting `document.body.dataset.page` within your
-browser's developer console while on any page within GitLab.
+can find this out by inspecting `document.body.dataset.page` in your
+browser's developer console while on any page in GitLab.
#### Important Considerations
@@ -97,7 +291,7 @@ browser's developer console while on any page within GitLab.
instantiate, and nothing else.
- **`DOMContentLoaded` should not be used:**
- All of GitLab's JavaScript files are added with the `defer` attribute.
+ All GitLab JavaScript files are added with the `defer` attribute.
According to the [Mozilla documentation](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script#attr-defer),
this implies that "the script is meant to be executed after the document has
been parsed, but before firing `DOMContentLoaded`". Since the document is already
@@ -119,21 +313,21 @@ browser's developer console while on any page within GitLab.
```
Note that `waitForCSSLoaded()` methods supports receiving the action in different ways:
-
+
- With a callback:
-
+
```javascript
waitForCSSLoaded(action)
```
-
+
- With `then()`:
-
+
```javascript
waitForCSSLoaded().then(action);
```
-
+
- With `await` followed by `action`:
-
+
```javascript
await waitForCSSLoaded;
action();
@@ -150,21 +344,21 @@ browser's developer console while on any page within GitLab.
- **Supporting Module Placement:**
- If a class or a module is _specific to a particular route_, try to locate
- it close to the entry point it will be used. For instance, if
- `my_widget.js` is only imported within `pages/widget/show/index.js`, you
+ it close to the entry point in which it is used. For instance, if
+ `my_widget.js` is only imported in `pages/widget/show/index.js`, you
should place the module at `pages/widget/show/my_widget.js` and import it
- with a relative path (e.g. `import initMyWidget from './my_widget';`).
- - If a class or module is _used by multiple routes_, place it within a
+ with a relative path (for example, `import initMyWidget from './my_widget';`).
+ - If a class or module is _used by multiple routes_, place it in a
shared directory at the closest common parent directory for the entry
- points that import it. For example, if `my_widget.js` is imported within
+ points that import it. For example, if `my_widget.js` is imported in
both `pages/widget/show/index.js` and `pages/widget/run/index.js`, then
place the module at `pages/widget/shared/my_widget.js` and import it with
- a relative path if possible (e.g. `../shared/my_widget`).
+ a relative path if possible (for example, `../shared/my_widget`).
- **Enterprise Edition Caveats:**
- For GitLab Enterprise Edition, page-specific entry points will override their
+ For GitLab Enterprise Edition, page-specific entry points override their
Community Edition counterparts with the same name, so if
- `ee/app/assets/javascripts/pages/foo/bar/index.js` exists, it will take
+ `ee/app/assets/javascripts/pages/foo/bar/index.js` exists, it takes
precedence over `app/assets/javascripts/pages/foo/bar/index.js`. If you want
to minimize duplicate code, you can import one entry point from the other.
This is not done automatically to allow for flexibility in overriding
@@ -172,10 +366,10 @@ browser's developer console while on any page within GitLab.
### Code Splitting
-For any code that does not need to be run immediately upon page load, (e.g.
+For any code that does not need to be run immediately upon page load (for example,
modals, dropdowns, and other behaviors that can be lazy-loaded), you can split
your module into asynchronous chunks with dynamic import statements. These
-imports return a Promise which will be resolved once the script has loaded:
+imports return a Promise which is resolved after the script has loaded:
```javascript
import(/* webpackChunkName: 'emoji' */ '~/emoji')
@@ -184,7 +378,7 @@ import(/* webpackChunkName: 'emoji' */ '~/emoji')
```
Please try to use `webpackChunkName` when generating these dynamic imports as
-it will provide a deterministic filename for the chunk which can then be cached
+it provides a deterministic filename for the chunk, which can then be cached by
the browser across GitLab versions.
More information is available in [webpack's code splitting documentation](https://webpack.js.org/guides/code-splitting/#dynamic-imports).
@@ -198,7 +392,7 @@ data is used for users with capped data plans.
General tips:
- Don't add new fonts.
-- Prefer font formats with better compression, e.g. WOFF2 is better than WOFF, which is better than TTF.
+- Prefer font formats with better compression; for example, WOFF2 is better than WOFF, which is better than TTF.
- Compress and minify assets wherever possible (For CSS/JS, Sprockets and webpack do this for us).
- If some functionality can reasonably be achieved without adding extra libraries, avoid them.
- Use page-specific JavaScript as described above to load libraries that are only needed on certain pages.
diff --git a/doc/development/fe_guide/principles.md b/doc/development/fe_guide/principles.md
index 75954a5f988..4952d023d90 100644
--- a/doc/development/fe_guide/principles.md
+++ b/doc/development/fe_guide/principles.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Principles
-These principles will ensure that your frontend contribution starts off in the right direction.
+These principles ensure that your frontend contribution starts off in the right direction.
## Discuss architecture before implementation
@@ -14,7 +14,7 @@ Discuss your architecture design in an issue before writing code. This helps dec
## Be consistent
-There are multiple ways of writing code to accomplish the same results. We should be as consistent as possible in how we write code across our codebases. This will make it easier for us to maintain our code across GitLab.
+There are multiple ways of writing code to accomplish the same results. We should be as consistent as possible in how we write code across our codebases. This makes it easier for us to maintain our code across GitLab.
## Improve code [iteratively](https://about.gitlab.com/handbook/values/#iteration)
diff --git a/doc/development/fe_guide/security.md b/doc/development/fe_guide/security.md
index a82c315032f..627c5f4d12f 100644
--- a/doc/development/fe_guide/security.md
+++ b/doc/development/fe_guide/security.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Security
@@ -31,7 +31,7 @@ GitLab's CSP is used for the following:
Some exceptions include:
-- Scripts from Google Analytics and Piwik if either is enabled.
+- Scripts from Google Analytics and Matomo if either is enabled.
- Connecting with GitHub, Bitbucket, GitLab.com, etc. to allow project importing.
- Connecting with Google, Twitter, GitHub, etc. to allow OAuth authentication.
@@ -66,14 +66,14 @@ Some resources on implementing Subresource Integrity:
## Including external resources
External fonts, CSS, and JavaScript should never be used with the exception of
-Google Analytics and Piwik - and only when the instance has enabled it. Assets
+Google Analytics and Matomo - and only when the instance has enabled it. Assets
should always be hosted and served locally from the GitLab instance. Embedded
resources via `iframes` should never be used except in certain circumstances
such as with reCAPTCHA, which cannot be used without an `iframe`.
## Avoiding inline scripts and styles
-In order to protect users from [XSS vulnerabilities](https://en.wikipedia.org/wiki/Cross-site_scripting), we will disable
+In order to protect users from [XSS vulnerabilities](https://en.wikipedia.org/wiki/Cross-site_scripting), we intend to disable
inline scripts in the future using Content Security Policy.
While inline scripts can be useful, they're also a security concern. If
diff --git a/doc/development/fe_guide/style/html.md b/doc/development/fe_guide/style/html.md
index 61a724cf3c7..7fedbc6ce0d 100644
--- a/doc/development/fe_guide/style/html.md
+++ b/doc/development/fe_guide/style/html.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# HTML style guide
diff --git a/doc/development/fe_guide/style/index.md b/doc/development/fe_guide/style/index.md
index 0d73b16b5d3..89a3d874184 100644
--- a/doc/development/fe_guide/style/index.md
+++ b/doc/development/fe_guide/style/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab development style guides
diff --git a/doc/development/fe_guide/style/javascript.md b/doc/development/fe_guide/style/javascript.md
index b8e7429eb2c..8e3538e891d 100644
--- a/doc/development/fe_guide/style/javascript.md
+++ b/doc/development/fe_guide/style/javascript.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
disqus_identifier: 'https://docs.gitlab.com/ee/development/fe_guide/style_guide_js.html'
---
@@ -13,13 +13,13 @@ linter to manage most of our JavaScript style guidelines.
In addition to the style guidelines set by Airbnb, we also have a few specific rules
listed below.
-TIP: **Tip:**
+NOTE:
You can run ESLint locally by running `yarn eslint`.
## Avoid forEach
Avoid forEach when mutating data. Use `map`, `reduce` or `filter` instead of `forEach`
-when mutating data. This will minimize mutations in functions,
+when mutating data. This minimizes mutations in functions,
which aligns with [Airbnb's style guide](https://github.com/airbnb/javascript#testing--for-real).
```javascript
@@ -237,7 +237,7 @@ document.addEventListener("DOMContentLoaded", function(event) {
### Avoid side effects in constructors
Avoid making asynchronous calls, API requests or DOM manipulations in the `constructor`.
-Move them into separate functions instead. This will make tests easier to write and
+Move them into separate functions instead. This makes tests easier to write and
avoids violating the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single_responsibility_principle).
```javascript
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index c6ee1e64a80..a4cae12c4f3 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
disqus_identifier: 'https://docs.gitlab.com/ee/development/fe_guide/style_guide_scss.html'
---
@@ -12,7 +12,7 @@ easy to maintain, and performant for the end-user.
## Rules
-Our CSS is a mixture of current and legacy approaches. That means sometimes it may be difficult to follow this guide to the letter; it means you will definitely run into exceptions, where following the guide is difficult to impossible without outsized effort. In those cases, you may work with your reviewers and maintainers to identify an approach that does not fit these rules. Please endeavor to limit these cases.
+Our CSS is a mixture of current and legacy approaches. That means sometimes it may be difficult to follow this guide to the letter; it means you are likely to run into exceptions, where following the guide is difficult to impossible without outsized effort. In those cases, you may work with your reviewers and maintainers to identify an approach that does not fit these rules. Please endeavor to limit these cases.
### Utility Classes
@@ -26,7 +26,7 @@ Classes in [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/blob/master/a
Avoid [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/).
-NOTE: **Note:**
+NOTE:
While migrating [Bootstrap's Utility Classes](https://getbootstrap.com/docs/4.3/utilities/)
to the [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/master/doc/css.md#utilities)
utility classes, note both the classes for margin and padding differ. The size scale used at
@@ -79,10 +79,8 @@ CSS classes should use the `lowercase-hyphenated` format rather than
```
Class names should be used instead of tag name selectors.
-Using tag name selectors are discouraged in CSS because
-they can affect unintended elements in the hierarchy.
-Also, since they are not meaningful names, they do not
-add meaning to the code.
+Using tag name selectors is discouraged because they can affect
+unintended elements in the hierarchy.
```scss
// Bad
@@ -94,136 +92,23 @@ ul {
.class-name {
color: #fff;
}
-```
-
-### Formatting
-
-You should always use a space before a brace, braces should be on the same
-line, each property should each get its own line, and there should be a space
-between the property and its value.
-
-```scss
-// Bad
-.container-item {
- width: 100px; height: 100px;
- margin-top: 0;
-}
-
-// Bad
-.container-item
-{
- width: 100px;
- height: 100px;
- margin-top: 0;
-}
-
-// Bad
-.container-item{
- width:100px;
- height:100px;
- margin-top:0;
-}
-
-// Good
-.container-item {
- width: 100px;
- height: 100px;
- margin-top: 0;
-}
-```
-
-Note that there is an exception for single-line rulesets, although these are
-not typically recommended.
-
-```scss
-p { margin: 0; padding: 0; }
-```
-
-### Colors
-
-HEX (hexadecimal) colors should use shorthand where possible, and should use
-lower case letters to differentiate between letters and numbers, e.g. `#E3E3E3`
-vs. `#e3e3e3`.
-
-```scss
-// Bad
-p {
- color: #ffffff;
-}
-
-// Bad
-p {
- color: #FFFFFF;
-}
-
-// Good
-p {
- color: #fff;
-}
-```
-
-### Indentation
-
-Indentation should always use two spaces for each indentation level.
-
-```scss
-// Bad, four spaces
-p {
- color: #f00;
-}
-
-// Good
-p {
- color: #f00;
-}
-```
-
-### Semicolons
-
-Always include semicolons after every property. When the stylesheets are
-minified, the semicolons will be removed automatically.
-
-```scss
-// Bad
-.container-item {
- width: 100px;
- height: 100px
-}
-
-// Good
-.container-item {
- width: 100px;
- height: 100px;
-}
-```
-### Shorthand
+// Best
+// prefer an existing utility class over adding existing styles
+```0
-The shorthand form should be used for properties that support it.
+Class names are also preferable to IDs. Rules that use IDs
+are not reusable, as there can only be one affected element on
+the page.
```scss
// Bad
-margin: 10px 15px 10px 15px;
-padding: 10px 10px 10px 10px;
-
-// Good
-margin: 10px 15px;
-padding: 10px;
-```
-
-### Zero Units
-
-Omit length units on zero values, they're unnecessary and not including them
-is slightly more performant.
-
-```scss
-// Bad
-.item-with-padding {
- padding: 0px;
+#my-element {
+ padding: 0;
}
// Good
-.item-with-padding {
+.my-element {
padding: 0;
}
```
@@ -234,27 +119,11 @@ Do not use any selector prefixed with `js-` for styling purposes. These
selectors are intended for use only with JavaScript to allow for removal or
renaming without breaking styling.
-### IDs
-
-Don't use ID selectors in CSS.
-
-```scss
-// Bad
-#my-element {
- padding: 0;
-}
-
-// Good
-.my-element {
- padding: 0;
-}
-```
-
### Variables
Before adding a new variable for a color or a size, guarantee:
-- There isn't already one
+- There isn't an existing one.
- There isn't a similar one we can use instead.
## Linting
@@ -263,8 +132,8 @@ We use [SCSS Lint](https://github.com/sds/scss-lint) to check for style guide co
ruleset in `.scss-lint.yml`, which is located in the home directory of the
project.
-To check if any warnings will be produced by your changes, you can run `rake
-scss_lint` in the GitLab directory. SCSS Lint will also run in GitLab CI/CD to
+To check if any warnings are produced by your changes, run `rake
+scss_lint` in the GitLab directory. SCSS Lint also runs in GitLab CI/CD to
catch any warnings.
If the Rake task is throwing warnings you don't understand, SCSS Lint's
@@ -278,23 +147,4 @@ the SCSS style guide, you can use [CSSComb](https://github.com/csscomb/csscomb.j
CSSComb globally (system-wide). Run it in the GitLab directory with
`csscomb app/assets/stylesheets` to automatically fix issues with CSS/SCSS.
-Note that this won't fix every problem, but it should fix a majority.
-
-### Ignoring issues
-
-If you want a line or set of lines to be ignored by the linter, you can use
-`// scss-lint:disable RuleName` ([more information](https://github.com/sds/scss-lint#disabling-linters-via-source)):
-
-```scss
-// This lint rule is disabled because it is supported only in Chrome/Safari
-// scss-lint:disable PropertySpelling
-body {
- text-decoration-skip: ink;
-}
-// scss-lint:enable PropertySpelling
-```
-
-Make sure a comment is added on the line above the `disable` rule, otherwise the
-linter will throw a warning. `DisableLinterReason` is enabled to make sure the
-style guide isn't being ignored, and to communicate to others why the style
-guide is ignored in this instance.
+Note that this doesn't fix every problem, but it should fix a majority.
diff --git a/doc/development/fe_guide/style/vue.md b/doc/development/fe_guide/style/vue.md
index 6b6ca9a6c71..b85c1b1de35 100644
--- a/doc/development/fe_guide/style/vue.md
+++ b/doc/development/fe_guide/style/vue.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Vue.js style guide
@@ -98,7 +98,7 @@ Please check this [rules](https://github.com/vuejs/eslint-plugin-vue#bulb-rules)
We discourage the use of the spread operator in this specific case in
order to keep our codebase explicit, discoverable, and searchable.
- This applies in any place where we'll benefit from the above, such as
+ This applies in any place where we would benefit from the above, such as
when [initializing Vuex state](../vuex.md#why-not-just-spread-the-initial-state).
The pattern above also enables us to easily parse non-scalar values during
instantiation.
@@ -667,7 +667,7 @@ The goal of this accord is to make sure we are all on the same page.
1. If you need to grab data from the DOM, you may query the DOM 1 time while bootstrapping your application to grab data attributes using `dataset`. You can do this without jQuery.
1. You may use a jQuery dependency in Vue.js following [this example from the docs](https://vuejs.org/v2/examples/select2.html).
1. If an outside jQuery event needs to be listened to inside the Vue application, you may use jQuery event listeners.
- 1. We will avoid adding new jQuery events when they are not required. Instead of adding new jQuery events take a look at [different methods to do the same task](https://vuejs.org/v2/api/#vm-emit).
+ 1. We avoid adding new jQuery events when they are not required. Instead of adding new jQuery events take a look at [different methods to do the same task](https://vuejs.org/v2/api/#vm-emit).
1. You may query the `window` object one time, while bootstrapping your application for application specific data (e.g. `scrollTo` is ok to access anytime). Do this access during the bootstrapping of your application.
1. You may have a temporary but immediate need to create technical debt by writing code that does not follow our standards, to be refactored later. Maintainers need to be ok with the tech debt in the first place. An issue should be created for that tech debt to evaluate it further and discuss. In the coming months you should fix that tech debt, with its priority to be determined by maintainers.
1. When creating tech debt you must write the tests for that code beforehand, and those tests may not be rewritten (for example, jQuery tests rewritten to Vue tests).
diff --git a/doc/development/fe_guide/style_guide_js.md b/doc/development/fe_guide/style_guide_js.md
index f3fa80325ef..73a5bea189d 100644
--- a/doc/development/fe_guide/style_guide_js.md
+++ b/doc/development/fe_guide/style_guide_js.md
@@ -3,3 +3,6 @@ redirect_to: 'style/javascript.md'
---
This document was moved to [another location](style/javascript.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/fe_guide/style_guide_scss.md b/doc/development/fe_guide/style_guide_scss.md
index 2b4e6427a18..eaa6d33fb43 100644
--- a/doc/development/fe_guide/style_guide_scss.md
+++ b/doc/development/fe_guide/style_guide_scss.md
@@ -3,3 +3,6 @@ redirect_to: 'style/scss.md'
---
This document was moved to [another location](style/scss.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/fe_guide/testing.md b/doc/development/fe_guide/testing.md
index b23e37d1eef..457d15235ae 100644
--- a/doc/development/fe_guide/testing.md
+++ b/doc/development/fe_guide/testing.md
@@ -3,3 +3,6 @@ redirect_to: '../testing_guide/frontend_testing.md'
---
This document was moved to [another location](../testing_guide/frontend_testing.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/fe_guide/tooling.md b/doc/development/fe_guide/tooling.md
index 809e05ea61f..d33022b9355 100644
--- a/doc/development/fe_guide/tooling.md
+++ b/doc/development/fe_guide/tooling.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Tooling
@@ -14,21 +14,21 @@ We use ESLint to encapsulate and enforce frontend code standards. Our configurat
This section describes yarn scripts that are available to validate and apply automatic fixes to files using ESLint.
-To check all currently staged files (based on `git diff`) with ESLint, run the following script:
+To check all staged files (based on `git diff`) with ESLint, run the following script:
```shell
yarn eslint-staged
```
-A list of problems found will be logged to the console.
+A list of problems found is logged to the console.
-To apply automatic ESLint fixes to all currently staged files (based on `git diff`), run the following script:
+To apply automatic ESLint fixes to all staged files (based on `git diff`), run the following script:
```shell
yarn eslint-staged-fix
```
-If manual changes are required, a list of changes will be sent to the console.
+If manual changes are required, a list of changes is sent to the console.
To check **all** files in the repository with ESLint, run the following script:
@@ -36,7 +36,7 @@ To check **all** files in the repository with ESLint, run the following script:
yarn eslint
```
-A list of problems found will be logged to the console.
+A list of problems found is logged to the console.
To apply automatic ESLint fixes to **all** files in the repository, run the following script:
@@ -44,9 +44,9 @@ To apply automatic ESLint fixes to **all** files in the repository, run the foll
yarn eslint-fix
```
-If manual changes are required, a list of changes will be sent to the console.
+If manual changes are required, a list of changes is sent to the console.
-CAUTION: **Caution:**
+WARNING:
Limit use to global rule updates. Otherwise, the changes can lead to huge Merge Requests.
### Disabling ESLint in new files
@@ -156,13 +156,13 @@ The source of these Yarn scripts can be found in `/scripts/frontend/prettier.js`
node ./scripts/frontend/prettier.js check-all ./vendor/
```
-This will go over all files in a specific folder check it.
+This iterates over all files in a specific folder, and checks them.
```shell
node ./scripts/frontend/prettier.js save-all ./vendor/
```
-This will go over all files in a specific folder and save it.
+This iterates over all files in a specific folder and saves them.
### VSCode Settings
diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md
index 77bdadfe8da..41fbd128631 100644
--- a/doc/development/fe_guide/vue.md
+++ b/doc/development/fe_guide/vue.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Vue
@@ -62,11 +62,11 @@ Be sure to read about [page-specific JavaScript](performance.md#page-specific-ja
While mounting a Vue application, you might need to provide data from Rails to JavaScript.
To do that, you can use the `data` attributes in the HTML element and query them while mounting the application.
-You should only do this while initializing the application, because the mounted element will be replaced with Vue-generated DOM.
+You should only do this while initializing the application, because the mounted element is replaced with a Vue-generated DOM.
The advantage of providing data from the DOM to the Vue instance through `props` in the `render` function
instead of querying the DOM inside the main Vue component is avoiding the need to create a fixture or an HTML element in the unit test,
-which will make the tests easier.
+which makes the tests easier.
See the following example, also, please refer to our [Vue style guide](style/vue.md#basic-rules) for additional
information on why we explicitly declare the data being passed into the Vue app;
@@ -91,15 +91,15 @@ return new Vue({
},
});
},
-}
+});
```
> When adding an `id` attribute to mount a Vue application, please make sure this `id` is unique across the codebase
#### Accessing the `gl` object
-When we need to query the `gl` object for data that won't change during the application's life cycle, we should do it in the same place where we query the DOM.
-By following this practice, we can avoid the need to mock the `gl` object, which will make tests easier.
+When we need to query the `gl` object for data that doesn't change during the application's life cycle, we should do it in the same place where we query the DOM.
+By following this practice, we can avoid the need to mock the `gl` object, which makes tests easier.
It should be done while initializing our Vue instance, and the data should be provided as `props` to the main component:
```javascript
@@ -162,12 +162,12 @@ This approach has a few benefits:
### A folder for Components
-This folder holds all components that are specific of this new feature.
-If you need to use or create a component that will probably be used somewhere
+This folder holds all components that are specific to this new feature.
+If you need to use or create a component that is likely to be used somewhere
else, please refer to `vue_shared/components`.
A good rule of thumb to know when you should create a component is to think if
-it will be reusable elsewhere.
+it could be reusable elsewhere.
For example, tables are used in quite a few places across GitLab, so a table
would be a good fit for a component. On the other hand, a table cell used only
@@ -192,7 +192,7 @@ Check this [page](vuex.md) for more details.
In the [Vue documentation](https://vuejs.org/v2/api/#Options-Data) the Data function/object is defined as follows:
-> The data object for the Vue instance. Vue will recursively convert its properties into getter/setters to make it “reactive”. The object must be plain: native objects such as browser API objects and prototype properties are ignored. A rule of thumb is that data should just be data - it is not recommended to observe objects with their own stateful behavior.
+> The data object for the Vue instance. Vue recursively converts its properties into getter/setters to make it “reactive”. The object must be plain: native objects such as browser API objects and prototype properties are ignored. A rule of thumb is that data should just be data - it is not recommended to observe objects with their own stateful behavior.
Based on the Vue guidance:
diff --git a/doc/development/fe_guide/vue3_migration.md b/doc/development/fe_guide/vue3_migration.md
index 46437f39dbe..9d2f3b27968 100644
--- a/doc/development/fe_guide/vue3_migration.md
+++ b/doc/development/fe_guide/vue3_migration.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Migration to Vue 3
@@ -82,7 +82,7 @@ const FunctionalComp = (props, slots) => {
}
```
-It is not recommended to replace stateful components with functional components unless you absolutely need a performance improvement right now. In Vue 3, performance gains for functional components will be negligible.
+It is not recommended to replace stateful components with functional components unless you absolutely need a performance improvement right now. In Vue 3, performance gains for functional components are negligible.
## Old slots syntax with `slot` attribute
diff --git a/doc/development/fe_guide/vuex.md b/doc/development/fe_guide/vuex.md
index 7765ba04d40..1d83335291a 100644
--- a/doc/development/fe_guide/vuex.md
+++ b/doc/development/fe_guide/vuex.md
@@ -1,19 +1,19 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Vuex
-When there's a clear benefit to separating state management from components (e.g. due to state complexity) we recommend using [Vuex](https://vuex.vuejs.org) over any other Flux pattern. Otherwise, feel free to manage state within the components.
+When there's a clear benefit to separating state management from components (for example, due to state complexity), we recommend using [Vuex](https://vuex.vuejs.org) over any other Flux pattern. Otherwise, feel free to manage state in the components.
Vuex should be strongly considered when:
-- You expect multiple parts of the application to react to state changes
-- There's a need to share data between multiple components
-- There are complex interactions with Backend, e.g. multiple API calls
-- The app involves interacting with backend via both traditional REST API and GraphQL (especially when moving the REST API over to GraphQL is a pending backend task)
+- You expect multiple parts of the application to react to state changes.
+- There's a need to share data between multiple components.
+- There are complex interactions with Backend, for example, multiple API calls.
+- The app involves interacting with backend via both traditional REST API and GraphQL (especially when moving the REST API over to GraphQL is a pending backend task).
The information included in this page is explained in more detail in the
official [Vuex documentation](https://vuex.vuejs.org).
@@ -22,7 +22,7 @@ official [Vuex documentation](https://vuex.vuejs.org).
Vuex is composed of State, Getters, Mutations, Actions, and Modules.
-When a user clicks on an action, we need to `dispatch` it. This action will `commit` a mutation that will change the state. The action itself will not update the state; only a mutation should update the state.
+When a user clicks on an action, we need to `dispatch` it. This action `commits` a mutation that changes the state. The action itself does not update the state; only a mutation should update the state.
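As a minimal sketch of that flow (assuming Vue 2 and Vuex 3; the store, the `fetchUsers` action, and the `SET_USERS` mutation are hypothetical names used only for illustration):

```javascript
import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

export default new Vuex.Store({
  state: {
    users: [],
  },
  actions: {
    // A component calls `this.$store.dispatch('fetchUsers', users)`.
    fetchUsers({ commit }, users) {
      // The action only commits; it never touches the state directly.
      commit('SET_USERS', users);
    },
  },
  mutations: {
    // Only the mutation updates the state.
    SET_USERS(state, users) {
      state.users = users;
    },
  },
});
```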
## File structure
@@ -92,7 +92,7 @@ An action is a payload of information to send data from our application to our s
An action is usually composed of a `type` and a `payload`, and they describe what happened. Unlike [mutations](#mutationsjs), actions can contain asynchronous operations - that's why we always need to handle asynchronous logic in actions.
-In this file, we will write the actions that will call mutations for handling a list of users:
+In this file, we write the actions that call mutations for handling a list of users:
```javascript
import * as types from './mutation_types';
@@ -163,15 +163,15 @@ Instead of creating an mutation to toggle the loading state, we should:
- `PUT`: `updateSomething`
- `DELETE`: `deleteSomething`
-As a result, we can dispatch the `fetchNamespace` action from the component and it will be responsible to commit `REQUEST_NAMESPACE`, `RECEIVE_NAMESPACE_SUCCESS` and `RECEIVE_NAMESPACE_ERROR` mutations.
+As a result, we can dispatch the `fetchNamespace` action from the component and it is responsible for committing `REQUEST_NAMESPACE`, `RECEIVE_NAMESPACE_SUCCESS` and `RECEIVE_NAMESPACE_ERROR` mutations.
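A hedged sketch of what such an action could look like, assuming the three mutation types are exported from `./mutation_types` and using plain `axios` with an illustrative endpoint:

```javascript
import axios from 'axios';
import * as types from './mutation_types';

export const fetchNamespace = ({ commit }, id) => {
  commit(types.REQUEST_NAMESPACE);

  // The endpoint is illustrative; the real URL would come from the backend.
  return axios
    .get(`/api/v4/namespaces/${id}`)
    .then(({ data }) => commit(types.RECEIVE_NAMESPACE_SUCCESS, data))
    .catch((error) => commit(types.RECEIVE_NAMESPACE_ERROR, error));
};
```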
-> Previously, we were dispatching actions from the `fetchNamespace` action instead of committing mutation, so please don't be confused if you find a different pattern in the older parts of the codebase. However, we encourage leveraging a new pattern whenever you write new Vuex stores
+> Previously, we were dispatching actions from the `fetchNamespace` action instead of committing a mutation, so please don't be confused if you find a different pattern in the older parts of the codebase. However, we encourage leveraging the new pattern whenever you write new Vuex stores.
By following this pattern we guarantee:
-1. All applications follow the same pattern, making it easier for anyone to maintain the code
-1. All data in the application follows the same lifecycle pattern
-1. Unit tests are easier
+1. All applications follow the same pattern, making it easier for anyone to maintain the code.
+1. All data in the application follows the same lifecycle pattern.
+1. Unit tests are easier.
#### Updating complex state
@@ -215,7 +215,7 @@ While this approach works it has several dependencies:
- Correct selection of `item` in the component/action.
- The `item` property is already declared in the `closed` state.
- A new `confidential` property would not be reactive.
-- Noting that `item` is referenced by `items`
+- Noting that `item` is referenced by `items`.
A mutation written like this is harder to maintain and more error prone. We should rather write a mutation like this:
@@ -245,7 +245,7 @@ A mutation written like this is easier to maintain. In addition, we avoid errors
### `getters.js`
Sometimes we may need to get derived state based on store state, like filtering for a specific prop.
-Using a getter will also cache the result based on dependencies due to [how computed props work](https://vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods)
+Using a getter also caches the result based on dependencies due to [how computed props work](https://vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods)
This can be done through the `getters`:
```javascript
@@ -272,7 +272,10 @@ import { mapGetters } from 'vuex';
### `mutation_types.js`
From [vuex mutations docs](https://vuex.vuejs.org/guide/mutations.html):
-> It is a commonly seen pattern to use constants for mutation types in various Flux implementations. This allows the code to take advantage of tooling like linters, and putting all constants in a single file allows your collaborators to get an at-a-glance view of what mutations are possible in the entire application.
+> It is a commonly seen pattern to use constants for mutation types in various Flux implementations.
+> This allows the code to take advantage of tooling like linters, and putting all constants in a
+> single file allows your collaborators to get an at-a-glance view of what mutations are possible
+> in the entire application.
```javascript
export const ADD_USER = 'ADD_USER';
@@ -346,7 +349,7 @@ export default ({
#### Why not just ...spread the initial state?
-The astute reader will see an opportunity to cut out a few lines of code from
+The astute reader sees an opportunity to cut out a few lines of code from
the example above:
```javascript
@@ -359,17 +362,17 @@ export default initialState => ({
});
```
-We've made the conscious decision to avoid this pattern to aid in the
-discoverability and searchability of our frontend codebase. The same applies
+We made the conscious decision to avoid this pattern to improve the ability to
+discover and search our frontend codebase. The same applies
when [providing data to a Vue app](vue.md#providing-data-from-haml-to-javascript). The reasoning for this is described in [this
discussion](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/56#note_302514865):
> Consider a `someStateKey` is being used in the store state. You _may_ not be
> able to grep for it directly if it was provided only by `el.dataset`. Instead,
-> you'd have to grep for `some_state_key`, since it could have come from a rails
+> you'd have to grep for `some_state_key`, because it could have come from a Rails
> template. The reverse is also true: if you're looking at a Rails template, you
> might wonder what uses `some_state_key`, but you'd _have_ to grep for
-> `someStateKey`
+> `someStateKey`.
### Communicating with the Store
@@ -426,12 +429,12 @@ export default {
#### Testing Vuex concerns
-Refer to [vuex docs](https://vuex.vuejs.org/guide/testing.html) regarding testing Actions, Getters and Mutations.
+Refer to [Vuex docs](https://vuex.vuejs.org/guide/testing.html) regarding testing Actions, Getters and Mutations.
#### Testing components that need a store
-Smaller components might use `store` properties to access the data.
-In order to write unit tests for those components, we need to include the store and provide the correct state:
+Smaller components might use `store` properties to access the data. To write unit tests for those
+components, we need to include the store and provide the correct state:
```javascript
//component_spec.js
@@ -482,8 +485,9 @@ describe('component', () => {
### Two way data binding
-When storing form data in Vuex, it is sometimes necessary to update the value stored. The store should never be mutated directly, and an action should be used instead.
-In order to still use `v-model` in our code, we need to create computed properties in this form:
+When storing form data in Vuex, it is sometimes necessary to update the value stored. The store
+should never be mutated directly, and an action should be used instead.
+To use `v-model` in our code, we need to create computed properties in this form:
```javascript
export default {
@@ -521,7 +525,7 @@ export default {
};
```
-Adding a few of these properties becomes cumbersome, and makes the code more repetitive with more tests to write. To simplify this there is a helper in `~/vuex_shared/bindings.js`
+Adding a few of these properties becomes cumbersome and makes the code more repetitive, with more tests to write. To simplify this, there is a helper in `~/vuex_shared/bindings.js`.
The helper can be used like so:
@@ -568,4 +572,4 @@ export default {
}
```
-`mapComputed` will then generate the appropriate computed properties that get the data from the store and dispatch the correct action when updated.
+`mapComputed` then generates the appropriate computed properties that get the data from the store and dispatch the correct action when updated.
diff --git a/doc/development/feature_categorization/index.md b/doc/development/feature_categorization/index.md
index 66ed2952250..dd69d7bcf80 100644
--- a/doc/development/feature_categorization/index.md
+++ b/doc/development/feature_categorization/index.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Infrastructure
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Feature Categorization
@@ -11,7 +11,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
Each Sidekiq worker, controller action, or API endpoint
must declare a `feature_category` attribute. This attribute maps each
of these to a [feature
-category](https://about.gitlab.com/handbook/product/product-categories/). This
+category](https://about.gitlab.com/handbook/product/categories/). This
is done for error budgeting, alert routing, and team attribution.
The list of feature categories can be found in the file `config/feature_categories.yml`.
@@ -121,6 +121,11 @@ the actions used in configuration still exist as routes.
## API endpoints
+The [GraphQL API](../../api/graphql/index.md) is currently categorized
+as `not_owned`. For now, no extra specification is needed. For more
+information, see
+[`gitlab-com/gl-infra/scalability#583`](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/583/).
+
Grape API endpoints can use the `feature_category` class method, like
[Rails controllers](#rails-controllers) do:
diff --git a/doc/development/feature_flags.md b/doc/development/feature_flags.md
index cff88388ba0..7456e8df8d9 100644
--- a/doc/development/feature_flags.md
+++ b/doc/development/feature_flags.md
@@ -3,3 +3,6 @@ redirect_to: 'feature_flags/index.md'
---
This document was moved to [another location](feature_flags/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index df737912c00..7551199aa58 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -34,7 +34,7 @@ _before_ the code is being deployed.
This allows you to separate rolling out a feature from a deploy, making it
easier to measure the impact of both separately.
-GitLab's feature library (using
+The GitLab feature library (using
[Flipper](https://github.com/jnunemaker/flipper), and covered in the [Feature
Flags process](process.md) guide) supports rolling out changes to a percentage of
time to users. This in turn can be controlled using [GitLab Chatops](../../ci/chatops/README.md).
@@ -50,7 +50,7 @@ change feature flags or you do not [have access](#access).
### Enabling a feature for preproduction testing
-As a first step in a feature rollout, you should enable the feature on <https://staging.gitlab.com>
+As a first step in a feature rollout, you should enable the feature on <https://about.staging.gitlab.com>
and <https://dev.gitlab.org>.
These two environments have different scopes.
@@ -228,21 +228,29 @@ existing gates (e.g. `--group=gitlab-org`) in the above processes.
### Feature flag change logging
-Any feature flag change that affects GitLab.com (production) will
-automatically be logged in an issue.
+#### Chatops level
+
+Any feature flag change that affects GitLab.com (production) via [Chatops](https://gitlab.com/gitlab-com/chatops)
+is automatically logged in an issue.
The issue is created in the
[gl-infra/feature-flag-log](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues?scope=all&utf8=%E2%9C%93&state=closed)
project, and it logs, at minimum, the Slack handle of the person enabling
a feature flag, the time, and the name of the flag being changed.
-The issue is then also posted to GitLab's internal
+The issue is then also posted to the GitLab internal
[Grafana dashboard](https://dashboards.gitlab.net/) as an annotation
marker to make the change even more visible.
Changes to the issue format can be submitted in the
[Chatops project](https://gitlab.com/gitlab-com/chatops).
+#### Instance level
+
+Any feature flag change that affects any GitLab instance is automatically logged in
+[features_json.log](../../administration/logs.md#features_jsonlog).
+You can search the change history in [Kibana](https://about.gitlab.com/handbook/support/workflows/kibana.html).
+
## Cleaning up
A feature flag should be removed as soon as it is no longer needed. Each additional
@@ -266,7 +274,7 @@ To remove a feature flag:
### Cleanup ChatOps
-When a feature gate has been removed from the code base, the feature
+When a feature gate has been removed from the codebase, the feature
record still exists in the database that the flag was deployed to.
The record can be deleted once the MR is deployed to each environment:
diff --git a/doc/development/feature_flags/development.md b/doc/development/feature_flags/development.md
index 2855662e1db..7c5333c9aa6 100644
--- a/doc/development/feature_flags/development.md
+++ b/doc/development/feature_flags/development.md
@@ -17,12 +17,18 @@ request removing the feature flag or the merge request where the default value o
the feature flag is set to enabled. If the feature contains any database migrations, it
*should* include a changelog entry for the database changes.
-CAUTION: **Caution:**
+WARNING:
All newly-introduced feature flags should be [disabled by default](process.md#feature-flags-in-gitlab-development).
-NOTE: **Note:**
+NOTE:
This document is the subject of continued work as part of an epic to [improve internal usage of Feature Flags](https://gitlab.com/groups/gitlab-org/-/epics/3551). Raise any suggestions as new issues and attach them to the epic.
+## Risk of a broken master (main) branch
+
+Feature flags **must** be used in the MR that introduces them. Not doing so causes a
+[broken master](https://about.gitlab.com/handbook/engineering/workflow/#broken-master) scenario due
+to the `rspec:feature-flags` job that only runs on the `master` branch.
+
## Types of feature flags
Choose a feature flag type that matches the expected usage.
@@ -40,11 +46,11 @@ This is the default type used when calling `Feature.enabled?`.
### `ops` type
`ops` feature flags are long-lived feature flags that control operational aspects
-of GitLab's behavior. For example, feature flags that disable features that might
+of GitLab product behavior. For example, feature flags that disable features that might
have a performance impact, like special Sidekiq worker behavior.
`ops` feature flags likely do not have rollout issues, as it is hard to
-predict when they will be enabled or disabled.
+predict when they are enabled or disabled.
To use `ops` feature flags, you must append `type: :ops` to `Feature.enabled?`
invocations:
@@ -88,9 +94,9 @@ Each feature flag is defined in a separate YAML file consisting of a number of f
| `default_enabled` | yes | The default state of the feature flag that is strictly validated, with `default_enabled:` passed as an argument. |
| `introduced_by_url` | no | The URL to the Merge Request that introduced the feature flag. |
| `rollout_issue_url` | no | The URL to the Issue covering the feature flag rollout. |
-| `group` | no | The [group](https://about.gitlab.com/handbook/product/product-categories/#devops-stages) that owns the feature flag. |
+| `group` | no | The [group](https://about.gitlab.com/handbook/product/categories/#devops-stages) that owns the feature flag. |
-TIP: **Tip:**
+NOTE:
All validations are skipped when running in `RAILS_ENV=production`.
## Create a new feature flag
@@ -125,7 +131,7 @@ type: development
default_enabled: false
```
-TIP: **Tip:**
+NOTE:
To create a feature flag that is only used in EE, add the `--ee` flag: `bin/feature-flag --ee`
## Delete a feature flag
@@ -172,6 +178,29 @@ if Feature.disabled?(:my_feature_flag, project, default_enabled: true)
end
```
+If not specified, `default_enabled` is `false`.
+
+To force reading the `default_enabled` value from the relative YAML definition file, use
+`default_enabled: :yaml`:
+
+```ruby
+if Feature.enabled?(:feature_flag, project, default_enabled: :yaml)
+ # execute code if feature flag is enabled
+end
+```
+
+```ruby
+if Feature.disabled?(:feature_flag, project, default_enabled: :yaml)
+ # execute code if feature flag is disabled
+end
+```
+
+This allows you to use the same feature flag check across various parts of the codebase and
+maintain the status of `default_enabled` in the YAML definition file, which is the single source of truth (SSOT).
+
+If `default_enabled: :yaml` is used, a YAML definition is expected or an error is raised
+in the development or test environments, while `false` is returned in production.
+
If not specified, the default feature flag type for `Feature.enabled?` and `Feature.disabled?`
is `type: development`. For all other feature flag types, you must specify the `type:`:
@@ -187,7 +216,7 @@ if Feature.disabled?(:my_feature_flag, project, type: :ops)
end
```
-DANGER: **Warning:**
+WARNING:
Don't use feature flags at application load time. For example, using the `Feature` class in
`config/initializers/*` or at the class level could cause an unexpected error. This error occurs
because a database that a feature flag adapter might depend on doesn't exist at load time
@@ -497,10 +526,6 @@ is persisted.
Make sure behavior under feature flag doesn't go untested in some non-specific contexts.
-See the
-[testing guide](../testing_guide/best_practices.md#feature-flags-in-tests)
-for information and examples on how to stub feature flags in tests.
-
### `stub_feature_flags: false`
This disables a memory-stubbed flipper, and uses `Flipper::Adapters::ActiveRecord`
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index a867bcf792a..270e07ed755 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -6,18 +6,43 @@ info: "See the Technical Writers assigned to Development Guidelines: https://abo
# Feature flags in development of GitLab
+## When to use feature flags
+
+Starting with GitLab 11.4, developers are required to use feature flags for
+non-trivial changes. Such changes include:
+
+- New features (e.g. a new merge request widget, epics, etc).
+- Complex performance improvements that may require additional testing in
+ production, such as rewriting complex queries.
+- Invasive changes to the user interface, such as a new navigation bar or the
+ removal of a sidebar.
+- Adding support for importing projects from a third-party service.
+- Risk of data loss
+
+In all cases, those working on the changes can best decide if a feature flag is
+necessary. For example, changing the color of a button doesn't need a feature
+flag, while changing the navigation bar definitely needs one. In case you are
+uncertain if a feature flag is necessary, simply ask about this in the merge
+request, and those reviewing the changes will likely provide you with an answer.
+
+When using a feature flag for UI elements, make sure to _also_ use a feature
+flag for the underlying backend code, if there is any. This ensures there is
+absolutely no way to use the feature until it is enabled.
+
+## How to use Feature Flags
+
Feature flags can be used to gradually deploy changes, regardless of whether
they are new features or performance improvements. By using feature flags,
you can determine the impact of GitLab-directed changes, while still being able
to disable those changes without having to revert an entire release.
-Before using feature flags for GitLab's development, review the following development guides:
+Before using feature flags for GitLab development, review the following development guides:
-NOTE: **Note:**
+NOTE:
The feature flags used by GitLab to deploy its own features **are not** the same
as the [feature flags offered as part of the product](../../operations/feature_flags.md).
-For an overview about starting with feature flags in GitLab's development,
+For an overview about starting with feature flags in GitLab development,
use this [training template](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/.gitlab/issue_templates/feature-flag-training.md).
Development guides:
diff --git a/doc/development/feature_flags/process.md b/doc/development/feature_flags/process.md
index b282424f59a..2e3680bb103 100644
--- a/doc/development/feature_flags/process.md
+++ b/doc/development/feature_flags/process.md
@@ -32,7 +32,8 @@ should be leveraged:
requests, you can use the following workflow:
1. [Create a new feature flag](development.md#create-a-new-feature-flag)
- which is **off** by default, in the first merge request.
+ which is **off** by default, in the first merge request which uses the flag.
+ Flags [should not be added separately](development.md#risk-of-a-broken-master-main-branch).
1. Submit incremental changes via one or more merge requests, ensuring that any
new code added can only be reached if the feature flag is **on**.
You can keep the feature flag enabled on your local GDK during development.
@@ -52,28 +53,6 @@ problems, such as outages.
Please also read the [development guide for feature flags](development.md).
-### When to use feature flags
-
-Starting with GitLab 11.4, developers are required to use feature flags for
-non-trivial changes. Such changes include:
-
-- New features (e.g. a new merge request widget, epics, etc).
-- Complex performance improvements that may require additional testing in
- production, such as rewriting complex queries.
-- Invasive changes to the user interface, such as a new navigation bar or the
- removal of a sidebar.
-- Adding support for importing projects from a third-party service.
-
-In all cases, those working on the changes can best decide if a feature flag is
-necessary. For example, changing the color of a button doesn't need a feature
-flag, while changing the navigation bar definitely needs one. In case you are
-uncertain if a feature flag is necessary, simply ask about this in the merge
-request, and those reviewing the changes will likely provide you with an answer.
-
-When using a feature flag for UI elements, make sure to _also_ use a feature
-flag for the underlying backend code, if there is any. This ensures there is
-absolutely no way to use the feature until it is enabled.
-
### Including a feature behind feature flag in the final release
In order to build a final release and present the feature for self-managed
@@ -91,7 +70,7 @@ account for unforeseen problems.
Feature flags must be [documented according to their state (enabled/disabled)](../documentation/feature_flags.md),
and when the state changes, docs **must** be updated accordingly.
-NOTE: **Note:**
+NOTE:
Take into consideration that such an action can make the feature available on
GitLab.com shortly after the change to the feature flag is merged.
diff --git a/doc/development/features_inside_dot_gitlab.md b/doc/development/features_inside_dot_gitlab.md
index e8918c76d04..08adb7faeb2 100644
--- a/doc/development/features_inside_dot_gitlab.md
+++ b/doc/development/features_inside_dot_gitlab.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Features inside the `.gitlab/` directory
diff --git a/doc/development/file_storage.md b/doc/development/file_storage.md
index aa91e105513..1f929d64058 100644
--- a/doc/development/file_storage.md
+++ b/doc/development/file_storage.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# File Storage in GitLab
@@ -48,6 +48,7 @@ they are still not 100% standardized. You can see them below:
| CI Artifacts (CE) | yes | `shared/artifacts/:disk_hash[0..1]/:disk_hash[2..3]/:disk_hash/:year_:month_:date/:job_id/:job_artifact_id` (`:disk_hash` is SHA256 digest of `project_id`) | `JobArtifactUploader` | Ci::JobArtifact |
| LFS Objects (CE) | yes | `shared/lfs-objects/:hex/:hex/:object_hash` | `LfsObjectUploader` | LfsObject |
| External merge request diffs | yes | `shared/external-diffs/merge_request_diffs/mr-:parent_id/diff-:id` | `ExternalDiffUploader` | MergeRequestDiff |
+| Issuable metric images | yes | `uploads/-/system/issuable_metric_image/file/:id/:filename` | `IssuableMetricImageUploader` | IssuableMetricImage |
CI Artifacts and LFS Objects behave differently in CE and EE. In CE they inherit the `GitlabUploader`
while in EE they inherit the `ObjectStorage` and store files in an S3 API compatible object store.
@@ -92,7 +93,7 @@ All the `GitlabUploader` derived classes should comply with this path segment sc
| | | `ObjectStorage::Concern#upload_path |
```
-The `RecordsUploads::Concern` concern will create an `Upload` entry for every file stored by a `GitlabUploader` persisting the dynamic parts of the path using
+The `RecordsUploads::Concern` concern creates an `Upload` entry for every file stored by a `GitlabUploader` persisting the dynamic parts of the path using
`GitlabUploader#dynamic_path`. You may then use the `Upload#build_uploader` method to manipulate the file.
## Object Storage
@@ -107,9 +108,9 @@ The `CarrierWave::Uploader#store_dir` is overridden to
### Using `ObjectStorage::Extension::RecordsUploads`
-This concern will automatically include `RecordsUploads::Concern` if not already included.
+This concern includes `RecordsUploads::Concern` if not already included.
-The `ObjectStorage::Concern` uploader will search for the matching `Upload` to select the correct object store. The `Upload` is mapped using `#store_dirs + identifier` for each store (LOCAL/REMOTE).
+The `ObjectStorage::Concern` uploader searches for the matching `Upload` to select the correct object store. The `Upload` is mapped using `#store_dirs + identifier` for each store (LOCAL/REMOTE).
```ruby
class SongUploader < GitlabUploader
@@ -129,7 +130,7 @@ end
### Using a mounted uploader
-The `ObjectStorage::Concern` will query the `model.<mount>_store` attribute to select the correct object store.
+The `ObjectStorage::Concern` queries the `model.<mount>_store` attribute to select the correct object store.
This column must be present in the model schema.
```ruby
diff --git a/doc/development/filtering_by_label.md b/doc/development/filtering_by_label.md
index 9c1993fdf7f..b10e2f6e082 100644
--- a/doc/development/filtering_by_label.md
+++ b/doc/development/filtering_by_label.md
@@ -1,7 +1,7 @@
---
stage: Plan
group: Project Management
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Filtering by label
diff --git a/doc/development/foreign_keys.md b/doc/development/foreign_keys.md
index ff8c03ce76b..0f100c6b66e 100644
--- a/doc/development/foreign_keys.md
+++ b/doc/development/foreign_keys.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Foreign Keys & Associations
diff --git a/doc/development/frontend.md b/doc/development/frontend.md
index 8bb5cf7af62..040004a6917 100644
--- a/doc/development/frontend.md
+++ b/doc/development/frontend.md
@@ -3,3 +3,6 @@ redirect_to: 'fe_guide/index.md'
---
This document was moved to [another location](fe_guide/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/gemfile.md b/doc/development/gemfile.md
index 1daa3ad5cba..c4a9ec4c454 100644
--- a/doc/development/gemfile.md
+++ b/doc/development/gemfile.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# `Gemfile` guidelines
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 06b032a8f66..ed1837f9564 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Geo
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Geo (development) **(PREMIUM ONLY)**
@@ -102,7 +102,7 @@ projects that need updating. Those projects can be:
When we fail to fetch a repository on the secondary `RETRIES_BEFORE_REDOWNLOAD`
times, Geo does a so-called _re-download_. It will do a clean clone
into the `@geo-temporary` directory in the root of the storage. When
-it's successful, we replace the main repo with the newly cloned one.
+it's successful, we replace the main repository with the newly cloned one.
### Uploads replication
@@ -161,7 +161,7 @@ If the requested file matches the requested SHA256 sum, then the Geo
feature, which allows NGINX to handle the file transfer without tying
up Rails or Workhorse.
-NOTE: **Note:**
+NOTE:
JWT requires synchronized clocks between the machines
involved; otherwise, it may fail with an encryption error.
diff --git a/doc/development/geo/framework.md b/doc/development/geo/framework.md
index e440e324c4a..e4518ce1b57 100644
--- a/doc/development/geo/framework.md
+++ b/doc/development/geo/framework.md
@@ -1,12 +1,12 @@
---
stage: Enablement
group: Geo
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Geo self-service framework
-NOTE: **Note:**
+NOTE:
This document is subject to change as we continue to implement and iterate on the framework.
Follow the progress in the [epic](https://gitlab.com/groups/gitlab-org/-/epics/2161).
If you need to replicate a new data type, reach out to the Geo
@@ -393,6 +393,8 @@ can track verification state.
def change
change_table(:widgets) do |t|
+ t.integer :verification_state, default: 0, limit: 2, null: false
+ t.column :verification_started_at, :datetime_with_timezone
t.integer :verification_retry_count, limit: 2
t.column :verification_retry_at, :datetime_with_timezone
t.column :verified_at, :datetime_with_timezone
@@ -431,36 +433,53 @@ can track verification state.
end
```
-1. Add a partial index on `verification_failure` and `verification_checksum` to ensure
- re-verification can be performed efficiently:
+1. Add indexes on verification fields to ensure verification can be performed efficiently:
+
+ Some or all of these indexes can be omitted if the table is guaranteed to be small. Ask a database expert if you are unsure.
```ruby
# frozen_string_literal: true
- class AddVerificationFailureIndexToWidgets < ActiveRecord::Migration[6.0]
+ class AddVerificationIndexesToWidgets < ActiveRecord::Migration[6.0]
include Gitlab::Database::MigrationHelpers
DOWNTIME = false
+ VERIFICATION_STATE_INDEX_NAME = "index_widgets_verification_state"
+ PENDING_VERIFICATION_INDEX_NAME = "index_widgets_pending_verification"
+ FAILED_VERIFICATION_INDEX_NAME = "index_widgets_failed_verification"
+ NEEDS_VERIFICATION_INDEX_NAME = "index_widgets_needs_verification"
disable_ddl_transaction!
def up
- add_concurrent_index :widgets, :verification_failure, where: "(verification_failure IS NOT NULL)", name: "widgets_verification_failure_partial"
- add_concurrent_index :widgets, :verification_checksum, where: "(verification_checksum IS NOT NULL)", name: "widgets_verification_checksum_partial"
+ add_concurrent_index :widgets, :verification_state, name: VERIFICATION_STATE_INDEX_NAME
+ add_concurrent_index :widgets, :verified_at, where: "(verification_state = 0)", order: { verified_at: 'ASC NULLS FIRST' }, name: PENDING_VERIFICATION_INDEX_NAME
+ add_concurrent_index :widgets, :verification_retry_at, where: "(verification_state = 3)", order: { verification_retry_at: 'ASC NULLS FIRST' }, name: FAILED_VERIFICATION_INDEX_NAME
+ add_concurrent_index :widgets, :verification_state, where: "(verification_state = 0 OR verification_state = 3)", name: NEEDS_VERIFICATION_INDEX_NAME
end
def down
- remove_concurrent_index :widgets, :verification_failure
- remove_concurrent_index :widgets, :verification_checksum
+ remove_concurrent_index_by_name :widgets, VERIFICATION_STATE_INDEX_NAME
+ remove_concurrent_index_by_name :widgets, PENDING_VERIFICATION_INDEX_NAME
+ remove_concurrent_index_by_name :widgets, FAILED_VERIFICATION_INDEX_NAME
+ remove_concurrent_index_by_name :widgets, NEEDS_VERIFICATION_INDEX_NAME
end
end
```
+1. Add the `Gitlab::Geo::VerificationState` concern to the `widget` model if it is not already included in `Gitlab::Geo::ReplicableModel`:
+
+ ```ruby
+ class Widget < ApplicationRecord
+ ...
+ include ::Gitlab::Geo::VerificationState
+ ...
+ end
+ ```
+
##### Option 2: Create a separate `widget_states` table with verification state fields
-1. Create a `widget_states` table and add a partial index on `verification_failure` and
- `verification_checksum` to ensure re-verification can be performed efficiently. Order
- the columns according to [the guidelines](../ordering_table_columns.md):
+1. Create a `widget_states` table and add an index on `verification_state` to ensure verification can be performed efficiently. Order the columns according to [the guidelines](../ordering_table_columns.md):
```ruby
# frozen_string_literal: true
@@ -477,14 +496,15 @@ can track verification state.
with_lock_retries do
create_table :widget_states, id: false do |t|
t.references :widget, primary_key: true, null: false, foreign_key: { on_delete: :cascade }
+ t.integer :verification_state, default: 0, limit: 2, null: false
+ t.column :verification_started_at, :datetime_with_timezone
t.datetime_with_timezone :verification_retry_at
t.datetime_with_timezone :verified_at
t.integer :verification_retry_count, limit: 2
t.binary :verification_checksum, using: 'verification_checksum::bytea'
t.text :verification_failure
- t.index :verification_failure, where: "(verification_failure IS NOT NULL)", name: "widgets_verification_failure_partial"
- t.index :verification_checksum, where: "(verification_checksum IS NOT NULL)", name: "widgets_verification_checksum_partial"
+ t.index :verification_state, name: "index_widget_states_on_verification_state"
end
end
end
@@ -498,6 +518,20 @@ can track verification state.
end
```
+1. Add the following lines to the `widget_state.rb` model:
+
+ ```ruby
+ class WidgetState < ApplicationRecord
+ ...
+ self.primary_key = :widget_id
+
+ include ::Gitlab::Geo::VerificationState
+
+ belongs_to :widget, inverse_of: :widget_state
+ ...
+ end
+ ```
+
1. Add the following lines to the `widget` model:
```ruby
@@ -547,14 +581,16 @@ Metrics are gathered by `Geo::MetricsUpdateWorker`, persisted in
1. Add the following to `spec/factories/widgets.rb`:
```ruby
- trait(:checksummed) do
+ trait(:verification_succeeded) do
with_file
verification_checksum { 'abc' }
+ verification_state { Widget.verification_state_value(:verification_succeeded) }
end
- trait(:checksum_failure) do
+ trait(:verification_failed) do
with_file
verification_failure { 'Could not calculate the checksum' }
+ verification_state { Widget.verification_state_value(:verification_failed) }
end
```
diff --git a/doc/development/git_object_deduplication.md b/doc/development/git_object_deduplication.md
index 4f1afed24ba..00993cc2932 100644
--- a/doc/development/git_object_deduplication.md
+++ b/doc/development/git_object_deduplication.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# How Git object deduplication works in GitLab
@@ -11,7 +11,7 @@ GitLab creates a new Project with an associated Git repository that is a
copy of the original project at the time of the fork. If a large project
gets forked often, this can lead to a quick increase in Git repository
storage disk use. To counteract this problem, we are adding Git object
-deduplication for forks to GitLab. In this document, we will describe how
+deduplication for forks to GitLab. In this document, we describe how
GitLab implements Git object deduplication.
## Pool repositories
@@ -27,13 +27,13 @@ If we want repository A to borrow from repository B, we first write a
path that resolves to `B.git/objects` in the special file
`A.git/objects/info/alternates`. This establishes the alternates link.
Next, we must perform a Git repack in A. After the repack, any objects
-that are duplicated between A and B will get deleted from A. Repository
+that are duplicated between A and B are deleted from A. Repository
A is now no longer self-contained, but it still has its own refs and
-configuration. Objects in A that are not in B will remain in A. For this
+configuration. Objects in A that are not in B remain in A. For this
to work, it is of course critical that **no objects ever get deleted from
B** because A might need them.
-DANGER: **Warning:**
+WARNING:
Do not run `git prune` or `git gc` in pool repositories! This can
cause data loss in "real" repositories that depend on the pool in
question.
@@ -49,7 +49,7 @@ repositories** which are hidden from the user. We then use Git
alternates to let a collection of project repositories borrow from a
single pool repository. We call such a collection of project
repositories a pool. Pools form star-shaped networks of repositories
-that borrow from a single pool, which will resemble (but not be
+that borrow from a single pool, which resemble (but are not
identical to) the fork networks that get formed when users fork
projects.
@@ -72,9 +72,9 @@ across a collection of GitLab project repositories at the Git level:
The effectiveness of Git object deduplication in GitLab depends on the
amount of overlap between the pool repository and each of its
participants. Each time garbage collection runs on the source project,
-Git objects from the source project will get migrated to the pool
+Git objects from the source project are migrated to the pool
repository. One by one, as garbage collection runs, other member
-projects will benefit from the new objects that got added to the pool.
+projects benefit from the new objects that got added to the pool.
## SQL model
@@ -123,19 +123,19 @@ are as follows:
the fork parent and the fork child project become members of the new
pool.
- Once project A has become the source project of a pool, all future
- eligible forks of A will become pool members.
+ eligible forks of A become pool members.
- If the fork source is itself a fork, the resulting repository will
- neither join the repository nor will a new pool repository be
+ neither join the pool repository nor have a new pool repository
seeded.
- eg:
+ For example:
Suppose fork A is part of a pool repository; any forks created off
- of fork A *will not* be a part of the pool repository that fork A is
+ of fork A *are not* a part of the pool repository that fork A is
a part of.
Suppose B is a fork of A, and A does not belong to an object pool.
- Now C gets created as a fork of B. C will not be part of a pool
+ Now C gets created as a fork of B. C is not part of a pool
repository.
> TODO should forks of forks be deduplicated?
@@ -146,11 +146,11 @@ are as follows:
- If a normal Project participating in a pool gets moved to another
Gitaly storage shard, its "belongs to PoolRepository" relation will
be broken. Because of the way moving repositories between shards is
- implemented, we will automatically get a fresh self-contained copy
+ implemented, we get a fresh self-contained copy
of the project's repository on the new storage shard.
- If the source project of a pool gets moved to another Gitaly storage
shard or is deleted, the "source project" relation is not broken.
- However, as of GitLab 12.0 a pool will not fetch from a source
+ However, as of GitLab 12.0 a pool does not fetch from a source
unless the source is on the same Gitaly shard.
## Consistency between the SQL pool relation and Gitaly
@@ -163,37 +163,37 @@ repository and a pool.
### Pool existence
If GitLab thinks a pool repository exists (i.e. it exists according to
-SQL), but it does not on the Gitaly server, then it will be created on
+SQL), but it does not exist on the Gitaly server, then it is created on
the fly by Gitaly.
### Pool relation existence
There are three different things that can go wrong here.
-#### 1. SQL says repo A belongs to pool P but Gitaly says A has no alternate objects
+#### 1. SQL says repository A belongs to pool P but Gitaly says A has no alternate objects
In this case, we miss out on disk space savings but all RPCs on A
-itself will function fine. The next time garbage collection runs on A,
+itself function fine. The next time garbage collection runs on A,
the alternates connection gets established in Gitaly. This is done by
`Projects::GitDeduplicationService` in GitLab Rails.
-#### 2. SQL says repo A belongs to pool P1 but Gitaly says A has alternate objects in pool P2
+#### 2. SQL says repository A belongs to pool P1 but Gitaly says A has alternate objects in pool P2
-In this case `Projects::GitDeduplicationService` will throw an exception.
+In this case `Projects::GitDeduplicationService` throws an exception.
-#### 3. SQL says repo A does not belong to any pool but Gitaly says A belongs to P
+#### 3. SQL says repository A does not belong to any pool but Gitaly says A belongs to P
-In this case `Projects::GitDeduplicationService` will try to
+In this case `Projects::GitDeduplicationService` tries to
"re-duplicate" the repository A using the DisconnectGitAlternates RPC.
## Git object deduplication and GitLab Geo
When a pool repository record is created in SQL on a Geo primary, this
-will eventually trigger an event on the Geo secondary. The Geo secondary
-will then create the pool repository in Gitaly. This leads to an
+eventually triggers an event on the Geo secondary. The Geo secondary
+then creates the pool repository in Gitaly. This leads to an
"eventually consistent" situation because as each pool participant gets
-synchronized, Geo will eventually trigger garbage collection in Gitaly on
-the secondary, at which stage Git objects will get deduplicated.
+synchronized, Geo eventually triggers garbage collection in Gitaly on
+the secondary, at which stage Git objects are deduplicated.
> TODO How do we handle the edge case where at the time the Geo
> secondary tries to create the pool repository, the source project does
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 8b4e5090abb..57a4e24679c 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Gitaly
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: reference
---
@@ -13,9 +13,9 @@ Workhorse and GitLab Shell.
## Deep Dive
In May 2019, Bob Van Landuyt hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`)
-on GitLab's [Gitaly project](https://gitlab.com/gitlab-org/gitaly) and how to contribute to it as a
+on the [Gitaly project](https://gitlab.com/gitlab-org/gitaly) and how to contribute to it as a
Ruby developer, to share his domain specific knowledge with anyone who may work in this part of the
-code base in the future.
+codebase in the future.
You can find the [recording on YouTube](https://www.youtube.com/watch?v=BmlEWFS8ORo), and the slides
on [Google Slides](https://docs.google.com/presentation/d/1VgRbiYih9ODhcPnL8dS0W98EwFYpJ7GXMPpX-1TM6YE/edit)
@@ -84,7 +84,7 @@ If your test-suite is failing with Gitaly issues, as a first step, try running:
rm -rf tmp/tests/gitaly
```
-During RSpec tests, the Gitaly instance will write logs to `gitlab/log/gitaly-test.log`.
+During RSpec tests, the Gitaly instance writes logs to `gitlab/log/gitaly-test.log`.
## Legacy Rugged code
@@ -121,26 +121,26 @@ bundle exec rake gitlab:features:disable_rugged
Most of this code exists in the `lib/gitlab/git/rugged_impl` directory.
-NOTE: **Note:**
+NOTE:
You should NOT need to add or modify code related to
Rugged unless explicitly discussed with the [Gitaly
-Team](https://gitlab.com/groups/gl-gitaly/group_members). This code will
+Team](https://gitlab.com/groups/gl-gitaly/group_members). This code does
NOT work on GitLab.com or other GitLab instances that do not use NFS.
## `TooManyInvocationsError` errors
During development and testing, you may experience `Gitlab::GitalyClient::TooManyInvocationsError` failures.
-The `GitalyClient` will attempt to block against potential n+1 issues by raising this error
+The `GitalyClient` attempts to block against potential n+1 issues by raising this error
when Gitaly is called more than 30 times in a single Rails request or Sidekiq execution.
-As a temporary measure, export `GITALY_DISABLE_REQUEST_LIMITS=1` to suppress the error. This will disable the n+1 detection
+As a temporary measure, export `GITALY_DISABLE_REQUEST_LIMITS=1` to suppress the error. This disables the n+1 detection
in your development environment.
Please raise an issue in the GitLab CE or EE repositories to report the issue. Include the labels ~Gitaly
~performance ~"technical debt". Please ensure that the issue contains the full stack trace and error message of the
`TooManyInvocationsError`. Also include any known failing tests if possible.
-Isolate the source of the n+1 problem. This will normally be a loop that results in Gitaly being called for each
+Isolate the source of the n+1 problem. This is normally a loop that results in Gitaly being called for each
element in an array. If you are unable to isolate the problem, please contact a member
of the [Gitaly Team](https://gitlab.com/groups/gl-gitaly/group_members) for assistance.
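As a purely illustrative sketch (not taken from the codebase), the offending pattern is usually a per-element repository call like this:

```ruby
# Each iteration can trigger at least one Gitaly RPC, so a long list of refs
# quickly exceeds the 30-call limit and raises TooManyInvocationsError.
branch_names.each do |name|
  project.repository.commit(name)
end
```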
@@ -154,7 +154,7 @@ Gitlab::GitalyClient.allow_n_plus_1_calls do
end
```
-Once the code is wrapped in this block, this code-path will be excluded from n+1 detection.
+Once the code is wrapped in this block, this code path is excluded from n+1 detection.
## Request counts
@@ -181,15 +181,19 @@ Normally, GitLab CE/EE tests use a local clone of Gitaly in
`GITALY_SERVER_VERSION`. The `GITALY_SERVER_VERSION` file also supports
branches and SHAs to use a custom commit in <https://gitlab.com/gitlab-org/gitaly>.
-NOTE: **Note:**
+NOTE:
With the introduction of auto-deploy for Gitaly, the format of
`GITALY_SERVER_VERSION` was aligned with Omnibus syntax.
-It no longer supports `=revision`, it will evaluate the file content as a Git
-reference (branch or SHA), only if it matches a semver it will prepend a `v`.
+It no longer supports `=revision`; it evaluates the file content as a Git
+reference (branch or SHA). Only if it matches a semver does it prepend a `v`.
If you want to run tests locally against a modified version of Gitaly you
can replace `tmp/tests/gitaly` with a symlink. This is much faster
-because if will avoid a Gitaly re-install each time you run `rspec`.
+because it avoids a Gitaly re-install each time you run `rspec`.
+
+Make sure this directory contains the files `config.toml` and `praefect.config.toml`.
+You can copy them from `config.toml.example` and `config.praefect.toml.example` respectively.
+After copying, make sure to edit them so everything points to the correct paths.
```shell
rm -rf tmp/tests/gitaly
@@ -197,12 +201,12 @@ ln -s /path/to/gitaly tmp/tests/gitaly
```
Make sure you run `make` in your local Gitaly directory before running
-tests. Otherwise, Gitaly will fail to boot.
+tests. Otherwise, Gitaly fails to boot.
If you make changes to your local Gitaly in between test runs you need
to manually run `make` again.
-Note that CI tests will not use your locally modified version of
+Note that CI tests do not use your locally modified version of
Gitaly. To use a custom Gitaly version in CI you need to update
`GITALY_SERVER_VERSION` as described at the beginning of this paragraph.
@@ -312,7 +316,7 @@ Here are the steps to gate a new feature in Gitaly behind a feature flag.
1. Test in a Rails console by setting the feature flag:
- NOTE: **Note:**
+ NOTE:
Pay attention to the name of the flag and the one used in the Rails console.
There is a difference between them (dashes replaced by underscores and name
prefix is changed). Make sure to prefix all flags with `gitaly_`.
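For example (the flag name is illustrative), a Gitaly flag registered as `go-find-all-tags` is toggled from the Rails console with underscores and the `gitaly_` prefix:

```ruby
# Illustrative flag name; note the gitaly_ prefix and underscores.
Feature.enable(:gitaly_go_find_all_tags)
```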
@@ -341,7 +345,7 @@ the integration by using GDK:
1. Check that the list of current metrics has the new counter for the feature flag:
```shell
- curl --silent http://localhost:9236/metrics | grep go_find_all_tags
+ curl --silent "http://localhost:9236/metrics" | grep go_find_all_tags
```
1. Once you observe the metrics for the new feature flag and it increments, you
@@ -371,5 +375,5 @@ the integration by using GDK:
1. Verify the feature is on by observing the metrics for it:
```shell
- curl --silent http://localhost:9236/metrics | grep go_find_all_tags
+ curl --silent "http://localhost:9236/metrics" | grep go_find_all_tags
```
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index 9fa55d2662a..cc289496301 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Working with the GitHub importer
@@ -50,28 +50,28 @@ called `Gitlab::GithubImport::AdvanceStageWorker`.
### 1. RepositoryImportWorker
-This worker will kick off the import process by simply scheduling a job for the
+This worker starts the import process by scheduling a job for the
next worker.
### 2. Stage::ImportRepositoryWorker
-This worker will import the repository and wiki, scheduling the next stage when
+This worker imports the repository and wiki, scheduling the next stage when
done.
### 3. Stage::ImportBaseDataWorker
-This worker will import base data such as labels, milestones, and releases. This
-work is done in a single thread since it can be performed fast enough that we
+This worker imports base data such as labels, milestones, and releases. This
+work is done in a single thread because it can be performed fast enough that we
don't need to perform this work in parallel.
### 4. Stage::ImportPullRequestsWorker
-This worker will import all pull requests. For every pull request a job for the
+This worker imports all pull requests. For every pull request a job for the
`Gitlab::GithubImport::ImportPullRequestWorker` worker is scheduled.
### 5. Stage::ImportIssuesAndDiffNotesWorker
-This worker will import all issues and pull request comments. For every issue, we
+This worker imports all issues and pull request comments. For every issue, we
schedule a job for the `Gitlab::GithubImport::ImportIssueWorker` worker. For
pull request comments, we instead schedule jobs for the
`Gitlab::GithubImport::DiffNoteImporter` worker.
@@ -91,14 +91,14 @@ This worker imports regular comments for both issues and pull requests. For
every comment, we schedule a job for the
`Gitlab::GithubImport::ImportNoteWorker` worker.
-Regular comments have to be imported at the end since the GitHub API used
+Regular comments have to be imported at the end because the GitHub API used
returns comments for both issues and pull requests. This means we have to wait
for all issues and pull requests to be imported before we can import regular
comments.
### 7. Stage::FinishImportWorker
-This worker will wrap up the import process by performing some housekeeping
+This worker completes the import process by performing some housekeeping
(such as flushing any caches) and by marking the import as completed.
## Advancing stages
@@ -113,22 +113,22 @@ The first approach should only be used by workers that perform all their work in
a single thread, while `AdvanceStageWorker` should be used for everything else.
The way `AdvanceStageWorker` works is fairly simple. When scheduling a job it
-will be given a project ID, a list of Redis keys, and the name of the next
+is given a project ID, a list of Redis keys, and the name of the next
stage. The Redis keys (produced by `Gitlab::JobWaiter`) are used to check if the
currently running stage has been completed or not. If the stage has not yet been
-completed `AdvanceStageWorker` will reschedule itself. Once a stage finishes
-`AdvanceStageworker` will refresh the import JID (more on this below) and
+completed, `AdvanceStageWorker` reschedules itself. After a stage finishes,
+`AdvanceStageWorker` refreshes the import JID (more on this below) and
schedules the worker of the next stage.
-To reduce the number of `AdvanceStageWorker` jobs scheduled this worker will
-briefly wait for jobs to complete before deciding what the next action should
-be. For small projects, this may slow down the import process a bit, but it will
-also reduce pressure on the system as a whole.
+To reduce the number of `AdvanceStageWorker` jobs scheduled, this worker
+briefly waits for jobs to complete before deciding what the next action should
+be. For small projects, this may slow down the import process a bit, but it
+also reduces pressure on the system as a whole.
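A hedged sketch of that hand-off, based only on the description above (the argument shapes and stage name are illustrative, not the actual signature):

```ruby
# Illustrative only: pass the project ID, the JobWaiter keys of jobs that are
# still running, and the name of the next stage to schedule.
Gitlab::GithubImport::AdvanceStageWorker.perform_async(
  project.id,
  { waiter.key => waiter.jobs_remaining },
  :notes
)
```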
## Refreshing import JIDs
GitLab includes a worker called `Gitlab::Import::StuckProjectImportJobsWorker`
-that will periodically run and mark project imports as failed if they have been
+that periodically runs and marks project imports as failed if they have been
running for more than 15 hours. For GitHub projects, this poses a bit of a
problem: importing large projects could take several hours depending on how
often we hit the GitHub rate limit (more on this below), but we don't want
@@ -151,7 +151,7 @@ because we need the Email address of users in order to map them to GitLab users.
We handle this by doing the following:
-1. Once we hit the rate limit all jobs will automatically reschedule themselves
+1. After we hit the rate limit all jobs automatically reschedule themselves
in such a way that they are not executed until the rate limit has been reset.
1. We cache the mapping of GitHub users to GitLab users in Redis.
@@ -164,7 +164,7 @@ perform:
1. One API call to get the user's Email address.
1. Two database queries to see if a corresponding GitLab user exists. One query
- will try to find the user based on the GitHub user ID, while the second query
+ tries to find the user based on the GitHub user ID, while the second query
is used to find the user using their GitHub Email address.
Because this process is quite expensive we cache the result of these lookups in
@@ -186,11 +186,11 @@ positive lookup, we refresh the TTL automatically. The TTL of false lookups is
never refreshed.
Because of this caching layer, it's possible newly registered GitLab accounts
-won't be linked to their corresponding GitHub accounts. This, however, will sort
-itself out once the cached keys expire.
+aren't linked to their corresponding GitHub accounts. This, however, is resolved
+after the cached keys expire.
-The user cache lookup is shared across projects. This means that the more
-projects get imported the fewer GitHub API calls will be needed.
+The user cache lookup is shared across projects. This means that the greater the number of
+projects that are imported, the fewer GitHub API calls are needed.
The code for this resides in:
diff --git a/doc/development/go_guide/dependencies.md b/doc/development/go_guide/dependencies.md
index 461ee394533..72b3f82d86f 100644
--- a/doc/development/go_guide/dependencies.md
+++ b/doc/development/go_guide/dependencies.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Dependency Management in Go
@@ -89,28 +89,28 @@ Go 1.12 introduced checksum databases and module proxies.
### Checksums
-In addition to `go.mod`, a module will have a `go.sum` file. This file records a
+In addition to `go.mod`, a module has a `go.sum` file. This file records a
SHA-256 checksum of the code and the `go.mod` file of every version of every
dependency that is referenced by the module or one of the module's dependencies.
Go continually updates `go.sum` as new dependencies are referenced.
When Go fetches the dependencies of a module, if those dependencies already have
-an entry in `go.sum`, Go will verify the checksum of these dependencies. If the
-checksum does not match what is in `go.sum`, the build will fail. This ensures
+an entry in `go.sum`, Go verifies the checksum of these dependencies. If the
+checksum does not match what is in `go.sum`, the build fails. This ensures
that a given version of a module cannot be changed by its developers or by a
malicious party without causing build failures.
Go 1.12+ can be configured to use a checksum database. If configured to do so,
when Go fetches a dependency and there is no corresponding entry in `go.sum`, Go
-will query the configured checksum database(s) for the checksum of the
+queries the configured checksum database(s) for the checksum of the
dependency instead of calculating it from the downloaded dependency. If the
-dependency cannot be found in the checksum database, the build will fail. If the
+dependency cannot be found in the checksum database, the build fails. If the
downloaded dependency's checksum does not match the result from the checksum
-database, the build will fail. The following environment variables control this:
+database, the build fails. The following environment variables control this:
- `GOSUMDB` identifies the name, and optionally the public key and server URL,
of the checksum database to query.
- - A value of `off` will entirely disable checksum database queries.
+ - A value of `off` entirely disables checksum database queries.
- Go 1.13+ uses `sum.golang.org` if `GOSUMDB` is not defined.
- `GONOSUMDB` is a comma-separated list of module suffixes that checksum
database queries should be disabled for. Wildcards are supported.
@@ -125,8 +125,8 @@ attempts to fetch the dependency from the configured proxies, in order. The
following environment variables control this:
- `GOPROXY` is a comma-separated list of module proxies to query.
- - A value of `direct` will entirely disable module proxy queries.
- - If the last entry in the list is `direct`, Go will fall back to the process
+ - A value of `direct` entirely disables module proxy queries.
+ - If the last entry in the list is `direct`, Go falls back to the process
described [above](#fetching-packages) if none of the proxies can provide the
dependency.
- Go 1.13+ uses `proxy.golang.org,direct` if `GOPROXY` is not defined.
@@ -159,7 +159,7 @@ From Go 1.12 onward, the process for fetching a module or package is as follows:
The downloaded source must contain a `go.mod` file. The `go.mod` file must
contain a `module` directive that specifies the name of the module. If the
module name as specified by `go.mod` does not match the name that was used to
-fetch the module, the module will fail to compile.
+fetch the module, the module fails to compile.
If the module is being fetched directly and no version was specified, or if the
module is being added as a dependency and no version was specified, Go uses the
@@ -172,9 +172,9 @@ latest that is also a valid semantic version.
In versions prior to Go 1.13, support for authenticating requests made by Go was
somewhat inconsistent. Go 1.13 improved support for `.netrc` authentication. If
-a request is made over HTTPS and a matching `.netrc` entry can be found, Go will
-add HTTP Basic authentication credentials to the request. Go will not
-authenticate requests made over HTTP. Go will reject HTTP-only entries in
+a request is made over HTTPS and a matching `.netrc` entry can be found, Go
+adds HTTP Basic authentication credentials to the request. Go does not
+authenticate requests made over HTTP. Go rejects HTTP-only entries in
`GOPROXY` that have embedded credentials.
In a future version, Go may add support for arbitrary authentication headers.
diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md
index 4077cf2a2c2..b2405f4ce2a 100644
--- a/doc/development/go_guide/index.md
+++ b/doc/development/go_guide/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Go standards and style guidelines
@@ -20,7 +20,7 @@ the two is best for the job.
This page aims to define and organize our Go guidelines, based on our various
experiences. Several projects were started with different standards and they
-can still have specifics. They will be described in their respective
+can still have specifics. They are described in their respective
`README.md` or `PROCESS.md` files.
## Dependency Management
@@ -89,7 +89,7 @@ projects:
## Code style and format
-- Avoid global variables, even in packages. By doing so you will introduce side
+- Avoid global variables, even in packages. Using them introduces side
effects if the package is included multiple times.
- Use `goimports` before committing.
[goimports](https://godoc.org/golang.org/x/tools/cmd/goimports)
@@ -97,7 +97,7 @@ projects:
[Gofmt](https://golang.org/cmd/gofmt/), in addition to formatting import lines,
adding missing ones and removing unreferenced ones.
- Most editors/IDEs will allow you to run commands before/after saving a file, you can set it
+ Most editors/IDEs allow you to run commands before/after saving a file. You can set it
up to run `goimports` so that it's applied to every file when saving.
- Place private methods below the first caller method in the source file.
@@ -128,9 +128,11 @@ configuration of `golangci-lint`. All options for `golangci-lint` are listed in
this [example](https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml).
Once [recursive includes](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/56836)
-become available, you will be able to share job templates like this
+become available, you can share job templates like this
[analyzer](https://gitlab.com/gitlab-org/security-products/ci-templates/raw/master/includes-dev/analyzer.yml).
+Go GitLab linter plugins are maintained in the [`gitlab-org/language-tools/go/linters`](https://gitlab.com/gitlab-org/language-tools/go/linters/) namespace.
+
## Dependencies
Dependencies should be kept to the minimum. The introduction of a new
@@ -150,7 +152,7 @@ define and lock dependencies for reproducible builds. It should be used
whenever possible.
When Go Modules are in use, there should not be a `vendor/` directory. Instead,
-Go will automatically download dependencies when they are needed to build the
+Go automatically downloads dependencies when they are needed to build the
project. This is in line with how dependencies are handled with Bundler in Ruby
projects, and makes merge requests easier to review.
@@ -177,7 +179,7 @@ databases.
In the rare event of managing a hosted database, it's necessary to use a
migration system like the one ActiveRecord provides. A simple library like
[Journey](https://github.com/db-journey/journey), designed to be used in
-`postgres` containers, can be deployed as long-running pods. New versions will
+`postgres` containers, can be deployed as long-running pods. New versions
deploy a new pod, migrating the data automatically.
## Testing
@@ -255,7 +257,7 @@ to make the test output easily readable.
to use for naming subtests. In the Go standard library, this is commonly the
`name string` field.
- Use `want`/`expect`/`actual` when you are specifying something in the
- test case that will be used for assertion.
+ test case that is used for assertion.
#### Variable names
@@ -414,13 +416,13 @@ builds](https://docs.docker.com/develop/develop-images/multistage-build/):
Generated Docker images should have the program at their `Entrypoint` to create
portable commands. That way, anyone can run the image, and without parameters
-it will display its help message (if `cli` has been used).
+it displays its help message (if `cli` has been used).
## Distributing Go binaries
With the exception of [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner),
which publishes its own binaries, our Go binaries are created by projects
-managed by the [Distribution group](https://about.gitlab.com/handbook/product/product-categories/#distribution-group).
+managed by the [Distribution group](https://about.gitlab.com/handbook/product/categories/#distribution-group).
The [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) project creates a
single, monolithic operating system package containing all the binaries, while
@@ -476,7 +478,7 @@ Example:
In case we want to drop support for `go 1.11` in GitLab `12.10`, we need to verify which Go versions we are using in `12.9`, `12.8`, and `12.7`.
-We will not consider the active milestone, `12.10`, because a backport for `12.7` will be required in case of a critical security release.
+We do not consider the active milestone, `12.10`, because a backport for `12.7` is required in case of a critical security release.
1. If both [Omnibus and CNG](#updating-go-version) were using Go `1.12` in GitLab `12.7` and later, then we safely drop support for `1.11`.
1. If Omnibus or CNG were using `1.11` in GitLab `12.7`, then we still need to keep support for Go `1.11` for easier backporting of security fixes.
@@ -492,11 +494,11 @@ Use `goimports -local gitlab.com/gitlab-org` before committing.
is a tool that automatically formats Go source code using
[Gofmt](https://golang.org/cmd/gofmt/), in addition to formatting import lines,
adding missing ones and removing unreferenced ones.
-By using the `-local gitlab.com/gitlab-org` option, `goimports` will group locally referenced
+By using the `-local gitlab.com/gitlab-org` option, `goimports` groups locally referenced
packages separately from external ones. See
[the imports section](https://github.com/golang/go/wiki/CodeReviewComments#imports)
of the Code Review Comments page on the Go wiki for more details.
-Most editors/IDEs will allow you to run commands before/after saving a file, you can set it
+Most editors/IDEs allow you to run commands before/after saving a file. You can set it
up to run `goimports -local gitlab.com/gitlab-org` so that it's applied to every file when saving.
---
diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md
index 691027f385f..2b34aedddf6 100644
--- a/doc/development/gotchas.md
+++ b/doc/development/gotchas.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Gotchas
@@ -14,7 +14,7 @@ might encounter or should avoid during development of GitLab CE and EE.
In GitLab 10.8 and later, Omnibus has [dropped the `app/assets` directory](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/2456),
after asset compilation. The `ee/app/assets`, `vendor/assets` directories are dropped as well.
-This means that reading files from that directory will fail in Omnibus-installed GitLab instances:
+This means that reading files from that directory fails in Omnibus-installed GitLab instances:
```ruby
file = Rails.root.join('app/assets/images/logo.svg')
@@ -243,8 +243,8 @@ end
In this case, if for any reason the top level `ApplicationController`
is loaded but `Projects::ApplicationController` is not, `ApplicationController`
-would be resolved to `::ApplicationController` and then the `project` method will
-be undefined and we will get an error.
+would be resolved to `::ApplicationController` and then the `project` method is
+undefined, causing an error.
#### Solution
@@ -264,8 +264,8 @@ end
By specifying `Projects::`, we tell Rails exactly what class we are referring
to and we would avoid the issue.
-NOTE: **Note:**
-This problem will disappear as soon as we upgrade to Rails 6 and use the Zeitwerk autoloader.
+NOTE:
+This problem disappears as soon as we upgrade to Rails 6 and use the Zeitwerk autoloader.
### Further reading
@@ -292,7 +292,7 @@ While the code above works in local environments, it errors out in production in
### Solution
-The alternative is the `lib/assets` folder. Use it if you need to add assets (like images) to the repo that meet the following conditions:
+The alternative is the `lib/assets` folder. Use it if you need to add assets (like images) to the repository that meet the following conditions:
- The assets do not need to be directly served to the user (and hence need not be pre-compiled).
- The assets do need to be accessed via application code.
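For instance (the path is illustrative), reading such an asset from application code mirrors the earlier example, just rooted under `lib/assets`:

```ruby
# Unlike app/assets, lib/assets is not dropped from Omnibus packages after
# asset compilation, so this read also works in production.
file = Rails.root.join('lib/assets/images/logo.svg')
contents = File.read(file)
```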
diff --git a/doc/development/graphql_guide/batchloader.md b/doc/development/graphql_guide/batchloader.md
index 6d529358499..e965e678aba 100644
--- a/doc/development/graphql_guide/batchloader.md
+++ b/doc/development/graphql_guide/batchloader.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GraphQL BatchLoader
diff --git a/doc/development/graphql_guide/index.md b/doc/development/graphql_guide/index.md
index 12b4f9796c7..658bba96f33 100644
--- a/doc/development/graphql_guide/index.md
+++ b/doc/development/graphql_guide/index.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GraphQL development guidelines
-This guide contains all the information to successfully contribute to GitLab's
+This guide contains all the information to successfully contribute to the GitLab
GraphQL API. This is a living document, and we welcome contributions,
feedback, and suggestions.
diff --git a/doc/development/graphql_guide/pagination.md b/doc/development/graphql_guide/pagination.md
index d5140363396..130ed5721f3 100644
--- a/doc/development/graphql_guide/pagination.md
+++ b/doc/development/graphql_guide/pagination.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GraphQL pagination
@@ -86,6 +86,20 @@ However, there are some cases where we have to use the offset
pagination connection, `OffsetActiveRecordRelationConnection`, such as when
sorting by label priority in issues, due to the complexity of the sort.
+If you return a relation from a resolver that is not suitable for keyset
+pagination (due to the sort order for example), then you can use the
+`BaseResolver#offset_pagination` method to wrap the relation in the correct
+connection type. For example:
+
+```ruby
+def resolve(**args)
+ result = Finder.new(object, current_user, args).execute
+ result = offset_pagination(result) if needs_offset?(args[:sort])
+
+ result
+end
+```
+
### Keyset pagination
The keyset pagination implementation is a subclass of `GraphQL::Pagination::ActiveRecordRelationConnection`,
@@ -225,7 +239,7 @@ instead of an `ActiveRecord::Relation`:
if non_stable_cursor_sort?(args[:sort])
# Certain complex sorts are not supported by the stable cursor pagination yet.
# In these cases, we use offset pagination, so we return the correct connection.
- Gitlab::Graphql::Pagination::OffsetActiveRecordRelationConnection.new(issues)
+ offset_pagination(issues)
else
issues
end
@@ -280,28 +294,30 @@ The shared example requires certain `let` variables and methods to be set up:
```ruby
describe 'sorting and pagination' do
- let(:sort_project) { create(:project, :public) }
+ let_it_be(:sort_project) { create(:project, :public) }
let(:data_path) { [:project, :issues] }
- def pagination_query(params, page_info)
- graphql_query_for(
- 'project',
- { 'fullPath' => sort_project.full_path },
- query_graphql_field('issues', params, "#{page_info} edges { node { id } }")
+ def pagination_query(params)
+ graphql_query_for(:project, { full_path: sort_project.full_path },
+ query_nodes(:issues, :id, include_pagination_info: true, args: params)
)
end
- def pagination_results_data(data)
- data.map { |issue| issue.dig('node', 'iid').to_i }
+ def pagination_results_data(nodes)
+ nodes.map { |issue| issue['iid'].to_i }
end
context 'when sorting by weight' do
- ...
+ let_it_be(:issues) { make_some_issues_with_weights }
+
context 'when ascending' do
+ let(:ordered_issues) { issues.sort_by(&:weight) }
+
it_behaves_like 'sorted paginated query' do
- let(:sort_param) { 'WEIGHT_ASC' }
+ let(:sort_param) { :WEIGHT_ASC }
let(:first_param) { 2 }
- let(:expected_results) { [weight_issue3.iid, weight_issue5.iid, weight_issue1.iid, weight_issue4.iid, weight_issue2.iid] }
+ let(:expected_results) { ordered_issues.map(&:iid) }
end
end
+ end
```
diff --git a/doc/development/hash_indexes.md b/doc/development/hash_indexes.md
index cc0b0b3a736..881369e429b 100644
--- a/doc/development/hash_indexes.md
+++ b/doc/development/hash_indexes.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Hash Indexes
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index 65a1d83a8fc..90355e1cccb 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Internationalization for GitLab
@@ -10,10 +10,10 @@ info: To determine the technical writer assigned to the Stage/Group associated w
For working with internationalization (i18n),
[GNU gettext](https://www.gnu.org/software/gettext/) is used given it's the most
-used tool for this task and there are a lot of applications that will help us to
+used tool for this task and there are a lot of applications that help us
work with it.
-TIP: **Tip:**
+NOTE:
All `rake` commands described on this page must be run on a GitLab instance, usually GDK.
## Setting up GitLab Development Kit (GDK)
@@ -376,7 +376,7 @@ Namespaces should be PascalCase.
s_('OpenedNDaysAgo|Opened')
```
- In case the translation is not found it will return `Opened`.
+ In case the translation is not found, it returns `Opened`.
- In JavaScript:
@@ -417,12 +417,12 @@ To include formatting in the translated string, we can do the following:
See the section on [interpolation](#interpolation).
-When [this translation helper issue](https://gitlab.com/gitlab-org/gitlab/-/issues/217935) is complete, we'll update the
+When [this translation helper issue](https://gitlab.com/gitlab-org/gitlab/-/issues/217935) is complete, we plan to update the
process of including formatting in translated strings.
#### Including Angle Brackets
-If a string contains angles brackets (`<`/`>`) that are not used for HTML, it will still be flagged by the
+If a string contains angle brackets (`<`/`>`) that are not used for HTML, it is still flagged by the
`rake gettext:lint` linter.
To avoid this error, use the applicable HTML entity code (`&lt;` or `&gt;`) instead:
@@ -473,7 +473,7 @@ This makes use of [`Intl.DateTimeFormat`](https://developer.mozilla.org/en-US/do
1. **Through the `l` helper**, i.e. `l(active_session.created_at, format: :short)`. We have some predefined formats for
[dates](https://gitlab.com/gitlab-org/gitlab/blob/4ab54c2233e91f60a80e5b6fa2181e6899fdcc3e/config/locales/en.yml#L54) and [times](https://gitlab.com/gitlab-org/gitlab/blob/4ab54c2233e91f60a80e5b6fa2181e6899fdcc3e/config/locales/en.yml#L262).
If you need to add a new format, because other parts of the code could benefit from it,
- you'll need to add it to [en.yml](https://gitlab.com/gitlab-org/gitlab/blob/master/config/locales/en.yml) file.
+ you can add it to the [en.yml](https://gitlab.com/gitlab-org/gitlab/blob/master/config/locales/en.yml) file.
1. **Through `strftime`**, i.e. `milestone.start_date.strftime('%b %-d')`. We use `strftime` in case none of the formats
defined on [en.yml](https://gitlab.com/gitlab-org/gitlab/blob/master/config/locales/en.yml) matches the date/time
specifications we need, and if there is no need to add it as a new format because it's very particular (i.e. it's only used in a single view).
@@ -504,7 +504,7 @@ Examples:
- Mappings for a dropdown list
- Error messages
-To store these kinds of data, using a constant seems like the best choice, however this won't work for translations.
+To store these kinds of data, using a constant seems like the best choice. However, this doesn't work for translations.
Bad, avoid it:
@@ -518,7 +518,7 @@ class MyPresenter
end
```
-The translation method (`_`) will be called when the class is loaded for the first time and translates the text to the default locale. Regardless of what's the user's locale, these values will not be translated again.
+The translation method (`_`) is called when the class is loaded for the first time and translates the text to the default locale. Regardless of the user's locale, these values are not translated a second time.
A similar thing happens when using class methods with memoization.
@@ -536,7 +536,7 @@ class MyModel
end
```
-This method will memoize the translations using the locale of the user, who first "called" this method.
+This method memoizes the translations using the locale of the user who first "called" this method.
To avoid these problems, keep the translations dynamic.
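A minimal sketch of the dynamic approach (the class and keys are illustrative): wrap the data in a method so `_` is evaluated on every call, using the caller's locale.

```ruby
class MyPresenter
  # Evaluated at call time, so translations use the current user's locale.
  def self.filter_options
    {
      'opened' => _('Open'),
      'closed' => _('Closed')
    }
  end
end
```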
@@ -700,9 +700,9 @@ Now that the new content is marked for translation, we need to update
bin/rake gettext:regenerate
```
-This command will update `locale/gitlab.pot` file with the newly externalized
+This command updates the `locale/gitlab.pot` file with the newly externalized
strings and removes any strings that aren't used anymore. You should check this
-file in. Once the changes are on master, they will be picked up by
+file in. Once the changes are on master, they are picked up by
[CrowdIn](https://translate.gitlab.com) and presented for
translation.
@@ -719,7 +719,7 @@ running on CI as part of the `static-analysis` job.
To lint the adjustments in PO files locally you can run `rake gettext:lint`.
-The linter will take the following into account:
+The linter takes the following into account:
- Valid PO-file syntax
- Variable usage
@@ -756,9 +756,9 @@ aren't in the message with ID `1 pipeline`.
## Adding a new language
-NOTE: **Note:**
+NOTE:
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/221012) in GitLab 13.3:
-Languages with less than 2% of translations won't be available in the UI.
+Languages with less than 2% of translations are not available in the UI.
Let's suppose you want to add translations for a new language, let's say French.
diff --git a/doc/development/i18n/index.md b/doc/development/i18n/index.md
index 13f2b43b788..b0e34052d2a 100644
--- a/doc/development/i18n/index.md
+++ b/doc/development/i18n/index.md
@@ -1,21 +1,21 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Translate GitLab to your language
-The text in GitLab's user interface is in American English by default.
+The text in the GitLab user interface is in American English by default.
Each string can be translated to other languages.
As each string is translated, it is added to the language's translation file,
-and will be available in future releases of GitLab.
+and is made available in future releases of GitLab.
Contributions to translations are always needed.
Many strings are not yet available for translation because they have not been externalized.
Helping externalize strings benefits all languages.
Some translations are incomplete or inconsistent.
-Translating strings will help complete and improve each language.
+Translating strings helps complete and improve each language.
## How to contribute
@@ -37,7 +37,7 @@ See [Externalization for GitLab](externalization.md).
The translation process is managed at <https://translate.gitlab.com>
using [CrowdIn](https://crowdin.com/).
-You will need to create an account before you can submit translations.
+You need to create an account before you can submit translations.
Once you are signed in, select the language you wish to contribute translations to.
Voting for translations is also valuable, helping to confirm good and flag inaccurate translations.
@@ -48,7 +48,7 @@ See [Translation guidelines](translation.md).
Proofreading helps ensure the accuracy and consistency of translations. All
translations are proofread before being accepted. If a translation requires
-changes, you will be notified with a comment explaining why.
+changes, you are notified with a comment explaining why.
See [Proofreading Translations](proofreader.md) for more information on who's
able to proofread and instructions on becoming a proofreader yourself.
diff --git a/doc/development/i18n/merging_translations.md b/doc/development/i18n/merging_translations.md
index c4a709f84d3..582c1428bdd 100644
--- a/doc/development/i18n/merging_translations.md
+++ b/doc/development/i18n/merging_translations.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Merging translations from CrowdIn
@@ -27,8 +27,8 @@ If there are validation errors, the easiest solution is to disapprove
the offending string in CrowdIn, leaving a comment with what is
required to fix the offense. There is an
[issue](https://gitlab.com/gitlab-org/gitlab/-/issues/23256)
-suggesting to automate this process. Disapproving will exclude the
-invalid translation, the merge request will be updated within a few
+suggesting to automate this process. Disapproving excludes the
+invalid translation, and the merge request is then updated within a few
minutes.
If the translation has failed validation due to angle brackets `<` or `>`
@@ -52,7 +52,7 @@ We are discussing [automating this entire process](https://gitlab.com/gitlab-org
## Recreate the merge request
CrowdIn creates a new merge request as soon as the old one is closed
-or merged. But it won't recreate the `master-i18n` branch every
+or merged. But it does not recreate the `master-i18n` branch every
time. To force CrowdIn to recreate the branch, close any [open merge
request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests?scope=all&utf8=%E2%9C%93&state=opened&author_username=gitlab-crowdin-bot)
and delete the
@@ -63,7 +63,7 @@ have been fixed on master.
## Recreate the GitLab integration in CrowdIn
-NOTE: **Note:**
+NOTE:
These instructions work only for GitLab Team Members.
If for some reason the GitLab integration in CrowdIn does not exist, it can be
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index 56a4f835b3f..a15eb4c3bc2 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Proofread Translations
@@ -147,9 +147,9 @@ of contributing translations to the GitLab project.
In the merge request description, include links to any projects you have
previously translated.
-1. Your request to become a proofreader will be considered on the merits of
+1. Your request to become a proofreader is considered on the merits of
your previous translations by [GitLab team members](https://about.gitlab.com/company/team/)
or [Core team members](https://about.gitlab.com/community/core-team/) who are fluent in
the language or current proofreaders.
- When a request is made for the first proofreader for a language and there are no [GitLab team members](https://about.gitlab.com/company/team/)
- or [Core team members](https://about.gitlab.com/community/core-team/) who speak the language, we will request links to previous translation work in other communities or projects.
+ or [Core team members](https://about.gitlab.com/community/core-team/) who speak the language, we request links to previous translation work in other communities or projects.
diff --git a/doc/development/i18n/translation.md b/doc/development/i18n/translation.md
index 128bacfda97..25dc854d2a3 100644
--- a/doc/development/i18n/translation.md
+++ b/doc/development/i18n/translation.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Translating GitLab
@@ -23,7 +23,7 @@ You may create a new account or use any of their supported sign in services.
GitLab is being translated into many languages.
1. Select the language you would like to contribute translations to by clicking the flag
-1. You will see a list of files and folders.
+1. Next, you can view a list of files and folders.
Click `gitlab.pot` to open the translation editor.
### Translation Editor
@@ -34,7 +34,7 @@ The online translation editor is the easiest way to contribute translations.
1. Strings for translation are listed in the left panel
1. Translations are entered into the central panel.
- Multiple translations will be required for strings that contains plurals.
+ Multiple translations are required for strings that contain plurals.
The string to be translated is shown above with glossary terms highlighted.
If the string to be translated is not clear, you can 'Request Context'
@@ -79,7 +79,7 @@ determining a suitable level of formality.
### Inclusive language
-[Diversity](https://about.gitlab.com/handbook/values/#diversity) is one of GitLab's values.
+[Diversity](https://about.gitlab.com/handbook/values/#diversity) is a GitLab value.
We ask you to avoid translations which exclude people based on their gender or
ethnicity.
In languages which distinguish between a male and female form, use both or
@@ -100,7 +100,7 @@ To propose additions to the glossary please
### Inclusive language in French
<!-- vale gitlab.Spelling = NO -->
-In French, the "Γ©criture inclusive" is now over (see on [Legifrance](https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000036068906&categorieLien=id)).
+In French, the "Γ©criture inclusive" is now over (see on [Legifrance](https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000036068906/)).
So, to include both genders, write β€œUtilisateurs et utilisatrices” instead of β€œUtilisateurΒ·riceΒ·s”.
When space is missing, the male gender should be used alone.
<!-- vale gitlab.Spelling = YES -->
diff --git a/doc/development/i18n_guide.md b/doc/development/i18n_guide.md
index e588b47e203..ddc91f9308e 100644
--- a/doc/development/i18n_guide.md
+++ b/doc/development/i18n_guide.md
@@ -3,3 +3,6 @@ redirect_to: 'i18n/index.md'
---
This document was moved to [another location](i18n/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/image_scaling.md b/doc/development/image_scaling.md
index 05f44209bc4..d447b6baf57 100644
--- a/doc/development/image_scaling.md
+++ b/doc/development/image_scaling.md
@@ -1,12 +1,12 @@
---
stage: Enablement
group: Memory
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Image scaling guide
-This section contains a brief overview of GitLab's image scaler and how to work with it.
+This section contains a brief overview of the GitLab image scaler and how to work with it.
For a general introduction to the history of image scaling at GitLab, you might be interested in
[this Unfiltered blog post](https://about.gitlab.com/blog/2020/11/02/scaling-down-how-we-prototyped-an-image-scaler-at-gitlab/).
diff --git a/doc/development/img/bulk_imports_overview_v13_7.png b/doc/development/img/bulk_imports_overview_v13_7.png
new file mode 100644
index 00000000000..25c8685c390
--- /dev/null
+++ b/doc/development/img/bulk_imports_overview_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_bad_index_v13_7.png b/doc/development/img/each_batch_users_table_bad_index_v13_7.png
new file mode 100644
index 00000000000..dad3f37566a
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_bad_index_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_filter_v13_7.png b/doc/development/img/each_batch_users_table_filter_v13_7.png
new file mode 100644
index 00000000000..79fa1cdaf55
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_filter_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_filtered_index_v13_7.png b/doc/development/img/each_batch_users_table_filtered_index_v13_7.png
new file mode 100644
index 00000000000..4fbe186ef28
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_filtered_index_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_good_index_v13_7.png b/doc/development/img/each_batch_users_table_good_index_v13_7.png
new file mode 100644
index 00000000000..cae01d5bee8
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_good_index_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_iteration_1_v13_7.png b/doc/development/img/each_batch_users_table_iteration_1_v13_7.png
new file mode 100644
index 00000000000..3dac195e8db
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_iteration_1_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_iteration_2_v13_7.png b/doc/development/img/each_batch_users_table_iteration_2_v13_7.png
new file mode 100644
index 00000000000..c0cc82f14c6
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_iteration_2_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_iteration_3_v13_7.png b/doc/development/img/each_batch_users_table_iteration_3_v13_7.png
new file mode 100644
index 00000000000..c032924d3d7
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_iteration_3_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_iteration_4_v13_7.png b/doc/development/img/each_batch_users_table_iteration_4_v13_7.png
new file mode 100644
index 00000000000..9ebd931d194
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_iteration_4_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_iteration_5_v13_7.png b/doc/development/img/each_batch_users_table_iteration_5_v13_7.png
new file mode 100644
index 00000000000..3a5baeadaaf
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_iteration_5_v13_7.png
Binary files differ
diff --git a/doc/development/img/each_batch_users_table_v13_7.png b/doc/development/img/each_batch_users_table_v13_7.png
new file mode 100644
index 00000000000..2e8b3fdff80
--- /dev/null
+++ b/doc/development/img/each_batch_users_table_v13_7.png
Binary files differ
diff --git a/doc/development/import_export.md b/doc/development/import_export.md
index a73144abce6..fd795880d2d 100644
--- a/doc/development/import_export.md
+++ b/doc/development/import_export.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Import
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Import/Export development documentation
@@ -42,7 +42,7 @@ SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS = 3000000
SIDEKIQ_MEMORY_KILLER_GRACE_TIME = 900
```
-An import status `started`, and the following Sidekiq logs will signal a memory issue:
+An import status `started`, and the following Sidekiq logs signal a memory issue:
```shell
WARN: Work still in progress <struct with JID>
@@ -218,7 +218,7 @@ for compatibility when importing and exporting projects.
### When to bump the version up
-We will have to bump the version if we rename model/columns or perform any format
+We must bump the version if we rename models or columns, or perform any format
modifications in the JSON structure or the file structure of the archive file.
We do not need to bump the version up in any of the following cases:
@@ -227,7 +227,7 @@ We do not need to bump the version up in any of the following cases:
- Remove a column or model (unless there is a DB constraint)
- Export new things (such as a new type of upload)
-Every time we bump the version, the integration specs will fail and can be fixed with:
+Every time we bump the version, the integration specs fail and can be fixed with:
```shell
bundle exec rake gitlab:import_export:bump_version
@@ -355,7 +355,7 @@ The tools to generate the NDJSON tree from the human-readable JSON files live in
**Please use `legacy-project-json-to-ndjson.sh` to generate the NDJSON tree.**
-The NDJSON tree will look like this:
+The NDJSON tree looks like this:
```shell
tree
@@ -389,7 +389,7 @@ tree
**Please use `legacy-group-json-to-ndjson.rb` to generate the NDJSON tree.**
-The NDJSON tree will look like this:
+The NDJSON tree looks like this:
```shell
tree
@@ -413,5 +413,5 @@ tree
└── 4352.json
```
-CAUTION: **Caution:**
+WARNING:
When updating these fixtures, please ensure you update both `json` files and `tree` folder, as the tests apply to both.
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index 8b27b7d28a5..dfe6153ad45 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Test Import Project
@@ -23,13 +23,13 @@ The first option is to simply [import the Project tarball file via the GitLab UI
It should take up to 15 minutes for the project to fully import. You can head to the project's main page for the current status.
-This method ignores all the errors silently (including the ones related to `GITALY_DISABLE_REQUEST_LIMITS`) and is used by GitLab's users. For development and testing, check the other methods below.
+This method ignores all the errors silently (including the ones related to `GITALY_DISABLE_REQUEST_LIMITS`) and is used by GitLab users. For development and testing, check the other methods below.
### Importing via the `import-project` script
A convenient script, [`bin/import-project`](https://gitlab.com/gitlab-org/quality/performance/blob/master/bin/import-project), is provided with [performance](https://gitlab.com/gitlab-org/quality/performance) project to import the Project tarball into a GitLab environment via API from the terminal.
-Note that to use the script, it will require some preparation if you haven't done so already:
+Note that the script requires some preparation if you haven't done so already:
1. First, set up [`Ruby`](https://www.ruby-lang.org/en/documentation/installation/) and [`Ruby Bundler`](https://bundler.io) if they aren't already available on the machine.
1. Next, install the required Ruby Gems via Bundler with `bundle install`.
@@ -40,7 +40,7 @@ For details how to use `bin/import-project`, run:
bin/import-project --help
```
-The process should take up to 15 minutes for the project to import fully. The script will keep checking periodically for the status and exit once import has completed.
+The process should take up to 15 minutes for the project to import fully. The script checks the status periodically and exits after the import has completed.
### Importing via GitHub
@@ -49,7 +49,7 @@ There is also an option to [import the project via GitHub](../user/project/impor
1. Create the group `qa-perf-testing`
1. Import the GitLab FOSS repository that's [mirrored on GitHub](https://github.com/gitlabhq/gitlabhq) into the group via the UI.
-This method will take longer to import than the other methods and will depend on several factors. It's recommended to use the other methods.
+This method takes longer to import than the other methods and depends on several factors. It's recommended to use the other methods.
### Importing via a Rake task
@@ -94,7 +94,7 @@ The `namespace_path` does not exist.
For example, one of the groups or subgroups is mistyped or missing
or you've specified the project name in the path.
-The task will only create the project.
+The task only creates the project.
If you want to import it to a new group or subgroup then create it first.
##### `Exception: No such file or directory @ rb_sysopen - (filename)`
@@ -118,8 +118,8 @@ with '-', end in '.git' or end in '.atom'
The project name specified in `project_path` is not valid for one of the specified reasons.
-Only put the project name in `project_path`. If, for example, you put a path of subgroups in there
-it will fail with this error as `/` is not a valid character in a project name.
+Only put the project name in `project_path`. For example, if you provide a path of subgroups
+it fails with this error as `/` is not a valid character in a project name.
##### `Name has already been taken and Path has already been taken`
@@ -190,7 +190,7 @@ For Performance testing, we should:
- Count the number of executed SQL queries during the restore.
- Observe the number of GC cycles happening.
-You can use this snippet: `https://gitlab.com/gitlab-org/gitlab/snippets/1924954` (must be logged in), which will restore the project, and measure the execution time of `Project::TreeRestorer`, number of SQL queries and number of GC cycles happening.
+You can use this snippet: `https://gitlab.com/gitlab-org/gitlab/snippets/1924954` (must be logged in), which restores the project and measures the execution time of `Project::TreeRestorer`, the number of SQL queries, and the number of GC cycles.
You can execute the script from the `gdk/gitlab` directory like this:
@@ -200,7 +200,7 @@ bundle exec rails r /path_to_sript/script.rb project_name /path_to_extracted_pr
## Troubleshooting
-In this section we'll detail any known issues we've seen when trying to import a project and how to manage them.
+This section details known issues we've seen when trying to import a project and how to manage them.
### Gitaly calls error when importing
diff --git a/doc/development/insert_into_tables_in_batches.md b/doc/development/insert_into_tables_in_batches.md
index 5fe37a95b59..cfa0c862471 100644
--- a/doc/development/insert_into_tables_in_batches.md
+++ b/doc/development/insert_into_tables_in_batches.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: "Sometimes it is necessary to store large amounts of records at once, which can be inefficient
when iterating collections and performing individual `save`s. With the arrival of `insert_all`
in Rails 6, which operates at the row level (that is, using `Hash`es), GitLab has added a set
@@ -101,7 +101,7 @@ performance impact this might have on your code. There is a trade-off between th
### Handling duplicate records
-NOTE: **Note:**
+NOTE:
This parameter applies only to `bulk_insert!`. If you intend to update existing
records, use `bulk_upsert!` instead.
diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md
index bdbcd52eb61..8fb7f29c86c 100644
--- a/doc/development/instrumentation.md
+++ b/doc/development/instrumentation.md
@@ -1,7 +1,7 @@
---
stage: Monitor
group: Health
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Instrumenting Ruby code
@@ -11,7 +11,7 @@ blocks of Ruby code. Method instrumentation is the primary form of
instrumentation with block-based instrumentation only being used when we want to
drill down to specific regions of code within a method.
-Please refer to [Product Analytics](product_analytics/index.md) if you are tracking product usage patterns.
+Please refer to [Product Analytics](https://about.gitlab.com/handbook/product/product-analytics-guide/) if you are tracking product usage patterns.
## Instrumenting Methods
@@ -19,13 +19,14 @@ Instrumenting methods is done by using the `Gitlab::Metrics::Instrumentation`
module. This module offers a few different methods that can be used to
instrument code:
-- `instrument_method`: instruments a single class method.
-- `instrument_instance_method`: instruments a single instance method.
-- `instrument_class_hierarchy`: given a Class this method will recursively
- instrument all sub-classes (both class and instance methods).
-- `instrument_methods`: instruments all public and private class methods of a Module.
-- `instrument_instance_methods`: instruments all public and private instance methods of a
+- `instrument_method`: Instruments a single class method.
+- `instrument_instance_method`: Instruments a single instance method.
+- `instrument_class_hierarchy`: Given a Class, this method recursively
+ instruments all sub-classes (both class and instance methods).
+- `instrument_methods`: Instruments all public and private class methods of a
Module.
+- `instrument_instance_methods`: Instruments all public and private instance
+ methods of a Module.
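+
+For illustration, instrumenting a single class method with the full namespace might look like
+the following sketch (`Foo` and `bar` are hypothetical names):
+
+```ruby
+Gitlab::Metrics::Instrumentation.instrument_method(Foo, :bar)
+```
+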
To remove the need for typing the full `Gitlab::Metrics::Instrumentation`
namespace you can use the `configure` class method. This method simply yields
@@ -91,7 +92,7 @@ Ruby code. In case of the above snippet you'd run the following:
- `$ Banzai::Renderer.render`
-This will print out something along the lines of:
+This prints a result similar to:
```plaintext
From: /path/to/your/gitlab/lib/gitlab/metrics/instrumentation.rb @ line 148:
@@ -131,7 +132,7 @@ Three values are measured for a block:
Both the real and CPU timings are measured in milliseconds.
-Multiple calls to the same block will result in the final values being the sum
+Multiple calls to the same block result in the final values being the sum
of all individual values. Take this code for example:
```ruby
@@ -142,7 +143,7 @@ of all individual values. Take this code for example:
end
```
-Here the final value of `sleep_real_time` will be `3`, _not_ `1`.
+Here, the final value of `sleep_real_time` is `3`, not `1`.
## Tracking Custom Events
diff --git a/doc/development/integrations/codesandbox.md b/doc/development/integrations/codesandbox.md
new file mode 100644
index 00000000000..1641f4656a0
--- /dev/null
+++ b/doc/development/integrations/codesandbox.md
@@ -0,0 +1,140 @@
+# Set up local Codesandbox development environment
+
+This guide walks through setting up a local [Codesandbox repository](https://github.com/codesandbox/codesandbox-client) and integrating it with a local GitLab instance. Codesandbox
+is used to power the Web IDE's [Live Preview feature](../../user/project/web_ide/index.md#live-preview). Having a local Codesandbox setup is useful for debugging upstream issues or
+creating upstream contributions like [this one](https://github.com/codesandbox/codesandbox-client/pull/5137).
+
+## Initial setup
+
+Before using Codesandbox with your local GitLab instance, you must:
+
+1. Enable HTTPS on your GDK. Codesandbox uses Service Workers that require `https`.
+ Follow the GDK [NGINX configuration instructions](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/doc/howto/nginx.md) to enable HTTPS for GDK.
+1. Clone the [`codesandbox-client` project](https://github.com/codesandbox/codesandbox-client)
+ locally. If you plan on contributing upstream, you might want to fork and clone first.
+1. (Optional) Use correct `python` and `nodejs` versions. Otherwise, `yarn` may fail to
+ install or build some packages. If you're using `asdf` you can run the following commands:
+
+ ```shell
+ asdf local nodejs 10.14.2
+ asdf local python 2.7.18
+ ```
+
+1. Run the following commands in the `codesandbox-client` project checkout:
+
+ ```shell
+ # This might be necessary for the `prepublishOnly` job that is run later
+ yarn global add lerna
+
+ # Install packages
+ yarn
+ ```
+
+ You can run `yarn build:clean` to clean up the build assets.
+
+## Use local GitLab instance with local Codesandbox
+
+GitLab integrates with two parts of Codesandbox:
+
+- An NPM package called `smooshpack` (called `sandpack` in the `codesandbox-client` project).
+ This exposes an entrypoint for us to kick off Codesandbox's bundler.
+- A server that houses Codesandbox assets for bundling and previewing. This is hosted
+ on a separate server for security.
+
+Each time you want to run GitLab and Codesandbox together, you need to perform the
+steps in the following sections.
+
+### Use local `smooshpack` for GitLab
+
+GitLab usually satisfies its `smooshpack` dependency with a remote module, but we want
+to use a locally-built module. To build and use a local `smooshpack` module:
+
+1. In the `codesandbox-client` project directory, run:
+
+ ```shell
+ cd standalone-packages/sandpack
+ yarn link
+
+ # (Optional) you might want to start a development build
+ yarn run start
+ ```
+
+ Now, in the GitLab project, you can run `yarn link "smooshpack"`. `yarn` looks
+ for `smooshpack` **on disk** as opposed to the one hosted remotely.
+
+1. In the `gitlab` project directory, run:
+
+ ```shell
+ # Remove and reinstall node_modules just to be safe
+ rm -rf node_modules
+ yarn install
+
+ # Use the "smooshpack" package on disk
+ yarn link "smooshpack"
+ ```
+
+### Fix possible GDK webpack problem
+
+`webpack` in GDK can fail to find packages inside a linked package. This step can help
+you avoid `webpack` breaking with messages saying that it can't resolve packages from
+`smooshpack/dist/sandpack.es5.js`.
+
+In the `codesandbox-client` project directory, run:
+
+```shell
+cd standalone-packages
+
+mkdir node_modules
+ln -s $PATH_TO_LOCAL_GITLAB/node_modules/core-js ./node_modules/core-js
+```
+
+### Start building codesandbox app assets
+
+In the `codesandbox-client` project directory:
+
+```shell
+cd packages/app
+
+yarn start:sandpack-sandbox
+```
+
+### Create HTTPS proxy for Codesandbox `sandpack` assets
+
+Because we need `https`, we must create a proxy to the webpack server. We can use
+[`http-server`](https://www.npmjs.com/package/http-server), which can do this proxying
+out of the box:
+
+```shell
+npx http-server --proxy http://localhost:3000 -S -C $PATH_TO_CERT_PEM -K $PATH_TO_KEY_PEM -p 8044 -d false
+```
+
+### Update `bundler_url` setting in GitLab
+
+We need to update our `application_setting_implementation.rb` to point to the server that hosts the
+Codesandbox `sandpack` assets. For instance, if these assets are hosted by a server at `https://sandpack.local:8044`:
+
+```patch
+diff --git a/app/models/application_setting_implementation.rb b/app/models/application_setting_implementation.rb
+index 6eed627b502..1824669e881 100644
+--- a/app/models/application_setting_implementation.rb
++++ b/app/models/application_setting_implementation.rb
+@@ -391,7 +391,7 @@ def static_objects_external_storage_enabled?
+ # This will eventually be configurable
+ # https://gitlab.com/gitlab-org/gitlab/issues/208161
+ def web_ide_clientside_preview_bundler_url
+- 'https://sandbox-prod.gitlab-static.net'
++ 'https://sandpack.local:8044'
+ end
+
+ private
+
+```
+
+NOTE:
+You can apply this patch by copying it to your clipboard and running `pbpaste | git apply`.
+
+You might want to restart the GitLab Rails server after making this change:
+
+```shell
+gdk restart rails-web
+```
diff --git a/doc/development/integrations/img/copy_cookies.png b/doc/development/integrations/img/copy_cookies.png
new file mode 100644
index 00000000000..21575987173
--- /dev/null
+++ b/doc/development/integrations/img/copy_cookies.png
Binary files differ
diff --git a/doc/development/integrations/img/copy_curl.png b/doc/development/integrations/img/copy_curl.png
new file mode 100644
index 00000000000..9fa871efbd5
--- /dev/null
+++ b/doc/development/integrations/img/copy_curl.png
Binary files differ
diff --git a/doc/development/integrations/example_vuln.png b/doc/development/integrations/img/example_vuln.png
index bbfeb3c805f..bbfeb3c805f 100644
--- a/doc/development/integrations/example_vuln.png
+++ b/doc/development/integrations/img/example_vuln.png
Binary files differ
diff --git a/doc/development/integrations/jenkins.md b/doc/development/integrations/jenkins.md
index d81dee94f17..c87b15e192a 100644
--- a/doc/development/integrations/jenkins.md
+++ b/doc/development/integrations/jenkins.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Ecosystem
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# How to run Jenkins in development environment (on macOS) **(STARTER)**
@@ -56,7 +56,8 @@ For more details, see [GitLab documentation about Jenkins CI](../../integration/
## Configure Jenkins Project
-Set up the Jenkins project you're going to run your build on. A **Freestyle** project is the easiest option because the Jenkins plugin will update the build status on GitLab. In a **Pipeline** project, updating the status on GitLab needs to be configured in a script.
+Set up the Jenkins project to run your build on. A **Freestyle** project is the easiest
+option because the Jenkins plugin updates the build status on GitLab. In a **Pipeline** project, updating the status on GitLab needs to be configured in a script.
1. On your Jenkins instance, go to **New Item**.
1. Pick a name, choose **Freestyle** or **Pipeline** and click **ok**.
@@ -97,4 +98,5 @@ To activate the Jenkins service you must have a Starter subscription or higher.
## Test your setup
-Make a change in your repository and open an MR. In your Jenkins project it should have triggered a new build and on your MR, there should be a widget saying **Pipeline #NUMBER passed**. It will also include a link to your Jenkins build.
+Make a change in your repository and open an MR. In your Jenkins project it should have triggered a new build and on your MR, there should be a widget saying **Pipeline #NUMBER passed**.
+It should also include a link to your Jenkins build.
diff --git a/doc/development/integrations/jira_connect.md b/doc/development/integrations/jira_connect.md
index 66a93f8c947..408b0e6068e 100644
--- a/doc/development/integrations/jira_connect.md
+++ b/doc/development/integrations/jira_connect.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Ecosystem
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Set up a development environment
@@ -47,3 +47,37 @@ To install the app in Jira:
You can also click **Getting Started** to open the configuration page rendered from your GitLab instance.
_Note that any changes to the app descriptor requires you to uninstall then reinstall the app._
+
+### Troubleshooting
+
+If the app install failed, you might need to delete `jira_connect_installations` from your database.
+
+1. Open the [database console](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/master/doc/howto/postgresql.md#access-postgresql).
+1. Run `TRUNCATE TABLE jira_connect_installations CASCADE;`.
+
+## Add a namespace
+
+To add a [namespace](../../user/group/index.md#namespaces) to Jira:
+
+1. Make sure you are logged in on your GitLab development instance.
+1. On the GitLab app page in Jira, click **Get started**.
+1. Open your browser's developer tools and navigate to the **Network** tab.
+1. Try to add the namespace in Jira.
+1. If the request fails with 401 "not authorized", copy the request as a cURL command
+ and paste it in your terminal.
+
+ ![Copy request as cURL](img/copy_curl.png)
+
+1. Go to your development instance (usually at: <http://localhost:3000>), open developer
+ tools, navigate to the Network tab and reload the page.
+1. Copy all cookies from the first request.
+
+ ![Copy cookies](img/copy_cookies.png)
+
+1. Append the cookies to the cURL command in your terminal (see the example after this list):
+ `--cookie "<cookies from the request>"`.
+1. Submit the cURL request.
+1. If the response is `{"success":true}`, the namespace was added.
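+
+For illustration, the assembled command might look like the following sketch, where the URL,
+request body, and cookie string are placeholders for the values you copied above:
+
+```shell
+curl --request POST \
+  --header "Content-Type: application/json" \
+  --cookie "<cookies copied from the GitLab request>" \
+  --data '<body copied from the Jira request>' \
+  "<URL copied from the Jira request>"
+```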
diff --git a/doc/development/integrations/secure.md b/doc/development/integrations/secure.md
index 9bb92709d54..fb9d894d203 100644
--- a/doc/development/integrations/secure.md
+++ b/doc/development/integrations/secure.md
@@ -1,7 +1,7 @@
---
stage: Protect
group: Container Security
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Security scanner integration
@@ -82,9 +82,9 @@ mysec_sast:
sast: gl-sast-report.json
```
-Note that `gl-sast-report.json` is an example file path but any other file name can be used. See
+Note that `gl-sast-report.json` is an example file path but any other filename can be used. See
[the Output file section](#output-file) for more details. It's processed as a SAST report because
-it's declared under the `reports:sast` key in the job definition, not because of the file name.
+it's declared under the `reports:sast` key in the job definition, not because of the filename.
### Policies
@@ -207,17 +207,17 @@ given by the `CI_PROJECT_DIR` environment variable.
It is recommended to name the output file after the type of scanning, and to use `gl-` as a prefix.
Since all Secure reports are JSON files, it is recommended to use `.json` as a file extension.
-For instance, a suggested file name for a Dependency Scanning report is `gl-dependency-scanning.json`.
+For instance, a suggested filename for a Dependency Scanning report is `gl-dependency-scanning.json`.
The [`artifacts:reports`](../../ci/pipelines/job_artifacts.md#artifactsreports) keyword
of the job definition must be consistent with the file path where the Security report is written.
For instance, if a Dependency Scanning analyzer writes its report to the CI project directory,
-and if this report file name is `depscan.json`,
+and if this report filename is `depscan.json`,
then `artifacts:reports:dependency_scanning` must be set to `depscan.json`.
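+
+For illustration, a hypothetical Dependency Scanning job writing such a report could declare
+(the job name and `script` are assumptions):
+
+```yaml
+mysec_dependency_scanning:
+  script:
+    - /analyzer run
+  artifacts:
+    reports:
+      dependency_scanning: depscan.json
+```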
### Exit code
-Following the POSIX exit code standard, the scanner will exit with 0 for success and any number from 1 to 255 for anything else.
+Following the POSIX exit code standard, the scanner exits with 0 for success and any number from 1 to 255 for anything else.
Success also includes the case when vulnerabilities are found.
When executing a scanning job using the [Docker-in-Docker privileged mode](../../user/application_security/sast/index.md#requirements),
@@ -324,7 +324,7 @@ whereas the `message` may repeat the location.
As a visual example, this screenshot highlights where these fields are used when viewing a
vulnerability as part of a pipeline view.
-![Example Vulnerability](example_vuln.png)
+![Example Vulnerability](img/example_vuln.png)
For instance, a `message` for a vulnerability
reported by Dependency Scanning gives information on the vulnerable dependency,
@@ -397,7 +397,9 @@ Not all vulnerabilities have CVEs, and a CVE can be identified multiple times. A
isn't a stable identifier and you shouldn't assume it as such when tracking vulnerabilities.
The maximum number of identifiers for a vulnerability is set as 20. If a vulnerability has more than 20 identifiers,
-the system will save only the first 20 of them.
+the system saves only the first 20 of them. Note that vulnerabilities in the [Pipeline
+Security](../../user/application_security/security_dashboard/#pipeline-security)
+tab do not enforce this limit and all identifiers present in the report artifact are displayed.
### Location
diff --git a/doc/development/integrations/secure_partner_integration.md b/doc/development/integrations/secure_partner_integration.md
index 52d10f0bd3c..80f632639ca 100644
--- a/doc/development/integrations/secure_partner_integration.md
+++ b/doc/development/integrations/secure_partner_integration.md
@@ -1,13 +1,13 @@
---
stage: Secure
group: Static Analysis
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Secure Partner Integration - Onboarding Process
If you want to integrate your product with the [Secure Stage](https://about.gitlab.com/direction/secure/),
-this page will help you understand the developer workflow GitLab intends for
+this page describes the developer workflow GitLab intends for
our users to follow with regards to security results. These should be used as
guidelines so you can build an integration that fits with the workflow GitLab
users are already familiar with.
@@ -19,7 +19,7 @@ integration as well as linking to more detailed resources for how to do so.
## Integration Tiers
-GitLab's security offerings are designed for GitLab Gold and GitLab Ultimate users, and the
+The security offerings in GitLab are designed for GitLab Gold and GitLab Ultimate users, and the
[DevSecOps](https://about.gitlab.com/handbook/use-cases/#4-devsecops-shift-left-security)
use case. All the features are in those tiers. This includes the APIs and standard reporting
framework needed to provide a consistent experience for users to easily bring their preferred
@@ -29,7 +29,7 @@ tiers so that we can provide the most value to our mutual customers.
## What is the GitLab Developer Workflow?
This workflow is how GitLab users interact with our product and expect it to
-function. Understanding how users use GitLab today will help you choose the
+function. Understanding how users use GitLab today helps you choose the
best place to integrate your own product and its results into GitLab.
- Developers want to write code without using a new tool to consume results
@@ -99,9 +99,9 @@ and complete an integration with the Secure stage.
- In the [Security Dashboard](../../user/application_security/security_dashboard/index.md) ([Dashboard data flow](https://gitlab.com/snippets/1910005#project-and-group-dashboards)).
1. Optional: Provide a way to interact with results as Vulnerabilities:
- Users can interact with the findings from your artifact within their workflow. They can dismiss the findings or accept them and create a backlog issue.
- - To automatically create issues without user interaction, use the [issue API](../../api/issues.md). This will be replaced by [Standalone Vulnerabilities](https://gitlab.com/groups/gitlab-org/-/epics/634) in the future.
+ - To automatically create issues without user interaction, use the [issue API](../../api/issues.md).
1. Optional: Provide auto-remediation steps:
- - If you specified `remediations` in your artifact, it is proposed through our [auto-remediation](../../user/application_security/index.md#automatic-remediation-for-vulnerabilities)
+ - If you specified `remediations` in your artifact, it is proposed through our [automatic remediation](../../user/application_security/index.md#automatic-remediation-for-vulnerabilities)
interface.
1. Demo the integration to GitLab:
- After you have tested and are ready to demo your integration please
@@ -112,7 +112,7 @@ and complete an integration with the Secure stage.
to support your go-to-market as appropriate.
- Examples of supported marketing could include being listed on our [Security Partner page](https://about.gitlab.com/partners/#security),
doing an [Unfiltered blog post](https://about.gitlab.com/handbook/marketing/blog/unfiltered/),
- doing a co-branded webinar, or producing a co-branded whitepaper.
+ doing a co-branded webinar, or producing a co-branded white paper.
We have a [video playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KpMqYxJiOLz-uBIr5w-yP4A)
that may be helpful as part of this process. This covers various topics related to integrating your
diff --git a/doc/development/interacting_components.md b/doc/development/interacting_components.md
index 87bbe30d9fd..50751dcc539 100644
--- a/doc/development/interacting_components.md
+++ b/doc/development/interacting_components.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Developing against interacting components or features
diff --git a/doc/development/internal_api.md b/doc/development/internal_api.md
index e25feda356d..4971e4d629d 100644
--- a/doc/development/internal_api.md
+++ b/doc/development/internal_api.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Source Code
-info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers"
+info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments"
type: reference, api
---
@@ -25,7 +25,7 @@ To authenticate using that token, clients read the contents of that
file, and include the token Base64 encoded in a `secret_token` parameter
or in the `Gitlab-Shared-Secret` header.
-NOTE: **Note:**
+NOTE:
The internal API used by GitLab Pages, and GitLab Kubernetes Agent Server (kas) uses JSON Web Token (JWT)
authentication, which is different from GitLab Shell.
@@ -60,7 +60,7 @@ POST /internal/allowed
Example request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2&action=git-upload-pack&protocol=ssh" http://localhost:3001/api/v4/internal/allowed
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2&action=git-upload-pack&protocol=ssh" "http://localhost:3001/api/v4/internal/allowed"
```
Example response:
@@ -108,7 +108,7 @@ information for LFS clients when the repository is accessed over SSH.
Example request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2" http://localhost:3001/api/v4/internal/lfs_authenticate
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2" "http://localhost:3001/api/v4/internal/lfs_authenticate"
```
```json
@@ -141,7 +141,7 @@ GET /internal/authorized_keys
Example request:
```shell
-curl --request GET --header "Gitlab-Shared-Secret: <Base64 encoded secret>""http://localhost:3001/api/v4/internal/authorized_keys?key=<key as passed by OpenSSH>"
+curl --request GET --header "Gitlab-Shared-Secret: <Base64 encoded secret>" "http://localhost:3001/api/v4/internal/authorized_keys?key=<key as passed by OpenSSH>"
```
Example response:
@@ -242,7 +242,7 @@ GET /internal/two_factor_recovery_codes
Example request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "key_id=7" http://localhost:3001/api/v4/internal/two_factor_recovery_codes
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "key_id=7" "http://localhost:3001/api/v4/internal/two_factor_recovery_codes"
```
Example response:
@@ -289,7 +289,7 @@ POST /internal/personal_access_token
Example request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "user_id=29&name=mytokenname&scopes[]=read_user&scopes[]=read_repository&expires_at=2020-07-24" http://localhost:3001/api/v4/internal/personal_access_token
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "user_id=29&name=mytokenname&scopes[]=read_user&scopes[]=read_repository&expires_at=2020-07-24" "http://localhost:3001/api/v4/internal/personal_access_token"
```
Example response:
@@ -323,7 +323,7 @@ POST /internal/pre_receive
Example request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" http://localhost:3001/api/v4/internal/pre_receive
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" "http://localhost:3001/api/v4/internal/pre_receive"
```
Example response:
@@ -355,7 +355,7 @@ POST /internal/post_receive
Example Request:
```shell
-curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" --data "identifier=user-1" --data "changes=0000000000000000000000000000000000000000 fd9e76b9136bdd9fe217061b497745792fe5a5ee gh-pages\n" http://localhost:3001/api/v4/internal/post_receive
+curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" --data "identifier=user-1" --data "changes=0000000000000000000000000000000000000000 fd9e76b9136bdd9fe217061b497745792fe5a5ee gh-pages\n" "http://localhost:3001/api/v4/internal/post_receive"
```
Example response:
@@ -385,7 +385,7 @@ These endpoints are all authenticated using JWT. The JWT secret is stored in a f
specified in `config/gitlab.yml`. By default, the location is in the root of the
GitLab Rails app in a file called `.gitlab_kas_secret`.
-CAUTION: **Caution:**
+WARNING:
The Kubernetes agent is under development and is not recommended for production use.
### Kubernetes agent information
diff --git a/doc/development/internal_users.md b/doc/development/internal_users.md
index 9f1428dce80..35db2016e1c 100644
--- a/doc/development/internal_users.md
+++ b/doc/development/internal_users.md
@@ -11,7 +11,7 @@ info: "See the Technical Writers assigned to Development Guidelines: https://abo
GitLab uses internal users (sometimes referred to as "bots") to perform
actions or functions that cannot be attributed to a regular user.
-These users are created programatically throughout the codebase itself when
+These users are created programmatically throughout the codebase itself when
necessary, and do not count towards a license limit.
They are used when a traditional user account would not be applicable, for
diff --git a/doc/development/issuable-like-models.md b/doc/development/issuable-like-models.md
index 9029886c334..b3be2b1b2e6 100644
--- a/doc/development/issuable-like-models.md
+++ b/doc/development/issuable-like-models.md
@@ -1,7 +1,7 @@
---
stage: Plan
group: Project Management
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Issuable-like Rails models utilities
diff --git a/doc/development/issue_types.md b/doc/development/issue_types.md
index 416aa65b13f..d02ff590352 100644
--- a/doc/development/issue_types.md
+++ b/doc/development/issue_types.md
@@ -1,7 +1,7 @@
---
stage: Plan
group: Project Management
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Issue Types
diff --git a/doc/development/iterating_tables_in_batches.md b/doc/development/iterating_tables_in_batches.md
index 062b755de38..3953e7097dd 100644
--- a/doc/development/iterating_tables_in_batches.md
+++ b/doc/development/iterating_tables_in_batches.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Iterating Tables In Batches
@@ -41,3 +41,334 @@ User Load (0.7ms) SELECT "users"."id" FROM "users" WHERE ("users"."id" >= 4165
The API of this method is similar to `in_batches`, though it doesn't support
all of the arguments that `in_batches` supports. You should always use
`each_batch` _unless_ you have a specific need for `in_batches`.
+
+## Column definition
+
+By default, `EachBatch` uses the primary key of the model for the iteration. This works for most cases; however, in some cases you might want to use a different column for the iteration.
+
+```ruby
+Project.distinct.each_batch(column: :creator_id, of: 10) do |relation|
+ puts User.where(id: relation.select(:creator_id)).map(&:id)
+end
+```
+
+The query above iterates over the project creators and prints them without duplicates.
+
+NOTE:
+If the column is not unique (no unique index definition), calling the `distinct` method on the relation is necessary.
+
+## `EachBatch` in data migrations
+
+When dealing with data migrations, the preferred way to iterate over a large volume of data is to use `EachBatch`.
+
+A special case of data migration is a [background migration](background_migrations.md#scheduling)
+where the actual data modification is executed in a background job. The migration code that determines
+the data ranges (slices) and schedules the background jobs uses `each_batch`.
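+
+For illustration, the scheduling part of such a migration might look like the following sketch.
+`MyDataMigration` is a hypothetical background migration class, and real migrations usually
+define their own lightweight model class instead of using `Issue` directly:
+
+```ruby
+# Slice the issues table into id ranges and schedule one background job per range.
+Issue.each_batch(of: 10_000) do |relation, index|
+  start_id, end_id = relation.pluck(Arel.sql('MIN(id), MAX(id)')).first
+
+  BackgroundMigrationWorker.perform_in(index * 2.minutes, 'MyDataMigration', [start_id, end_id])
+end
+```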
+
+## Efficient usage of `each_batch`
+
+`EachBatch` helps with iterating over large tables. It's important to highlight that `EachBatch` does not magically solve all iteration-related performance problems, and it might not help at all in some scenarios. From the database point of view, correctly configured database indexes are also necessary to make `EachBatch` perform well.
+
+### Example 1: Simple iteration
+
+Let's consider that we want to iterate over the `users` table and print the `User` records to the standard output. The `users` table contains millions of records, thus running a single query to fetch all users is likely to time out.
+
+![Users table overview](img/each_batch_users_table_v13_7.png)
+
+This is a simplified version of the `users` table, which contains several rows. We have a few smaller gaps in the `id` column to make the example a bit more realistic (a few records were already deleted). Currently, there is one index on the `id` field.
+
+Loading all users into memory (avoid):
+
+```ruby
+users = User.all
+
+users.each { |user| puts user.inspect }
+```
+
+Use `each_batch`:
+
+```ruby
+# Note: for this example we picked 5 as the batch size; the default is 1_000
+User.each_batch(of: 5) do |relation|
+ relation.each { |user| puts user.inspect }
+end
+```
+
+#### How does `each_batch` work?
+
+As the first step, it finds the lowest `id` (start `id`) in the table by executing the following database query:
+
+```sql
+SELECT "users"."id" FROM "users" ORDER BY "users"."id" ASC LIMIT 1
+```
+
+![Reading the start id value](img/each_batch_users_table_iteration_1_v13_7.png)
+
+Notice that the query only reads data from the index (`INDEX ONLY SCAN`); the table is not accessed. Database indexes are sorted, so taking out the first item is a very cheap operation.
+
+The next step is to find the next `id` (end `id`), which should respect the batch size configuration. In this example, we used a batch size of 5. `EachBatch` uses the `OFFSET` clause to get a "shifted" `id` value.
+
+```sql
+SELECT "users"."id" FROM "users" WHERE "users"."id" >= 1 ORDER BY "users"."id" ASC LIMIT 1 OFFSET 5
+```
+
+![Reading the end id value](img/each_batch_users_table_iteration_2_v13_7.png)
+
+Again, the query only looks into the index. The `OFFSET 5` takes out the sixth `id` value: this query reads a maximum of six items from the index regardless of the table size or the iteration count.
+
+At this point we know the `id` range for the first batch. Now it's time to construct the query for the `relation` block.
+
+```sql
+SELECT "users".* FROM "users" WHERE "users"."id" >= 1 AND "users"."id" < 302
+```
+
+![Reading the rows from the users table](img/each_batch_users_table_iteration_3_v13_7.png)
+
+Notice the `<` sign. Previously, six items were read from the index, and in this query the last value is "excluded". The query looks at the index to get the location of the five `user` rows on disk and reads the rows from the table. The returned array is processed in Ruby.
+
+The first iteration is done. For the next iteration, the last `id` value from the previous iteration is reused to find the next end `id` value.
+
+```sql
+SELECT "users"."id" FROM "users" WHERE "users"."id" >= 302 ORDER BY "users"."id" ASC LIMIT 1 OFFSET 5
+```
+
+![Reading the second end id value](img/each_batch_users_table_iteration_4_v13_7.png)
+
+Now we can easily construct the `users` query for the second iteration.
+
+```sql
+SELECT "users".* FROM "users" WHERE "users"."id" >= 302 AND "users"."id" < 353
+```
+
+![Reading the rows for the second iteration from the users table](img/each_batch_users_table_iteration_5_v13_7.png)
+
+### Example 2: Iteration with filters
+
+Building on top of the previous example, we want to print users with zero sign-in count. We keep track of the number of sign-ins in the `sign_in_count` column, so we write the following code:
+
+```ruby
+users = User.where(sign_in_count: 0)
+
+users.each_batch(of: 5) do |relation|
+ relation.each { |user| puts user.inspect }
+end
+```
+
+`each_batch` produces the following SQL query for the start `id` value:
+
+```sql
+SELECT "users"."id" FROM "users" WHERE "users"."sign_in_count" = 0 ORDER BY "users"."id" ASC LIMIT 1
+```
+
+Selecting only the `id` column and ordering by `id` "forces" the database to use the index on the `id` column (the primary key index); however, we also have an extra condition on the `sign_in_count` column. The column is not part of the index, so the database needs to look into the actual table to find the first matching row.
+
+![Reading the index with extra filter](img/each_batch_users_table_filter_v13_7.png)
+
+NOTE:
+The number of scanned rows depends on the data distribution in the table.
+
+- Best case scenario: the first user has never logged in. The database reads only one row.
+- Worst case scenario: all users have logged in at least once. The database reads all rows.
+
+In this particular example, the database had to read 10 rows (regardless of our batch size setting) to determine the first `id` value. In a "real-world" application it's hard to predict whether the filtering causes problems or not. In the case of GitLab, verifying the data on a production replica is a good start, but keep in mind that data distribution on GitLab.com can be different from that of self-managed instances.
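+
+One way to check this (a sketch, assuming console access to a database replica) is to look at the query plan with PostgreSQL's `EXPLAIN`:
+
+```sql
+EXPLAIN (ANALYZE, BUFFERS)
+SELECT "users"."id" FROM "users" WHERE "users"."sign_in_count" = 0 ORDER BY "users"."id" ASC LIMIT 1;
+```
+
+The reported row and buffer counts for the index scan show how much of the index (and table) the filter forces the database to read.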
+
+#### Improve filtering with `each_batch`
+
+##### Specialized conditional index
+
+```sql
+CREATE INDEX index_on_users_never_logged_in ON users (id) WHERE sign_in_count = 0
+```
+
+This is how our table and the newly created index look:
+
+![Reading the specialized index](img/each_batch_users_table_filtered_index_v13_7.png)
+
+This index definition covers the conditions on the `id` and `sign_in_count` columns, thus making the `each_batch` queries very effective (similar to the simple iteration example).
+
+It's rare for a user to have never signed in, so we anticipate a small index size. Including only the `id` column in the index definition also helps keep the index size small.
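+
+A sketch of a migration that could create such an index, assuming the `add_concurrent_index` migration helper and the hypothetical index name used above:
+
+```ruby
+# frozen_string_literal: true
+
+class AddIndexOnUsersNeverLoggedIn < ActiveRecord::Migration[6.0]
+  include Gitlab::Database::MigrationHelpers
+
+  DOWNTIME = false
+  INDEX_NAME = 'index_on_users_never_logged_in'
+
+  disable_ddl_transaction!
+
+  def up
+    # Partial index: only rows where sign_in_count = 0 are indexed.
+    add_concurrent_index :users, :id, where: 'sign_in_count = 0', name: INDEX_NAME
+  end
+
+  def down
+    remove_concurrent_index_by_name :users, INDEX_NAME
+  end
+end
+```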
+
+##### Index on columns
+
+Later on, we might want to iterate over the table filtering for different `sign_in_count` values. In those cases, we cannot use the previously suggested conditional index because the `WHERE` condition does not match our new filter (`sign_in_count > 10`).
+
+To address this problem, we have two options:
+
+- Create another, conditional index to cover the new query.
+- Replace the index with a more generalized configuration.
+
+NOTE:
+Having multiple indexes on the same table and on the same columns could be a performance bottleneck when writing data.
+
+Let's consider the following index (avoid):
+
+```sql
+CREATE INDEX index_on_users_never_logged_in ON users (id, sign_in_count)
+```
+
+The index definition starts with the `id` column, which makes the index very inefficient from a data selectivity point of view.
+
+```sql
+SELECT "users"."id" FROM "users" WHERE "users"."sign_in_count" = 0 ORDER BY "users"."id" ASC LIMIT 1
+```
+
+Executing the query above results in an `INDEX ONLY SCAN`. However, the query still needs to iterate over an unknown number of entries in the index before finding the first item where the `sign_in_count` is `0`.
+
+![Reading an ineffective index](img/each_batch_users_table_bad_index_v13_7.png)
+
+We can improve the query significantly by swapping the columns in the index definition (prefer).
+
+```sql
+CREATE INDEX index_on_users_never_logged_in ON users (sign_in_count, id)
+```
+
+![Reading a good index](img/each_batch_users_table_good_index_v13_7.png)
+
+The following index definition does not work well with `each_batch` (avoid).
+
+```sql
+CREATE INDEX index_on_users_never_logged_in ON users (sign_in_count)
+```
+
+Because `each_batch` builds range queries based on the `id` column, this index cannot be used efficiently. The database reads the rows from the table, or uses a bitmap search where the primary key index is also read.
+
+##### "Slow" iteration
+
+Slow iteration means that we use a good index configuration to iterate over the table and apply filtering on the yielded relation.
+
+```ruby
+User.each_batch(of: 5) do |relation|
+  relation.where(sign_in_count: 0).each { |user| puts user.inspect }
+end
+```
+
+The iteration uses the primary key index (on the `id` column), which makes it safe from statement
+timeouts. The filter (`sign_in_count: 0`) is applied on the `relation` where the `id` is already constrained to a range, so the number of rows is limited.
+
+Slow iteration generally takes more time to finish. The iteration count is higher and
+one iteration could yield fewer records than the batch size. Iterations may even yield
+0 records. This is not an optimal solution; however, in some cases (especially when
+dealing with large tables) this is the only viable option.
+
+### Using Subqueries
+
+Using subqueries in your `each_batch` query does not work well in most cases. Consider the following example:
+
+```ruby
+projects = Project.where(creator_id: Issue.where(confidential: true).select(:author_id))
+
+projects.each_batch do |relation|
+ # do something
+end
+```
+
+The iteration uses the `id` column of the `projects` table. The batching does not affect the subquery.
+This means that for each iteration, the subquery is executed by the database. This adds a constant "load"
+on the query, which often results in statement timeouts. We have an unknown number of confidential
+issues; the execution time and the number of accessed database rows depend on the data distribution in the
+`issues` table.
+
+NOTE:
+Using subqueries works only when the subquery returns a small number of rows.
+
+#### Improving Subqueries
+
+When dealing with subqueries, a slow iteration approach could work: the filter on `creator_id` can be part of the generated `relation` object.
+
+```ruby
+projects = Project.all
+
+projects.each_batch do |relation|
+ relation.where(creator_id: Issue.where(confidential: true).select(:author_id))
+end
+```
+
+If the query on the `issues` table itself is not performant enough, a nested loop could be constructed. Try to avoid it when possible.
+
+```ruby
+projects = Project.all
+
+projects.each_batch do |relation|
+ issues = Issue.where(confidential: true)
+
+ issues.each_batch do |issues_relation|
+ relation.where(creator_id: issues_relation.select(:author_id))
+ end
+end
+```
+
+If we know that the `issues` table has many more rows than `projects`, it would make sense to flip the queries, where the `issues` table is batched first.
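+
+A sketch of the flipped iteration (reusing the hypothetical columns from the example above):
+
+```ruby
+issues = Issue.where(confidential: true)
+
+# Batch the larger `issues` table in the outer loop and scope the smaller
+# `projects` table to each issues batch.
+issues.each_batch do |issues_relation|
+  Project.each_batch do |projects_relation|
+    projects_relation.where(creator_id: issues_relation.select(:author_id))
+  end
+end
+```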
+
+### Using `JOIN` and `EXISTS`
+
+When to use `JOIN`:
+
+- When there's a 1:1 or 1:N relationship between the tables where we know that the joined record
+(almost) always exists. This works well for "extension-like" tables:
+ - `projects` - `project_settings`
+ - `users` - `user_details`
+ - `users` - `user_statuses`
+- `LEFT JOIN` works well in this case. Conditions on the joined table need to go to the yielded relation so the iteration is not affected by the data distribution in the joined table.
+
+Example:
+
+```ruby
+users = User.joins("LEFT JOIN personal_access_tokens on personal_access_tokens.user_id = users.id")
+
+users.each_batch do |relation|
+ relation.where("personal_access_tokens.name = 'name'")
+end
+```
+
+`EXISTS` queries should be added only to the inner `relation` of the `each_batch` query:
+
+```ruby
+User.each_batch do |relation|
+ relation.where("EXISTS (SELECT 1 FROM ...")
+end
+```
+
+### Complex queries on the relation object
+
+When the `relation` object has several extra conditions, the execution plans might become "unstable".
+
+Example:
+
+```ruby
+Issue.each_batch do |relation|
+ relation
+ .joins(:metrics)
+ .joins(:merge_requests_closing_issues)
+ .where("id IN (SELECT ...)")
+ .where(confidential: true)
+end
+```
+
+Here, we expect the `relation` query to read `BATCH_SIZE` issue records and then
+filter down the results according to the provided queries. The planner might decide that
+using a bitmap index lookup with the index on the `confidential` column is a better way to
+execute the query. This can cause an unexpectedly high number of rows to be read, and the query
+could time out.
+
+Problem: we know for sure that the relation returns a maximum of `BATCH_SIZE` records; however, the planner does not know this.
+
+A common table expression (CTE) trick can be used to force the range query to execute first:
+
+```ruby
+Issue.each_batch(of: 1000) do |relation|
+ cte = Gitlab::SQL::CTE.new(:batched_relation, relation.limit(1000))
+
+ scope = cte
+ .apply_to(Issue.all)
+ .joins(:metrics)
+ .joins(:merge_requests_closing_issues)
+ .where("id IN (SELECT ...)")
+ .where(confidential: true)
+
+ puts scope.to_a
+end
+```
+
+### `EachBatch` vs `BatchCount`
+
+When adding new counters for usage ping, the preferred way to count records is using the `Gitlab::Database::BatchCount` class. The iteration logic implemented in `BatchCount` has performance characteristics similar to `EachBatch`. Most of the tips and suggestions for improving `EachBatch` mentioned above apply to `BatchCount` as well.
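+
+As an illustration (a sketch, assuming the `batch_count` helper exposed by `Gitlab::Database::BatchCount`), counting the users from the earlier filtering example could look like this:
+
+```ruby
+# Counts users with zero sign-ins in id-based batches, similar to how
+# `each_batch` iterates, without loading the records into memory.
+Gitlab::Database::BatchCount.batch_count(User.where(sign_in_count: 0))
+```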
diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md
index a54798b4c35..a56a0bd48be 100644
--- a/doc/development/kubernetes.md
+++ b/doc/development/kubernetes.md
@@ -1,12 +1,12 @@
---
stage: Configure
group: Configure
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Kubernetes integration - development guidelines
-This document provides various guidelines when developing for GitLab's
+This document provides various guidelines when developing for the GitLab
[Kubernetes integration](../user/project/clusters/index.md).
## Development
diff --git a/doc/development/lfs.md b/doc/development/lfs.md
index 64bc709ff9a..9df1f659654 100644
--- a/doc/development/lfs.md
+++ b/doc/development/lfs.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Source Code
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Git LFS
@@ -9,8 +9,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Deep Dive
In April 2019, Francisco Javier LΓ³pez hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`)
-on GitLab's [Git LFS](../topics/git/lfs/index.md) implementation to share his domain
-specific knowledge with anyone who may work in this part of the code base in the future.
+on the GitLab [Git LFS](../topics/git/lfs/index.md) implementation to share his domain
+specific knowledge with anyone who may work in this part of the codebase in the future.
You can find the [recording on YouTube](https://www.youtube.com/watch?v=Yyxwcksr0Qc),
and the slides on [Google Slides](https://docs.google.com/presentation/d/1E-aw6-z0rYd0346YhIWE7E9A65zISL9iIMAOq2zaw9E/edit)
and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/07a89257a140db067bdfb484aecd35e1/Git_LFS_Deep_Dive__Create_.pdf).
diff --git a/doc/development/licensed_feature_availability.md b/doc/development/licensed_feature_availability.md
index f83a57fe1ca..3a6bdbc973f 100644
--- a/doc/development/licensed_feature_availability.md
+++ b/doc/development/licensed_feature_availability.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Licensed feature availability **(STARTER)**
@@ -30,7 +30,7 @@ project.feature_available?(:feature_symbol)
However, for features such as [Geo](../administration/geo/index.md) and
[Load balancing](../administration/database_load_balancing.md), which cannot be restricted
-to only a subset of projects or namespaces, the check will be made directly in
+to only a subset of projects or namespaces, the check is made directly in
the instance license.
1. Add the feature symbol on `EES_FEATURES`, `EEP_FEATURES` or `EEU_FEATURES` constants in
diff --git a/doc/development/licensing.md b/doc/development/licensing.md
index d1808822ba6..d7c2c764883 100644
--- a/doc/development/licensing.md
+++ b/doc/development/licensing.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab Licensing and Compatibility
@@ -12,16 +12,16 @@ info: To determine the technical writer assigned to the Stage/Group associated w
In order to comply with the terms the libraries we use are licensed under, we have to make sure to check new gems for compatible licenses whenever they're added. To automate this process, we use the [license_finder](https://github.com/pivotal/LicenseFinder) gem by Pivotal. It runs every time a new commit is pushed and verifies that all gems and node modules in the bundle use a license that doesn't conflict with the licensing of either GitLab Community Edition or GitLab Enterprise Edition.
-There are some limitations with the automated testing, however. CSS, JavaScript, or Ruby libraries which are not included by way of Bundler, NPM, or Yarn (for instance those manually copied into our source tree in the `vendor` directory), must be verified manually and independently. Take care whenever one such library is used, as automated tests won't catch problematic licenses from them.
+There are some limitations with the automated testing, however. CSS, JavaScript, or Ruby libraries which are not included by way of Bundler, NPM, or Yarn (for instance those manually copied into our source tree in the `vendor` directory), must be verified manually and independently. Take care whenever one such library is used, as automated tests don't catch problematic licenses from them.
-Some gems may not include their license information in their `gemspec` file, and some node modules may not include their license information in their `package.json` file. These won't be detected by License Finder, and will have to be verified manually.
+Some gems may not include their license information in their `gemspec` file, and some node modules may not include their license information in their `package.json` file. These aren't detected by License Finder, and must be verified manually.
### License Finder commands
-NOTE: **Note:**
+NOTE:
License Finder currently uses GitLab misused terms of `whitelist` and `blacklist`. As a result, the commands below reference those terms. We've created an [issue on their project](https://github.com/pivotal/LicenseFinder/issues/745) to propose that they rename their commands.
-There are a few basic commands License Finder provides that you'll need in order to manage license detection.
+There are a few basic commands License Finder provides that you need in order to manage license detection.
To verify that the checks are passing, and/or to see what dependencies are causing the checks to fail:
diff --git a/doc/development/logging.md b/doc/development/logging.md
index 14812978f2d..07ec9d53145 100644
--- a/doc/development/logging.md
+++ b/doc/development/logging.md
@@ -1,7 +1,7 @@
---
stage: Monitor
group: Health
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab Developers Guide to Logging
@@ -12,7 +12,7 @@ administrators and GitLab team members to diagnose problems in the field.
## Don't use `Rails.logger`
Currently `Rails.logger` calls all get saved into `production.log`, which contains
-a mix of Rails' logs and other calls developers have inserted in the code base.
+a mix of Rails' logs and other calls developers have inserted in the codebase.
For example:
```plaintext
@@ -35,14 +35,14 @@ Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)
These logs suffer from a number of problems:
-1. They often lack timestamps or other contextual information (e.g. project ID, user)
+1. They often lack timestamps or other contextual information (for example, project ID or user)
1. They may span multiple lines, which make them hard to find via Elasticsearch.
1. They lack a common structure, which make them hard to parse by log
forwarders, such as Logstash or Fluentd. This also makes them hard to
search.
-Note that currently on GitLab.com, any messages in `production.log` will
-NOT get indexed by Elasticsearch due to the sheer volume and noise. They
+Note that currently on GitLab.com, any messages in `production.log` aren't
+indexed by Elasticsearch due to the sheer volume and noise. They
do end up in Google Stackdriver, but it is still harder to search for
logs there. See the [GitLab.com logging
documentation](https://gitlab.com/gitlab-com/runbooks/blob/master/logging/doc/README.md)
@@ -73,7 +73,7 @@ importer progresses. Here's what to do:
make it easy for people to search pertinent logs in one place. For
example, `geo.log` contains all logs pertaining to GitLab Geo.
To create a new file:
- 1. Choose a filename (e.g. `importer_json.log`).
+ 1. Choose a filename (for example, `importer_json.log`).
1. Create a new subclass of `Gitlab::JsonLogger`:
```ruby
@@ -99,7 +99,7 @@ importer progresses. Here's what to do:
```
Note that it's useful to memoize this because creating a new logger
- each time you log will open a file, adding unnecessary overhead.
+ each time you log opens a file, adding unnecessary overhead.
1. Now insert log messages into your code. When adding logs,
make sure to include all the context as key-value pairs:
@@ -129,16 +129,16 @@ an Elasticsearch-specific way, the concepts should translate to many systems you
might use to index structured logs. GitLab.com uses Elasticsearch to index log
data.
-Unless a field type is explicitly mapped, Elasticsearch will infer the type from
+Unless a field type is explicitly mapped, Elasticsearch infers the type from
the first instance of that field value it sees. Subsequent instances of that
-field value with different types will either fail to be indexed, or in some
-cases (scalar/object conflict), the whole log line will be dropped.
+field value with different types either fail to be indexed, or in some
+cases (scalar/object conflict), the whole log line is dropped.
GitLab.com's logging Elasticsearch sets
[`ignore_malformed`](https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-malformed.html),
which allows documents to be indexed even when there are simpler sorts of
mapping conflict (for example, number / string), although indexing on the affected fields
-will break.
+breaks.
Examples:
@@ -177,17 +177,24 @@ challenged to choose between seconds, milliseconds or any other unit, lean towar
(with microseconds precision, i.e. `Gitlab::InstrumentationHelper::DURATION_PRECISION`).
In order to make it easier to track timings in the logs, make sure the log key has `_s` as
-suffix and `duration` within its name (e.g., `view_duration_s`).
+suffix and `duration` within its name (for example, `view_duration_s`).
## Multi-destination Logging
-GitLab is transitioning from unstructured/plaintext logs to structured/JSON logs. During this transition period some logs will be recorded in multiple formats through multi-destination logging.
+GitLab is transitioning from unstructured/plaintext logs to structured/JSON logs. During this transition period some logs are recorded in multiple formats through multi-destination logging.
### How to use multi-destination logging
-Create a new logger class, inheriting from `MultiDestinationLogger` and add an array of loggers to a `LOGGERS` constant. The loggers should be classes that descend from `Gitlab::Logger`. e.g. the user defined loggers in the following examples, could be inheriting from `Gitlab::Logger` and `Gitlab::JsonLogger`, respectively.
+Create a new logger class, inheriting from `MultiDestinationLogger` and add an
+array of loggers to a `LOGGERS` constant. The loggers should be classes that
+descend from `Gitlab::Logger`. For example, the user-defined loggers in the
+following examples could be inheriting from `Gitlab::Logger` and
+`Gitlab::JsonLogger`, respectively.
-You must specify one of the loggers as the `primary_logger`. The `primary_logger` will be used when information about this multi-destination logger is displayed in the app, e.g. using the `Gitlab::Logger.read_latest` method.
+You must specify one of the loggers as the `primary_logger`. The
+`primary_logger` is used when information about this multi-destination logger is
+displayed in the application (for example, using the `Gitlab::Logger.read_latest`
+method).
The following example sets one of the defined `LOGGERS` as a `primary_logger`.
@@ -207,19 +214,19 @@ module Gitlab
end
```
-You can now call the usual logging methods on this multi-logger, e.g.
+You can now call the usual logging methods on this multi-logger. For example:
```ruby
FancyMultiLogger.info(message: "Information")
```
-This message will be logged by each logger registered in `FancyMultiLogger.loggers`.
+This message is logged by each logger registered in `FancyMultiLogger.loggers`.
### Passing a string or hash for logging
When passing a string or hash to a `MultiDestinationLogger`, the log lines could be formatted differently, depending on the kinds of `LOGGERS` set.
-e.g. let's partially define the loggers from the previous example:
+For example, let's partially define the loggers from the previous example:
```ruby
module Gitlab
@@ -304,7 +311,7 @@ It should be noted that manual logging of exceptions is not allowed, as:
1. Very often manually logged exception needs to be tracked to Sentry as well,
1. Manually logged exceptions does not use `correlation_id`, which makes hard
to pin them to request, user and context in which this exception was raised,
-1. It is very likely that manually logged exceptions will end-up across
+1. Manually logged exceptions often end up across
multiple files, which increases burden scraping all logging files.
To avoid duplicating and having consistent behavior the `Gitlab::ErrorTracking`
@@ -356,7 +363,7 @@ end
## Additional steps with new log files
-1. Consider log retention settings. By default, Omnibus will rotate any
+1. Consider log retention settings. By default, Omnibus rotates any
logs in `/var/log/gitlab/gitlab-rails/*.log` every hour and [keep at
most 30 compressed files](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate).
On GitLab.com, that setting is only 6 compressed files. These settings should suffice
diff --git a/doc/development/mass_insert.md b/doc/development/mass_insert.md
index dfffab564bc..4b1716ff00a 100644
--- a/doc/development/mass_insert.md
+++ b/doc/development/mass_insert.md
@@ -1,13 +1,13 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Mass inserting Rails models
Setting the environment variable [`MASS_INSERT=1`](rake_tasks.md#environment-variables)
-when running [`rake setup`](rake_tasks.md) will create millions of records, but these records
+when running [`rake setup`](rake_tasks.md) creates millions of records, but these records
aren't visible to the `root` user by default.
To make any number of the mass-inserted projects visible to the `root` user, run
diff --git a/doc/development/merge_request_performance_guidelines.md b/doc/development/merge_request_performance_guidelines.md
index a5d9a653472..8d5b2db828e 100644
--- a/doc/development/merge_request_performance_guidelines.md
+++ b/doc/development/merge_request_performance_guidelines.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Merge Request Performance Guidelines
@@ -46,8 +46,8 @@ and running.
Can the queries used potentially take down any critical services and result in
engineers being woken up in the night? Can a malicious user abuse the code to
-take down a GitLab instance? Will my changes simply make loading a certain page
-slower? Will execution time grow exponentially given enough load or data in the
+take down a GitLab instance? Do my changes simply make loading a certain page
+slower? Does execution time grow exponentially given enough load or data in the
database?
These are all questions one should ask themselves before submitting a merge
@@ -67,14 +67,14 @@ in turn can request a performance specialist to review the changes.
## Think outside of the box
-Everyone has their own perception how the new feature is going to be used.
+Everyone has their own perception of how to use the new feature.
Always consider how users might be using the feature instead. Usually,
users test our features in a very unconventional way,
like by brute forcing or abusing edge conditions that we have.
## Data set
-The data set that will be processed by the merge request should be known
+The data set the merge request processes should be known
and documented. The feature should clearly document what the expected
data set is for this feature to process, and what problems it might cause.
@@ -86,8 +86,8 @@ from the repository and perform search for the set of files.
As an author you should in context of that problem consider
the following:
-1. What repositories are going to be supported?
-1. How long it will take for big repositories like Linux kernel?
+1. What repositories are planned to be supported?
+1. How long do big repositories like the Linux kernel take?
1. Is there something that we can do differently to not process such a
big data set?
1. Should we build some fail-safe mechanism to contain
@@ -96,28 +96,28 @@ the following:
## Query plans and database structure
-The query plan can tell us if we will need additional
+The query plan can tell us if we need additional
indexes, or expensive filtering (such as using sequential scans).
Each query plan should be run against substantial size of data set.
For example, if you look for issues with specific conditions,
you should consider validating a query against
a small number (a few hundred) and a big number (100_000) of issues.
-See how the query will behave if the result will be a few
+See how the query behaves if the result is a few
and a few thousand.
This is needed as we have users using GitLab for very big projects and
in a very unconventional way. Even if it seems that it's unlikely
-that such a big data set will be used, it's still plausible that one
-of our customers will encounter a problem with the feature.
+that such a big data set is used, it's still plausible that one
+of our customers could encounter a problem with the feature.
-Understanding ahead of time how it's going to behave at scale, even if we accept it,
-is the desired outcome. We should always have a plan or understanding of what it will take
+Understanding ahead of time how it behaves at scale, even if we accept it,
+is the desired outcome. We should always have a plan or understanding of what is needed
to optimize the feature for higher usage patterns.
Every database structure should be optimized and sometimes even over-described
in preparation for easy extension. The hardest part after some point is
-data migration. Migrating millions of rows will always be troublesome and
+data migration. Migrating millions of rows is always troublesome and
can have a negative impact on the application.
To better understand how to get help with the query plan reviews
@@ -130,7 +130,7 @@ queries unless absolutely necessary.
The total number of queries executed by the code modified or added by a merge request
must not increase unless absolutely necessary. When building features it's
-entirely possible you will need some extra queries, but you should try to keep
+entirely possible you need some extra queries, but you should try to keep
this at a minimum.
As an example, say you introduce a feature that updates a number of database
@@ -144,7 +144,7 @@ objects_to_update.each do |object|
end
```
-This will end up running one query for every object to update. This code can
+This means running one query for every object to update. This code can
easily overload a database given enough rows to update or many instances of this
code running in parallel. This particular problem is known as the
["N+1 query problem"](https://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations). You can write a test with [QueryRecorder](query_recorder.md) to detect this and prevent regressions.
@@ -162,11 +162,11 @@ query. This in turn makes it much harder for this code to overload a database.
**Summary:** a merge request **should not** execute duplicated cached queries.
-Rails provides an [SQL Query Cache](cached_queries.md#cached-queries-guidelines),
-used to cache the results of database queries for the duration of the request.
+Rails provides an [SQL Query Cache](cached_queries.md#cached-queries-guidelines),
+used to cache the results of database queries for the duration of the request.
-See [why cached queries are considered bad](cached_queries.md#why-cached-queries-are-considered-bad) and
-[how to detect them](cached_queries.md#how-to-detect).
+See [why cached queries are considered bad](cached_queries.md#why-cached-queries-are-considered-bad) and
+[how to detect them](cached_queries.md#how-to-detect-cached-queries).
The code introduced by a merge request, should not execute multiple duplicated cached queries.
@@ -189,8 +189,8 @@ build.project == pipeline_project
# => true
```
-When we call `build.project`, it will not hit the database, it will use the cached result, but it will re-instantiate
-same pipeline project object. It turns out that associated objects do not point to the same in-memory object.
+When we call `build.project`, it doesn't hit the database, it uses the cached result, but it re-instantiates
+the same pipeline project object. It turns out that associated objects do not point to the same in-memory object.
If we try to serialize each build:
@@ -200,7 +200,7 @@ pipeline.builds.each do |build|
end
```
-It will re-instantiate project object for each build, instead of using the same in-memory object.
+It re-instantiates project object for each build, instead of using the same in-memory object.
In this particular case the workaround is fairly easy:
@@ -212,7 +212,7 @@ end
```
We can assign `pipeline.project` to each `build.project`, since we know it should point to the same project.
-This will allow us that each build point to the same in-memory project,
+This allows each build to point to the same in-memory project,
avoiding the cached SQL query and re-instantiation of the project object for each build.
## Executing Queries in Loops
@@ -323,7 +323,7 @@ Certain UI elements may not always be needed. For example, when hovering over a
diff line there's a small icon displayed that can be used to create a new
comment. Instead of always rendering these kind of elements they should only be
rendered when actually needed. This ensures we don't spend time generating
-Haml/HTML when it's not going to be used.
+Haml/HTML when it's not used.
## Instrumenting New Code
@@ -411,8 +411,8 @@ The quota should be optimised to a level that we consider the feature to
be performant and usable for the user, but **not limiting**.
**We want the features to be fully usable for the users.**
-**However, we want to ensure that the feature will continue to perform well if used at its limit**
-**and it won't cause availability issues.**
+**However, we want to ensure that the feature continues to perform well if used at its limit**
+**and it doesn't cause availability issues.**
Consider that it's always better to start with some kind of limitation,
instead of later introducing a breaking change that would result in some
@@ -433,11 +433,11 @@ The intent of quotas could be different:
Examples:
-1. Pipeline Schedules: It's very unlikely that user will want to create
+1. Pipeline Schedules: It's very unlikely that a user wants to create
more than 50 schedules.
In such cases it's rather expected that this is either misuse
or abuse of the feature. Lack of the upper limit can result
- in service degradation as the system will try to process all schedules
+ in service degradation as the system tries to process all schedules
assigned the project.
1. GitLab CI/CD includes: We started with the limit of maximum of 50 nested includes.
@@ -477,7 +477,7 @@ We can consider the following types of storages:
for most of our implementations. Even though this allows the above limit to be significantly larger,
it does not really mean that you can use more. The shared temporary storage is shared by
all nodes. Thus, the job that uses significant amount of that space or performs a lot
- of operations will create a contention on execution of all other jobs and request
+ of operations creates a contention on execution of all other jobs and request
across the whole application, this can easily impact stability of the whole GitLab.
Be respectful of that.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 84679a78545..8cdfbd558ca 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -1,13 +1,13 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Migration Style Guide
When writing migrations for GitLab, you have to take into account that
-these will be run by hundreds of thousands of organizations of all sizes, some with
+these are run by hundreds of thousands of organizations of all sizes, some with
many years of data in their database.
In addition, having to take a server offline for an upgrade small or big is a
@@ -44,15 +44,15 @@ and post-deployment migrations (`db/post_migrate`) are run after the deployment
Changes to the schema should be committed to `db/structure.sql`. This
file is automatically generated by Rails, so you normally should not
edit this file by hand. If your migration is adding a column to a
-table, that column will be added at the bottom. Please do not reorder
-columns manually for existing tables as this will cause confusion to
+table, that column is added at the bottom. Please do not reorder
+columns manually for existing tables as this causes confusion to
other people using `db/structure.sql` generated by Rails.
When your local database in your GDK is diverging from the schema from
`master` it might be hard to cleanly commit the schema changes to
Git. In that case you can use the `scripts/regenerate-schema` script to
regenerate a clean `db/structure.sql` for the migrations you're
-adding. This script will apply all migrations found in `db/migrate`
+adding. This script applies all migrations found in `db/migrate`
or `db/post_migrate`, so if there are any migrations you don't want to
commit to the schema, rename or remove them. If your branch is not
targeting `master` you can set the `TARGET` environment variable.
@@ -80,7 +80,7 @@ and whether they require downtime and how to work around that whenever possible.
Every migration must specify if it requires downtime or not, and if it should
require downtime it must also specify a reason for this. This is required even
-if 99% of the migrations won't require downtime as this makes it easier to find
+if 99% of the migrations don't require downtime as this makes it easier to find
the migrations that _do_ require downtime.
To tag a migration, add the following two constants to the migration class'
@@ -104,7 +104,7 @@ class MyMigration < ActiveRecord::Migration[6.0]
end
```
-It is an error (that is, CI will fail) if the `DOWNTIME` constant is missing
+It is an error (that is, CI fails) if the `DOWNTIME` constant is missing
from a migration class.
## Reversibility
@@ -136,7 +136,7 @@ By default, migrations are single transaction. That is, a transaction is opened
at the beginning of the migration, and committed after all steps are processed.
Running migrations in a single transaction makes sure that if one of the steps fails,
-none of the steps will be executed, leaving the database in valid state.
+none of the steps are executed, leaving the database in a valid state.
Therefore, either:
- Put all migrations in one single-transaction migration.
@@ -146,15 +146,16 @@ Therefore, either:
For example, if you create an empty table and need to build an index for it,
it is recommended to use a regular single-transaction migration and the default
rails schema statement: [`add_index`](https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_index).
-This is a blocking operation, but it won't cause problems because the table is not yet used,
+This is a blocking operation, but it doesn't cause problems because the table is not yet used,
and therefore it does not have any records yet.
## Heavy operations in a single transaction
-When using a single-transaction migration, a transaction will hold on a database connection
+When using a single-transaction migration, a transaction holds a database connection
for the duration of the migration, so you must make sure the actions in the migration
-do not take too much time: In general, queries executed in a migration need to fit comfortably
-within `15s` on GitLab.com.
+do not take too much time: GitLab.com’s production database has a `15s` timeout, so
+in general, the cumulative execution time in a migration should aim to fit comfortably
+in that limit. Individual query timings should fit within the [standard limit](query_performance.md#timing-guidelines-for-queries).
In case you need to insert, update, or delete a significant amount of data, you:
@@ -182,8 +183,8 @@ on the `users` table once it has been enqueued.
More information about PostgresSQL locks: [Explicit Locking](https://www.postgresql.org/docs/current/explicit-locking.html)
For stability reasons, GitLab.com has a specific [`statement_timeout`](../user/gitlab_com/index.md#postgresql)
-set. When the migration is invoked, any database query will have
-a fixed time to execute. In a worst-case scenario, the request will sit in the
+set. When the migration is invoked, any database query has
+a fixed time to execute. In a worst-case scenario, the request sits in the
lock queue, blocking other queries for the duration of the configured statement timeout,
then failing with `canceling statement due to statement timeout` error.
@@ -274,7 +275,7 @@ end
**Creating a new table when we have two foreign keys:**
-For this, we'll need three migrations:
+For this, we need three migrations:
1. Creating the table without foreign keys (with the indices).
1. Add foreign key to the first table.
@@ -354,7 +355,7 @@ def up
end
```
-The RuboCop rule generally allows standard Rails migration methods, listed below. This example will cause a Rubocop offense:
+The RuboCop rule generally allows standard Rails migration methods, listed below. This example causes a RuboCop offense:
```ruby
disable_ddl_transaction!
@@ -399,9 +400,9 @@ In a worst-case scenario, the method:
- Executes the block for a maximum of 50 times over 40 minutes.
- Most of the time is spent in a pre-configured sleep period after each iteration.
-- After the 50th retry, the block will be executed without `lock_timeout`, just
+- After the 50th retry, the block is executed without `lock_timeout`, just
like a standard migration invocation.
-- If a lock cannot be acquired, the migration will fail with `statement timeout` error.
+- If a lock cannot be acquired, the migration fails with `statement timeout` error.
The migration might fail if there is a very long running transaction (40+ minutes)
accessing the `users` table.
@@ -437,9 +438,9 @@ class MyMigration < ActiveRecord::Migration[6.0]
end
```
-Here the call to `disable_statement_timeout` will use the connection local to
+Here the call to `disable_statement_timeout` uses the connection local to
the `with_multiple_threads` block, instead of re-using the global connection
-pool. This ensures each thread has its own connection object, and won't time
+pool. This ensures each thread has its own connection object, and doesn't time
out when trying to obtain one.
PostgreSQL has a maximum amount of connections that it allows. This
@@ -461,8 +462,9 @@ class MyMigration < ActiveRecord::Migration[6.0]
include Gitlab::Database::MigrationHelpers
disable_ddl_transaction!
+ INDEX_NAME = 'index_name'
def up
- remove_concurrent_index :table_name, :column_name, name: :index_name
+ remove_concurrent_index :table_name, :column_name, name: INDEX_NAME
end
end
```
@@ -582,7 +584,7 @@ remember to add an index on the column.
This is **required** for all foreign-keys, e.g., to support efficient cascading
deleting: when a lot of rows in a table get deleted, the referenced records need
to be deleted too. The database has to look for corresponding records in the
-referenced table. Without an index, this will result in a sequential scan on the
+referenced table. Without an index, this results in a sequential scan on the
table, which can take a long time.
Here's an example where we add a new column with a foreign key
@@ -620,7 +622,7 @@ the standard `add_column` helper should be used in all cases.
Before PostgreSQL 11, adding a column with a default was problematic as it would
have caused a full table rewrite. The corresponding helper `add_column_with_default`
-has been deprecated and will be removed in a later release.
+has been deprecated and is scheduled to be removed in a later release.
If a backport adding a column with a default value is needed for %12.9 or earlier versions,
it should use `add_column_with_default` helper. If a [large table](https://gitlab.com/gitlab-org/gitlab/-/blob/master/rubocop/rubocop-migrations.yml#L3)
@@ -656,7 +658,7 @@ In this particular case, the default value exists and we're just changing the me
`request_access_enabled` column, which does not imply a rewrite of all the existing records
in the `namespaces` table. Only when creating a new column with a default, all the records are going be rewritten.
-NOTE: **Note:**
+NOTE:
A faster [ALTER TABLE ADD COLUMN with a non-null default](https://www.depesz.com/2018/04/04/waiting-for-postgresql-11-fast-alter-table-add-column-with-a-non-null-default/)
was introduced on PostgresSQL 11.0, removing the need of rewriting the table when a new column with a default value is added.
@@ -666,7 +668,7 @@ without requiring `disable_ddl_transaction!`.
## Updating an existing column
To update an existing column to a particular value, you can use
-`update_column_in_batches`. This will split the updates into batches, so we
+`update_column_in_batches`. This splits the updates into batches, so we
don't update too many rows at in a single statement.
This updates the column `foo` in the `projects` table to 10, where `some_column`
@@ -781,12 +783,12 @@ end
## Integer column type
By default, an integer column can hold up to a 4-byte (32-bit) number. That is
-a max value of 2,147,483,647. Be aware of this when creating a column that will
-hold file sizes in byte units. If you are tracking file size in bytes, this
+a max value of 2,147,483,647. Be aware of this when creating a column that
+holds file sizes in byte units. If you are tracking file size in bytes, this
restricts the maximum file size to just over 2GB.
To allow an integer column to hold up to an 8-byte (64-bit) number, explicitly
-set the limit to 8-bytes. This will allow the column to hold a value up to
+set the limit to 8-bytes. This allows the column to hold a value up to
`9,223,372,036,854,775,807`.
Rails migration example:
@@ -836,7 +838,7 @@ timestamps with timezones:
- `datetime_with_timezone`
This ensures all timestamps have a time zone specified. This, in turn, means
-existing timestamps won't suddenly use a different timezone when the system's
+existing timestamps don't suddenly use a different timezone when the system's
timezone changes. It also makes it very clear which timezone was used in the
first place.
@@ -937,19 +939,17 @@ Since we had to do this a few times already, there are now some helpers to help
with this.
To use this you can include `Gitlab::Database::RenameReservedPathsMigration::V1`
-in your migration. This will provide 3 methods which you can pass one or more
+in your migration. This provides 3 methods to which you can pass one or more
paths that need to be rejected.
-**`rename_root_paths`**: This will rename the path of all _namespaces_ with the
+- **`rename_root_paths`**: Renames the path of all _namespaces_ with the
given name that don't have a `parent_id`.
-
-**`rename_child_paths`**: This will rename the path of all _namespaces_ with the
+- **`rename_child_paths`**: Renames the path of all _namespaces_ with the
given name that have a `parent_id`.
-
-**`rename_wildcard_paths`**: This will rename the path of all _projects_, and all
+- **`rename_wildcard_paths`**: Renames the path of all _projects_, and all
_namespaces_ that have a `project_id`.
-The `path` column for these rows will be renamed to their previous value followed
+The `path` column for these rows is renamed to its previous value followed
by an integer. For example: `users` would turn into `users0`
## Using models in migrations (discouraged)
diff --git a/doc/development/module_with_instance_variables.md b/doc/development/module_with_instance_variables.md
index 80e926f800c..75575105178 100644
--- a/doc/development/module_with_instance_variables.md
+++ b/doc/development/module_with_instance_variables.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Modules with instance variables could be considered harmful
@@ -121,7 +121,7 @@ module Gitlab
end
```
-Now the cop won't complain. Here's a bad example which we could rewrite:
+Now the cop doesn't complain. Here's a bad example which we could rewrite:
``` ruby
module SpamCheckService
@@ -213,14 +213,14 @@ module M
end
```
-Note that you need to enable it at some point, otherwise everything below
-won't be checked.
+Note that you need to enable it at some point, otherwise nothing below
+that point is checked.
## Things we might need to ignore right now
Because of the way Rails helpers and mailers work, we might not be able to
avoid the use of instance variables there. For those cases, we could ignore
-them at the moment. At least we're not going to share those modules with
+them at the moment. Those modules are not shared with
other random objects, so they're still somewhat isolated.
## Instance variables in views
diff --git a/doc/development/multi_version_compatibility.md b/doc/development/multi_version_compatibility.md
index 2501f8a169f..25f4b3b5699 100644
--- a/doc/development/multi_version_compatibility.md
+++ b/doc/development/multi_version_compatibility.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Compatibility with multiple versions of the application running at the same time
@@ -21,7 +21,7 @@ We must make sure that the application works properly in these states.
For GitLab.com, we also run a set of canary servers which run a more recent version of the application. Users with
the canary cookie set would be handled by these servers. Some URL patterns may also be forced to the canary servers,
even without the cookie being set. This also means that some pages may match the pattern and get handled by canary servers,
-but AJAX requests to URLs (like the GraphQL endpoint) won't match the pattern.
+but AJAX requests to URLs (like the GraphQL endpoint) fail to match the pattern.
With this canary setup, we'd be in this mixed-versions state for an extended period of time until canary is promoted to
production and post-deployment migrations run.
@@ -38,7 +38,7 @@ default. The feature flag can be enabled when the deployment is in a
consistent state. However, this method of synchronization doesn't
guarantee that customers with on-premise instances can [upgrade with
zero downtime](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates)
-since point releases bundle many changes together. Minimizing the time
+because point releases bundle many changes together. Minimizing the time
between when versions are out of sync across the fleet may help mitigate
errors caused by upgrades.
@@ -70,7 +70,7 @@ This type of change may look like an immediate switch between the two implementa
especially with the canary stage, there is an extended period of time where both version of the code
coexists in production.
-1. **expand**: a new route is added, pointing to the same controller as the old one. But nothing in the application will generate links for the new routes.
+1. **expand**: a new route is added, pointing to the same controller as the old one. But nothing in the application generates links for the new routes.
1. **migrate**: now that every machine in the fleet can understand the new route, we can generate links with the new routing.
1. **contract**: the old route can be safely removed. (If the old route was likely to be widely shared, like the link to a repository file, we might want to add redirects and keep the old route for a longer period.)
@@ -84,12 +84,12 @@ When we need to add a new parameter to a Sidekiq worker class, we can split this
1. **migrate**: we add the new parameter to all the invocations of the worker.
1. **contract**: we remove the default value.
-At a first look, it may seem safe to bundle expand and migrate into a single milestone, but this will cause an outage if Puma restarts before Sidekiq.
+At first glance, it may seem safe to bundle expand and migrate into a single milestone, but this causes an outage if Puma restarts before Sidekiq.
Puma enqueues jobs with an extra parameter that the old Sidekiq cannot handle.
### Database migrations
-The following graph is a simplified visual representation of a deployment, this will guide us in understanding how expand and contract is implemented in our migrations strategy.
+The following graph is a simplified visual representation of a deployment. This guides us in understanding how expand and contract is implemented in our migrations strategy.
There's a special consideration here. Using our post-deployment migrations framework allows us to bundle all three phases into one milestone.
@@ -134,12 +134,12 @@ And these deployments align perfectly with application changes.
With all those details in mind, let's imagine we need to replace a query, and this query has an index to support it.
-1. **expand**: this is the from `Schema A` to `Schema B` deployment. We add the new index, but the application will ignore it for now
-1. **migrate**: this is the `Version N` to `Version N+1` application deployment. The new code is deployed, at this point in time only the new query will run.
+1. **expand**: this is the from `Schema A` to `Schema B` deployment. We add the new index, but the application ignores it for now.
+1. **migrate**: this is the `Version N` to `Version N+1` application deployment. The new code is deployed, at this point in time only the new query runs.
1. **contract**: from `Schema B` to `Schema C` (post-deployment migration). Nothing uses the old index anymore, we can safely remove it.
-This is only an example. More complex migrations, especially when background migrations are needed will
-still require more than one milestone. For details please refer to our [migration style guide](migration_style_guide.md).
+This is only an example. More complex migrations, especially when background migrations are needed, may
+require more than one milestone. For details please refer to our [migration style guide](migration_style_guide.md).
## Examples of previous incidents
diff --git a/doc/development/namespaces_storage_statistics.md b/doc/development/namespaces_storage_statistics.md
index b4a7c8c3132..373b1e38dfc 100644
--- a/doc/development/namespaces_storage_statistics.md
+++ b/doc/development/namespaces_storage_statistics.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Database case study: Namespaces storage statistics
@@ -33,7 +33,7 @@ Additionally, the pattern that is currently used to update the project statistic
(the callback) doesn't scale adequately. It is currently one of the largest
[database queries transactions on production](https://gitlab.com/gitlab-org/gitlab/-/issues/29070)
that takes the most time overall. We can't add one more query to it as
-it will increase the transaction's length.
+it increases the transaction's length.
Because of all of the above, we can't apply the same pattern to store
and update the namespaces statistics, as the `namespaces` table is one
@@ -137,12 +137,12 @@ WHERE namespace_id IN (
Even though this approach would make aggregating much easier, it has some major downsides:
-- We'd have to migrate **all namespaces** by adding and filling a new column. Because of the size of the table, dealing with time/cost will not be great. The background migration will take approximately `153h`, see <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/29772>.
+- We'd have to migrate **all namespaces** by adding and filling a new column. Because of the size of the table, dealing with time/cost would be significant. The background migration would take approximately `153h`, see <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/29772>.
- Background migration has to be shipped one release before, delaying the functionality by another milestone.
### Attempt E (final): Update the namespace storage statistics in async way
-This approach consists of keep using the incremental statistics updates we currently already have,
+This approach consists of continuing to use the incremental statistics updates we already have,
but we refresh them through Sidekiq jobs and in different transactions:
1. Create a second table (`namespace_aggregation_schedules`) with two columns `id` and `namespace_id`.
diff --git a/doc/development/new_fe_guide/dependencies.md b/doc/development/new_fe_guide/dependencies.md
index fad004f9df5..6db3b401025 100644
--- a/doc/development/new_fe_guide/dependencies.md
+++ b/doc/development/new_fe_guide/dependencies.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Dependencies
diff --git a/doc/development/new_fe_guide/development/accessibility.md b/doc/development/new_fe_guide/development/accessibility.md
index 1189dd1137b..81f3773dd5c 100644
--- a/doc/development/new_fe_guide/development/accessibility.md
+++ b/doc/development/new_fe_guide/development/accessibility.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Accessibility
@@ -42,7 +42,7 @@ In forms we should use the `for` attribute in the label statement:
## Testing
-1. On MacOS you can use [VoiceOver](https://www.apple.com/accessibility/mac/vision/) by pressing `cmd+F5`.
+1. On macOS you can use [VoiceOver](https://www.apple.com/accessibility/vision/) by pressing `cmd+F5`.
1. On Windows you can use [Narrator](https://www.microsoft.com/en-us/accessibility/windows) by pressing Windows logo key + Control + Enter.
## Online resources
diff --git a/doc/development/new_fe_guide/development/components.md b/doc/development/new_fe_guide/development/components.md
index 1597d4e62e7..1d56419028e 100644
--- a/doc/development/new_fe_guide/development/components.md
+++ b/doc/development/new_fe_guide/development/components.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Components
diff --git a/doc/development/new_fe_guide/development/index.md b/doc/development/new_fe_guide/development/index.md
index 52435ca719d..5922c3aeeed 100644
--- a/doc/development/new_fe_guide/development/index.md
+++ b/doc/development/new_fe_guide/development/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Development
diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md
index 44f5bccde95..8ae503ec709 100644
--- a/doc/development/new_fe_guide/development/performance.md
+++ b/doc/development/new_fe_guide/development/performance.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Performance
@@ -11,7 +11,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
We have a performance dashboard available in one of our [Grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 6 hours. These changes are displayed after a set number of pages are aggregated.
These pages can be found inside a text file in the [`gitlab-build-images` repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [`gitlab.txt`](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt)
-Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing URLs of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after the changes are merged into `master`.
+Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing URLs of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes are pushed live on the next scheduled run after the changes are merged into `master`.
There are 3 recommended high impact metrics to review on each page:
diff --git a/doc/development/new_fe_guide/development/testing.md b/doc/development/new_fe_guide/development/testing.md
index b990425ca3c..034324989ef 100644
--- a/doc/development/new_fe_guide/development/testing.md
+++ b/doc/development/new_fe_guide/development/testing.md
@@ -3,3 +3,6 @@ redirect_to: '../../testing_guide/frontend_testing.md'
---
This document was moved to [another location](../../testing_guide/frontend_testing.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/new_fe_guide/index.md b/doc/development/new_fe_guide/index.md
index c9b655c2274..a62ea53de9f 100644
--- a/doc/development/new_fe_guide/index.md
+++ b/doc/development/new_fe_guide/index.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend Development Guidelines
-This guide contains all the information to successfully contribute to GitLab's frontend.
+This guide contains all the information to successfully contribute to the GitLab frontend.
This is a living document, and we welcome contributions, feedback, and suggestions.
## [Development](development/index.md)
diff --git a/doc/development/new_fe_guide/modules/dirty_submit.md b/doc/development/new_fe_guide/modules/dirty_submit.md
index 17cdab3f240..f9ef96c65dc 100644
--- a/doc/development/new_fe_guide/modules/dirty_submit.md
+++ b/doc/development/new_fe_guide/modules/dirty_submit.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Dirty Submit
diff --git a/doc/development/new_fe_guide/modules/index.md b/doc/development/new_fe_guide/modules/index.md
index 18c5b05432c..a9bdcda4a2d 100644
--- a/doc/development/new_fe_guide/modules/index.md
+++ b/doc/development/new_fe_guide/modules/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Modules
diff --git a/doc/development/new_fe_guide/modules/widget_extensions.md b/doc/development/new_fe_guide/modules/widget_extensions.md
index 3844fedb126..ffcd736511a 100644
--- a/doc/development/new_fe_guide/modules/widget_extensions.md
+++ b/doc/development/new_fe_guide/modules/widget_extensions.md
@@ -1,7 +1,7 @@
---
stage: Create
group: Source Code
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Merge request widget extensions
@@ -17,7 +17,7 @@ into the widget that will match the existing design and interaction as other ext
To use extensions you need to first create a new extension object that will be used to fetch the
data that will be rendered in the extension. See the example file in
-app/assets/javascripts/vue_merge_request_widget/extensions/issues.js for a working example.
+`app/assets/javascripts/vue_merge_request_widget/extensions/issues.js` for a working example.
The basic object structure is as below:
diff --git a/doc/development/new_fe_guide/style/html.md b/doc/development/new_fe_guide/style/html.md
index 0b4fce13d90..afbfbdcf834 100644
--- a/doc/development/new_fe_guide/style/html.md
+++ b/doc/development/new_fe_guide/style/html.md
@@ -3,3 +3,6 @@ redirect_to: '../../fe_guide/style/html.md'
---
This document was moved to [another location](../../fe_guide/style/html.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/new_fe_guide/style/index.md b/doc/development/new_fe_guide/style/index.md
index 284862a2be9..77700441aa3 100644
--- a/doc/development/new_fe_guide/style/index.md
+++ b/doc/development/new_fe_guide/style/index.md
@@ -3,3 +3,6 @@ redirect_to: '../../fe_guide/style/index.md'
---
This document was moved to [another location](../../fe_guide/style/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/new_fe_guide/style/javascript.md b/doc/development/new_fe_guide/style/javascript.md
index 003880c2592..17722d2c41b 100644
--- a/doc/development/new_fe_guide/style/javascript.md
+++ b/doc/development/new_fe_guide/style/javascript.md
@@ -3,3 +3,6 @@ redirect_to: '../../fe_guide/style/javascript.md'
---
This document was moved to [another location](../../fe_guide/style/javascript.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/new_fe_guide/style/prettier.md b/doc/development/new_fe_guide/style/prettier.md
index 9a95aa96dff..6c0f3b77505 100644
--- a/doc/development/new_fe_guide/style/prettier.md
+++ b/doc/development/new_fe_guide/style/prettier.md
@@ -3,3 +3,6 @@ redirect_to: '../../fe_guide/tooling.md#formatting-with-prettier'
---
This document was moved to [another location](../../fe_guide/tooling.md#formatting-with-prettier).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/new_fe_guide/tips.md b/doc/development/new_fe_guide/tips.md
index d7e9c440335..d38e261b99f 100644
--- a/doc/development/new_fe_guide/tips.md
+++ b/doc/development/new_fe_guide/tips.md
@@ -1,7 +1,7 @@
---
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Tips
diff --git a/doc/development/newlines_styleguide.md b/doc/development/newlines_styleguide.md
index f123482fa5a..57962129b2f 100644
--- a/doc/development/newlines_styleguide.md
+++ b/doc/development/newlines_styleguide.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Newlines style guide
diff --git a/doc/development/omnibus.md b/doc/development/omnibus.md
index 84c395e2e57..5b97221bd29 100644
--- a/doc/development/omnibus.md
+++ b/doc/development/omnibus.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# What you should know about Omnibus packages
diff --git a/doc/development/ordering_table_columns.md b/doc/development/ordering_table_columns.md
index 124f82ac2c8..a0a38acd816 100644
--- a/doc/development/ordering_table_columns.md
+++ b/doc/development/ordering_table_columns.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Ordering Table Columns in PostgreSQL
diff --git a/doc/development/packages.md b/doc/development/packages.md
index de6cac2ce73..689dc6b4141 100644
--- a/doc/development/packages.md
+++ b/doc/development/packages.md
@@ -1,12 +1,12 @@
---
stage: Package
group: Package
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Packages
-This document will guide you through adding another [package management system](../administration/packages/index.md) support to GitLab.
+This document guides you through adding support for another [package management system](../administration/packages/index.md) to GitLab.
See already supported package types in [Packages documentation](../administration/packages/index.md)
@@ -56,12 +56,12 @@ Group-level and instance-level endpoints are good to have but are optional.
Packages are scoped within various levels of access, which is generally configured by setting your remote. A
remote endpoint may be set at the project level, meaning when installing packages, only packages belonging to that
-project will be visible. Alternatively, a group-level endpoint may be used to allow visibility to all packages
+project are visible. Alternatively, a group-level endpoint may be used to allow visibility to all packages
within a given group. Lastly, an instance-level endpoint can be used to allow visibility to all packages within an
entire GitLab instance.
-Using group and project level endpoints will allow for more flexibility in package naming, however, more remotes
-will have to be managed. Using instance level endpoints requires [stricter naming conventions](#naming-conventions).
+Using group and project level endpoints allows for more flexibility in package naming; however, more remotes
+have to be managed. Using instance level endpoints requires [stricter naming conventions](#naming-conventions).
The current state of existing package registries availability is:
@@ -76,22 +76,22 @@ The current state of existing package registries availability is:
| Composer | Yes | Yes | No |
| Generic | Yes | No | No |
-NOTE: **Note:**
+NOTE:
NPM is currently a hybrid of the instance level and group level.
It is using the top-level group or namespace as the defining portion of the name
(for example, `@my-group-name/my-package-name`).
-NOTE: **Note:**
+NOTE:
Composer package naming scope is Instance Level.
### Naming conventions
-To avoid name conflict for instance-level endpoints you will need to define a package naming convention
+To avoid name conflicts for instance-level endpoints, you must define a package naming convention
that gives a way to identify the project that the package belongs to. This generally involves using the project
ID or full project path in the package name. See
[Conan's naming convention](../user/packages/conan_repository/index.md#package-recipe-naming-convention-for-instance-remotes) as an example.
-For group and project-level endpoints, naming can be less constrained and it will be up to the group and project
+For group and project-level endpoints, naming can be less constrained and it is up to the group and project
members to be certain that there is no conflict between two package names. However, the system should prevent
a user from reusing an existing name within a given scope.
@@ -121,7 +121,7 @@ The way new package systems are integrated in GitLab is using an [MVC](https://a
- Pulling a package
- Required actions
-Required actions are all the additional requests that GitLab will need to handle so the corresponding package manager CLI can work properly. It could be a search feature or an endpoint providing meta information about a package. For example:
+Required actions are all the additional requests that GitLab needs to handle so the corresponding package manager CLI can work properly. It could be a search feature or an endpoint providing meta information about a package. For example:
- For NuGet, the search request was implemented during the first MVC iteration, to support Visual Studio.
- For NPM, there is a metadata endpoint used by `npm` to get the tarball URL.
@@ -146,22 +146,22 @@ process.
-During this phase, the idea is to collect as much information as possible about the API used by the package system. Here some aspects that can be useful to include:
+During this phase, the idea is to collect as much information as possible about the API used by the package system. Here are some aspects that can be useful to include:
- **Authentication**: What authentication mechanisms are available (OAuth, Basic
- Authorization, other). Keep in mind that GitLab users will often want to use their
+ Authorization, other). Keep in mind that GitLab users often want to use their
[Personal Access Tokens](../user/profile/personal_access_tokens.md).
Although not needed for the MVC first iteration, the [CI job tokens](../user/project/new_ci_build_permissions_model.md#job-token)
have to be supported at some point in the future.
- **Requests**: Which requests are needed to have a working MVC. Ideally, produce
a list of all the requests needed for the MVC (including required actions). Further
investigation could provide an example for each request with the request and the response bodies.
-- **Upload**: Carefully analyze how the upload process works. This will probably be the most
+- **Upload**: Carefully analyze how the upload process works. This is likely the most
complex request to implement. A detailed analysis is desired here as uploads can be
encoded in different ways (body or multipart) and can even be in a totally different
format (for example, a JSON structure where the package file is a Base64 value of
a particular field). These different encodings lead to slightly different implementations
on GitLab and GitLab Workhorse. For more detailed information, review [file uploads](#file-uploads).
-- **Endpoints**: Suggest a list of endpoint URLs that will be implemented in GitLab.
+- **Endpoints**: Suggest a list of endpoint URLs to implement in GitLab.
- **Split work**: Suggest a list of changes to do to incrementally build the MVC.
- This will give a good idea of how much work there is to be done. Here is an example
+ This gives a good idea of how much work there is to be done. Here is an example
list that would need to be adapted on a case by case basis:
1. Empty file structure (API file, base service for this package)
1. Authentication system for "logging in" to the package manager
@@ -177,7 +177,7 @@ In particular, the upload request can have some [requirements in the GitLab Work
### Implementation
-The implementation of the different Merge Requests will vary between different package system integrations. Contributors should take into account some important aspects of the implementation phase.
+The implementation of the different Merge Requests varies between package system integrations. Contributors should take into account some important aspects of the implementation phase.
#### Authentication
@@ -217,23 +217,23 @@ If there is package specific behavior for a given package manager, add those met
delegate from the package model.
Note that the existing package UI only displays information within the `packages_packages` and `packages_package_files`
-tables. If the data stored in the metadata tables need to be displayed, a ~frontend change will be required.
+tables. If the data stored in the metadata tables needs to be displayed, a ~frontend change is required.
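+
+For example, the delegation described above might look roughly like the sketch below. The `Packages::FooMetadatum` model and `foo_field` attribute are hypothetical names used only to illustrate the pattern, not actual code:
+
+```ruby
+module Packages
+  class Package < ApplicationRecord
+    # Hypothetical metadata model holding package-type-specific columns.
+    has_one :foo_metadatum, inverse_of: :package, class_name: 'Packages::FooMetadatum'
+
+    # Package-specific behavior lives on the metadata model and is exposed here.
+    delegate :foo_field, to: :foo_metadatum, allow_nil: true
+  end
+end
+```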
#### File uploads
File uploads should be handled by GitLab Workhorse using object accelerated uploads. What this means is that
-the workhorse proxy that checks all incoming requests to GitLab will intercept the upload request,
+the workhorse proxy that checks all incoming requests to GitLab intercepts the upload request,
upload the file, and forward a request to the main GitLab codebase only containing the metadata
and file location rather than the file itself. An overview of this process can be found in the
[development documentation](uploads.md#direct-upload).
-In terms of code, this means a route will need to be added to the
+In terms of code, this means a route must be added to the
[GitLab Workhorse project](https://gitlab.com/gitlab-org/gitlab-workhorse) for each upload endpoint being added
(instance, group, project). [This merge request](https://gitlab.com/gitlab-org/gitlab-workhorse/-/merge_requests/412/diffs)
demonstrates adding an instance-level endpoint for Conan to workhorse. You can also see the Maven project level endpoint
implemented in the same file.
-Once the route has been added, you will need to add an additional `/authorize` version of the upload endpoint to your API file.
+Once the route has been added, you must add an additional `/authorize` version of the upload endpoint to your API file.
[Here is an example](https://gitlab.com/gitlab-org/gitlab/blob/398fef1ca26ae2b2c3dc89750f6b20455a1e5507/ee/lib/api/maven_packages.rb#L164)
of the additional endpoint added for Maven. The `/authorize` endpoint verifies and authorizes the request from workhorse,
then the normal upload endpoint is implemented below, consuming the metadata that workhorse provides in order to
@@ -244,7 +244,7 @@ in your local development environment.
### Future Work
-While working on the MVC, contributors will probably find features that are not mandatory for the MVC but can provide a better user experience. It's generally a good idea to keep an eye on those and open issues.
+While working on the MVC, contributors might find features that are not mandatory for the MVC but can provide a better user experience. It's generally a good idea to keep an eye on those and open issues.
Here are some examples
diff --git a/doc/development/performance.md b/doc/development/performance.md
index e20633a05f6..f3ce924de38 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Performance Guidelines
@@ -104,7 +104,7 @@ In short:
- Never make claims based on just benchmarks, always measure in production to
confirm your findings.
- X being N times faster than Y is meaningless if you don't know what impact it
- will actually have on your production environment.
+ has on your production environment.
- A production environment is the _only_ benchmark that always tells the truth
(unless your performance monitoring systems are not set up correctly).
- If you must write a benchmark use the benchmark-ips Gem instead of Ruby's
@@ -119,7 +119,7 @@ allowing you to profile which code is running on CPU in detail.
It's important to note that profiling an application *alters its performance*.
Different profiling strategies have different overheads. Stackprof is a sampling
-profiler. It will sample stack traces from running threads at a configurable
+profiler. It samples stack traces from running threads at a configurable
frequency (e.g. 100hz, that is 100 stacks per second). This type of profiling
has quite a low (albeit non-zero) overhead and is generally considered to be
safe for production.
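+
+As a point of reference, the gem's basic block form looks like the sketch below. This is not how GitLab wires profiling up (see the sections that follow); `expensive_work` is a placeholder:
+
+```ruby
+require 'stackprof'
+
+# Sample the CPU and write a raw profile to disk for later inspection.
+StackProf.run(mode: :cpu, raw: true, out: 'tmp/stackprof-cpu.dump') do
+  expensive_work # placeholder for the code you want to profile
+end
+```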
@@ -241,7 +241,7 @@ BasePolicy#abilities (/Users/lupine/dev/gitlab.com/gitlab-org/gitlab-development
Since the profile includes the work done by the test suite as well as the
application code, these profiles can be used to investigate slow tests as well.
However, for smaller runs (like this example), this means that the cost of
-setting up the test suite will tend to dominate.
+setting up the test suite tends to dominate.
### Production
@@ -256,7 +256,7 @@ The following configuration options can be configured:
- `STACKPROF_MODE`: See [sampling modes](https://github.com/tmm1/stackprof#sampling).
Defaults to `cpu`.
- `STACKPROF_INTERVAL`: Sampling interval. Unit semantics depend on `STACKPROF_MODE`.
- For `object` mode this is a per-event interval (every `n`th event will be sampled)
+ For `object` mode this is a per-event interval (every `n`th event is sampled)
and defaults to `1000`.
For other modes such as `cpu` this is a frequency and defaults to `10000` μs (100hz).
- `STACKPROF_FILE_PREFIX`: File path prefix where profiles are stored. Defaults
@@ -268,8 +268,8 @@ The following configuration options can be configured:
and disk overhead. Defaults to `true`.
Once enabled, profiling can be triggered by sending a `SIGUSR2` signal to the
-Ruby process. The process will begin sampling stacks. Profiling can be stopped
-by sending another `SIGUSR2`. Alternatively, it will automatically stop after
+Ruby process. The process begins sampling stacks. Profiling can be stopped
+by sending another `SIGUSR2`. Alternatively, it stops automatically after
the timeout.
Once profiling stops, the profile is written out to disk at
@@ -282,9 +282,9 @@ Currently supported profiling targets are:
- Puma worker
- Sidekiq
-NOTE: **Note:**
+NOTE:
The Puma master process is not supported. Neither is Unicorn.
-Sending SIGUSR2 to either of those will trigger restarts. In the case of Puma,
+Sending SIGUSR2 to either of those triggers restarts. In the case of Puma,
take care to only send the signal to Puma workers.
This can be done via `pkill -USR2 puma:`. The `:` disambiguates between `puma
@@ -292,7 +292,7 @@ This can be done via `pkill -USR2 puma:`. The `:` disambiguates between `puma
worker processes), selecting the latter.
For Sidekiq, the signal can be sent to the `sidekiq-cluster` process via `pkill
--USR2 bin/sidekiq-cluster`, which will forward the signal to all Sidekiq
+-USR2 bin/sidekiq-cluster`, which forwards the signal to all Sidekiq
children. Alternatively, you can also select a specific pid of interest.
Production profiles can be especially noisy. It can be helpful to visualize them
@@ -305,7 +305,7 @@ bundle exec stackprof --stackcollapse /tmp/stackprof.55769.c6c3906452.profile |
## RSpec profiling
-GitLab's development environment also includes the
+The GitLab development environment also includes the
[rspec_profiling](https://github.com/foraker/rspec_profiling) gem, which is used
to collect data on spec execution times. This is useful for analyzing the
performance of the test suite itself, or seeing how the performance of a spec
@@ -358,7 +358,7 @@ We can use two approaches, often in combination, to track down memory issues:
We can use `memory_profiler` for profiling.
-The [`memory_profiler`](https://github.com/SamSaffron/memory_profiler) gem is already present in GitLab's `Gemfile`,
+The [`memory_profiler`](https://github.com/SamSaffron/memory_profiler) gem is already present in the GitLab `Gemfile`,
you just need to require it:
```ruby
@@ -377,9 +377,9 @@ The report breaks down 2 key concepts:
- Retained: long lived memory use and object count retained due to the execution of the code block.
- Allocated: all object allocation and memory allocation during code block.
-As a general rule, **retained** will always be smaller than or equal to allocated.
+As a general rule, **retained** is always smaller than or equal to **allocated**.
-The actual RSS cost will always be slightly higher as MRI heaps are not squashed to size and memory fragments.
+The actual RSS cost is always slightly higher as MRI heaps are not squashed to size and memory fragments.
### Rbtrace
@@ -444,11 +444,11 @@ Slow operations, like merging branches, or operations that are prone to errors
directly in a web request as much as possible. This has numerous benefits such
as:
-1. An error won't prevent the request from completing.
-1. The process being slow won't affect the loading time of a page.
-1. In case of a failure it's easy to re-try the process (Sidekiq takes care of
+1. An error doesn't prevent the request from completing.
+1. The process being slow doesn't affect the loading time of a page.
+1. In case of a failure you can retry the process (Sidekiq takes care of
this automatically).
-1. By isolating the code from a web request it will hopefully be easier to test
+1. By isolating the code from a web request it should be easier to test
and maintain.
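+
+As a minimal sketch of this pattern (the worker and service names here are hypothetical, not existing GitLab classes):
+
+```ruby
+# Enqueue the slow work instead of performing it in the web request.
+class MergeBranchWorker # hypothetical worker name
+  include ApplicationWorker # GitLab's wrapper around Sidekiq::Worker
+
+  def perform(project_id, user_id, branch_name)
+    project = Project.find(project_id)
+    user = User.find(user_id)
+
+    # The slow or error-prone work happens here; Sidekiq retries it on failure.
+    Branches::MergeService.new(project, user).execute(branch_name) # hypothetical service
+  end
+end
+
+# In the controller handling the request:
+MergeBranchWorker.perform_async(project.id, current_user.id, 'feature')
+```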
It's especially important to use Sidekiq as much as possible when dealing with
@@ -480,7 +480,7 @@ end
## Caching
-Operations that will often return the same result should be cached using Redis,
+Operations that often return the same result should be cached using Redis,
in particular Git operations. When caching data in Redis, make sure the cache is
flushed whenever needed. For example, a cache for the list of tags should be
flushed whenever a new tag is pushed or a tag is removed.
@@ -494,7 +494,7 @@ the Repository class instead of leaking into other classes.
When caching data, make sure to also memoize the result in an instance variable.
While retrieving data from Redis is much faster than raw Git operations, it still
has overhead. By caching the result in an instance variable, repeated calls to
-the same method won't end up retrieving data from Redis upon every call. When
+the same method don't retrieve data from Redis upon every call. When
memoizing cached data in an instance variable, make sure to also reset the
instance variable when flushing the cache. An example:
@@ -512,7 +512,7 @@ end
## String Freezing
In recent Ruby versions calling `freeze` on a String leads to it being allocated
-only once and re-used. For example, on Ruby 2.3 or later this will only allocate the
+only once and re-used. For example, on Ruby 2.3 or later this only allocates the
"foo" String once:
```ruby
@@ -523,10 +523,10 @@ end
Depending on the size of the String and how frequently it would be allocated
(before the `.freeze` call was added), this _may_ make things faster, but
-there's no guarantee it will.
+this isn't guaranteed.
-Strings will be frozen by default in Ruby 3.0. To prepare our code base for
-this eventuality, we will be adding the following header to all Ruby files:
+Strings are frozen by default in Ruby 3.0. To prepare our codebase for
+this eventuality, we are adding the following header to all Ruby files:
```ruby
# frozen_string_literal: true
@@ -549,8 +549,8 @@ Ruby offers several convenience functions that deal with file contents specifica
or I/O streams in general. Functions such as `IO.read` and `IO.readlines` make
it easy to read data into memory, but they can be inefficient when the
data grows large. Because these functions read the entire contents of a data
-source into memory, memory use will grow by _at least_ the size of the data source.
-In the case of `readlines`, it will grow even further, due to extra bookkeeping
+source into memory, memory use grows by _at least_ the size of the data source.
+In the case of `readlines`, it grows even further, due to extra bookkeeping
the Ruby VM has to perform to represent each line.
Consider the following program, which reads a text file that is 750MB on disk:
@@ -588,12 +588,12 @@ which is roughly two orders of magnitude more compared to reading the file line
line instead. It was not just the raw memory usage that increased, but also how the garbage collector (GC)
responded to this change in anticipation of future memory use. We can see that `malloc_increase_bytes` jumped
to ~30MB, which compares to just ~4kB for a "fresh" Ruby program. This figure specifies how
-much additional heap space the Ruby GC will claim from the operating system next time it runs out of memory.
+much additional heap space the Ruby GC claims from the operating system next time it runs out of memory.
Not only did we occupy more memory, we also changed the behavior of the application
to increase memory use at a faster rate.
-The `IO.read` function exhibits similar behavior, with the difference that no extra memory will
-be allocated for each line object.
+The `IO.read` function exhibits similar behavior, with the difference that no extra memory is
+allocated for each line object.
### Recommendations
@@ -630,7 +630,7 @@ production environments.
### Moving Allocations to Constants
Storing an object as a constant so you only allocate it once _may_ improve
-performance, but there's no guarantee this will. Looking up constants has an
+performance, but this is not guaranteed. Looking up constants has an
impact on runtime performance, and as such, using a constant instead of
referencing an object directly may even slow code down. For example:
diff --git a/doc/development/permissions.md b/doc/development/permissions.md
index 859c838280e..35f0941b756 100644
--- a/doc/development/permissions.md
+++ b/doc/development/permissions.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Access
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab permissions guide
@@ -26,7 +26,7 @@ The visibility level of a group can be changed only if all subgroups and
sub-projects have the same or lower visibility level. For example, a group can be set
to internal only if all subgroups and projects are internal or private.
-CAUTION: **Warning:**
+WARNING:
If you migrate an existing group to a lower visibility level, that action does not migrate subgroups
in the same way. This is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/22406).
@@ -70,9 +70,9 @@ can still view the groups and their entities (like epics).
Project membership (where the group membership is already taken into account)
is stored in the `project_authorizations` table.
-CAUTION: **Caution:**
+WARNING:
Due to [an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/219299),
-projects in personal namespace will not show owner (`50`) permission in
+projects in a personal namespace do not show owner (`50`) permission in
`project_authorizations` table. Note however that [`user.owned_projects`](https://gitlab.com/gitlab-org/gitlab/blob/0d63823b122b11abd2492bca47cc26858eee713d/app/models/user.rb#L906-916)
is calculated properly.
@@ -98,7 +98,7 @@ In the case of a complex resource, it should be broken into smaller pieces of in
and each piece should be granted a different permission.
A good example in this case is the _Merge Request widget_ and the _Security reports_.
-Depending on the visibility level of the _Pipelines_, the _Security reports_ will be either visible
+Depending on the visibility level of the _Pipelines_, the _Security reports_ are either visible
in the widget or not. So, the _Merge Request widget_, the _Pipelines_, and the _Security reports_,
have separate permissions. Moreover, the permissions for the _Merge Request widget_
and the _Pipelines_ are dependencies of the _Security reports_.
diff --git a/doc/development/pipelines.md b/doc/development/pipelines.md
index a8ef5701d50..3243c7ec753 100644
--- a/doc/development/pipelines.md
+++ b/doc/development/pipelines.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Pipelines for the GitLab project
@@ -412,7 +412,7 @@ Merge Request. This prevents `rspec fail-fast` duration from exceeding the avera
This number can be overridden by setting a CI variable named `RSPEC_FAIL_FAST_TEST_FILE_COUNT_THRESHOLD`.
-NOTE: **Note:**
+NOTE:
This experiment is only enabled when the CI variable `RSPEC_FAIL_FAST_ENABLED=true` is set.
#### Determining related test files in a Merge Request
@@ -435,7 +435,7 @@ We are using a custom mapping between source file to test files, maintained in t
We follow the [PostgreSQL versions shipped with Omnibus GitLab](https://docs.gitlab.com/omnibus/package-information/postgresql_versions.html):
-| PostgreSQL version | 13.0 (May 2020) | 13.1 (June 2020) | 13.2 (July 2020) | 13.3 (August 2020) | 13.4, 13.5 | 13.6 (November 2020) | 14.0 (May 2021?) |
+| PostgreSQL version | 13.0 (May 2020) | 13.1 (June 2020) | 13.2 (July 2020) | 13.3 (August 2020) | 13.4, 13.5 | [13.7 (December 2020)](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5722) | 14.0 (May 2021?) |
| ------ | --------------- | ---------------- | ---------------- | ------------------ | ------------ | -------------------- | ---------------- |
| PG11 | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | MRs/`master`/`2-hour`/`nightly` | `nightly` | - |
| PG12 | - | - | `nightly` | `2-hour`/`nightly` | `2-hour`/`nightly` | MRs/`2-hour`/`nightly` | `2-hour`/`nightly` |
@@ -457,7 +457,7 @@ Consult the [Review Apps](testing_guide/review_apps.md) dedicated page for more
### As-if-FOSS jobs
-The `* as-if-foss` jobs allows to run GitLab's test suite "as-if-FOSS", meaning as if the jobs would run in the context
+The `* as-if-foss` jobs allow running the GitLab test suite "as-if-FOSS", meaning as if the jobs would run in the context
of the `gitlab-org/gitlab-foss` project. These jobs are only created in the following cases:
- `gitlab-org/security/gitlab` merge requests.
@@ -467,7 +467,7 @@ of the `gitlab-org/gitlab-foss` project. These jobs are only created in the foll
The `* as-if-foss` jobs are run in addition to the regular EE-context jobs. They have the `FOSS_ONLY='1'` variable
set and get their EE-specific folders removed before the tests start running.
-The intent is to ensure that a change won't introduce a failure once the `gitlab-org/gitlab` project will be synced to
+The intent is to ensure that a change doesn't introduce a failure after the `gitlab-org/gitlab` project is synced to
the `gitlab-org/gitlab-foss` project.
## Performance
@@ -485,22 +485,24 @@ request, be sure to start the `dont-interrupt-me` job before pushing.
1. All jobs must only pull caches by default.
1. All jobs must be able to pass with an empty cache. In other words, caches are only there to speed up jobs.
-1. We currently have 6 different caches defined in
+1. We currently have several different caches defined in
[`.gitlab/ci/global.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/global.gitlab-ci.yml),
with fixed keys:
- `.rails-cache`.
- `.static-analysis-cache`.
+ - `.coverage-cache`
- `.qa-cache`
- `.yarn-cache`.
- `.assets-compile-cache` (the key includes `${NODE_ENV}` so it's actually two different caches).
-1. Only 6 specific jobs, running in 2-hourly scheduled pipelines, are pushing (i.e. updating) to the caches:
+1. Only 7 specific jobs, running in 2-hourly scheduled pipelines, are pushing (i.e. updating) to the caches:
- `update-rails-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-static-analysis-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
+ - `update-coverage-cache`, defined in [`.gitlab/ci/rails.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/rails.gitlab-ci.yml).
- `update-qa-cache`, defined in [`.gitlab/ci/qa.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/qa.gitlab-ci.yml).
- `update-assets-compile-production-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- `update-assets-compile-test-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
- `update-yarn-cache`, defined in [`.gitlab/ci/frontend.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/blob/master/.gitlab/ci/frontend.gitlab-ci.yml).
-1. These jobs will run in merge requests whose title include `UPDATE CACHE`.
+1. These jobs run in merge requests whose title includes `UPDATE CACHE`.
### Pre-clone step
@@ -519,7 +521,7 @@ variable:
```shell
echo "Downloading archived master..."
-wget -O /tmp/gitlab.tar.gz https://storage.googleapis.com/gitlab-ci-git-repo-cache/project-278964/gitlab-master.tar.gz
+wget -O /tmp/gitlab.tar.gz https://storage.googleapis.com/gitlab-ci-git-repo-cache/project-278964/gitlab-master-shallow.tar.gz
if [ ! -f /tmp/gitlab.tar.gz ]; then
echo "Repository cache not available, cloning a new directory..."
@@ -544,8 +546,7 @@ on a scheduled pipeline, it does the following:
1. Saves the data as a `.tar.gz`.
1. Uploads it into the Google Cloud Storage bucket.
-When a CI job runs with this configuration, you'll see something like
-this:
+When a CI job runs with this configuration, the output looks something like this:
```shell
$ eval "$CI_PRE_CLONE_SCRIPT"
@@ -566,7 +567,7 @@ GitLab Team Member, find credentials in the
[GitLab shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams).
Note that this bucket should be located in the same continent as the
-runner, or [network egress charges will apply](https://cloud.google.com/storage/pricing).
+runner, or [you can incur network egress charges](https://cloud.google.com/storage/pricing).
## CI configuration internals
@@ -628,17 +629,21 @@ that are scoped to a single [configuration keyword](../ci/yaml/README.md#job-key
| Job definitions | Description |
|------------------|-------------|
-| `.default-tags` | Ensures a job has the `gitlab-org` tag to ensure it's using our dedicated runners. |
| `.default-retry` | Allows a job to [retry](../ci/yaml/README.md#retry) upon `unknown_failure`, `api_failure`, `runner_system_failure`, `job_execution_timeout`, or `stuck_or_timeout_failure`. |
| `.default-before_script` | Allows a job to use a default `before_script` definition suitable for Ruby/Rails tasks that may need a database running (e.g. tests). |
| `.rails-cache` | Allows a job to use a default `cache` definition suitable for Ruby/Rails tasks. |
| `.static-analysis-cache` | Allows a job to use a default `cache` definition suitable for static analysis tasks. |
+| `.coverage-cache` | Allows a job to use a default `cache` definition suitable for coverage tasks. |
+| `.qa-cache` | Allows a job to use a default `cache` definition suitable for QA tasks. |
| `.yarn-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that do a `yarn install`. |
| `.assets-compile-cache` | Allows a job to use a default `cache` definition suitable for frontend jobs that compile assets. |
| `.use-pg11` | Allows a job to use the `postgres:11.6` and `redis:4.0-alpine` services. |
-| `.use-pg11-ee` | Same as `.use-pg11` but also use the `docker.elastic.co/elasticsearch/elasticsearch:6.4.2` services. |
+| `.use-pg11-ee` | Same as `.use-pg11` but also use the `docker.elastic.co/elasticsearch/elasticsearch:7.9.2` services. |
+| `.use-pg12` | Allows a job to use the `postgres:12` and `redis:4.0-alpine` services. |
+| `.use-pg12-ee` | Same as `.use-pg12` but also use the `docker.elastic.co/elasticsearch/elasticsearch:7.9.2` services. |
| `.use-kaniko` | Allows a job to use the `kaniko` tool to build Docker images. |
| `.as-if-foss` | Simulate the FOSS project by setting the `FOSS_ONLY='1'` environment variable. |
+| `.use-docker-in-docker` | Allows a job to use Docker in Docker. |
### `rules`, `if:` conditions and `changes:` patterns
@@ -655,33 +660,55 @@ and included in `rules` definitions via [YAML anchors](../ci/yaml/README.md#anch
#### `if:` conditions
+<!-- vale gitlab.Substitutions = NO -->
+
| `if:` conditions | Description | Notes |
|------------------|-------------|-------|
-| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success\|manual`), or **not** create a job for forks (by using `when: never`). |
+| `if-not-canonical-namespace` | Matches if the project isn't in the canonical (`gitlab-org/`) or security (`gitlab-org/security`) namespace. | Use to create a job for forks (by using `when: on_success|manual`), or **not** create a job for forks (by using `when: never`). |
| `if-not-ee` | Matches if the project isn't EE (i.e. project name isn't `gitlab` or `gitlab-ee`). | Use to create a job only in the FOSS project (by using `when: on_success|manual`), or **not** create a job if the project is EE (by using `when: never`). |
| `if-not-foss` | Matches if the project isn't FOSS (i.e. project name isn't `gitlab-foss`, `gitlab-ce`, or `gitlabhq`). | Use to create a job only in the EE project (by using `when: on_success|manual`), or **not** create a job if the project is FOSS (by using `when: never`). |
-| `if-default-refs` | Matches if the pipeline is for `master`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs won't be created for branches with this default configuration. |
+| `if-default-refs` | Matches if the pipeline is for `master`, `/^[\d-]+-stable(-ee)?$/` (stable branches), `/^\d+-\d+-auto-deploy-\d+$/` (auto-deploy branches), `/^security\//` (security branches), merge requests, and tags. | Note that jobs aren't created for branches with this default configuration. |
| `if-master-refs` | Matches if the current branch is `master`. | |
+| `if-master-push` | Matches if the current branch is `master` and pipeline source is `push`. | |
+| `if-master-schedule-2-hourly` | Matches if the current branch is `master` and pipeline runs on a 2-hourly schedule. | |
+| `if-master-schedule-2-nightly` | Matches if the current branch is `master` and pipeline runs on a nightly schedule. | |
+| `if-auto-deploy-branches` | Matches if the current branch is an auto-deploy one. | |
| `if-master-or-tag` | Matches if the pipeline is for the `master` branch or for a tag. | |
| `if-merge-request` | Matches if the pipeline is for a merge request. | |
+| `if-merge-request-title-as-if-foss` | Matches if the pipeline is for a merge request and the MR title includes "RUN AS-IF-FOSS". | |
+| `if-merge-request-title-update-caches` | Matches if the pipeline is for a merge request and the MR title includes "UPDATE CACHE". | |
+| `if-merge-request-title-run-all-rspec` | Matches if the pipeline is for a merge request and the MR title includes "RUN ALL RSPEC". | |
+| `if-security-merge-request` | Matches if the pipeline is for a security merge request. | |
+| `if-security-schedule` | Matches if the pipeline is for a security scheduled pipeline. | |
| `if-nightly-master-schedule` | Matches if the pipeline is for a `master` scheduled pipeline with `$NIGHTLY` set. | |
| `if-dot-com-gitlab-org-schedule` | Limits jobs creation to scheduled pipelines for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-master` | Limits jobs creation to the `master` branch for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-merge-request` | Limits jobs creation to merge requests for the `gitlab-org` group on GitLab.com. | |
| `if-dot-com-gitlab-org-and-security-tag` | Limits job creation to tags for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
| `if-dot-com-gitlab-org-and-security-merge-request` | Limit jobs creation to merge requests for the `gitlab-org` and `gitlab-org/security` groups on GitLab.com. | |
| `if-dot-com-ee-schedule` | Limits jobs to scheduled pipelines for the `gitlab-org/gitlab` project on GitLab.com. | |
| `if-cache-credentials-schedule` | Limits jobs to scheduled pipelines with the `$CI_REPO_CACHE_CREDENTIALS` variable set. | |
+| `if-rspec-fail-fast-disabled` | Limits jobs to pipelines with `$RSPEC_FAIL_FAST_ENABLED` variable not set to `"true"`. | |
+| `if-rspec-fail-fast-skipped` | Matches if the pipeline is for a merge request and the MR title includes "SKIP RSPEC FAIL-FAST". | |
+| `if-security-pipeline-merge-result` | Matches if the pipeline is for a security merge request triggered by `@gitlab-release-tools-bot`. | |
+
+<!-- vale gitlab.Substitutions = YES -->
#### `changes:` patterns
| `changes:` patterns | Description |
|------------------------------|--------------------------------------------------------------------------|
-| `ci-patterns` | Only create job for CI config-related changes. |
-| `yaml-patterns` | Only create job for YAML-related changes. |
+| `ci-patterns` | Only create job for CI configuration-related changes. |
+| `ci-build-images-patterns` | Only create job for CI configuration-related changes related to the `build-images` stage. |
+| `ci-review-patterns` | Only create job for CI configuration-related changes related to the `review` stage. |
+| `ci-qa-patterns` | Only create job for CI configuration-related changes related to the `qa` stage. |
+| `yaml-lint-patterns` | Only create job for YAML-related changes. |
| `docs-patterns` | Only create job for docs-related changes. |
-| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (i.e. `package.json`, and `yarn.lock`). changes. |
+| `frontend-dependency-patterns` | Only create job when frontend dependencies are updated (i.e. `package.json` and `yarn.lock`). |
| `frontend-patterns` | Only create job for frontend-related changes. |
+| `backend-patterns` | Only create job for backend-related changes. |
+| `db-patterns` | Only create job for DB-related changes. |
| `backstage-patterns` | Only create job for backstage-related changes (i.e. Danger, fixtures, RuboCop, specs). |
| `code-patterns` | Only create job for code-related changes. |
| `qa-patterns` | Only create job for QA-related changes. |
diff --git a/doc/development/policies.md b/doc/development/policies.md
index b44367f7075..c1a87990bc9 100644
--- a/doc/development/policies.md
+++ b/doc/development/policies.md
@@ -1,14 +1,14 @@
---
stage: Manage
group: Access
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# `DeclarativePolicy` framework
The DeclarativePolicy framework is designed to assist in performance of policy checks, and to enable ease of extension for EE. The DSL code in `app/policies` is what `Ability.allowed?` uses to check whether a particular action is allowed on a subject.
-The policy used is based on the subject's class name - so `Ability.allowed?(user, :some_ability, project)` will create a `ProjectPolicy` and check permissions on that.
+The policy used is based on the subject's class name - so `Ability.allowed?(user, :some_ability, project)` creates a `ProjectPolicy` and checks permissions on that.
## Managing Permission Rules
@@ -16,7 +16,7 @@ Permissions are broken into two parts: `conditions` and `rules`. Conditions are
### Conditions
-Conditions are defined by the `condition` method, and are given a name and a block. The block will be executed in the context of the policy object - so it can access `@user` and `@subject`, as well as call any methods defined on the policy. Note that `@user` may be nil (in the anonymous case), but `@subject` is guaranteed to be a real instance of the subject class.
+Conditions are defined by the `condition` method, and are given a name and a block. The block is executed in the context of the policy object - so it can access `@user` and `@subject`, as well as call any methods defined on the policy. Note that `@user` may be nil (in the anonymous case), but `@subject` is guaranteed to be a real instance of the subject class.
```ruby
class FooPolicy < BasePolicy
@@ -34,9 +34,9 @@ class FooPolicy < BasePolicy
end
```
-When you define a condition, a predicate method is defined on the policy to check whether that condition passes - so in the above example, an instance of `FooPolicy` will also respond to `#is_public?` and `#thing?`.
+When you define a condition, a predicate method is defined on the policy to check whether that condition passes - so in the above example, an instance of `FooPolicy` also responds to `#is_public?` and `#thing?`.
-Conditions are cached according to their scope. Scope and ordering will be covered later.
+Conditions are cached according to their scope. Scope and ordering are covered later.
### Rules
@@ -63,13 +63,23 @@ end
Within the rule DSL, you can use:
- A regular word mentions a condition by name - a rule that is in effect when that condition is truthy.
-- `~` indicates negation.
+- `~` indicates negation, also available as `negate`.
- `&` and `|` are logical combinations, also available as `all?(...)` and `any?(...)`.
- `can?(:other_ability)` delegates to the rules that apply to `:other_ability`. Note that this is distinct from the instance method `can?`, which can check dynamically - this only configures a delegation to another ability.
+`~`, `&` and `|` operators are overridden methods in
+[`DeclarativePolicy::Rule::Base`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/declarative_policy/rule.rb).
+
+Do not use boolean operators such as `&&` and `||` within the rule DSL,
+as conditions within rule blocks are objects, not booleans. The same
+applies for ternary operators (`condition ? ... : ...`), and `if`
+blocks. These operators cannot be overridden, and are hence banned via a
+[custom
+cop](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/49771).
+
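+For illustration, here is a minimal sketch of how the DSL operators combine. It reuses the `is_public` and `thing` condition names from the example above; the condition bodies and the `:read` ability are illustrative only:
+
+```ruby
+class FooPolicy < BasePolicy
+  condition(:is_public) { @subject.public? }
+  condition(:thing) { @subject.thing? }
+
+  # Correct: combine condition objects with the overridden DSL operators.
+  rule { is_public & ~thing }.enable :read
+
+  # Incorrect: `&&` cannot be overridden, so it would only test the truthiness
+  # of the rule objects; the custom cop mentioned above rejects this form.
+  # rule { is_public && ~thing }.enable :read
+end
+```
+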
## Scores, Order, Performance
-To see how the rules get evaluated into a judgment, it is useful in a console to use `policy.debug(:some_ability)`. This will print the rules in the order they are evaluated.
+To see how the rules get evaluated into a judgment, it is useful in a console to use `policy.debug(:some_ability)`. This prints the rules in the order they are evaluated.
For example, let's say you wanted to debug `IssuePolicy`. You might run
the debugger in this way:
@@ -109,9 +119,9 @@ When a policy is asked whether a particular ability is allowed
compute all the conditions on the policy. First, only the rules relevant
to that particular ability are selected. Then, the execution model takes
advantage of short-circuiting, and attempts to sort rules based on a
-heuristic of how expensive they will be to calculate. The sorting is
-dynamic and cache-aware, so that previously calculated conditions will
-be considered first, before computing other conditions.
+heuristic of how expensive they are to calculate. The sorting is
+dynamic and cache-aware, so that previously calculated conditions are
+considered first, before computing other conditions.
Note that the score is chosen by a developer via the `score:` parameter
in a `condition` to denote how expensive evaluating this rule would be
@@ -119,7 +129,7 @@ relative to other rules.
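For example, a hypothetical condition that hints at its cost (the name and score value are illustrative only):

```ruby
# A higher score tells the sorter this condition is expensive, so cheaper or
# already-cached conditions are evaluated first.
condition(:has_many_members, score: 32) { @subject.members.count > 100 }
```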
## Scope
-Sometimes, a condition will only use data from `@user` or only from `@subject`. In this case, we want to change the scope of the caching, so that we don't recalculate conditions unnecessarily. For example, given:
+Sometimes, a condition only uses data from `@user` or only from `@subject`. In this case, we want to change the scope of the caching, so that we don't recalculate conditions unnecessarily. For example, given:
```ruby
class FooPolicy < BasePolicy
@@ -135,10 +145,10 @@ Naively, if we call `Ability.allowed?(user1, :some_ability, foo)` and `Ability.a
condition(:expensive_condition, scope: :subject) { @subject.expensive_query? }
```
-then the result of the condition will be cached globally only based on the subject - so it will not be calculated repeatedly for different users. Similarly, `scope: :user` will cache only based on the user.
+then the result of the condition is cached globally only based on the subject - so it is not calculated repeatedly for different users. Similarly, `scope: :user` caches only based on the user.
**DANGER**: If you use a `:scope` option when the condition actually uses data from
-both user and subject (including a simple anonymous check!) your result will be cached at too global of a scope and will result in cache bugs.
+both user and subject (including a simple anonymous check!), your result is cached at too global a scope, resulting in cache bugs.
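A sketch of the failure mode (hypothetical condition and methods):

```ruby
# Wrong: the block reads both @user and @subject, but the :subject scope
# caches the result per subject, leaking the first user's answer to all users.
condition(:member_of_public_project, scope: :subject) do
  @subject.public? && @subject.member?(@user)
end
```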
Sometimes we are checking permissions for a lot of users for one subject, or a lot of subjects for one user. In this case, we want to set a *preferred scope* - i.e. tell the system that we prefer rules that can be cached on the repeated parameter. For example, in `Ability.users_that_can_read_project`:
@@ -150,7 +160,7 @@ def users_that_can_read_project(users, project)
end
```
-This will, for example, prefer checking `project.public?` to checking `user.admin?`.
+This, for example, prefers checking `project.public?` to checking `user.admin?`.
## Delegation
@@ -162,7 +172,7 @@ class FooPolicy < BasePolicy
end
```
-will include all rules from `ProjectPolicy`. The delegated conditions will be evaluated with the correct delegated subject, and will be sorted along with the regular rules in the policy. Note that only the relevant rules for a particular ability will actually be considered.
+includes all rules from `ProjectPolicy`. The delegated conditions are evaluated with the correct delegated subject, and are sorted along with the regular rules in the policy. Note that only the relevant rules for a particular ability are actually considered.
### Overrides
@@ -203,7 +213,7 @@ end
But the food preferences one is harder - because of the `prevent` call in the
parent policy, if the parent dislikes it, even calling `enable` in the child
-will not enable `:eat_broccoli`.
+does not enable `:eat_broccoli`.
We could remove the `prevent` call in the parent policy, but that still doesn't
help us, since the rules are different: parents get to eat what they like, and
@@ -226,7 +236,7 @@ class ChildPolicy < BasePolicy
end
```
-With this definition, the `ChildPolicy` will _never_ look in the `ParentPolicy` to
+With this definition, the `ChildPolicy` _never_ looks in the `ParentPolicy` to
satisfy `:eat_broccoli`, but it _will_ use it for any other abilities. The child
policy can then define `:eat_broccoli` in a way that makes sense for `Child` and not
`Parent`.
@@ -243,7 +253,7 @@ Other approaches can include for example using different ability names. Choosing
to eat a food and eating foods you are given are semantically distinct, and they
could be named differently (perhaps `chooses_to_eat_broccoli` and
`eats_what_is_given` in this case). It can depend on how polymorphic the call
-site is. If you know that we will always check the policy with a `Parent` or a
+site is. If you know that we always check the policy with a `Parent` or a
`Child`, then we can choose the appropriate ability name. If the call site is
polymorphic, then we cannot do that.
@@ -260,4 +270,4 @@ class Foo
end
```
-This will use & check permissions on the `SomeOtherPolicy` class rather than the usual calculated `FooPolicy` class.
+This uses and checks permissions on the `SomeOtherPolicy` class rather than the usual calculated `FooPolicy` class.
diff --git a/doc/development/polling.md b/doc/development/polling.md
index b06507787b0..18f9fb954dd 100644
--- a/doc/development/polling.md
+++ b/doc/development/polling.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Polling with ETag caching
@@ -47,7 +47,7 @@ Cache Hit:
resource.
1. If the `If-None-Match` header matches the current value in Redis we know
that the resource did not change so we can send 304 response immediately,
- without querying the database at all. The client's browser will use the
+ without querying the database at all. The client's browser uses the
cached response.
1. If the `If-None-Match` header does not match the current value in Redis
we have to generate a new response, because the resource changed.
diff --git a/doc/development/polymorphic_associations.md b/doc/development/polymorphic_associations.md
index 66ea974063a..cabd2e3fb41 100644
--- a/doc/development/polymorphic_associations.md
+++ b/doc/development/polymorphic_associations.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Polymorphic Associations
@@ -16,7 +16,7 @@ target ID. For example, at the time of writing we have such a setup for
- `source_type`: a string defining the model to use, can be either `Project` or
`Namespace`.
- `source_id`: the ID of the row to retrieve based on `source_type`. For
- example, when `source_type` is `Project` then `source_id` will contain a
+ example, when `source_type` is `Project` then `source_id` contains a
project ID.
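In Rails terms, such a table is usually wired up with a polymorphic `belongs_to`; a minimal sketch (the model names are illustrative, not the exact GitLab schema):

```ruby
class Member < ApplicationRecord
  belongs_to :source, polymorphic: true # reads source_type and source_id
end

class Project < ApplicationRecord
  has_many :members, as: :source
end

class Namespace < ApplicationRecord
  has_many :members, as: :source
end
```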
While such a setup may appear to be useful, it comes with many drawbacks; enough
@@ -24,8 +24,8 @@ that you should avoid this at all costs.
## Space Wasted
-Because this setup relies on string values to determine the model to use it will
-end up wasting a lot of space. For example, for `Project` and `Namespace` the
+Because this setup relies on string values to determine the model to use, it
+wastes a lot of space. For example, for `Project` and `Namespace` the
maximum size is 9 bytes, plus 1 extra byte for every string when using
PostgreSQL. While this may only be 10 bytes per row, given enough tables and
rows using such a setup we can end up wasting quite a bit of disk space and
@@ -84,7 +84,7 @@ Let's say you have a `members` table storing both approved and pending members,
for both projects and groups, and the pending state is determined by the column
`requested_at` being set or not. Schema wise such a setup can lead to various
columns only being set for certain rows, wasting space. It's also possible that
-certain indexes will only be set for certain rows, again wasting space. Finally,
+certain indexes are only set for certain rows, again wasting space. Finally,
querying such a table requires less than ideal queries. For example:
```sql
@@ -121,7 +121,7 @@ WHERE group_id = 4
```
If you want to get both you can use a UNION, though you need to be explicit
-about what columns you want to SELECT as otherwise the result set will use the
+about what columns you want to SELECT as otherwise the result set uses the
columns of the first query. For example:
```sql
@@ -147,6 +147,6 @@ filter rows using the `IS NULL` condition.
To summarize: using separate tables allows us to use foreign keys effectively,
create indexes only where necessary, conserve space, query data more
efficiently, and scale these tables more easily (e.g. by storing them on
-separate disks). A nice side effect of this is that code can also become easier
-as you won't end up with a single model having to handle different kinds of
+separate disks). A nice side effect of this is that code can also become easier,
+as a single model isn't responsible for handling different kinds of
data.
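A sketch of the separate-table alternative (illustrative model names):

```ruby
# Each source type gets its own table and a plain foreign key, so the database
# can enforce referential integrity and indexes stay small.
class ProjectMember < ApplicationRecord
  belongs_to :project
end

class GroupMember < ApplicationRecord
  belongs_to :group
end
```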
diff --git a/doc/development/post_deployment_migrations.md b/doc/development/post_deployment_migrations.md
index 2550f9547df..6ab3620c197 100644
--- a/doc/development/post_deployment_migrations.md
+++ b/doc/development/post_deployment_migrations.md
@@ -1,14 +1,14 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Post Deployment Migrations
Post deployment migrations are regular Rails migrations that can optionally be
executed after a deployment. By default these migrations are executed alongside
-the other migrations. To skip these migrations you will have to set the
+the other migrations. To skip these migrations, you must set the
environment variable `SKIP_POST_DEPLOYMENT_MIGRATIONS` to a non-empty value
when running `rake db:migrate`.
@@ -19,7 +19,7 @@ migrations:
bundle exec rake db:migrate
```
-This however will skip post deployment migrations:
+This, however, skips post deployment migrations:
```shell
SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rake db:migrate
@@ -40,7 +40,7 @@ Once all servers have been updated you can run `chef-client` again on a single
server _without_ the environment variable.
The process is similar for other deployment techniques: first you would deploy
-with the environment variable set, then you'll essentially re-deploy a single
+with the environment variable set, then you re-deploy a single
server but with the variable _unset_.
## Creating Migrations
@@ -51,7 +51,7 @@ To create a post deployment migration you can use the following Rails generator:
bundle exec rails g post_deployment_migration migration_name_here
```
-This will generate the migration file in `db/post_migrate`. These migrations
+This generates the migration file in `db/post_migrate`. These migrations
behave exactly like regular Rails migrations.
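As a rough sketch of what the generated file contains (the class name, timestamp, and superclass version are placeholders and may differ between GitLab versions):

```ruby
# db/post_migrate/20201217000000_migration_name_here.rb
class MigrationNameHere < ActiveRecord::Migration[6.0]
  DOWNTIME = false

  def up
    # Cleanup or data migration that is safe to run after the code is deployed.
  end

  def down
    # Best-effort revert.
  end
end
```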
## Use Cases
diff --git a/doc/development/product_analytics/event_dictionary.md b/doc/development/product_analytics/event_dictionary.md
index 88cb75fdb83..9c363f08cb4 100644
--- a/doc/development/product_analytics/event_dictionary.md
+++ b/doc/development/product_analytics/event_dictionary.md
@@ -3,3 +3,6 @@ redirect_to: 'https://about.gitlab.com/handbook/product/product-analytics-guide/
---
This document was moved to [another location](https://about.gitlab.com/handbook/product/product-analytics-guide/).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/product_analytics/index.md b/doc/development/product_analytics/index.md
index 88cb75fdb83..9c363f08cb4 100644
--- a/doc/development/product_analytics/index.md
+++ b/doc/development/product_analytics/index.md
@@ -3,3 +3,6 @@ redirect_to: 'https://about.gitlab.com/handbook/product/product-analytics-guide/
---
This document was moved to [another location](https://about.gitlab.com/handbook/product/product-analytics-guide/).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/product_analytics/snowplow.md b/doc/development/product_analytics/snowplow.md
index c5f48994d5c..48b816f0b83 100644
--- a/doc/development/product_analytics/snowplow.md
+++ b/doc/development/product_analytics/snowplow.md
@@ -1,7 +1,7 @@
---
stage: Growth
group: Product Analytics
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Snowplow Guide
@@ -71,8 +71,8 @@ The following example shows a basic request/response flow between the following
- Snowplow JS / Ruby Trackers on GitLab.com
- [GitLab.com Snowplow Collector](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/library/snowplow/index.md)
-- GitLab's S3 Bucket
-- GitLab's Snowflake Data Warehouse
+- The GitLab S3 Bucket
+- The GitLab Snowflake Data Warehouse
- Sisense:
```mermaid
@@ -98,7 +98,7 @@ sequenceDiagram
## Structured event taxonomy
-When adding new click events, we should add them in a way that's internally consistent. If we don't, it'll be very painful to perform analysis across features since each feature will be capturing events differently.
+When adding new click events, we should add them in a way that's internally consistent. If we don't, it is very painful to perform analysis across features since each feature captures events differently.
The current method provides several attributes that are sent on each click event. Please try to follow these guidelines when specifying events to capture:
@@ -110,6 +110,10 @@ The current method provides several attributes that are sent on each click event
| property | text | false | Any additional property of the element, or object being acted on. |
| value | decimal | false | Describes a numeric value or something directly related to the event. This could be the value of an input (e.g. `10` when clicking `internal` visibility). |
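For example, the backend `Gitlab::Tracking.event` helper accepts these attributes directly (the category, action, and values below are illustrative only):

```ruby
Gitlab::Tracking.event(
  'Groups::EmailCampaignsController', # category
  'click_button',                     # action
  label: 'calendar_link',
  property: 'invite_email',
  value: 1
)
```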
+### Web-specific parameters
+
+Snowplow JS adds many [web-specific parameters](https://docs.snowplowanalytics.com/docs/collecting-data/collecting-from-own-applications/snowplow-tracker-protocol/#Web-specific_parameters) to all web events by default.
+
## Implementing Snowplow JS (Frontend) tracking
GitLab provides `Tracking`, an interface that wraps the [Snowplow JavaScript Tracker](https://github.com/snowplow/snowplow/wiki/javascript-tracker) for tracking custom events. There are a few ways to use tracking, but each generally requires at minimum, a `category` and an `action`. Additional data can be provided that adheres to our [Structured event taxonomy](#structured-event-taxonomy).
@@ -150,6 +154,17 @@ Below is a list of supported `data-track-*` attributes:
| `data-track-value` | false | The `value` as described in our [Structured event taxonomy](#structured-event-taxonomy). If omitted, this is the element's `value` property or an empty string. For checkboxes, the default value is the element's checked attribute or `false` when unchecked. |
| `data-track-context` | false | The `context` as described in our [Structured event taxonomy](#structured-event-taxonomy). |
+#### Caveats
+
+When using the GitLab helper method [`nav_link`](https://gitlab.com/gitlab-org/gitlab/-/blob/898b286de322e5df6a38d257b10c94974d580df8/app/helpers/tab_helper.rb#L69), be sure to wrap `html_options` under the `html_options` keyword argument.
+Be careful: this behavior can be confused with the `ActionView` helper method [`link_to`](https://api.rubyonrails.org/v5.2.3/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to), which does not require additional wrapping of `html_options`:
+
+`nav_link(controller: ['dashboard/groups', 'explore/groups'], html_options: { data: { track_label: "groups_dropdown", track_event: "click_dropdown" } })`
+
+vs
+
+`link_to assigned_issues_dashboard_path, title: _('Issues'), data: { track_label: 'main_navigation', track_event: 'click_issues_link' }`
+
### Tracking within Vue components
There's a tracking Vue mixin that can be used in components if more complex tracking is required. To use it, first import the `Tracking` library and request a mixin.
@@ -366,48 +381,79 @@ Snowplow Micro is a Docker-based solution for testing frontend and backend event
- Look at the [Snowplow Micro repository](https://github.com/snowplow-incubator/snowplow-micro)
- Watch our [installation guide recording](https://www.youtube.com/watch?v=OX46fo_A0Ag)
-1. Install [Snowplow Micro](https://github.com/snowplow-incubator/snowplow-micro):
-
- ```shell
- docker run --mount type=bind,source=$(pwd)/example,destination=/config -p 9090:9090 snowplow/snowplow-micro:latest --collector-config /config/micro.conf --iglu /config/iglu.json
- ```
+1. Ensure Docker is installed and running.
-1. Install Snowplow Micro by cloning the settings in [this project](https://gitlab.com/gitlab-org/snowplow-micro-configuration):
+1. Install [Snowplow Micro](https://github.com/snowplow-incubator/snowplow-micro) by cloning the settings in [this project](https://gitlab.com/gitlab-org/snowplow-micro-configuration).
+1. Navigate to the directory with the cloned project, and start the appropriate Docker
+ container with the following script:
```shell
- git clone git@gitlab.com:gitlab-org/snowplow-micro-configuration.git
./snowplow-micro.sh
```
-1. Update port in SQL to set `9090`:
+1. Update your instance's settings to enable Snowplow events and point to the Snowplow Micro collector:
```shell
gdk psql -d gitlabhq_development
update application_settings set snowplow_collector_hostname='localhost:9090', snowplow_enabled=true, snowplow_cookie_domain='.gitlab.com';
```
-1. Update `app/assets/javascripts/tracking.js` to [remove this line](https://gitlab.com/snippets/1918635):
+1. Update `DEFAULT_SNOWPLOW_OPTIONS` in `app/assets/javascripts/tracking.js` to remove `forceSecureTracker: true`:
+
+ ```diff
+ diff --git a/app/assets/javascripts/tracking.js b/app/assets/javascripts/tracking.js
+ index 0a1211d0a76..3b98c8f28f2 100644
+ --- a/app/assets/javascripts/tracking.js
+ +++ b/app/assets/javascripts/tracking.js
+ @@ -7,7 +7,6 @@ const DEFAULT_SNOWPLOW_OPTIONS = {
+ appId: '',
+ userFingerprint: false,
+ respectDoNotTrack: true,
+ - forceSecureTracker: true,
+ eventMethod: 'post',
+ contexts: { webPage: true, performanceTiming: true },
+ formTracking: false,
- ```javascript
- forceSecureTracker: true
```
-1. Update `lib/gitlab/tracking.rb` to [add these lines](https://gitlab.com/snippets/1918635):
-
- ```ruby
- protocol: 'http',
- port: 9090,
+1. Update `snowplow_options` in `lib/gitlab/tracking.rb` to add `protocol` and `port`:
+
+ ```diff
+ diff --git a/lib/gitlab/tracking.rb b/lib/gitlab/tracking.rb
+ index 618e359211b..e9084623c43 100644
+ --- a/lib/gitlab/tracking.rb
+ +++ b/lib/gitlab/tracking.rb
+ @@ -41,7 +41,9 @@ def snowplow_options(group)
+ cookie_domain: Gitlab::CurrentSettings.snowplow_cookie_domain,
+ app_id: Gitlab::CurrentSettings.snowplow_app_id,
+ form_tracking: additional_features,
+ - link_click_tracking: additional_features
+ + link_click_tracking: additional_features,
+ + protocol: 'http',
+ + port: 9090
+ }.transform_keys! { |key| key.to_s.camelize(:lower).to_sym }
+ end
```
-1. Update `lib/gitlab/tracking.rb` to [change async emitter from https to http](https://gitlab.com/snippets/1918635):
+1. Update `emitter` in `lib/gitlab/tracking/destinations/snowplow.rb` to change `protocol`:
+
+ ```diff
+ diff --git a/lib/gitlab/tracking/destinations/snowplow.rb b/lib/gitlab/tracking/destinations/snowplow.rb
+ index 4fa844de325..5dd9d0eacfb 100644
+ --- a/lib/gitlab/tracking/destinations/snowplow.rb
+ +++ b/lib/gitlab/tracking/destinations/snowplow.rb
+ @@ -40,7 +40,7 @@ def tracker
+ def emitter
+ SnowplowTracker::AsyncEmitter.new(
+ Gitlab::CurrentSettings.snowplow_collector_hostname,
+ - protocol: 'https'
+ + protocol: 'http'
+ )
+ end
+ end
- ```ruby
- SnowplowTracker::AsyncEmitter.new(Gitlab::CurrentSettings.snowplow_collector_hostname, protocol: 'http'),
```
-1. Enable Snowplow in the admin area, Settings::Integrations::Snowplow to point to:
- `http://localhost:3000/admin/application_settings/integrations#js-snowplow-settings`.
-
1. Restart GDK:
```shell
@@ -417,9 +463,11 @@ Snowplow Micro is a Docker-based solution for testing frontend and backend event
1. Send a test Snowplow event from the Rails console:
```ruby
- Gitlab::Tracking.self_describing_event('iglu:com.gitlab/pageview_context/jsonschema/1-0-0', { page_type: 'MY_TYPE' }, context: nil )
+ Gitlab::Tracking.self_describing_event('iglu:com.gitlab/pageview_context/jsonschema/1-0-0', data: { page_type: 'MY_TYPE' }, context: nil)
```
+1. Navigate to `localhost:9090/micro/good` to see the event.
+
### Snowplow Mini
[Snowplow Mini](https://github.com/snowplow/snowplow-mini) is an easily-deployable, single-instance version of Snowplow.
@@ -427,3 +475,142 @@ Snowplow Micro is a Docker-based solution for testing frontend and backend event
Snowplow Mini can be used for testing frontend and backend events on a production, staging and local development environment.
For GitLab.com, we're setting up a [QA and Testing environment](https://gitlab.com/gitlab-org/telemetry/-/issues/266) using Snowplow Mini.
+
+## Snowplow Schemas
+
+### Default Schema
+
+| Field Name | Required | Type | Description |
+|--------------------------|---------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------|
+| app_id | **{check-circle}** | string | Unique identifier for website / application |
+| base_currency | **{dotted-circle}** | string | Reporting currency |
+| br_colordepth | **{dotted-circle}** | integer | Browser color depth |
+| br_cookies | **{dotted-circle}** | boolean | Does the browser permit cookies? |
+| br_family | **{dotted-circle}** | string | Browser family |
+| br_features_director | **{dotted-circle}** | boolean | Director plugin installed? |
+| br_features_flash | **{dotted-circle}** | boolean | Flash plugin installed? |
+| br_features_gears | **{dotted-circle}** | boolean | Google gears installed? |
+| br_features_java | **{dotted-circle}** | boolean | Java plugin installed? |
+| br_features_pdf | **{dotted-circle}** | boolean | Adobe PDF plugin installed? |
+| br_features_quicktime | **{dotted-circle}** | boolean | Quicktime plugin installed? |
+| br_features_realplayer | **{dotted-circle}** | boolean | Realplayer plugin installed? |
+| br_features_silverlight | **{dotted-circle}** | boolean | Silverlight plugin installed? |
+| br_features_windowsmedia | **{dotted-circle}** | boolean | Windows media plugin installed? |
+| br_lang | **{dotted-circle}** | string | Language the browser is set to |
+| br_name | **{dotted-circle}** | string | Browser name |
+| br_renderengine | **{dotted-circle}** | string | Browser rendering engine |
+| br_type | **{dotted-circle}** | string | Browser type |
+| br_version | **{dotted-circle}** | string | Browser version |
+| br_viewheight | **{dotted-circle}** | string | Browser viewport height |
+| br_viewwidth | **{dotted-circle}** | string | Browser viewport width |
+| collector_tstamp | **{dotted-circle}** | timestamp | Time stamp for the event recorded by the collector |
+| contexts | **{dotted-circle}** | | |
+| derived_contexts | **{dotted-circle}** | | Contexts derived in the Enrich process |
+| derived_tstamp           | **{dotted-circle}** | timestamp | Timestamp making allowance for inaccurate device clock                                                                             |
+| doc_charset | **{dotted-circle}** | string | Web page’s character encoding |
+| doc_height | **{dotted-circle}** | string | Web page height |
+| doc_width | **{dotted-circle}** | string | Web page width |
+| domain_sessionid | **{dotted-circle}** | string | Unique identifier (UUID) for this visit of this user_id to this domain |
+| domain_sessionidx | **{dotted-circle}** | integer | Index of number of visits that this user_id has made to this domain (The first visit is `1`) |
+| domain_userid | **{dotted-circle}** | string | Unique identifier for a user, based on a first party cookie (so domain specific) |
+| dvce_created_tstamp | **{dotted-circle}** | timestamp | Timestamp when event occurred, as recorded by client device |
+| dvce_ismobile | **{dotted-circle}** | boolean | Indicates whether device is mobile |
+| dvce_screenheight | **{dotted-circle}** | string | Screen / monitor resolution |
+| dvce_screenwidth | **{dotted-circle}** | string | Screen / monitor resolution |
+| dvce_sent_tstamp | **{dotted-circle}** | timestamp | Timestamp when event was sent by client device to collector |
+| dvce_type | **{dotted-circle}** | string | Type of device |
+| etl_tags | **{dotted-circle}** | string | JSON of tags for this ETL run |
+| etl_tstamp | **{dotted-circle}** | timestamp | Timestamp event began ETL |
+| event | **{dotted-circle}** | string | Event type |
+| event_fingerprint | **{dotted-circle}** | string | Hash client-set event fields |
+| event_format | **{dotted-circle}** | string | Format for event |
+| event_id | **{dotted-circle}** | string | Event UUID |
+| event_name | **{dotted-circle}** | string | Event name |
+| event_vendor | **{dotted-circle}** | string | The company who developed the event model |
+| event_version | **{dotted-circle}** | string | Version of event schema |
+| geo_city | **{dotted-circle}** | string | City of IP origin |
+| geo_country | **{dotted-circle}** | string | Country of IP origin |
+| geo_latitude | **{dotted-circle}** | string | An approximate latitude |
+| geo_longitude | **{dotted-circle}** | string | An approximate longitude |
+| geo_region | **{dotted-circle}** | string | Region of IP origin |
+| geo_region_name | **{dotted-circle}** | string | Region of IP origin |
+| geo_timezone | **{dotted-circle}** | string | Timezone of IP origin |
+| geo_zipcode | **{dotted-circle}** | string | Zip (postal) code of IP origin |
+| ip_domain | **{dotted-circle}** | string | Second level domain name associated with the visitor’s IP address |
+| ip_isp | **{dotted-circle}** | string | Visitor’s ISP |
+| ip_netspeed | **{dotted-circle}** | string | Visitor’s connection type |
+| ip_organization | **{dotted-circle}** | string | Organization associated with the visitor’s IP address – defaults to ISP name if none is found |
+| mkt_campaign | **{dotted-circle}** | string | The campaign ID |
+| mkt_clickid | **{dotted-circle}** | string | The click ID |
+| mkt_content | **{dotted-circle}** | string | The content or ID of the ad. |
+| mkt_medium | **{dotted-circle}** | string | Type of traffic source |
+| mkt_network | **{dotted-circle}** | string | The ad network to which the click ID belongs |
+| mkt_source | **{dotted-circle}** | string | The company / website where the traffic came from |
+| mkt_term | **{dotted-circle}** | string | Keywords associated with the referrer |
+| name_tracker | **{dotted-circle}** | string | The tracker namespace |
+| network_userid | **{dotted-circle}** | string | Unique identifier for a user, based on a cookie from the collector (so set at a network level and shouldn’t be set by a tracker) |
+| os_family | **{dotted-circle}** | string | Operating system family |
+| os_manufacturer | **{dotted-circle}** | string | Manufacturers of operating system |
+| os_name | **{dotted-circle}** | string | Name of operating system |
+| os_timezone | **{dotted-circle}** | string | Client operating system timezone |
+| page_referrer | **{dotted-circle}** | string | Referrer URL |
+| page_title | **{dotted-circle}** | string | Page title |
+| page_url | **{dotted-circle}** | string | Page URL |
+| page_urlfragment | **{dotted-circle}** | string | Fragment aka anchor |
+| page_urlhost | **{dotted-circle}** | string | Host aka domain |
+| page_urlpath | **{dotted-circle}** | string | Path to page |
+| page_urlport | **{dotted-circle}** | integer | Port if specified, 80 if not |
+| page_urlquery | **{dotted-circle}** | string | Query string |
+| page_urlscheme | **{dotted-circle}** | string | Scheme (protocol name) |
+| platform | **{dotted-circle}** | string | The platform the app runs on |
+| pp_xoffset_max | **{dotted-circle}** | integer | Maximum page x offset seen in the last ping period |
+| pp_xoffset_min | **{dotted-circle}** | integer | Minimum page x offset seen in the last ping period |
+| pp_yoffset_max | **{dotted-circle}** | integer | Maximum page y offset seen in the last ping period |
+| pp_yoffset_min | **{dotted-circle}** | integer | Minimum page y offset seen in the last ping period |
+| refr_domain_userid | **{dotted-circle}** | string | The Snowplow domain_userid of the referring website |
+| refr_dvce_tstamp | **{dotted-circle}** | timestamp | The time of attaching the domain_userid to the inbound link |
+| refr_medium | **{dotted-circle}** | string | Type of referer |
+| refr_source | **{dotted-circle}** | string | Name of referer if recognised |
+| refr_term | **{dotted-circle}** | string | Keywords if source is a search engine |
+| refr_urlfragment | **{dotted-circle}** | string | Referer URL fragment |
+| refr_urlhost | **{dotted-circle}** | string | Referer host |
+| refr_urlpath | **{dotted-circle}** | string | Referer page path |
+| refr_urlport | **{dotted-circle}** | integer | Referer port |
+| refr_urlquery | **{dotted-circle}** | string | Referer URL querystring |
+| refr_urlscheme | **{dotted-circle}** | string | Referer scheme |
+| se_action | **{dotted-circle}** | string | The action / event itself |
+| se_category | **{dotted-circle}** | string | The category of event |
+| se_label                 | **{dotted-circle}** | string    | A label often used to refer to the 'object' the action is performed on                                                            |
+| se_property | **{dotted-circle}** | string | A property associated with either the action or the object |
+| se_value | **{dotted-circle}** | decimal | A value associated with the user action |
+| ti_category | **{dotted-circle}** | string | Item category |
+| ti_currency | **{dotted-circle}** | string | Currency |
+| ti_name | **{dotted-circle}** | string | Item name |
+| ti_orderid | **{dotted-circle}** | string | Order ID |
+| ti_price | **{dotted-circle}** | decimal | Item price |
+| ti_price_base | **{dotted-circle}** | decimal | Item price in base currency |
+| ti_quantity | **{dotted-circle}** | integer | Item quantity |
+| ti_sku | **{dotted-circle}** | string | Item SKU |
+| tr_affiliation | **{dotted-circle}** | string | Transaction affiliation (such as channel) |
+| tr_city | **{dotted-circle}** | string | Delivery address: city |
+| tr_country | **{dotted-circle}** | string | Delivery address: country |
+| tr_currency | **{dotted-circle}** | string | Transaction Currency |
+| tr_orderid | **{dotted-circle}** | string | Order ID |
+| tr_shipping | **{dotted-circle}** | decimal | Delivery cost charged |
+| tr_shipping_base | **{dotted-circle}** | decimal | Shipping cost in base currency |
+| tr_state | **{dotted-circle}** | string | Delivery address: state |
+| tr_tax | **{dotted-circle}** | decimal | Transaction tax value (such as amount of VAT included) |
+| tr_tax_base | **{dotted-circle}** | decimal | Tax applied in base currency |
+| tr_total | **{dotted-circle}** | decimal | Transaction total value |
+| tr_total_base | **{dotted-circle}** | decimal | Total amount of transaction in base currency |
+| true_tstamp | **{dotted-circle}** | timestamp | User-set exact timestamp |
+| txn_id | **{dotted-circle}** | string | Transaction ID |
+| unstruct_event | **{dotted-circle}** | JSON | The properties of the event |
+| uploaded_at | **{dotted-circle}** | | |
+| user_fingerprint | **{dotted-circle}** | integer | User identifier based on (hopefully unique) browser features |
+| user_id | **{dotted-circle}** | string | Unique identifier for user, set by the business using setUserId |
+| user_ipaddress | **{dotted-circle}** | string | IP address |
+| useragent | **{dotted-circle}** | string | User agent (expressed as a browser string) |
+| v_collector | **{dotted-circle}** | string | Collector version |
+| v_etl | **{dotted-circle}** | string | ETL version |
+| v_tracker | **{dotted-circle}** | string | Identifier for Snowplow tracker |
diff --git a/doc/development/product_analytics/usage_ping.md b/doc/development/product_analytics/usage_ping.md
index fa785d934cb..37363bbabbc 100644
--- a/doc/development/product_analytics/usage_ping.md
+++ b/doc/development/product_analytics/usage_ping.md
@@ -1,7 +1,7 @@
---
stage: Growth
group: Product Analytics
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Usage Ping Guide
@@ -31,15 +31,15 @@ More useful links:
- The usage data is primarily composed of row counts for different tables in the instance’s database. By comparing these counts month over month (or week over week), we can get a rough sense for how an instance is using the different features within the product. In addition to counts, other facts
that help us classify and understand GitLab installations are collected.
- Usage ping is important to GitLab as we use it to calculate our Stage Monthly Active Users (SMAU) which helps us measure the success of our stages and features.
-- While usage ping is enabled, GitLab will gather data from the other instances and will be able to show usage statistics of your instance to your users.
+- While usage ping is enabled, GitLab gathers data from the other instances and can show usage statistics of your instance to your users.
### Why should we enable Usage Ping?
- The main purpose of Usage Ping is to build a better GitLab. Data about how GitLab is used is collected to better understand feature/stage adoption and usage, which helps us understand how GitLab is adding value and helps our team better understand the reasons why people use GitLab and with this knowledge we're able to make better product decisions.
- As a benefit of having the usage ping active, GitLab lets you analyze the users’ activities over time of your GitLab installation.
- As a benefit of having the usage ping active, GitLab provides you with The DevOps Report, which gives you an overview of your entire instance’s adoption of Concurrent DevOps from planning to monitoring.
-- You will get better, more proactive support. (assuming that our TAMs and support organization used the data to deliver more value)
-- You will get insight and advice into how to get the most value out of your investment in GitLab. Wouldn't you want to know that a number of features or values are not being adopted in your organization?
+- You get better, more proactive support (assuming that our TAMs and support organization use the data to deliver more value).
+- You get insight into, and advice on, how to get the most value out of your investment in GitLab. Wouldn't you want to know that a number of features or values are not being adopted in your organization?
- You get a report that illustrates how you compare against other similar organizations (anonymized), with specific advice and recommendations on how to improve your DevOps processes.
- Usage Ping is enabled by default. To disable it, see [Disable Usage Ping](#disable-usage-ping).
@@ -80,7 +80,7 @@ production: &base
## Usage Ping request flow
-The following example shows a basic request/response flow between a GitLab instance, the Versions Application, the License Application, Salesforce, GitLab's S3 Bucket, GitLab's Snowflake Data Warehouse, and Sisense:
+The following example shows a basic request/response flow between a GitLab instance, the Versions Application, the License Application, Salesforce, the GitLab S3 Bucket, the GitLab Snowflake Data Warehouse, and Sisense:
```mermaid
sequenceDiagram
@@ -117,7 +117,10 @@ sequenceDiagram
1. When the cron job runs, it calls [`GitLab::UsageData.to_json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
1. `GitLab::UsageData.to_json` [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
1. The response of all methods calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in `GitLab::UsageData.to_json`.
-1. The JSON payload is then [posted to the Versions application]( https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L20).
+1. The JSON payload is then [posted to the Versions application]( https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L20).
+ If a firewall exception is needed, the hostname is `version.gitlab.com`, the protocol is `TCP`,
+ the port number is `443`, and the required URL is <https://version.gitlab.com/>.
## Implementing Usage Ping
@@ -134,7 +137,7 @@ There are several types of counters which are all found in `usage_data.rb`:
- **Alternative Counters:** Used for settings and configurations
- **Redis Counters:** Used for in-memory counts.
-NOTE: **Note:**
+NOTE:
Only use the provided counter methods. Each counter method contains a built in fail safe to isolate each counter to avoid breaking the entire Usage Ping.
### Why batch counting
@@ -189,7 +192,7 @@ Arguments:
- `relation` the ActiveRecord_Relation to perform the count
- `column` the column to perform the distinct count, by default is the primary key
- `batch`: default `true` in order to use batch counting
-- `batch_size`: if none set it will use default value 10000 from `Gitlab::Database::BatchCounter`
+- `batch_size`: if not set, it uses the default value of 10,000 from `Gitlab::Database::BatchCounter`
- `start`: custom start of the batch counting in order to avoid complex min calculations
- `end`: custom end of the batch counting in order to avoid complex min calculations
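A usage sketch based on the arguments above (the relation and the `time_period` helper are illustrative):

```ruby
# Distinct batch count of issue authors over the given relation.
distinct_count(::Issue.where(time_period), :author_id, batch_size: 10_000)
```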
@@ -213,7 +216,7 @@ Arguments:
- `relation` the ActiveRecord_Relation to perform the operation
- `column` the column to sum on
-- `batch_size`: if none set it will use default value 1000 from `Gitlab::Database::BatchCounter`
+- `batch_size`: if not set, it uses the default value of 1,000 from `Gitlab::Database::BatchCounter`
- `start`: custom start of the batch counting in order to avoid complex min calculations
- `end`: custom end of the batch counting in order to avoid complex min calculations
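A similar sketch for a batch sum (the relation and column are illustrative):

```ruby
# Sum a numeric column in batches.
sum(JiraImportState.finished, :imported_issues_count, batch_size: 1_000)
```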
@@ -262,6 +265,45 @@ Examples of implementation:
- Using Redis methods [`INCR`](https://redis.io/commands/incr), [`GET`](https://redis.io/commands/get), and [`Gitlab::UsageDataCounters::WikiPageCounter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/wiki_page_counter.rb)
- Using Redis methods [`HINCRBY`](https://redis.io/commands/hincrby), [`HGETALL`](https://redis.io/commands/hgetall), and [`Gitlab::UsageCounters::PodLogs`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_counters/pod_logs.rb)
+##### UsageData API Tracking
+
+<!-- There's nearly identical content in `##### Adding new events`. If you fix errors here, you may need to fix the same errors in the other location. -->
+
+1. Track an event using the `UsageData` API.
+
+ Increment the event count using an ordinary Redis counter for a given event name.
+
+ Tracking events using the `UsageData` API requires the `usage_data_api` feature flag to be enabled, which is enabled by default.
+
+ API requests are protected by checking for a valid CSRF token.
+
+ To increment the values, the related feature `usage_data_<event_name>` should be enabled.
+
+ ```plaintext
+ POST /usage_data/increment_counter
+ ```
+
+ | Attribute | Type | Required | Description |
+ | :-------- | :--- | :------- | :---------- |
+ | `event` | string | yes | The event name to track |
+
+ Response
+
+ - `200` if the event was tracked
+ - `400 Bad request` if the `event` parameter is missing
+ - `401 Unauthorized` if the user is not authenticated
+ - `403 Forbidden` if an invalid CSRF token is provided
+
+1. Track events using the JavaScript/Vue API helper, which calls the API above.
+
+ Note that `usage_data_api` and `usage_data_#{event_name}` should be enabled to track events.
+
+ ```javascript
+ import api from '~/api';
+
+ api.trackRedisCounterEvent('my_already_defined_event_name');
+ ```
+
#### Redis HLL Counters
With `Gitlab::UsageDataCounters::HLLRedisCounter` we have available data structures used to count unique values.
@@ -304,14 +346,13 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PF
access to a group of events.
- `redis_slot`: optional Redis slot; default value: event name. Used if needed to calculate totals
for a group of metrics. Ensure keys are in the same slot. For example:
- `i_compliance_credential_inventory` with `redis_slot: 'compliance'` will build Redis key
+ `i_compliance_credential_inventory` with `redis_slot: 'compliance'` builds Redis key
`i_{compliance}_credential_inventory-2020-34`. If `redis_slot` is not defined the Redis key will
be `{i_compliance_credential_inventory}-2020-34`.
- `expiry`: expiry time in days. Default: 29 days for daily aggregation and 6 weeks for weekly
aggregation.
- - `aggregation`: aggregation `:daily` or `:weekly`. The argument defines how we build the Redis
- keys for data storage. For `daily` we keep a key for metric per day of the year, for `weekly` we
- keep a key for metric per week of the year.
+ - `aggregation`: may be set to a `:daily` or `:weekly` key. Defines how counting data is stored in Redis.
+ Aggregation on a `daily` basis does not pull more fine grained data.
- `feature_flag`: optional. For details, see our [GitLab internal Feature flags](../feature_flags/) documentation.
1. Track event in controller using `RedisTracking` module with `track_redis_hll_event(*controller_actions, name:, feature:, feature_default_enabled: false)`.
@@ -384,6 +425,8 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PF
track_usage_event(:incident_management_incident_created, current_user.id)
```
+<!-- There's nearly identical content in `##### UsageData API Tracking`. If you find / fix errors here, you may need to fix errors in that section too. -->
+
1. Track event using `UsageData` API
Increment unique users count using Redis HLL, for given event name.
@@ -392,7 +435,9 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PF
API requests are protected by checking for a valid CSRF token.
- In order to be able to increment the values the related feature `usage_data<event_name>` should be enabled.
+ To increment the values, the related feature `usage_data_<event_name>` should be
+ set to `default_enabled: true`. For more information, see
+ [Feature flags in development of GitLab](../feature_flags/index.md).
```plaintext
POST /usage_data/increment_unique_users
@@ -415,7 +460,10 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd) and [PF
Example usage for an existing event already defined in [known events](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/known_events/):
- Note that `usage_data_api` and `usage_data_#{event_name}` should be enabled in order to be able to track events
+ The Usage Data API is behind the `usage_data_api` feature flag which, as of GitLab 13.7, is
+ set to `default_enabled: true`.
+
+ Each event tracked using the Usage Data API is behind the `usage_data_#{event_name}` feature flag, which should be `default_enabled: true`.
```javascript
import api from '~/api';
@@ -464,21 +512,25 @@ Next, get the unique events for the current week.
start_date: Date.current.beginning_of_week, end_date: Date.current.end_of_week)
```
-Recommendations:
+##### Recommendations
+
+We have the following recommendations for [Adding new events](#adding-new-events):
-- Key should expire in 29 days for daily and 42 days for weekly.
-- If possible, data granularity should be a week. For example a key could be composed from the
- metric's name and week of the year, `2020-33-{metric_name}`.
-- Use a [feature flag](../../operations/feature_flags.md) to have a control over the impact when
- adding new metrics.
+- Event aggregation: weekly.
+- Key expiry time:
+ - Daily: 29 days.
+ - Weekly: 42 days.
+- When adding new metrics, use a [feature flag](../../operations/feature_flags.md) to control the impact.
+- For feature flags triggered by another service, set `default_enabled: false`.
+ - Events can be triggered using the `UsageData` API, which helps when there are more than 10 events per change.
##### Enable/Disable Redis HLL tracking
Events are tracked behind [feature flags](../feature_flags/index.md) due to concerns for Redis performance and scalability.
-For a full list of events and coresponding feature flags see, [known_events](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/known_events/) files.
+For a full list of events and corresponding feature flags, see the [known_events](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/known_events/) files.
-To enable or disable tracking for specific event within <https://gitlab.com> or <https://staging.gitlab.com>, run commands such as the following to
+To enable or disable tracking for a specific event within <https://gitlab.com> or <https://about.staging.gitlab.com>, run commands such as the following to
[enable or disable the corresponding feature](../feature_flags/index.md).
```shell
@@ -567,14 +619,14 @@ alt_usage_data(999)
### Prometheus Queries
In those cases where operational metrics should be part of Usage Ping, a database or Redis query is unlikely
-to provide useful data. Instead, Prometheus might be more appropriate, since most of GitLab's architectural
+to provide useful data. Instead, Prometheus might be more appropriate, since most GitLab architectural
components publish metrics to it that can be queried back, aggregated, and included as usage data.
-NOTE: **Note:**
+NOTE:
Prometheus as a data source for Usage Ping is currently only available for single-node Omnibus installations
that are running the [bundled Prometheus](../../administration/monitoring/prometheus/index.md) instance.
-In order to query Prometheus for metrics, a helper method is available that will `yield` a fully configured
+To query Prometheus for metrics, a helper method is available to `yield` a fully configured
`PrometheusClient`, given it is available as per the note above:
```ruby
@@ -613,7 +665,7 @@ Gitlab::UsageData.distinct_count(::Note.with_suggestions.where(time_period), :au
### 3. Generate the SQL query
-Your Rails console will return the generated SQL queries.
+Your Rails console returns the generated SQL queries.
Example:
@@ -631,7 +683,7 @@ Paste the SQL query into `#database-lab` to see how the query performs at scale.
- `#database-lab` is a Slack channel which uses a production-sized environment to test your queries.
- GitLab.com’s production database has a 15 second timeout.
-- Any single query must stay below 1 second execution time with cold caches.
+- Any single query must stay below [1 second execution time](../query_performance.md#timing-guidelines-for-queries) with cold caches.
- Add a specialized index on columns involved to reduce the execution time.
To understand the query's execution, we add the following information to the MR description:
@@ -654,7 +706,7 @@ We also use `#database-lab` and [explain.depesz.com](https://explain.depesz.com/
### 5. Add the metric definition
-When adding, changing, or updating metrics, please update the [Event Dictionary's **Usage Ping** table](https://about.gitlab.com/handbook/product/product-analytics-guide#event-dictionary).
+When adding, changing, or updating metrics, please update the [Event Dictionary's **Usage Ping** table](https://about.gitlab.com/handbook/product/product-analytics-guide/#event-dictionary).
### 6. Add new metric to Versions Application
@@ -670,7 +722,7 @@ Ensure you comply with the [Changelog entries guide](../changelog.md).
### 9. Ask for a Product Analytics Review
-On GitLab.com, we have DangerBot setup to monitor Product Analytics related files and DangerBot will recommend a Product Analytics review. Mention `@gitlab-org/growth/product_analytics/engineers` in your MR for a review.
+On GitLab.com, we have DangerBot set up to monitor Product Analytics-related files, and DangerBot recommends a Product Analytics review. Mention `@gitlab-org/growth/product_analytics/engineers` in your MR for a review.
### 10. Verify your metric
@@ -696,10 +748,10 @@ This is the recommended approach to test Prometheus based Usage Ping.
The easiest way to verify your changes is to build a new Omnibus image from your code branch via CI, then download the image
and run a local container instance:
-1. From your merge request, click on the `qa` stage, then trigger the `package-and-qa` job. This job will trigger an Omnibus
+1. From your merge request, click on the `qa` stage, then trigger the `package-and-qa` job. This job triggers an Omnibus
build in a [downstream pipeline of the `omnibus-gitlab-mirror` project](https://gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/-/pipelines).
1. In the downstream pipeline, wait for the `gitlab-docker` job to finish.
-1. Open the job logs and locate the full container name including the version. It will take the following form: `registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`.
+1. Open the job logs and locate the full container name including the version. It takes the following form: `registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`.
1. On your local machine, make sure you are logged in to the GitLab Docker registry. You can find the instructions for this in
[Authenticate to the GitLab Container Registry](../../user/packages/container_registry/index.md#authenticate-with-the-container-registry).
1. Once logged in, download the new image via `docker pull registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:<VERSION>`
@@ -720,7 +772,7 @@ but with the following limitations:
- While it runs a `node_exporter`, `docker-compose` services emulate hosts, meaning that it would normally report itself to not be associated
with any of the other services that are running. That is not how node metrics are reported in a production setup, where `node_exporter`
always runs as a process alongside other GitLab components on any given node. From Usage Ping's perspective none of the node data would therefore
-appear to be associated to any of the services running, since they all appear to be running on different hosts. To alleviate this problem, the `node_exporter` in GCK was arbitrarily "assigned" to the `web` service, meaning only for this service `node_*` metrics will appear in Usage Ping.
+appear to be associated with any of the services running, since they all appear to be running on different hosts. To alleviate this problem, the `node_exporter` in GCK was arbitrarily "assigned" to the `web` service, meaning that `node_*` metrics appear in Usage Ping for this service only.
## Aggregated metrics
@@ -728,19 +780,19 @@ appear to be associated to any of the services running, since they all appear to
> - It's [deployed behind a feature flag](../../user/feature_flags.md), disabled by default.
> - It's enabled on GitLab.com.
-CAUTION: **Warning:**
+WARNING:
This feature is intended solely for internal GitLab use.
In order to add data for aggregated metrics into Usage Ping payload you should add corresponding definition into [`aggregated_metrics.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data_counters/aggregated_metrics.yml) file. Each aggregate definition includes following parts:
-- name: unique name under which aggregate metric will be added to Usage Ping payload
-- operator: operator that defines how aggregated metric data will be counted. Available operators are:
+- name: unique name under which the aggregated metric is added to the Usage Ping payload
+- operator: operator that defines how the aggregated metric data is counted. Available operators are:
- `OR`: removes duplicates and counts all entries that triggered any of listed events
- `AND`: removes duplicates and counts all elements that were observed triggering all of following events
- events: list of events names (from [`known_events.yml`](#known-events-in-usage-data-payload)) to aggregate into metric. All events in this list must have the same `redis_slot` and `aggregation` attributes.
-- feature_flag: name of [development feature flag](../feature_flags/development.md#development-type) that will be checked before
-metrics aggregation is performed. Corresponding feature flag should have `default_enabled` attribute set to `false`.
-`feature_flag` attribute is **OPTIONAL** and can be omitted, when `feature_flag` is missing no feature flag will be checked.
+- feature_flag: name of the [development feature flag](../feature_flags/development.md#development-type) that is checked before
+metrics aggregation is performed. The corresponding feature flag should have the `default_enabled` attribute set to `false`.
+The `feature_flag` attribute is **OPTIONAL** and can be omitted; when `feature_flag` is missing, no feature flag is checked.
Example aggregated metric entries:
@@ -754,7 +806,7 @@ Example aggregated metric entries:
feature_flag: example_aggregated_metric
```
-Aggregated metrics will be added under `aggregated_metrics` key in both `counts_weekly` and `counts_monthly` top level keys in Usage Ping payload.
+Aggregated metrics are added under the `aggregated_metrics` key in both the `counts_weekly` and `counts_monthly` top-level keys in the Usage Ping payload.
```ruby
{
@@ -857,44 +909,6 @@ The following is example content of the Usage Ping payload.
"version": "9.6.15",
"pg_system_id": 6842684531675334351
},
- "avg_cycle_analytics": {
- "issue": {
- "average": 999,
- "sd": 999,
- "missing": 999
- },
- "plan": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "code": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "test": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "review": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "staging": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "production": {
- "average": null,
- "sd": 999,
- "missing": 999
- },
- "total": 999
- },
"analytics_unique_visits": {
"g_analytics_contribution": 999,
...
diff --git a/doc/development/profiling.md b/doc/development/profiling.md
index 6e1b81e659d..76c89d361fc 100644
--- a/doc/development/profiling.md
+++ b/doc/development/profiling.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Profiling
@@ -24,7 +24,7 @@ When using the script, command-line documentation is available by passing no
arguments.
When using the method in an interactive console session, any changes to the
-application code within that console session will be reflected in the profiler
+application code within that console session are reflected in the profiler
output.
For example:
@@ -37,14 +37,14 @@ Gitlab::Profiler.profile('/my-user')
# Returns a RubyProf::Profile where 100 seconds is spent in UsersController#show
```
-For routes that require authorization you will need to provide a user to
+For routes that require authorization, you must provide a user to
`Gitlab::Profiler`. You can do this like so:
```ruby
Gitlab::Profiler.profile('/gitlab-org/gitlab-test', user: User.first)
```
-Passing a `logger:` keyword argument to `Gitlab::Profiler.profile` will send
+Passing a `logger:` keyword argument to `Gitlab::Profiler.profile` sends
ActiveRecord and ActionController log output to that logger. Further options are
documented with the method source.
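+
+For example, to also send the profiler's log output to standard output (a sketch; the
+`Logger` destination is arbitrary):
+
+```ruby
+Gitlab::Profiler.profile('/gitlab-org/gitlab-test', user: User.first, logger: Logger.new($stdout))
+```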
@@ -123,7 +123,7 @@ starting GitLab. For example:
ENABLE_BULLET=true bundle exec rails s
```
-Bullet will log query problems to both the Rails log as well as the Chrome
+Bullet logs query problems to both the Rails log as well as the Chrome
console.
As a follow up to finding `N+1` queries with Bullet, consider writing a [QueryRecoder test](query_recorder.md) to prevent a regression.
diff --git a/doc/development/projections.md b/doc/development/projections.md
index b8f476c53e9..e62f13981c9 100644
--- a/doc/development/projections.md
+++ b/doc/development/projections.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Projections
diff --git a/doc/development/prometheus.md b/doc/development/prometheus.md
index fc1f2303d0a..62f30871f54 100644
--- a/doc/development/prometheus.md
+++ b/doc/development/prometheus.md
@@ -3,3 +3,6 @@ redirect_to: '../user/project/integrations/prometheus.md'
---
This document was moved to [another location](../user/project/integrations/prometheus.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/prometheus_metrics.md b/doc/development/prometheus_metrics.md
index 8fcc025b35b..df13cb7d56c 100644
--- a/doc/development/prometheus_metrics.md
+++ b/doc/development/prometheus_metrics.md
@@ -1,7 +1,7 @@
---
stage: Monitor
group: Health
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Working with Prometheus Metrics
@@ -31,7 +31,7 @@ The requirement for adding a new metric is to make each query to have an unique
### Update existing metrics
-After you add or change an existing common metric, you must [re-run the import script](../administration/raketasks/maintenance.md#import-common-metrics) that will query and update all existing metrics.
+After you add a new common metric or change an existing one, you must [re-run the import script](../administration/raketasks/maintenance.md#import-common-metrics) that queries and updates all existing metrics.
Or, you can create a database migration:
@@ -51,7 +51,7 @@ class ImportCommonMetrics < ActiveRecord::Migration[4.2]
end
```
-If a query metric (which is identified by `id:`) is removed it will not be removed from database by default.
+If a query metric (which is identified by `id:`) is removed, it isn't removed from the database by default.
You might want to add an additional database migration that decides what to do with the removed metric.
For example: you might be interested in migrating all dependent data to a different metric.
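+
+A hedged sketch of such a follow-up migration (the metric identifiers are hypothetical, the
+`prometheus_metrics` table name is an assumption, and the inline model class is only there so
+the migration does not depend on application code):
+
+```ruby
+class CleanupRemovedCommonMetric < ActiveRecord::Migration[6.0]
+  class PrometheusMetric < ActiveRecord::Base
+    self.table_name = 'prometheus_metrics'
+  end
+
+  def up
+    # Hypothetical: drop the rows belonging to the removed common metric.
+    PrometheusMetric.where(identifier: 'removed_metric_identifier').delete_all
+  end
+
+  def down
+    # no-op: the removed metric is not restored
+  end
+end
+```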
@@ -75,5 +75,5 @@ This section describes how to add new metrics for self-monitoring
1. Select the appropriate name for your metric. Refer to the guidelines
for [Prometheus metric names](https://prometheus.io/docs/practices/naming/#metric-names).
1. Update the list of [GitLab Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md).
-1. Trigger the relevant page/code that will record the new metric.
+1. Trigger the relevant page or code that records the new metric.
1. Check that the new metric appears at `/-/metrics`.
diff --git a/doc/development/pry_debugging.md b/doc/development/pry_debugging.md
index 7f9a49d4d50..f29e0d403cd 100644
--- a/doc/development/pry_debugging.md
+++ b/doc/development/pry_debugging.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Pry debugging
@@ -9,7 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Invoking pry debugging
To invoke the debugger, place `binding.pry` somewhere in your
-code. When the Ruby interpreter hits that code, execution will stop,
+code. When the Ruby interpreter hits that code, execution stops,
and you can type in commands to debug the state of the program.
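+
+For example, a hypothetical controller action with a breakpoint (the surrounding code is
+illustrative only):
+
+```ruby
+def show
+  user = User.find(params[:id])
+
+  # Execution stops here; inspect `user`, `params`, and the call stack.
+  binding.pry
+
+  render json: user
+end
+```
+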
## `byebug` vs `binding.pry`
@@ -20,7 +20,7 @@ use the powerful Pry REPL.
`binding.pry` uses Pry, but lacks some of the `byebug`
features. GitLab uses the [`pry-byebug`](https://github.com/deivid-rodriguez/pry-byebug)
gem. This gem brings some capabilities of `byebug` to `binding.pry`, so
-using that, will give you the most debugging powers.
+using it gives you the most debugging power.
## `byebug`
@@ -104,7 +104,7 @@ You also can move around in the callstack with these commands:
- `down`: Moves the stack frame down. Takes an optional numeric
argument to move multiple frames.
- `frame <n>`: Moves to a specific frame. Called without arguments
- will show the current frame.
+ displays the current frame.
## Short commands
diff --git a/doc/development/python_guide/index.md b/doc/development/python_guide/index.md
index 22a01c3b877..3291b6c31a9 100644
--- a/doc/development/python_guide/index.md
+++ b/doc/development/python_guide/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Python Development Guidelines
@@ -30,16 +30,16 @@ brew install pyenv
To install `pyenv` on Linux, you can run the command below:
```shell
-curl https://pyenv.run | bash
+curl "https://pyenv.run" | bash
```
-Alternatively, you may find `pyenv` available as a system package via your distro package manager.
+Alternatively, you may find `pyenv` available as a system package via your distribution's package manager.
You can read more about it in: <https://github.com/pyenv/pyenv-installer#prerequisites>.
### Shell integration
-Pyenv installation will add required changes to Bash. If you use a different shell,
+Pyenv installation adds required changes to Bash. If you use a different shell,
check for any additional steps required for it.
For Fish, you can install a plugin for [Fisher](https://github.com/jorgebucaran/fisher):
@@ -63,13 +63,13 @@ on the main project level, so we can run that on our development machines.
Recently, an equivalent to the `Gemfile` and the [Bundler](https://bundler.io/) project has been introduced to Python:
`Pipfile` and [Pipenv](https://pipenv.readthedocs.io/en/latest/).
-You will now find a `Pipfile` with the dependencies in the root folder. To install them, run:
+A `Pipfile` with the dependencies now exists in the root folder. To install them, run:
```shell
pipenv install
```
-Running this command will install both the required Python version as well as required pip dependencies.
+Running this command installs both the required Python version and the required pip dependencies.
## Use instructions
@@ -80,5 +80,5 @@ of the application. With Pipenv, this is a simple as running:
pipenv shell
```
-After running that command, you can run GitLab on the same shell and it will be using the Python and dependencies
+After running that command, you can run GitLab in the same shell, and it uses the Python and dependencies
installed from the `pipenv install` command.
diff --git a/doc/development/query_count_limits.md b/doc/development/query_count_limits.md
index 07dcb6c7d90..a9569fee8cc 100644
--- a/doc/development/query_count_limits.md
+++ b/doc/development/query_count_limits.md
@@ -1,13 +1,13 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Query Count Limits
Each controller or API endpoint is allowed to execute up to 100 SQL queries and
-in test environments we'll raise an error when this threshold is exceeded.
+in test environments we raise an error when this threshold is exceeded.
## Solving Failing Tests
@@ -20,18 +20,18 @@ solutions to this problem:
You should only resort to whitelisting when an existing controller or endpoint
is to blame as in this case reducing the number of SQL queries can take a lot of
effort. Newly added controllers and endpoints are not allowed to execute more
-than 100 SQL queries and no exceptions will be made for this rule. _If_ a large
+than 100 SQL queries and no exceptions are made for this rule. _If_ a large
number of SQL queries is necessary to perform certain work it's best to have
this work performed by Sidekiq instead of doing this directly in a web request.
## Whitelisting
-In the event that you _have_ to whitelist a controller you'll first need to
+In the event that you _have_ to whitelist a controller, you must first
create an issue. This issue should (preferably in the title) mention the
controller or endpoint and include the appropriate labels (`database`,
`performance`, and at least a team specific label such as `Discussion`).
-Once the issue has been created you can whitelist the code in question. For
+After the issue has been created you can whitelist the code in question. For
Rails controllers it's best to create a `before_action` hook that runs as early
as possible. The called method in turn should call
`Gitlab::QueryLimiting.whitelist('issue URL here')`. For example:
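+
+A minimal sketch of such a hook (the controller is a placeholder; use the URL of the issue
+you created):
+
+```ruby
+class MyController < ApplicationController
+  before_action :whitelist_query_limiting, only: [:index]
+
+  private
+
+  def whitelist_query_limiting
+    Gitlab::QueryLimiting.whitelist('issue URL here')
+  end
+end
+```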
diff --git a/doc/development/query_performance.md b/doc/development/query_performance.md
new file mode 100644
index 00000000000..c61d2a0864f
--- /dev/null
+++ b/doc/development/query_performance.md
@@ -0,0 +1,74 @@
+---
+stage: Enablement
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Query performance guidelines
+
+This document describes various guidelines to follow when optimizing SQL queries.
+
+When you are optimizing your SQL queries, there are two dimensions to pay attention to:
+
+1. The query execution time. This is paramount as it reflects how the user experiences GitLab.
+1. The query plan. Optimizing the query plan is important in allowing queries to scale independently over time. For example, analyzing the plan lets you confirm that an index keeps a query performing well as the table grows, before the query degrades.
+
+## Timing guidelines for queries
+
+| Query Type | Maximum Query Time | Notes |
+|----|----|---|
+| General queries | `100ms` | This is not a hard limit, but if a query exceeds it, it is important to spend time understanding why it can or cannot be optimized. |
+| Queries in a migration | `100ms` | This is different from the total [migration time](database_review.md#timing-guidelines-for-migrations). |
+| Concurrent operations in a migration | `5min` | Concurrent operations do not block the database, but they block the GitLab update. This includes operations such as `add_concurrent_index` and `add_concurrent_foreign_key`. |
+| Background migrations | `1s` | |
+| Usage Ping | `1s` | See the [usage ping docs](product_analytics/usage_ping.md#developing-and-testing-usage-ping) for more details. |
+
+- When analyzing your query's performance, pay attention to whether the time you are seeing is on a [cold or warm cache](#cold-and-warm-cache). These guidelines apply to both cache types.
+- When working with batched queries, change the range and batch size to see how it affects the query timing and caching, as in the sketch after this list.
+- If an existing query is already underperforming, make an effort to improve it. If it is too complex or would stall development, create a follow-up so it can be addressed in a timely manner. You can always ask the database reviewer or maintainer for help and guidance.
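+
+A rough sketch for comparing batch sizes from a Rails console (the model, scope, and sizes
+are placeholders):
+
+```ruby
+require 'benchmark'
+
+[100, 1_000, 10_000].each do |batch_size|
+  runtime = Benchmark.realtime do
+    Issue.where(project_id: 1).in_batches(of: batch_size).each { |batch| batch.pluck(:id) }
+  end
+
+  puts "batch size #{batch_size}: #{(runtime * 1000).round(1)} ms"
+end
+```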
+
+## Cold and warm cache
+
+When evaluating query performance it is important to understand the difference between
+cold and warm cached queries.
+
+The first time a query is made, it runs on a "cold cache", meaning it must
+read from disk. If you run the query again, the data can be read from the
+cache, or what PostgreSQL calls shared buffers. This is the "warm cache" query.
+
+When analyzing an [`EXPLAIN` plan](understanding_explain_plans.md), you can see
+the difference not only in the timing, but by looking at the output for `Buffers`
+by running your explain with `EXPLAIN(analyze, buffers)`. The [#database-lab](understanding_explain_plans.md#database-lab)
+tool automatically includes these options.
+
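+If you want a quick look at the buffer counts from a Rails console instead of `psql`, a
+rough sketch (the query is a placeholder) is:
+
+```ruby
+plan = ActiveRecord::Base.connection.execute(
+  'EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM projects WHERE id = 1'
+)
+
+puts plan.map { |row| row['QUERY PLAN'] }
+```
+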
+If you are making a warm cache query, you only see the `shared hits`.
+
+For example in #database-lab:
+
+```plaintext
+Shared buffers:
+ - hits: 36467 (~284.90 MiB) from the buffer pool
+ - reads: 0 from the OS file cache, including disk I/O
+```
+
+Or in the explain plan from `psql`:
+
+```sql
+Buffers: shared hit=7323
+```
+
+If the cache is cold, you also see `reads`.
+
+In #database-lab:
+
+```plaintext
+Shared buffers:
+ - hits: 17204 (~134.40 MiB) from the buffer pool
+ - reads: 15229 (~119.00 MiB) from the OS file cache, including disk I/O
+```
+
+In `psql`:
+
+```sql
+Buffers: shared hit=7202 read=121
+```
diff --git a/doc/development/query_recorder.md b/doc/development/query_recorder.md
index 4a02af3348c..3cc7b140e89 100644
--- a/doc/development/query_recorder.md
+++ b/doc/development/query_recorder.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# QueryRecorder
@@ -10,7 +10,7 @@ QueryRecorder is a tool for detecting the [N+1 queries problem](https://guides.r
> Implemented in [spec/support/query_recorder.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/support/helpers/query_recorder.rb) via [9c623e3e](https://gitlab.com/gitlab-org/gitlab-foss/commit/9c623e3e5d7434f2e30f7c389d13e5af4ede770a)
-As a rule, merge requests [should not increase query counts](merge_request_performance_guidelines.md#query-counts). If you find yourself adding something like `.includes(:author, :assignee)` to avoid having `N+1` queries, consider using QueryRecorder to enforce this with a test. Without this, a new feature which causes an additional model to be accessed will silently reintroduce the problem.
+As a rule, merge requests [should not increase query counts](merge_request_performance_guidelines.md#query-counts). If you find yourself adding something like `.includes(:author, :assignee)` to avoid having `N+1` queries, consider using QueryRecorder to enforce this with a test. Without this, a new feature which causes an additional model to be accessed can silently reintroduce the problem.
## How it works
@@ -30,7 +30,7 @@ In some cases the query count might change slightly between runs for unrelated r
## Cached queries
-By default, QueryRecorder will ignore [cached queries](merge_request_performance_guidelines.md#cached-queries) in the count. However, it may be better to count
+By default, QueryRecorder ignores [cached queries](merge_request_performance_guidelines.md#cached-queries) in the count. However, it may be better to count
all queries to avoid introducing an N+1 query that may be masked by the statement cache.
To do this, the `:use_sql_query_cache` flag must be set.
You should pass the `skip_cached` variable to `QueryRecorder` and use the `exceed_all_query_limit` matcher:
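+
+A sketch of such a spec, where `visit_some_page` and the factories stand in for the code
+under test:
+
+```ruby
+it 'does not introduce new cached or uncached queries' do
+  # Record a control count with cached queries included.
+  control = ActiveRecord::QueryRecorder.new(skip_cached: false) { visit_some_page }
+
+  create_list(:issue, 5, project: project)
+
+  expect { visit_some_page }.not_to exceed_all_query_limit(control)
+end
+```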
@@ -73,7 +73,7 @@ There are multiple ways to find the source of queries.
- View the call backtrace for the specific `QueryRecorder` instance you want
by using `ActiveRecord::QueryRecorder.new(query_recorder_debug: true)`. The output
- will be in `test.log`
+ is stored in `test.log`.
- Enable the call backtrace for all tests using the `QUERY_RECORDER_DEBUG` environment variable.
@@ -83,7 +83,7 @@ There are multiple ways to find the source of queries.
QUERY_RECORDER_DEBUG=1 bundle exec rspec spec/requests/api/projects_spec.rb
```
- This will log calls to QueryRecorder into the `test.log` file. For example:
+ This logs calls to QueryRecorder into the `test.log` file. For example:
```sql
QueryRecorder SQL: SELECT COUNT(*) FROM "issues" WHERE "issues"."deleted_at" IS NULL AND "issues"."project_id" = $1 AND ("issues"."state" IN ('opened')) AND "issues"."confidential" = $2
diff --git a/doc/development/rails_initializers.md b/doc/development/rails_initializers.md
index bc0ac963f3f..89902c81cd5 100644
--- a/doc/development/rails_initializers.md
+++ b/doc/development/rails_initializers.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Rails initializers
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index 2dca747425c..09e8f780fe1 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Rake tasks for developers
@@ -61,7 +61,7 @@ bin/rake "gitlab:seed:insights:issues[group-path/project-path]"
```
By default, this seeds an average of 10 issues per week for the last 52 weeks
-per project. All issues will also be randomly labeled with team, type, severity,
+per project. All issues are also randomly labeled with team, type, severity,
and priority.
#### Seeding groups with sub-groups
@@ -116,8 +116,8 @@ so no worries about missing errors.
There are a few environment flags you can pass to change how projects are seeded
- `SIZE`: defaults to `8`, max: `32`. Amount of projects to create.
-- `LARGE_PROJECTS`: defaults to false. If set will clone 6 large projects to help with testing.
-- `FORK`: defaults to false. If set to `true` will fork `torvalds/linux` five times. Can also be set to an existing project full_path and it will fork that instead.
+- `LARGE_PROJECTS`: defaults to false. If set, clones 6 large projects to help with testing.
+- `FORK`: defaults to false. If set to `true`, forks `torvalds/linux` five times. Can also be set to an existing project `full_path` to fork that instead.
## Run tests
@@ -132,10 +132,10 @@ In order to run the test you can use the following commands:
`bin/rake spec` takes significant time to pass.
Instead of running the full test suite locally, you can save a lot of time by running
a single test or directory related to your changes. After you submit a merge request,
-CI will run full test suite for you. Green CI status in the merge request means
+CI runs the full test suite for you. A green CI status in the merge request means the
full test suite passed.
-You can't run `rspec .` since this will try to run all the `_spec.rb`
+You can't run `rspec .` since this tries to run all the `_spec.rb`
files it can find, including the ones in `/tmp`.
You can pass RSpec command line options to the `spec:unit`,
@@ -157,7 +157,7 @@ To run several tests inside one directory:
speeds up development by keeping your application running in the background so
you don't need to boot it every time you run a test, Rake task or migration.
-If you want to use it, you'll need to export the `ENABLE_SPRING` environment
+If you want to use it, you must export the `ENABLE_SPRING` environment
variable to `1`:
```shell
@@ -180,7 +180,7 @@ environment you can do so with the following command:
RAILS_ENV=production NODE_ENV=production bundle exec rake gitlab:assets:compile
```
-This will compile and minify all JavaScript and CSS assets and copy them along
+This compiles and minifies all JavaScript and CSS assets and copies them along
with all other frontend assets (images, fonts, etc) into `/public/assets` where
they can be easily inspected.
@@ -200,7 +200,7 @@ following:
bundle exec rake gemojione:digests
```
-This will update the file `fixtures/emojis/digests.json` based on the currently
+This updates the file `fixtures/emojis/digests.json` based on the currently
available Emoji.
To generate a sprite file containing all the Emoji, run:
@@ -276,8 +276,13 @@ In its current state, the Rake task:
This uses some features from `graphql-docs` gem like its schema parser and helper methods.
The docs generator code comes from our side giving us more flexibility, like using Haml templates and generating Markdown files.
-To edit the template used, please take a look at `lib/gitlab/graphql/docs/templates/default.md.haml`.
-The actual renderer is at `Gitlab::Graphql::Docs::Renderer`.
+To edit the content, you may need to edit the following:
+
+- The template. You can edit the template at `lib/gitlab/graphql/docs/templates/default.md.haml`.
+ The actual renderer is at `Gitlab::Graphql::Docs::Renderer`.
+- The applicable `description` field in the code, which is used to
+ [update the machine-readable schema files](#update-machine-readable-schema-files)
+ consumed by the `rake` task described earlier.
`@parsed_schema` is an instance variable that the `graphql-docs` gem expects to have available.
`Gitlab::Graphql::Docs::Helper` defines the `object` method we currently use. This is also where you
diff --git a/doc/development/reactive_caching.md b/doc/development/reactive_caching.md
index 106a9b5de31..5d514ffbfc9 100644
--- a/doc/development/reactive_caching.md
+++ b/doc/development/reactive_caching.md
@@ -1,17 +1,17 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# `ReactiveCaching`
> This doc refers to <https://gitlab.com/gitlab-org/gitlab/blob/master/app/models/concerns/reactive_caching.rb>.
-The `ReactiveCaching` concern is used for fetching some data in the background and store it
+The `ReactiveCaching` concern is used for fetching some data in the background and storing it
in the Rails cache, keeping it up-to-date for as long as it is being requested. If the
-data hasn't been requested for `reactive_cache_lifetime`, it will stop being refreshed,
-and then be removed.
+data hasn't been requested for `reactive_cache_lifetime`, it stops being refreshed,
+and is removed.
## Examples
@@ -35,38 +35,40 @@ class Foo < ApplicationRecord
end
```
-In this example, the first time `#result` is called, it will return `nil`. However,
-it will enqueue a background worker to call `#calculate_reactive_cache` and set an
-initial cache lifetime of 10 min.
+In this example, the first time `#result` is called, it returns `nil`. However,
+it enqueues a background worker to call `#calculate_reactive_cache` and set an
+initial cache lifetime of 10 minutes.
## How it works
The first time `#with_reactive_cache` is called, a background job is enqueued and
`with_reactive_cache` returns `nil`. The background job calls `#calculate_reactive_cache`
and stores its return value. It also re-enqueues the background job to run again after
-`reactive_cache_refresh_interval`. Therefore, it will keep the stored value up to date.
+`reactive_cache_refresh_interval`. Therefore, it keeps the stored value up to date.
Calculations never run concurrently.
-Calling `#with_reactive_cache` while a value is cached will call the block given to
-`#with_reactive_cache`, yielding the cached value. It will also extend the lifetime
+Calling `#with_reactive_cache` while a value is cached calls the block given to
+`#with_reactive_cache`, yielding the cached value. It also extends the lifetime
of the cache by the `reactive_cache_lifetime` value.
-Once the lifetime has expired, no more background jobs will be enqueued and calling
-`#with_reactive_cache` will again return `nil` - starting the process all over again.
+After the lifetime has expired, no more background jobs are enqueued and calling
+`#with_reactive_cache` again returns `nil`, starting the process all over again.
-### 1 MB hard limit
+### Set a hard limit for ReactiveCaching
-`ReactiveCaching` has a 1 megabyte default limit. [This value is configurable](#selfreactive_cache_worker_finder).
+To preserve performance, you should set a hard caching limit in the class that includes
+`ReactiveCaching`. See the example of [how to set it up](#selfreactive_cache_hard_limit).
-If the data we're trying to cache has over 1 megabyte, it will not be cached and a handled `ReactiveCaching::ExceededReactiveCacheLimit` will be notified on Sentry.
+For more information, read the internal issue
+[Redis (or ReactiveCache) soft and hard limits](https://gitlab.com/gitlab-org/gitlab/-/issues/14015).
## When to use
- If we need to make a request to an external API (for example, requests to the k8s API).
-It is not advisable to keep the application server worker blocked for the duration of
-the external request.
+ It is not advisable to keep the application server worker blocked for the duration of
+ the external request.
- If a model needs to perform a lot of database calls or other time consuming
-calculations.
+ calculations.
## How to use
@@ -99,13 +101,13 @@ Controller endpoints that call a model or service method that uses `ReactiveCach
not wait until the background worker completes.
- An API that calls a model or service method that uses `ReactiveCaching` should return
-`202 accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
+ `202 accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
- It should also
-[set the polling interval header](fe_guide/performance.md#realtime-components) with
-`Gitlab::PollingInterval.set_header`.
+ [set the polling interval header](fe_guide/performance.md#realtime-components) with
+ `Gitlab::PollingInterval.set_header`.
- The consumer of the API is expected to poll the API.
- You can also consider implementing [ETag caching](polling.md) to reduce the server
-load caused by polling.
+ load caused by polling.
### Methods to implement in a model or service
@@ -113,17 +115,17 @@ These are methods that should be implemented in the model/service that includes
#### `#calculate_reactive_cache` (required)
-- This method must be implemented. Its return value will be cached.
-- It will be called by `ReactiveCaching` when it needs to populate the cache.
-- Any arguments passed to `with_reactive_cache` will also be passed to `calculate_reactive_cache`.
+- This method must be implemented. Its return value is cached.
+- It is called by `ReactiveCaching` when it needs to populate the cache.
+- Any arguments passed to `with_reactive_cache` are also passed to `calculate_reactive_cache`.
#### `#reactive_cache_updated` (optional)
- This method can be implemented if needed.
- It is called by the `ReactiveCaching` concern whenever the cache is updated.
-If the cache is being refreshed and the new cache value is the same as the old cache
-value, this method will not be called. It is only called if a new value is stored in
-the cache.
+ If the cache is being refreshed and the new cache value is the same as the old cache
+ value, this method is not called. It is only called if a new value is stored in
+ the cache.
- It can be used to perform an action whenever the cache is updated.
### Methods called by a model or service
@@ -134,22 +136,22 @@ the model/service.
#### `#with_reactive_cache` (required)
- `with_reactive_cache` must be called where the result of `calculate_reactive_cache`
-is required.
+ is required.
- A block can be given to `with_reactive_cache`. `with_reactive_cache` can also take
-any number of arguments. Any arguments passed to `with_reactive_cache` will be
-passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
-will be appended to the cache key name.
+ any number of arguments. Any arguments passed to `with_reactive_cache` are
+ passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
+ are appended to the cache key name.
- If `with_reactive_cache` is called when the result has already been cached, the
-block will be called, yielding the cached value and the return value of the block
-will be returned by `with_reactive_cache`. It will also reset the timeout of the
-cache to the `reactive_cache_lifetime` value.
-- If the result has not been cached as yet, `with_reactive_cache` will return nil.
-It will also enqueue a background job, which will call `calculate_reactive_cache`
-and cache the result.
-- Once the background job has completed and the result is cached, the next call
-to `with_reactive_cache` will pick up the cached value.
+ block is called, yielding the cached value and the return value of the block
+ is returned by `with_reactive_cache`. It also resets the timeout of the
+ cache to the `reactive_cache_lifetime` value.
+- If the result has not been cached yet, `with_reactive_cache` returns `nil`.
+ It also enqueues a background job, which calls `calculate_reactive_cache`
+ and caches the result.
+- After the background job has completed and the result is cached, the next call
+ to `with_reactive_cache` picks up the cached value.
- In the example below, `data` is the cached value which is yielded to the block
-given to `with_reactive_cache`.
+ given to `with_reactive_cache`.
```ruby
class Foo < ApplicationRecord
@@ -170,16 +172,16 @@ given to `with_reactive_cache`.
#### `#clear_reactive_cache!` (optional)
- This method can be called when the cache needs to be expired/cleared. For example,
-it can be called in an `after_save` callback in a model so that the cache is
-cleared after the model is modified.
+ it can be called in an `after_save` callback in a model so that the cache is
+ cleared after the model is modified.
- This method should be called with the same parameters that are passed to
-`with_reactive_cache` because the parameters are part of the cache key.
+ `with_reactive_cache` because the parameters are part of the cache key.
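+
+For example, a minimal sketch that expires the cache whenever the record changes (assuming
+the cache was populated without extra parameters):
+
+```ruby
+class Foo < ApplicationRecord
+  include ReactiveCaching
+
+  after_save :clear_reactive_cache!
+end
+```
+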
#### `#without_reactive_cache` (optional)
- This is a convenience method that can be used for debugging purposes.
- This method calls `calculate_reactive_cache` in the current process instead of
-in a background worker.
+ in a background worker.
### Configurable options
@@ -188,19 +190,19 @@ There are some `class_attribute` options which can be tweaked.
#### `self.reactive_cache_key`
- The value of this attribute is the prefix to the `data` and `alive` cache key names.
-The parameters passed to `with_reactive_cache` form the rest of the cache key names.
+ The parameters passed to `with_reactive_cache` form the rest of the cache key names.
- By default, this key uses the model's name and the ID of the record.
```ruby
self.reactive_cache_key = -> (record) { [model_name.singular, record.id] }
```
-- The `data` and `alive` cache keys in this case will be `"ExampleModel:1:arg1:arg2"`
-and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
-name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
-passed to `with_reactive_cache`.
-- If you're including this concern in a service instead, you will need to override
-the default by adding the following to your service:
+- The `data` and `alive` cache keys in this case are `"ExampleModel:1:arg1:arg2"`
+ and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
+ name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
+ passed to `with_reactive_cache`.
+- If you're including this concern in a service instead, you must override
+ the default by adding the following to your service:
```ruby
self.reactive_cache_key = ->(service) { [service.class.model_name.singular, service.project_id] }
@@ -212,7 +214,7 @@ the default by adding the following to your service:
#### `self.reactive_cache_lease_timeout`
- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
-is never run concurrently by multiple workers.
+ is never run concurrently by multiple workers.
- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
- It defaults to 2 minutes, but can be overridden if a different timeout is required.
@@ -231,11 +233,11 @@ self.reactive_cache_lease_timeout = 1.minute
#### `self.reactive_cache_lifetime`
-- This is the duration after which the cache will be cleared if there are no requests.
+- This is the duration after which the cache is cleared if there are no requests.
- The default is 10 minutes. If there are no requests for this cache value for 10 minutes,
-the cache will expire.
-- If the cache value is requested before it expires, the timeout of the cache will
-be reset to `reactive_cache_lifetime`.
+ the cache expires.
+- If the cache value is requested before it expires, the timeout of the cache is
+ reset to `reactive_cache_lifetime`.
```ruby
self.reactive_cache_lifetime = 10.minutes
@@ -244,8 +246,8 @@ self.reactive_cache_lifetime = 10.minutes
#### `self.reactive_cache_hard_limit`
- This is the maximum data size that `ReactiveCaching` allows to be cached.
-- The default is 1 megabyte. Data that goes over this value will not be cached
-and will silently raise `ReactiveCaching::ExceededReactiveCacheLimit` on Sentry.
+- The default is 1 megabyte. Data that goes over this value is not cached
+ and silently raises `ReactiveCaching::ExceededReactiveCacheLimit` on Sentry.
```ruby
self.reactive_cache_hard_limit = 5.megabytes
@@ -300,7 +302,7 @@ which `calculate_reactive_cache` can be called.
end
```
- - In this example, the primary key ID will be passed to `reactive_cache_worker_finder`
- along with the parameters passed to `with_reactive_cache`.
+ - In this example, the primary key ID is passed to `reactive_cache_worker_finder`
+ along with the parameters passed to `with_reactive_cache`.
- The custom `reactive_cache_worker_finder` calls `.from_cache` with the parameters
- passed to `with_reactive_cache`.
+ passed to `with_reactive_cache`.
diff --git a/doc/development/redis.md b/doc/development/redis.md
index 7ee8afda36a..bb725e3c321 100644
--- a/doc/development/redis.md
+++ b/doc/development/redis.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Redis guidelines
@@ -43,9 +43,9 @@ a single Redis server. This means that keys should **always** be
globally unique across all categories.
It is usually better to use immutable identifiers - project ID rather than
-full path, for instance - in Redis key names. If full path is used, the key will
-stop being consulted if the project is renamed. If the contents of the key are
-invalidated by a name change, it is better to include a hook that will expire
+full path, for instance - in Redis key names. If full path is used, the key
+stops being consulted if the project is renamed. If the contents of the key are
+invalidated by a name change, it is better to include a hook that expires
the entry, instead of relying on the key changing.
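+
+For example, a sketch (assuming the `Gitlab::Redis::Cache.with` connection pool helper; the
+key names are illustrative):
+
+```ruby
+# Keyed by the immutable project ID, so the entry survives a project rename:
+Gitlab::Redis::Cache.with do |redis|
+  redis.set("project:#{project.id}:last-pipeline-status", status)
+end
+```
+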
### Multi-key commands
@@ -106,7 +106,7 @@ requests that read the most data from the cache, we can just sort by
### The slow log
-TIP: **Tip:**
+NOTE:
There is a [video showing how to see the slow log](https://youtu.be/BBI68QuYRH8) (GitLab internal)
on GitLab.com
@@ -195,7 +195,7 @@ provides a convenient interface for adding and counting values in HyperLogLogs.
For cases where we need to efficiently check whether an item is in a group
of items, we can use a Redis set.
[`Gitlab::SetCache`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/set_cache.rb)
-provides an `#include?` method that will use the
+provides an `#include?` method that uses the
[`SISMEMBER`](https://redis.io/commands/sismember) command, as well as `#read`
to fetch all entries in the set.
diff --git a/doc/development/refactoring_guide/index.md b/doc/development/refactoring_guide/index.md
index 17deffb4cb2..224b6bf9b38 100644
--- a/doc/development/refactoring_guide/index.md
+++ b/doc/development/refactoring_guide/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Refactoring guide
diff --git a/doc/development/reference_processing.md b/doc/development/reference_processing.md
index 07833c0d302..23c0861081d 100644
--- a/doc/development/reference_processing.md
+++ b/doc/development/reference_processing.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: 'An introduction to reference parsers and reference filters, and a guide to their implementation.'
---
@@ -101,7 +101,7 @@ format the reference as:
This default implementation is not very efficient, because we need to call
`#find_object` for each reference, which may require issuing a DB query every
-time. For this reason, most reference filter implementations will instead use an
+time. For this reason, most reference filter implementations instead use an
optimization included in `AbstractReferenceFilter`:
> `AbstractReferenceFilter` provides a lazily initialized value
@@ -140,7 +140,7 @@ We are skipping:
To avoid filtering such nodes for each `ReferenceFilter`, we do it only once and store the result in the result Hash of the pipeline as `result[:reference_filter_nodes]`.
-Pipeline `result` is passed to each filter for modification, so every time when `ReferenceFilter` replaces text or link tag, filtered list (`reference_filter_nodes`) will be updated for the next filter to use.
+The pipeline `result` is passed to each filter for modification, so every time `ReferenceFilter` replaces a text or link tag, the filtered list (`reference_filter_nodes`) is updated for the next filter to use.
## Reference parsers
@@ -199,4 +199,4 @@ In practice, all reference parsers inherit from [`BaseParser`](https://gitlab.co
- `#nodes_user_can_reference(user, nodes)` to filter nodes directly.
A failure to implement this class for each reference type means that the
-application will raise exceptions during Markdown processing.
+application raises exceptions during Markdown processing.
diff --git a/doc/development/renaming_features.md b/doc/development/renaming_features.md
index 02d1851dbfe..f7fc1c11e37 100644
--- a/doc/development/renaming_features.md
+++ b/doc/development/renaming_features.md
@@ -1,14 +1,14 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Renaming features
Sometimes the business asks to change the name of a feature. Broadly speaking, there are 2 approaches to that task. They basically trade between immediate effort and future complexity/bug risk:
-- Complete, rename everything in the repo.
+- Complete, rename everything in the repository.
- Pros: does not increase code complexity.
- Cons: more work to execute, and higher risk of immediate bugs.
- Façade, rename as little as possible; only the user-facing content like interfaces,
@@ -22,9 +22,9 @@ The more of the following that are true, the more likely you should choose the f
- You are not confident the new name is permanent.
- The feature is susceptible to bugs (large, complex, needing refactor, etc).
-- The renaming will be difficult to review (feature spans many lines, files, or repositories).
-- The renaming will be disruptive in some way (database table renaming).
+- The renaming is difficult to review (feature spans many lines, files, or repositories).
+- The renaming is disruptive in some way (database table renaming).
## Consider a façade-first approach
-The façade approach is not necessarily a final step. It can (and possibly *should*) be treated as the first step, where later iterations will accomplish the complete rename.
+The façade approach is not necessarily a final step. It can (and possibly *should*) be treated as the first step, where later iterations accomplish the complete rename.
diff --git a/doc/development/repository_mirroring.md b/doc/development/repository_mirroring.md
index 02d1499ca35..4153bcf77a5 100644
--- a/doc/development/repository_mirroring.md
+++ b/doc/development/repository_mirroring.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Repository mirroring
@@ -9,9 +9,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Deep Dive
In December 2018, Tiago Botelho hosted a Deep Dive (GitLab team members only: `https://gitlab.com/gitlab-org/create-stage/issues/1`)
-on GitLab's [Pull Repository Mirroring functionality](../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository)
+on the GitLab [Pull Repository Mirroring functionality](../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository)
to share his domain specific knowledge with anyone who may work in this part of the
-code base in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=sSZq0fpdY-Y),
+codebase in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=sSZq0fpdY-Y),
and the slides in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/8693404888a941fd851f8a8ecdec9675/Gitlab_Create_-_Pull_Mirroring_Deep_Dive.pdf).
Everything covered in this deep dive was accurate as of GitLab 11.6, and while specific
details may have changed since then, it should still serve as a good introduction.
diff --git a/doc/development/reusing_abstractions.md b/doc/development/reusing_abstractions.md
index 77534eea076..648f7104814 100644
--- a/doc/development/reusing_abstractions.md
+++ b/doc/development/reusing_abstractions.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Guidelines for reusing abstractions
diff --git a/doc/development/rolling_out_changes_using_feature_flags.md b/doc/development/rolling_out_changes_using_feature_flags.md
index cff88388ba0..7456e8df8d9 100644
--- a/doc/development/rolling_out_changes_using_feature_flags.md
+++ b/doc/development/rolling_out_changes_using_feature_flags.md
@@ -3,3 +3,6 @@ redirect_to: 'feature_flags/index.md'
---
This document was moved to [another location](feature_flags/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/routing.md b/doc/development/routing.md
index cbae2b38427..2c2f6b2a558 100644
--- a/doc/development/routing.md
+++ b/doc/development/routing.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Routing
diff --git a/doc/development/scalability.md b/doc/development/scalability.md
index 73f7c5e0915..a9b8fb4389f 100644
--- a/doc/development/scalability.md
+++ b/doc/development/scalability.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab scalability
@@ -16,7 +16,7 @@ scalability and reliability.
_[diagram source - GitLab employees only](https://docs.google.com/drawings/d/1RTGtuoUrE0bDT-9smoHbFruhEMI4Ys6uNrufe5IA-VI/edit)_
The diagram above shows a GitLab reference architecture scaled up for 50,000
-users. We will discuss each component below.
+users. We discuss each component below.
## Components
@@ -26,11 +26,10 @@ The PostgreSQL database holds all metadata for projects, issues, merge
requests, users, etc. The schema is managed by the Rails application
[db/structure.sql](https://gitlab.com/gitlab-org/gitlab/blob/master/db/structure.sql).
-GitLab Web/API servers and Sidekiq nodes talk directly to the database via a
-Rails object relational model (ORM). Most SQL queries are accessed via this
+GitLab Web/API servers and Sidekiq nodes talk directly to the database by using a
+Rails object relational model (ORM). Most SQL queries are accessed by using this
ORM, although some custom SQL is also written for performance or for
-exploiting advanced PostgreSQL features (e.g. recursive CTEs, LATERAL JOINs,
-etc.).
+exploiting advanced PostgreSQL features (like recursive CTEs or LATERAL JOINs).
The application has a tight coupling to the database schema. When the
application starts, Rails queries the database schema, caching the tables and
@@ -42,8 +41,8 @@ no-downtime changes](what_requires_downtime.md).
#### Multi-tenancy
A single database is used to store all customer data. Each user can belong to
-many groups or projects, and the access level (e.g. guest, developer,
-maintainer, etc.) to groups and projects determines what users can see and
+many groups or projects, and the access level (including guest, developer, or
+maintainer) to groups and projects determines what users can see and
what they can access.
Users with admin access can access all projects and even impersonate
@@ -70,7 +69,7 @@ dates](https://gitlab.com/groups/gitlab-org/-/epics/2023). For example,
the `events` and `audit_events` table are natural candidates for this
kind of partitioning.
-Sharding is likely more difficult and will require significant changes
+Sharding is likely more difficult and requires significant changes
to the schema and application. For example, if we have to store projects
in many different databases, we immediately run into the question, "How
can we retrieve data across different projects?" One answer to this is
@@ -78,7 +77,7 @@ to abstract data access into API calls that abstract the database from
the application, but this is a significant amount of work.
There are solutions that may help abstract the sharding to some extent
-from the application. For example, we will want to look at [Citus
+from the application. For example, we want to look at [Citus
Data](https://www.citusdata.com/product/community) closely. Citus Data
provides a Rails plugin that adds a [tenant ID to ActiveRecord
models](https://www.citusdata.com/blog/2017/01/05/easily-scale-out-multi-tenant-apps/).
@@ -100,17 +99,16 @@ systems.
A recent [database checkup shows a breakdown of the table sizes on
GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/8022#master-1022016101-8).
-Since `merge_request_diff_files` contains over 1 TB of data, we will want to
+Since `merge_request_diff_files` contains over 1 TB of data, we want to
reduce/eliminate this table first. GitLab has support for [storing diffs in
-object storage](../administration/merge_request_diffs.md), which we [will
-want to do on
+object storage](../administration/merge_request_diffs.md), which we [want to do on
GitLab.com](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7356).
#### High availability
There are several strategies to provide high-availability and redundancy:
-- Write-ahead logs (WAL) streamed to object storage (e.g. S3, Google Cloud
+- Write-ahead logs (WAL) streamed to object storage (for example, S3, or Google Cloud
Storage).
- Read-replicas (hot backups).
- Delayed replicas.
@@ -126,11 +124,10 @@ the read replicas. [Omnibus ships with both repmgr and Patroni](../administratio
#### Load-balancing
GitLab EE has [application support for load balancing using read
-replicas](../administration/database_load_balancing.md). This load
-balancer does some smart things that are not traditionally available in
-standard load balancers. For example, the application will only consider a
-replica if its replication lag is low (e.g. WAL data behind by < 100
-megabytes).
+replicas](../administration/database_load_balancing.md). This load balancer does
+some actions that aren't traditionally available in standard load balancers. For
+example, the application considers a replica only if its replication lag is low
+(for example, WAL data behind by less than 100 MB).
More [details are in a blog
post](https://about.gitlab.com/blog/2017/10/02/scaling-the-gitlab-database/).
@@ -140,7 +137,7 @@ post](https://about.gitlab.com/blog/2017/10/02/scaling-the-gitlab-database/).
As PostgreSQL forks a backend process for each request, PostgreSQL has a
finite limit of connections that it can support, typically around 300 by
default. Without a connection pooler like PgBouncer, it's quite possible to
-hit connection limits. Once the limits are reached, then GitLab will generate
+hit connection limits. After the limits are reached, GitLab generates
errors or slows down as it waits for a connection to be available.
#### High availability
@@ -151,7 +148,7 @@ background job and/or Web requests. There are two ways to address this
limitation:
- Run multiple PgBouncer instances.
-- Use a multi-threaded connection pooler (e.g.
+- Use a multi-threaded connection pooler (for example,
[Odyssey](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7776)).
On some Linux systems, it's possible to run [multiple PgBouncer instances on
@@ -192,9 +189,9 @@ connections gracefully.
There are three ways Redis is used in GitLab:
-- Queues. Sidekiq jobs marshal jobs into JSON payloads.
-- Persistent state. Session data, exclusive leases, etc.
-- Cache. Repository data (e.g. Branch and tag names), view partials, etc.
+- Queues: Sidekiq jobs marshal jobs into JSON payloads.
+- Persistent state: Session data and exclusive leases.
+- Cache: Repository data (like Branch and tag names) and view partials.
For GitLab instances running at scale, splitting Redis usage into
separate Redis clusters helps for two reasons:
@@ -206,8 +203,8 @@ For example, the cache instance can behave like an least-recently used
(LRU) cache by setting the `maxmemory` configuration option. That option
should not be set for the queues or persistent clusters because data
would be evicted from memory at random times. This would cause jobs to
-be dropped on the floor, which would cause many problems (e.g. merges
-not running, builds not updating, etc.).
+be dropped on the floor, which would cause many problems (like merges
+not running or builds not updating).
Sidekiq also polls its queues quite frequently, and this activity can
slow down other queries. For this reason, having a dedicated Redis
@@ -219,7 +216,7 @@ Redis process.
Single-core: Like PgBouncer, a single Redis process can only use one
core. It does not support multi-threading.
-Dumb secondaries: Redis secondaries (aka replicas) don't actually
+Dumb secondaries: Redis secondaries (also known as replicas) don't actually
handle any load. Unlike PostgreSQL secondaries, they don't even serve
read queries. They simply replicate data from the primary and take over
only when the primary fails.
@@ -236,7 +233,7 @@ election to determine a new leader.
No leader: A Redis cluster can get into a mode where there are no
primaries. For example, this can happen if Redis nodes are misconfigured
to follow the wrong node. Sometimes this requires forcing one node to
-become a primary via the [`REPLICAOF NO ONE`
+become a primary by using the [`REPLICAOF NO ONE`
command](https://redis.io/commands/replicaof).
### Sidekiq
@@ -254,14 +251,14 @@ The full list of jobs can be found in the
[`app/workers`](https://gitlab.com/gitlab-org/gitlab/tree/master/app/workers)
and
[`ee/app/workers`](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/workers)
-directories in the GitLab code base.
+directories in the GitLab codebase.
#### Runaway Queues
As jobs are added to the Sidekiq queue, Sidekiq worker threads need to
pull these jobs from the queue and finish them at a rate faster than
-they are added. When an imbalance occurs (e.g. delays in the database,
-slow jobs, etc.), Sidekiq queues can balloon and lead to runaway queues.
+they are added. When an imbalance occurs (for example, delays in the database
+or slow jobs), Sidekiq queues can balloon and lead to runaway queues.
In recent months, many of these queues have ballooned due to delays in
PostgreSQL, PgBouncer, and Redis. For example, PgBouncer saturation can
@@ -278,11 +275,11 @@ in a timely manner:
used to process each commit message in the push, but now it farms out
this to `ProcessCommitWorker`.
- Redistribute/gerrymander Sidekiq processes by queue
- types. Long-running jobs (e.g. relating to project import) can often
- squeeze out jobs that run fast (e.g. delivering e-mail). [This technique
+ types. Long-running jobs (for example, relating to project import) can often
+ squeeze out jobs that run fast (for example, delivering e-mail). [This technique
was used in to optimize our existing Sidekiq deployment](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/7219#note_218019483).
- Optimize jobs. Eliminating unnecessary work, reducing network calls
- (e.g. SQL, Gitaly, etc.), and optimizing processor time can yield significant
+ (including SQL and Gitaly), and optimizing processor time can yield significant
benefits.
From the Sidekiq logs, it's possible to see which jobs run the most
diff --git a/doc/development/secure_coding_guidelines.md b/doc/development/secure_coding_guidelines.md
index ebab0e59cc3..44a95f6e820 100644
--- a/doc/development/secure_coding_guidelines.md
+++ b/doc/development/secure_coding_guidelines.md
@@ -112,21 +112,43 @@ Here `params[:ip]` should not contain anything else but numbers and dots. Howeve
In most cases the anchors `\A` for beginning of text and `\z` for end of text should be used instead of `^` and `$`.
-## Denial of Service (ReDoS)
+## Denial of Service (ReDoS) / Catastrophic Backtracking
-[ReDoS](https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS) is a possible attack if the attacker knows
-or controls the regular expression (regex) used, and is able to enter user input to match against the bad regular expression.
+When a regular expression (regex) is used to search for a string and can't find a match,
+it may then backtrack to try other possibilities.
+
+For example, when the regex `.*!$` matches the string `hello!`, the `.*` first matches
+the entire string but then the `!` from the regex is unable to match because the
+character has already been used. In that case, the Ruby regex engine _backtracks_
+one character to allow the `!` to match.
+
+[ReDoS](https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS)
+is an attack in which the attacker knows or controls the regular expression used.
+The attacker may be able to enter user input that triggers this backtracking behavior in a
+way that increases execution time by several orders of magnitude.
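+A minimal illustration of that backtracking in plain Ruby (not a vulnerability by
+itself, just the mechanics described above):
+```ruby
+# `.*` first consumes the whole string, then gives characters back so that
+# `!` can try to match; with no `!` present, every split is tried before failing.
+'hello!'.match(/.*!$/) # => #<MatchData "hello!">
+'hello'.match(/.*!$/)  # => nil, only after exhausting the possibilities
+```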
### Impact
-The resource, for example Unicorn, Puma, or Sidekiq, can be made to hang as it takes a long time to evaluate the bad regex match.
+The resource, for example Unicorn, Puma, or Sidekiq, can be made to hang as it takes
+a long time to evaluate the bad regex match. The evaluation may take so long that the
+resource must be terminated manually.
### Examples
-GitLab-specific examples can be found in the following merge requests:
+Here are some GitLab-specific examples.
+
+User inputs used to create regular expressions:
+
+- [User-controlled filename](https://gitlab.com/gitlab-org/gitlab/-/issues/257497)
+- [User-controlled domain name](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25314)
+- [User-controlled email address](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25122#note_289087459)
-- [MR25314](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25314)
-- [MR25122](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25122#note_289087459)
+Hardcoded regular expressions with backtracking issues:
+
+- [Repository name validation](https://gitlab.com/gitlab-org/gitlab/-/issues/220019)
+- [Link validation](https://gitlab.com/gitlab-org/gitlab/-/issues/218753), and [a bypass](https://gitlab.com/gitlab-org/gitlab/-/issues/273771)
+- [Entity name validation](https://gitlab.com/gitlab-org/gitlab/-/issues/289934)
+- [Validating color codes](https://gitlab.com/gitlab-org/gitlab/commit/717824144f8181bef524592eab882dd7525a60ef)
Consider the following example application, which defines a check using a regular expression. A user entering `user@aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!.com` as the email on a form will hang the web server.
@@ -141,22 +163,32 @@ class Email < ApplicationRecord
def domain_matches
errors.add(:email, 'does not match') if email =~ DOMAIN_MATCH
end
+end
```
### Mitigation
-GitLab has `Gitlab::UntrustedRegexp` which internally uses the [`re2`](https://github.com/google/re2/wiki/Syntax) library.
-By utilizing `re2`, we get a strict limit on total execution time, and a smaller subset of available regex features.
+#### Ruby
+
+GitLab has [`Gitlab::UntrustedRegexp`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/untrusted_regexp.rb),
+which internally uses the [`re2`](https://github.com/google/re2/wiki/Syntax) library.
+`re2` does not support backtracking, so execution time is bounded and roughly linear in the input size, at the cost of a smaller subset of available regex features.
All user-provided regular expressions should use `Gitlab::UntrustedRegexp`.
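+A minimal usage sketch (assuming the class's `match?` predicate; the pattern here is
+contrived for illustration):
+```ruby
+# The pattern is compiled by re2, so a crafted input can't trigger
+# catastrophic backtracking even though the pattern nests quantifiers.
+regexp = Gitlab::UntrustedRegexp.new('([a-zA-Z0-9]+)+\.com')
+
+regexp.match?('gitlab.com')    # => true
+regexp.match?('a' * 50 + '!')  # => false, returned quickly
+```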
For other regular expressions, here are a few guidelines:
-- Remove unnecessary backtracking.
-- Avoid nested quantifiers if possible.
-- Try to be as precise as possible in your regex and avoid the `.` if something else can be used (e.g.: Use `_[^_]+_` instead of `_.*_` to match `_text here_`).
+- If there's a clean non-regex solution, such as `String#start_with?`, consider using it
+- Ruby supports some advanced regex features like [atomic groups](https://www.regular-expressions.info/atomic.html)
+  and [possessive quantifiers](https://www.regular-expressions.info/possessive.html) that eliminate backtracking
+  (see the sketch after this list)
+- Avoid nested quantifiers if possible (for example `(a+)+`)
+- Try to be as precise as possible in your regex and avoid the `.` if there's an alternative
+  - For example, use `_[^_]+_` instead of `_.*_` to match `_text here_`
+- If in doubt, don't hesitate to ping `@gitlab-com/gl-security/appsec`
+
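+A hedged sketch of the last few guidelines above (the pattern and input are made up
+for this example):
+```ruby
+input = "_#{'a' * 50}" # underscore-prefixed text with no closing underscore
+
+input =~ /\A_.*_\z/         # imprecise: `.*` backtracks through the whole string
+input =~ /\A_[^_]+_\z/      # precise character class, far less backtracking
+input =~ /\A_[^_]++_\z/     # possessive quantifier: no backtracking at all
+input =~ /\A_(?>[^_]+)_\z/  # atomic group: equivalent effect
+```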
+#### Go
-An example can be found [in this commit](https://gitlab.com/gitlab-org/gitlab/commit/717824144f8181bef524592eab882dd7525a60ef).
+Go's [`regexp`](https://golang.org/pkg/regexp/) package uses `re2` and isn't vulnerable to backtracking issues.
## Further Links
@@ -299,7 +331,7 @@ Once you've [determined when and where](#setting-expectations) the user submitte
- Content placed inside [HTML URL GET parameters](https://youtu.be/2VFavqfDS6w?t=3494) need to be [URL-encoded](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html#rule-5---url-escape-before-inserting-untrusted-data-into-html-url-parameter-values)
- [Additional contexts may require context-specific encoding](https://youtu.be/2VFavqfDS6w?t=2341).
-### Additional info
+### Additional information
#### XSS mitigation and prevention in Rails
@@ -466,7 +498,7 @@ where you can't avoid this:
characters, for example).
- Always use `--` to separate options from arguments.
-### Ruby
+#### Ruby
Consider using `system("command", "arg0", "arg1", ...)` whenever you can. This prevents an attacker
from concatenating commands.
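+A small sketch of the difference (the `branch` value is a contrived attacker-controlled
+string):
+```ruby
+branch = 'main; touch /tmp/pwned' # attacker-controlled input
+
+# Safe: each token is passed to the OS directly, so no shell interprets the `;`.
+system('git', 'branch', '-d', '--', branch)
+
+# Unsafe: /bin/sh evaluates the whole string, including the injected command.
+# system("git branch -d #{branch}")
+```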
@@ -475,7 +507,7 @@ For more examples on how to use shell commands securely, consult
[Guidelines for shell commands in the GitLab codebase](shell_commands.md).
It contains various examples on how to securely call OS commands.
-### Go
+#### Go
Go has built-in protections that usually prevent an attacker from successfully injecting OS commands.
@@ -558,4 +590,3 @@ In order to prevent this from happening, it is recommended to use the method `us
forbidden!(api_access_denied_message(user))
end
```
- \ No newline at end of file
diff --git a/doc/development/serializing_data.md b/doc/development/serializing_data.md
index 83a75eef556..f5924a44753 100644
--- a/doc/development/serializing_data.md
+++ b/doc/development/serializing_data.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Serializing Data
diff --git a/doc/development/service_measurement.md b/doc/development/service_measurement.md
index 571bbddb494..91fb8248db4 100644
--- a/doc/development/service_measurement.md
+++ b/doc/development/service_measurement.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab Developers Guide to service measurement
@@ -19,7 +19,7 @@ The measuring module is a tool that allows to measure a service's execution, and
- RSS memory usage
- Server worker ID
-The measuring module will log these measurements into a structured log called [`service_measurement.log`](../administration/logs.md#service_measurementlog),
+The measuring module logs these measurements into a structured log called [`service_measurement.log`](../administration/logs.md#service_measurementlog),
as a single entry for each service execution.
For GitLab.com, `service_measurement.log` is ingested in Elasticsearch and Kibana as part of our monitoring solution.
@@ -43,7 +43,7 @@ DummyService.prepend(Measurable)
When prepending a module from the `EE` namespace with EE features, you need to prepend `Measurable` after prepending the `EE` module.
-This way, `Measurable` will be at the bottom of the ancestor chain, in order to measure execution of `EE` features as well:
+This way, `Measurable` is at the bottom of the ancestor chain, in order to measure execution of `EE` features as well:
```ruby
class DummyService
@@ -69,7 +69,7 @@ def extra_attributes_for_measurement
end
```
-After the measurement module is injected in the service, it will be behind a generic feature flag.
+After the measurement module is injected in the service, it is behind a generic feature flag.
To actually use it, you need to enable measuring for the desired service by enabling the feature flag.
### Enabling measurement using feature flags
diff --git a/doc/development/session.md b/doc/development/session.md
index a7f69f570c3..bee857da323 100644
--- a/doc/development/session.md
+++ b/doc/development/session.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Accessing session data
diff --git a/doc/development/sha1_as_binary.md b/doc/development/sha1_as_binary.md
index e69ba149cde..a7bb3001ddb 100644
--- a/doc/development/sha1_as_binary.md
+++ b/doc/development/sha1_as_binary.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Storing SHA1 Hashes As Binary
diff --git a/doc/development/shared_files.md b/doc/development/shared_files.md
index 2c6b1222cfb..859650c2e6c 100644
--- a/doc/development/shared_files.md
+++ b/doc/development/shared_files.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Shared files
@@ -12,7 +12,7 @@ etc. Having so many shared directories makes it difficult to deploy GitLab on
shared storage (e.g. NFS). Working towards GitLab 9.0 we are consolidating
these different directories under the `shared` directory.
-This means that if GitLab will start storing puppies in some future version
+This means that if GitLab begins storing puppies in some future version
then we should put them in `shared/puppies`. Temporary puppy files should be
stored in `shared/tmp`.
diff --git a/doc/development/shell_commands.md b/doc/development/shell_commands.md
index 25113b4ac29..db72454b482 100644
--- a/doc/development/shell_commands.md
+++ b/doc/development/shell_commands.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Guidelines for shell commands in the GitLab codebase
@@ -53,7 +53,7 @@ system(*%W(#{Gitlab.config.git.bin_path} branch -d -- #{branch_name}))
## Bypass the shell by splitting commands into separate tokens
-When we pass shell commands as a single string to Ruby, Ruby will let `/bin/sh` evaluate the entire string. Essentially, we are asking the shell to evaluate a one-line script. This creates a risk for shell injection attacks. It is better to split the shell command into tokens ourselves. Sometimes we use the scripting capabilities of the shell to change the working directory or set environment variables. All of this can also be achieved securely straight from Ruby
+When we pass shell commands as a single string to Ruby, Ruby lets `/bin/sh` evaluate the entire string. Essentially, we are asking the shell to evaluate a one-line script. This creates a risk for shell injection attacks. It is better to split the shell command into tokens ourselves. Sometimes we use the scripting capabilities of the shell to change the working directory or set environment variables. All of this can also be achieved securely straight from Ruby.
```ruby
# Wrong
diff --git a/doc/development/shell_scripting_guide/index.md b/doc/development/shell_scripting_guide/index.md
index f447a0e0e72..6071ae3a09d 100644
--- a/doc/development/shell_scripting_guide/index.md
+++ b/doc/development/shell_scripting_guide/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Shell scripting standards and style guidelines
@@ -21,7 +21,7 @@ deviations from this guide, they should be described in the
## Avoid using shell scripts
-CAUTION: **Caution:**
+WARNING:
This is a must-read section.
Having said all of the above, we recommend staying away from shell scripts
@@ -72,8 +72,8 @@ shell check:
- shellcheck scripts/**/*.sh # path to your shell scripts
```
-TIP: **Tip:**
-By default, ShellCheck will use the [shell detection](https://github.com/koalaman/shellcheck/wiki/SC2148#rationale)
+NOTE:
+By default, ShellCheck uses the [shell detection](https://github.com/koalaman/shellcheck/wiki/SC2148#rationale)
to determine the shell dialect in use. If the shell file is out of your control and ShellCheck cannot
detect the dialect, use the `-s` flag to specify it: `-s sh` or `-s bash`.
@@ -92,7 +92,7 @@ use this job:
```yaml
shfmt:
- image: mvdan/shfmt:v3.1.0-alpine
+ image: mvdan/shfmt:v3.2.0-alpine
stage: test
before_script:
- shfmt -version
@@ -100,14 +100,14 @@ shfmt:
- shfmt -i 2 -ci -d scripts # path to your shell scripts
```
-TIP: **Tip:**
-By default, shfmt will use the [shell detection](https://github.com/mvdan/sh#shfmt) similar to one of ShellCheck
+NOTE:
+By default, shfmt uses [shell detection](https://github.com/mvdan/sh#shfmt) similar to ShellCheck's
and ignores files starting with a period. To override this, use the `-ln` flag to specify the shell dialect:
`-ln posix` or `-ln bash`.
## Testing
-NOTE: **Note:**
+NOTE:
This is a work in progress.
It is an [ongoing effort](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64016) to evaluate different tools for the
diff --git a/doc/development/sidekiq_debugging.md b/doc/development/sidekiq_debugging.md
index e0f74b39c9a..a9b6e246861 100644
--- a/doc/development/sidekiq_debugging.md
+++ b/doc/development/sidekiq_debugging.md
@@ -3,3 +3,6 @@ redirect_to: '../administration/troubleshooting/sidekiq.md'
---
This document was moved to [another location](../administration/troubleshooting/sidekiq.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/sidekiq_style_guide.md b/doc/development/sidekiq_style_guide.md
index 13ae39997bc..e4f07f732cf 100644
--- a/doc/development/sidekiq_style_guide.md
+++ b/doc/development/sidekiq_style_guide.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Sidekiq Style Guide
@@ -27,7 +27,7 @@ After adding a new queue, run `bin/rake
gitlab:sidekiq:all_queues_yml:generate` to regenerate
`app/workers/all_queues.yml` or `ee/app/workers/all_queues.yml` so that
it can be picked up by
-[`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.md).
+[`sidekiq-cluster`](../administration/operations/extra_sidekiq_processes.md).
Additionally, run
`bin/rake gitlab:sidekiq:sidekiq_queues_yml:generate` to regenerate
`config/sidekiq_queues.yml`.
@@ -42,8 +42,8 @@ without needing to explicitly list all their queue names. If, for example, all
workers that are managed by `sidekiq-cron` use the `cronjob` queue namespace, we
can spin up a Sidekiq process specifically for these kinds of scheduled jobs.
If a new worker using the `cronjob` namespace is added later on, the Sidekiq
-process will automatically pick up jobs for that worker too (after having been
-restarted), without the need to change any configuration.
+process also picks up jobs for that worker (after having been restarted),
+without the need to change any configuration.
A queue namespace can be set using the `queue_namespace` DSL class method:
@@ -57,19 +57,19 @@ class SomeScheduledTaskWorker
end
```
-Behind the scenes, this will set `SomeScheduledTaskWorker.queue` to
-`cronjob:some_scheduled_task`. Commonly used namespaces will have their own
+Behind the scenes, this sets `SomeScheduledTaskWorker.queue` to
+`cronjob:some_scheduled_task`. Commonly used namespaces have their own
concern module that can easily be included into the worker class, and that may
set other Sidekiq options besides the queue namespace. `CronjobQueue`, for
example, sets the namespace, but also disables retries.
-`bundle exec sidekiq` is namespace-aware, and will automatically listen on all
+`bundle exec sidekiq` is namespace-aware, and listens on all
queues in a namespace (technically: all queues prefixed with the namespace name)
when a namespace is provided instead of a simple queue name in the `--queue`
(`-q`) option, or in the `:queues:` section in `config/sidekiq_queues.yml`.
Note that adding a worker to an existing namespace should be done with care, as
-the extra jobs will take resources away from jobs from workers that were already
+the extra jobs take resources away from jobs from workers that were already
there, if the resources available to the Sidekiq process handling the namespace
are not adjusted appropriately.
@@ -121,9 +121,8 @@ As a general rule, a worker can be considered idempotent if:
A good example of that would be a cache expiration worker.
-A job scheduled for an idempotent worker will automatically be
-[deduplicated](#deduplication) when an unstarted job with the same
-arguments is already in the queue.
+A job scheduled for an idempotent worker is [deduplicated](#deduplication) when
+an unstarted job with the same arguments is already in the queue.
### Ensuring a worker is idempotent
@@ -160,9 +159,8 @@ end
It's encouraged to only have the `idempotent!` call in the top-most worker class, even if
the `perform` method is defined in another class or module.
-If the worker class is not marked as idempotent, a cop will fail.
-Consider skipping the cop if you're not confident your job can safely
-run multiple times.
+If the worker class isn't marked as idempotent, a cop fails. Consider skipping
+the cop if you're not confident your job can safely run multiple times.
### Deduplication
@@ -230,7 +228,7 @@ end
#### Scheduling jobs in the future
GitLab doesn't skip jobs scheduled in the future, as we assume that
-the state will have changed by the time the job is scheduled to
+the state has changed by the time the job is scheduled to
execute. Deduplication of jobs scheduled in the future is possible
for both `until_executed` and `until_executing` strategies.
@@ -251,26 +249,6 @@ module AuthorizedProjectUpdate
end
```
-#### Troubleshooting
-
-If the automatic deduplication were to cause issues in certain
-queues. This can be temporarily disabled by enabling a feature flag
-named `disable_<queue name>_deduplication`. For example to disable
-deduplication for the `AuthorizedProjectsWorker`, we would enable the
-feature flag `disable_authorized_projects_deduplication`.
-
-From ChatOps:
-
-```shell
-/chatops run feature set disable_authorized_projects_deduplication true
-```
-
-From the rails console:
-
-```ruby
-Feature.enable!(:disable_authorized_projects_deduplication)
-```
-
## Limited capacity worker
It is possible to limit the number of concurrent running jobs for a worker class
@@ -278,10 +256,10 @@ by using the `LimitedCapacity::Worker` concern.
The worker must implement three methods:
-- `perform_work` - the concern implements the usual `perform` method and calls
-`perform_work` if there is any capacity available.
-- `remaining_work_count` - number of jobs that will have work to perform.
-- `max_running_jobs` - maximum number of jobs allowed to run concurrently.
+- `perform_work`: The concern implements the usual `perform` method and calls
+ `perform_work` if there's any available capacity.
+- `remaining_work_count`: Number of jobs that have work to perform.
+- `max_running_jobs`: Maximum number of jobs allowed to run concurrently.
```ruby
class MyDummyWorker
@@ -303,7 +281,7 @@ end
In addition to the regular worker, a cron worker must be defined as well to
backfill the queue with jobs. The arguments passed to `perform_with_capacity`
-will be passed along to the `perform_work` method.
+are passed to the `perform_work` method.
```ruby
class ScheduleMyDummyCronWorker
@@ -318,10 +296,10 @@ end
### How many jobs are running?
-It will be running `max_running_jobs` at almost all times.
+The worker runs `max_running_jobs` jobs at almost all times.
-The cron worker will check the remaining capacity on each execution and it
-will schedule at most `max_running_jobs` jobs. Those jobs on completion will
+The cron worker checks the remaining capacity on each execution and
+schedules at most `max_running_jobs` jobs. On completion, those jobs
re-enqueue themselves immediately, but not on failure. The cron worker is in
charge of replacing those failed jobs.
@@ -329,11 +307,11 @@ charge of replacing those failed jobs.
This concern disables Sidekiq retries, logs the errors, and sends the job to the
dead queue. This is done to have only one source that produces jobs and because
-the retry would occupy a slot with a job that will be performed in the distant future.
+the retry would occupy a slot with a job that would run in the distant future.
We let the cron worker enqueue new jobs. This could be seen as our retry and
back-off mechanism, because the job might fail again if executed immediately.
-This means that for every failed job, we will be running at a lower capacity
+This means that for every failed job, we run at a lower capacity
until the cron worker fills the capacity again. If it is important for the
worker not to get a backlog, exceptions must be handled in `#perform_work` and
the job should not raise.
@@ -416,7 +394,7 @@ each of which represents a particular type of workload.
When changing a queue's urgency, or adding a new queue, we need to take
into account the expected workload on the new shard. Note that, if we're
changing an existing queue, there is also an effect on the old shard,
-but that will always be a reduction in work.
+but that always reduces work.
To do this, we want to calculate the expected increase in total execution time
and RPS (throughput) for the new shard. We can get these values from:
@@ -429,7 +407,7 @@ and RPS (throughput) for the new shard. We can get these values from:
- The [Shard Detail
dashboard](https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail)
has Total Execution Time and Throughput (RPS). The Shard Utilization
- panel will show if there is currently any excess capacity for this
+ panel displays if there is currently any excess capacity for this
shard.
We can then calculate the RPS * average runtime (estimated for new jobs)
@@ -454,7 +432,7 @@ Most background jobs in the GitLab application communicate with other GitLab
services. For example, PostgreSQL, Redis, Gitaly, and Object Storage. These are considered
to be "internal" dependencies for a job.
-However, some jobs will be dependent on external services in order to complete
+However, some jobs are dependent on external services in order to complete
successfully. Some examples include:
1. Jobs which call web-hooks configured by a user.
@@ -492,7 +470,7 @@ A job cannot be both high urgency and have external dependencies.
Workers that are constrained by CPU or memory resource limitations should be
annotated with the `worker_resource_boundary` method.
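+A minimal sketch of the annotation (the worker name is hypothetical):
+```ruby
+class ExpensiveComputationWorker # hypothetical example
+  include ApplicationWorker
+
+  # This worker spends its time running Ruby code rather than waiting on
+  # other services, so it is declared CPU bound.
+  worker_resource_boundary :cpu
+
+  def perform(project_id)
+    # CPU-heavy work here
+  end
+end
+```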
-Most workers tend to spend most of their time blocked, wait on network responses
+Most workers tend to spend most of their time blocked, waiting on network responses
from other services such as Redis, PostgreSQL, and Gitaly. Since Sidekiq is a
multi-threaded environment, these jobs can be scheduled with high concurrency.
@@ -505,8 +483,8 @@ hosting the process has. For IO bound workers, this is not a problem, since most
of the threads are blocked in underlying libraries (which are outside of the
GIL).
-If many threads are attempting to run Ruby code simultaneously, this will lead
-to contention on the GIL which will have the affect of slowing down all
+If many threads are attempting to run Ruby code simultaneously, this leads
+to contention on the GIL, which has the effect of slowing down all
processes.
In high-traffic environments, knowing that a worker is CPU-bound allows us to
@@ -517,9 +495,9 @@ Likewise, if a worker uses large amounts of memory, we can run these on a
bespoke low concurrency, high memory fleet.
Note that memory-bound workers create heavy GC workloads, with pauses of
-10-50ms. This will have an impact on the latency requirements for the
+10-50ms. This has an impact on the latency requirements for the
worker. For this reason, `memory` bound, `urgency :high` jobs are not
-permitted and will fail CI. In general, `memory` bound workers are
+permitted and fail CI. In general, `memory` bound workers are
discouraged, and alternative approaches to processing the work should be
considered.
@@ -583,12 +561,11 @@ default weight, which is 1.
To have some more information about workers in the logs, we add
[metadata to the jobs in the form of an
`ApplicationContext`](logging.md#logging-context-metadata-through-rails-or-grape-requests).
-In most cases, when scheduling a job from a request, this context will
-already be deducted from the request and added to the scheduled
-job.
+In most cases, when scheduling a job from a request, this context is already
+deduced from the request and added to the scheduled job.
When a job runs, the context that was active when it was scheduled
-will be restored. This causes the context to be propagated to any job
+is restored. This causes the context to be propagated to any job
scheduled from within the running job.
All this means that in most cases, to add context to jobs, we don't
@@ -603,7 +580,7 @@ As with most our cops, there are perfectly valid reasons for disabling
them. In this case it could be that the context from the request is
correct. Or maybe you've specified a context already in a way that
isn't picked up by the cops. In any case, leave a code comment
-pointing to which context will be used when disabling the cops.
+pointing to which context to use when disabling the cops.
When you do provide objects to the context, make sure that the
route for namespaces and projects is pre-loaded. This can be done by using
@@ -699,7 +676,7 @@ blocks:
## Arguments logging
-As of GitLab 13.6, Sidekiq job arguments will be logged by default, unless [`SIDEKIQ_LOG_ARGUMENTS`](../administration/troubleshooting/sidekiq.md#log-arguments-to-sidekiq-jobs)
+As of GitLab 13.6, Sidekiq job arguments are logged by default, unless [`SIDEKIQ_LOG_ARGUMENTS`](../administration/troubleshooting/sidekiq.md#log-arguments-to-sidekiq-jobs)
is disabled.
By default, the only arguments logged are numeric arguments, because
@@ -820,7 +797,7 @@ This approach requires multiple releases.
##### Parameter hash
-This approach will not require multiple releases if an existing worker already
+This approach doesn't require multiple releases if an existing worker already
uses a parameter hash.
1. Use a parameter hash in the worker to allow future flexibility.
diff --git a/doc/development/single_table_inheritance.md b/doc/development/single_table_inheritance.md
index 51268092225..6b35d9f71da 100644
--- a/doc/development/single_table_inheritance.md
+++ b/doc/development/single_table_inheritance.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Single Table Inheritance
diff --git a/doc/development/sql.md b/doc/development/sql.md
index bf405eab688..f68ca44efa8 100644
--- a/doc/development/sql.md
+++ b/doc/development/sql.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# SQL Query Guidelines
diff --git a/doc/development/swapping_tables.md b/doc/development/swapping_tables.md
index f633721ea9d..cb038a3b85a 100644
--- a/doc/development/swapping_tables.md
+++ b/doc/development/swapping_tables.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Swapping Tables
diff --git a/doc/development/telemetry/event_dictionary.md b/doc/development/telemetry/event_dictionary.md
index 53b7ddea095..bc230a46441 100644
--- a/doc/development/telemetry/event_dictionary.md
+++ b/doc/development/telemetry/event_dictionary.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/event_dictionary.md'
---
This document was moved to [another location](../product_analytics/event_dictionary.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/telemetry/index.md b/doc/development/telemetry/index.md
index 79ea80a52ea..24e83ffc524 100644
--- a/doc/development/telemetry/index.md
+++ b/doc/development/telemetry/index.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/index.md'
---
This document was moved to [another location](../product_analytics/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/telemetry/snowplow.md b/doc/development/telemetry/snowplow.md
index fd75d29286b..7cd385be681 100644
--- a/doc/development/telemetry/snowplow.md
+++ b/doc/development/telemetry/snowplow.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/snowplow.md'
---
This document was moved to [another location](../product_analytics/snowplow.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/telemetry/usage_ping.md b/doc/development/telemetry/usage_ping.md
index 1026e7a5a66..c890353fe3b 100644
--- a/doc/development/telemetry/usage_ping.md
+++ b/doc/development/telemetry/usage_ping.md
@@ -3,3 +3,6 @@ redirect_to: '../product_analytics/usage_ping.md'
---
This document was moved to [another location](../product_analytics/usage_ping.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/testing.md b/doc/development/testing.md
index 79ef8e75432..a0130bfb4ac 100644
--- a/doc/development/testing.md
+++ b/doc/development/testing.md
@@ -3,3 +3,6 @@ redirect_to: 'testing_guide/index.md'
---
This document was moved to [another location](testing_guide/index.md).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index dabb18c1f75..d1b7883451f 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -50,6 +50,22 @@ bundle exec guard
When using spring and guard together, use `SPRING=1 bundle exec guard` instead to make use of spring.
+### Ruby warnings
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/47767) in GitLab 13.7.
+
+We've enabled [deprecation warnings](https://ruby-doc.org/core-2.7.2/Warning.html)
+by default when running specs. Making these warnings more visible to developers
+helps with upgrading to newer Ruby versions.
+
+You can silence deprecation warnings by setting the environment variable
+`SILENCE_DEPRECATIONS`, for example:
+
+```shell
+# silence all deprecation warnings
+SILENCE_DEPRECATIONS=1 bin/rspec spec/models/project_spec.rb
+```
+
### Test speed
GitLab has a massive test suite that, without [parallelization](ci.md#test-suite-parallelization-on-the-ci), can take hours
@@ -311,7 +327,7 @@ Use the coverage reports to ensure your tests cover 100% of your code.
### System / Feature tests
-NOTE: **Note:**
+NOTE:
Before writing a new system test, [please consider **not**
writing one](testing_levels.md#consider-not-writing-a-system-test)!
@@ -432,7 +448,7 @@ instead of 30+ seconds in case of a regular `spec_helper`.
### `subject` and `let` variables
-GitLab's RSpec suite has made extensive use of `let`(along with its strict, non-lazy
+The GitLab RSpec suite has made extensive use of `let` (along with its strict, non-lazy
version `let!`) variables to reduce duplication. However, this sometimes [comes at the cost of clarity](https://thoughtbot.com/blog/lets-not),
so we need to set some guidelines for their use going forward:
@@ -603,6 +619,32 @@ it "really connects to Prometheus", :permit_dns do
And if you need more specific control, the DNS blocking is implemented in
`spec/support/helpers/dns_helpers.rb` and these methods can be called elsewhere.
+#### Stubbing File methods
+
+In the situations where you need to
+[stub](https://relishapp.com/rspec/rspec-mocks/v/3-9/docs/basics/allowing-messages)
+methods such as `File.read`, make sure to:
+
+1. Stub `File.read` for only the filepath you are interested in.
+1. Call the original implementation for other filepaths.
+
+Otherwise `File.read` calls from other parts of the codebase get
+stubbed incorrectly. You should use the `stub_file_read` and
+`expect_file_read` helper methods, which do the stubbing for
+`File.read` correctly.
+
+```ruby
+# bad, all File.read calls return nothing
+allow(File).to receive(:read)
+
+# good
+stub_file_read(my_filepath)
+
+# also OK
+allow(File).to receive(:read).and_call_original
+allow(File).to receive(:read).with(my_filepath)
+```
+
#### Filesystem
Filesystem data can be roughly split into "repositories", and "everything else".
@@ -678,7 +720,7 @@ at all possible.
#### Test Snowplow events
-CAUTION: **Warning:**
+WARNING:
Snowplow performs **runtime type checks** by using the [contracts gem](https://rubygems.org/gems/contracts).
Since Snowplow is **by default disabled in tests and development**, it can be hard to
**catch exceptions** when mocking `Gitlab::Tracking`.
@@ -750,7 +792,7 @@ describe "#==" do
end
```
-CAUTION: **Caution:**
+WARNING:
Only use simple values as input in the `where` block. Using procs, stateful
objects, FactoryBot-created objects etc. can lead to
[unexpected results](https://github.com/tomykaira/rspec-parameterized/issues/8).
diff --git a/doc/development/testing_guide/ci.md b/doc/development/testing_guide/ci.md
index 618f9010b4d..7318f767219 100644
--- a/doc/development/testing_guide/ci.md
+++ b/doc/development/testing_guide/ci.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab tests in the Continuous Integration (CI) context
@@ -12,8 +12,8 @@ Our current CI parallelization setup is as follows:
1. The `retrieve-tests-metadata` job in the `prepare` stage ensures we have a
`knapsack/report-master.json` file:
- - The `knapsack/report-master.json` file is fetched from S3, if it's not here
- we initialize the file with `{}`.
+   - The `knapsack/report-master.json` file is fetched from the latest `master` pipeline that runs `update-tests-metadata`
+     (for now, the 2-hourly scheduled `master` pipeline). If the file isn't available, we initialize it with `{}`.
1. Each `[rspec|rspec-ee] [unit|integration|system|geo] n m` job are run with
`knapsack rspec` and should have an evenly distributed share of tests:
- It works because the jobs have access to the `knapsack/report-master.json`
@@ -25,9 +25,9 @@ Our current CI parallelization setup is as follows:
1. The `update-tests-metadata` job (which only runs on scheduled pipelines for
[the canonical project](https://gitlab.com/gitlab-org/gitlab)) takes all the
`knapsack/rspec*_pg_*.json` files and merges them all together into a single
- `knapsack/report-master.json` file that is then uploaded to S3.
+   `knapsack/report-master.json` file that is saved as an artifact.
-After that, the next pipeline will use the up-to-date `knapsack/report-master.json` file.
+After that, the next pipeline uses the up-to-date `knapsack/report-master.json` file.
## Monitoring
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index ef0bd9902e1..d60b780eeea 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -1,20 +1,20 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Beginner's guide to writing end-to-end tests
-In this tutorial, you will learn about the creation of end-to-end (_e2e_) tests
+This tutorial walks you through the creation of end-to-end (_e2e_) tests
for [GitLab Community Edition](https://about.gitlab.com/install/?version=ce) and
[GitLab Enterprise Edition](https://about.gitlab.com/install/).
-By the end of this tutorial, you will be able to:
+By the end of this tutorial, you can:
- Determine whether an end-to-end test is needed.
- Understand the directory structure within `qa/`.
-- Write a basic end-to-end test that will validate login features.
+- Write a basic end-to-end test that validates login features.
- Develop any missing [page object](page_objects.md) libraries.
## Before you write a test
@@ -29,7 +29,7 @@ must be configured to run the specs. The end-to-end tests:
- Create [resources](resources.md) (such as project, issue, user) on an ad-hoc basis.
- Test the UI and API interfaces, and use the API to efficiently set up the UI tests.
-TIP: **Tip:**
+NOTE:
For more information, see [End-to-end testing Best Practices](best_practices.md).
## Determine if end-to-end tests are needed
@@ -52,7 +52,7 @@ For information about the distribution of tests per level in GitLab, see
- Finally, discuss the proposed test with the developer(s) involved in implementing
the feature and the lower-level tests.
-CAUTION: **Caution:**
+WARNING:
Check both [GitLab Community Edition](https://gitlab-org.gitlab.io/gitlab-foss/coverage-ruby/#_AllFiles) and
[GitLab Enterprise Edition](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/#_AllFiles) coverage projects
for previously-written tests for this feature. For analyzing the code coverage,
@@ -67,18 +67,18 @@ end-to-end flows, and is easiest to understand.
The GitLab QA end-to-end tests are organized by the different
[stages in the DevOps lifecycle](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa/qa/specs/features/browser_ui).
Determine where the test should be placed by
-[stage](https://about.gitlab.com/handbook/product/product-categories/#devops-stages),
-determine which feature the test will belong to, and then place it in a subdirectory
+[stage](https://about.gitlab.com/handbook/product/categories/#devops-stages),
+determine which feature the test belongs to, and then place it in a subdirectory
under the stage.
![DevOps lifecycle by stages](img/gl-devops-lifecycle-by-stage-numbers_V12_10.png)
-If the test is Enterprise Edition only, the test will be created in the `features/ee`
+If the test is Enterprise Edition only, the test is created in the `features/ee`
directory, but follow the same DevOps lifecycle format.
## Create a skeleton test
-In the first part of this tutorial we will be testing login, which is owned by the
+In the first part of this tutorial, we test login, which is owned by the
Manage stage. Inside `qa/specs/features/browser_ui/1_manage/login`, create a
file `basic_login_spec.rb`.
@@ -86,7 +86,7 @@ file `basic_login_spec.rb`.
See the [`RSpec.describe` outer block](#the-outer-rspecdescribe-block)
-CAUTION: **Deprecation notice:**
+WARNING:
The outer `context` [was deprecated](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/550) in `13.2`
in adherence to RSpec 4.0 specifications. Use `RSpec.describe` instead.
@@ -286,12 +286,12 @@ end
Note the following important points:
-- At the start of our example, we will be at the `page/issue/show.rb` [page](page_objects.md).
+- At the start of our example, we are at the `page/issue/show.rb` [page](page_objects.md).
- Our test fabricates only what it needs, when it needs it.
- The issue is fabricated through the API to save time.
- GitLab prefers `let()` over instance variables. See
[best practices](../best_practices.md#subject-and-let-variables).
-- `be_closed` is not implemented in `page/project/issue/show.rb` yet, but will be
+- `be_closed` is not implemented in `page/project/issue/show.rb` yet, but is
implemented in the next step.
The issue is fabricated as a [Resource](resources.md), which is a GitLab entity
@@ -349,3 +349,9 @@ Where `<test_file>` is:
- `qa/specs/features/browser_ui/1_manage/login/login_spec.rb` when running the Login example.
- `qa/specs/features/browser_ui/2_plan/issues/issue_spec.rb` when running the Issue example.
+
+## End-to-end test merge request template
+
+When submitting a new end-to-end test, use the ["New End to End Test"](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/merge_request_templates/New%20End%20To%20End%20Test.md)
+merge request description template for additional
+steps that are required prior to a successful merge.
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 4d12a0f79cb..b761e33367f 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -1,7 +1,7 @@
---
stage: none
group: Development
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# End-to-end testing Best Practices
@@ -278,10 +278,10 @@ In line with [using the API](#prefer-api-over-ui), use a `Commit` resource whene
```ruby
# Using a commit resource
-Resource::Commit.fabricate_via_api! do |commit|
+Resource::Repository::Commit.fabricate_via_api! do |commit|
commit.commit_message = 'Initial commit'
commit.add_files([
- {file_path: 'README.md', content: 'Hello, GitLab'}
+ { file_path: 'README.md', content: 'Hello, GitLab' }
])
end
@@ -372,8 +372,10 @@ end
[See this merge request for a real example of adding a custom matcher](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/46302).
-NOTE: **Note:**
-We need to create custom negatable matchers only for the predicate methods we've added to the test framework, and only if we're using `not_to`. If we use `to have_no_*` a negatable matcher is not necessary.
+We create custom negatable matchers in `qa/spec/support/matchers`.
+
+NOTE:
+We need to create custom negatable matchers only for the predicate methods we've added to the test framework, and only if we're using `not_to`. If we use `to have_no_*`, a negatable matcher is not necessary, but it increases code readability.
### Why we need negatable matchers
diff --git a/doc/development/testing_guide/end_to_end/dynamic_element_validation.md b/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
index 871b3f80c18..1e7f528f6ff 100644
--- a/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
+++ b/doc/development/testing_guide/end_to_end/dynamic_element_validation.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Dynamic Element Validation
@@ -23,7 +23,7 @@ We interpret user actions on the page to have some sort of effect. These actions
### Navigation
-When a page is navigated to, there are elements that will always appear on the page unconditionally.
+When a page is navigated to, there are elements that always appear on the page.
Dynamic element validation is instituted when using
@@ -100,7 +100,7 @@ Runtime::Browser.visit(:gitlab, Page::MyPage)
execute_stuff
```
-will invoke GitLab QA to scan `MyPage` for `my_element` and `another_element` to be on the page before continuing to
+invokes GitLab QA to scan `MyPage` for `my_element` and `another_element` to be on the page before continuing to
`execute_stuff`
### Clicking
@@ -113,7 +113,7 @@ def open_layer
end
```
-will invoke GitLab QA to ensure that `message_content` appears on
+invokes GitLab QA to ensure that `message_content` appears on
the Layer upon clicking `my_element`.
-This will imply that the Layer is indeed rendered before we continue our test.
+This implies that the Layer is indeed rendered before we continue our test.
diff --git a/doc/development/testing_guide/end_to_end/environment_selection.md b/doc/development/testing_guide/end_to_end/environment_selection.md
index f5e3e99b79e..7e34b6c265d 100644
--- a/doc/development/testing_guide/end_to_end/environment_selection.md
+++ b/doc/development/testing_guide/end_to_end/environment_selection.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Environment selection
@@ -19,7 +19,7 @@ We can specify what environments or pipelines to run tests against using the `on
| `production` | Match against production | `Static` |
| `pipeline` | Match against a pipeline | `Array` or `Static`|
-CAUTION: **Caution:**
+WARNING:
You cannot specify `:production` and `{ <switch>: 'value' }` simultaneously.
These options are mutually exclusive. If you want to specify production, you
can control the `tld` and `domain` independently.
@@ -35,7 +35,7 @@ can control the `tld` and `domain` independently.
| `dev.gitlab.org` | `only: { tld: '.org', domain: 'gitlab', subdomain: 'dev' }` | `(dev).gitlab.org` |
| `staging.gitlab.com & domain.gitlab.com` | `only: { subdomain: %i[staging domain] }` | `(staging|domain).+.com` |
| `nightly` | `only: { pipeline: :nightly }` | "nightly" |
-| `nightly`, `canary` | `only_run_in_pipeline: [:nightly, :canary]` | ["nightly"](https://gitlab.com/gitlab-org/quality/nightly) and ["canary"](https://gitlab.com/gitlab-org/quality/canary) |
+| `nightly`, `canary` | `only: { pipeline: [:nightly, :canary] }` | ["nightly"](https://gitlab.com/gitlab-org/quality/nightly) and ["canary"](https://gitlab.com/gitlab-org/quality/canary) |
```ruby
RSpec.describe 'Area' do
@@ -65,4 +65,4 @@ If you want to run an `only: { :pipeline }` tagged test on your local GDK make s
Similarly to specifying that a test should only run against a specific environment, it's also possible to quarantine a
test only when it runs against a specific environment. The syntax is exactly the same, except that the `only: { ... }`
hash is nested in the [`quarantine: { ... }`](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests) hash.
-For instance, `quarantine: { only: { subdomain: :staging } }` will only quarantine the test when run against staging.
+For instance, `quarantine: { only: { subdomain: :staging } }` only quarantines the test when run against staging.
diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md
index 2ff1c9f6dc3..1e0eda6491a 100644
--- a/doc/development/testing_guide/end_to_end/feature_flags.md
+++ b/doc/development/testing_guide/end_to_end/feature_flags.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Testing with feature flags
@@ -10,14 +10,14 @@ To run a specific test with a feature flag enabled you can use the `QA::Runtime:
enable and disable feature flags ([via the API](../../../api/features.md)).
Note that administrator authorization is required to change feature flags. `QA::Runtime::Feature`
-will automatically authenticate as an administrator as long as you provide an appropriate access
+automatically authenticates as an administrator as long as you provide an appropriate access
token via `GITLAB_QA_ADMIN_ACCESS_TOKEN` (recommended), or provide `GITLAB_ADMIN_USERNAME`
and `GITLAB_ADMIN_PASSWORD`.
Please be sure to include the tag `:requires_admin` so that the test can be skipped in environments
where admin access is not available.
-CAUTION: **Caution:**
+WARNING:
You are strongly advised to [enable feature flags only for a group, project, user](../../feature_flags/development.md#feature-actors),
or [feature group](../../feature_flags/development.md#feature-groups). This makes it possible to
test a feature in a shared environment without affecting other users.
@@ -60,7 +60,7 @@ feature_group = "a_feature_group"
Runtime::Feature.enable(:feature_flag_name, feature_group: feature_group)
```
-If no scope is provided, the feature flag will be set instance-wide:
+If no scope is provided, the feature flag is set instance-wide:
```ruby
# This will affect all users!
diff --git a/doc/development/testing_guide/end_to_end/flows.md b/doc/development/testing_guide/end_to_end/flows.md
index 291d8bd5319..b99a1f9deb7 100644
--- a/doc/development/testing_guide/end_to_end/flows.md
+++ b/doc/development/testing_guide/end_to_end/flows.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Flows in GitLab QA
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md
index 1d144359ed8..d981f9bcbba 100644
--- a/doc/development/testing_guide/end_to_end/index.md
+++ b/doc/development/testing_guide/end_to_end/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# End-to-end Testing
@@ -126,8 +126,8 @@ For example, when we [dequarantine](https://about.gitlab.com/handbook/engineerin
a flaky test we first want to make sure that it's no longer flaky.
We can do that using the `ce:custom-parallel` and `ee:custom-parallel` jobs.
Both are manual jobs that you can configure using custom variables.
-When you click the name (not the play icon) of one of the parallel jobs,
-you'll be prompted to enter variables. You can use any of [the variables
+When clicking the name (not the play icon) of one of the parallel jobs,
+you are prompted to enter variables. You can use any of [the variables
that can be used with `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#supported-gitlab-environment-variables)
as well as these:
@@ -137,7 +137,7 @@ as well as these:
| `QA_TESTS` | The test(s) to run (no default, which means run all the tests in the scenario). Use file paths as you would when running tests via RSpec, e.g., `qa/specs/features/ee/browser_ui` would include all the `EE` UI tests. |
| `QA_RSPEC_TAGS` | The RSpec tags to add (no default) |
-For now [manual jobs with custom variables will not use the same variable
+For now [manual jobs with custom variables don't use the same variable
when retried](https://gitlab.com/gitlab-org/gitlab/-/issues/31367), so if you want to run the same test(s) multiple times,
specify the same variables in each `custom-parallel` job (up to as
many of the 10 available jobs that you want to run).
diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md
index 7eacaf4b08a..939e44cedd9 100644
--- a/doc/development/testing_guide/end_to_end/page_objects.md
+++ b/doc/development/testing_guide/end_to_end/page_objects.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Page objects in GitLab QA
@@ -22,7 +22,7 @@ fields.
## Why do we need that?
We need page objects because we need to reduce duplication and avoid problems
-whenever someone changes some selectors in GitLab's source code.
+whenever someone changes some selectors in the GitLab source code.
Imagine that we have a hundred specs in GitLab QA, and we need to sign into
GitLab each time, before we make assertions. Without a page object, one would
@@ -30,13 +30,13 @@ need to rely on volatile helpers or invoke Capybara methods directly. Imagine
invoking `fill_in :user_login` in every `*_spec.rb` file / test example.
When someone later changes `t.text_field :login` in the view associated with
-this page to `t.text_field :username` it will generate a different field
+this page to `t.text_field :username`, it generates a different field
identifier, what would effectively break all tests.
Because we are using `Page::Main::Login.perform(&:sign_in_using_credentials)`
everywhere, when we want to sign in to GitLab, the page object is the single
-source of truth, and we will need to update `fill_in :user_login`
-to `fill_in :user_username` only in a one place.
+source of truth, and we must update `fill_in :user_login`
+to `fill_in :user_username` only in one place.
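+As a rough sketch (the method body is illustrative, not the actual implementation), that single place lives inside the page object:
+```ruby
+module Page
+  module Main
+    class Login < Page::Base
+      def sign_in_using_credentials
+        # Only this line changes when the view field is renamed.
+        fill_in :user_username, with: 'my-username'
+        fill_in :user_password, with: 'my-password'
+        click_button 'Sign in'
+      end
+    end
+  end
+end
+```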
## What problems did we have in the past?
@@ -44,7 +44,7 @@ We do not run QA tests for every commit, because of performance reasons, and
the time it would take to build packages and test everything.
That is why when someone changes `t.text_field :login` to
-`t.text_field :username` in the _new session_ view we won't know about this
+`t.text_field :username` in the _new session_ view, we don't know about this
change until our GitLab QA nightly pipeline fails, or until someone triggers
`package-and-qa` action in their merge request.
@@ -56,14 +56,14 @@ problem by introducing coupling between GitLab CE / EE views and GitLab QA.
## How did we solve fragile tests problem?
-Currently, when you add a new `Page::Base` derived class, you will also need to
+Currently, when you add a new `Page::Base` derived class, you must also
define all selectors that your page objects depend on.
Whenever you push your code to CE / EE repository, `qa:selectors` sanity test
-job is going to be run as a part of a CI pipeline.
+job runs as a part of a CI pipeline.
-This test is going to validate all page objects that we have implemented in
-`qa/page` directory. When it fails, you will be notified about missing
+This test validates all page objects that we have implemented in the
+`qa/page` directory. When it fails, it notifies you about missing
or invalid views/selectors definition.
## How to properly implement a page object?
@@ -95,10 +95,10 @@ end
### Defining Elements
-The `view` DSL method will correspond to the Rails view, partial, or Vue component that renders the elements.
+The `view` DSL method corresponds to the Rails view, partial, or Vue component that renders the elements.
The `element` DSL method in turn declares an element for which a corresponding
-`data-qa-selector=element_name_snaked` data attribute will need to be added to the view file.
+`data-qa-selector=element_name_snaked` data attribute must be added to the view file.
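+For example, a hedged sketch of the pairing (the view path and element name are illustrative):
+```ruby
+class Login < Page::Base
+  view 'app/views/devise/sessions/new.html.haml' do
+    element :login_field
+  end
+end
+# The view file would then carry the matching attribute, for example:
+#   = f.text_field :login, data: { qa_selector: 'login_field' }
+```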
You can also define a value (String or Regexp) to match to the actual view
code but **this is deprecated** in favor of the above method for two reasons:
@@ -257,7 +257,7 @@ These modules must:
1. Include/prepend other modules and define their `view`/`elements` in a `base.class_eval` block to ensure they're
defined in the class that prepends the module.
-These steps ensure the sanity selectors check will detect problems properly.
+These steps ensure the sanity selectors check detects problems properly.
For example, `qa/qa/ee/page/merge_request/show.rb` adds EE-specific methods to `qa/qa/page/merge_request/show.rb` (with
`QA::Page::MergeRequest::Show.prepend_if_ee('QA::EE::Page::MergeRequest::Show')`) and following is how it's implemented
diff --git a/doc/development/testing_guide/end_to_end/resources.md b/doc/development/testing_guide/end_to_end/resources.md
index d73bae331d5..b6aef123c64 100644
--- a/doc/development/testing_guide/end_to_end/resources.md
+++ b/doc/development/testing_guide/end_to_end/resources.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Resource class in GitLab QA
@@ -94,7 +94,7 @@ needs a group to be created in.
To define a resource attribute, you can use the `attribute` method with a
block using the other resource class to fabricate the resource.
-That will allow access to the other resource from your resource object's
+That allows access to the other resource from your resource object's
methods. You would usually use it in `#fabricate!`, `#api_get_path`,
`#api_post_path`, `#api_post_body`.
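+A minimal sketch of such an attribute (the class, attribute, and path names are illustrative):
+```ruby
+attribute :project do
+  Resource::Project.fabricate! do |project|
+    project.name = 'project-to-hold-shirts'
+  end
+end
+# The attribute is then available from the resource's own methods:
+def api_get_path
+  "/projects/#{project.id}/shirts/#{name}"
+end
+```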
@@ -144,7 +144,7 @@ end
```
**Note that all the attributes are lazily constructed. This means if you want
-a specific attribute to be fabricated first, you'll need to call the
+a specific attribute to be fabricated first, you must call the
attribute method first even if you're not using it.**
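+For instance, a hedged sketch of forcing fabrication order (names are illustrative):
+```ruby
+my_shirt = Resource::Shirt.fabricate! do |shirt|
+  shirt.name = 'example'
+end
+# Even if the value isn't used here, calling the attribute forces it to be
+# fabricated before anything that depends on it.
+my_shirt.project
+```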
#### Product data attributes
@@ -185,7 +185,7 @@ end
```
**Note again that all the attributes are lazily constructed. This means if
-you call `shirt.brand` after moving to the other page, it'll not properly
+you call `shirt.brand` after moving to the other page, it doesn't properly
retrieve the data because we're no longer on the expected page.**
Consider this:
@@ -201,7 +201,7 @@ shirt.project.visit!
shirt.brand # => FAIL!
```
-The above example will fail because now we're on the project page, trying to
+The above example fails because now we're on the project page, trying to
construct the brand data from the shirt page, however we moved to the project
page already. There are two ways to solve this, one is that we could try to
retrieve the brand before visiting the project again:
@@ -219,8 +219,8 @@ shirt.project.visit!
shirt.brand # => OK!
```
-The attribute will be stored in the instance therefore all the following calls
-will be fine, using the data previously constructed. If we think that this
+The attribute is stored in the instance, so all the following calls
+are fine, using the data previously constructed. If we think that this
might be too brittle, we could eagerly construct the data right before
ending fabrication:
@@ -249,12 +249,12 @@ module QA
end
```
-The `populate` method will iterate through its arguments and call each
+The `populate` method iterates through its arguments and calls each
attribute respectively. Here `populate(:brand)` has the same effect as
just `brand`. Using the populate method makes the intention clearer.
-With this, it will make sure we construct the data right after we create the
-shirt. The drawback is that this will always construct the data when the
+This ensures we construct the data right after we create the
+shirt. The drawback is that this always constructs the data when the
resource is fabricated even if we don't need to use the data.
Alternatively, we could just make sure we're on the right page before
@@ -290,7 +290,7 @@ module QA
end
```
-This will make sure it's on the shirt page before constructing brand, and
+This ensures it's on the shirt page before constructing brand, and
move back to the previous page to avoid breaking the state.
#### Define an attribute based on an API response
@@ -343,16 +343,16 @@ end
- resource instance variables have the highest precedence
- attributes from the API response take precedence over attributes from the
block (usually from Browser UI)
-- attributes without a value will raise a `QA::Resource::Base::NoValueError` error
+- attributes without a value raise a `QA::Resource::Base::NoValueError` error
## Creating resources in your tests
To create a resource in your tests, you can call the `.fabricate!` method on
the resource class.
-Note that if the resource class supports API fabrication, this will use this
+Note that if the resource class supports API fabrication, this uses API
fabrication by default.
-Here is an example that will use the API fabrication method under the hood
+Here is an example that uses the API fabrication method under the hood
since it's supported by the `Shirt` resource class:
```ruby
@@ -389,8 +389,7 @@ my_shirt = Resource::Shirt.fabricate_via_api! do |shirt|
end
```
-In this case, the result will be similar to calling
-`Resource::Shirt.fabricate!`.
+In this case, the result is similar to calling `Resource::Shirt.fabricate!`.
## Where to ask for help?
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index 8e99cf18ea0..6d6f7fbcf8d 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# RSpec metadata for end-to-end tests
@@ -14,16 +14,16 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| Tag | Description |
|-----|-------------|
| `:elasticsearch` | The test requires an Elasticsearch service. It is used by the [instance-level scenario](https://gitlab.com/gitlab-org/gitlab-qa#definitions) [`Test::Integration::Elasticsearch`](https://gitlab.com/gitlab-org/gitlab/-/blob/72b62b51bdf513e2936301cb6c7c91ec27c35b4d/qa/qa/ee/scenario/test/integration/elasticsearch.rb) to include only tests that require Elasticsearch. |
-| `:gitaly_cluster` | The test will run against a GitLab instance where repositories are stored on redundant Gitaly nodes behind a Praefect node. All nodes are [separate containers](../../../administration/gitaly/praefect.md#requirements-for-configuring-a-gitaly-cluster). Tests that use this tag have a longer setup time since there are three additional containers that need to be started. |
-| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) will provision the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run.
-| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test will also include provisioning of at least one Kubernetes cluster to test against. _This tag is often be paired with `:orchestrated`._ |
+| `:gitaly_cluster` | The test runs against a GitLab instance where repositories are stored on redundant Gitaly nodes behind a Praefect node. All nodes are [separate containers](../../../administration/gitaly/praefect.md#requirements-for-configuring-a-gitaly-cluster). Tests that use this tag have a longer setup time since there are three additional containers that need to be started. |
+| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) provisions the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run.
+| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test also includes provisioning of at least one Kubernetes cluster to test against. _This tag is often paired with `:orchestrated`._ |
| `:only` | The test is only to be run against specific environments or pipelines. See [Environment selection](environment_selection.md) for more information. |
-| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate Docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify GitLab's configuration (for example, Staging). |
-| `:quarantine` | The test has been [quarantined](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests), will run in a separate job that only includes quarantined tests, and is allowed to fail. The test will be skipped in its regular job so that if it fails it will not hold up the pipeline. Note that you can also [quarantine a test only when it runs against specific environment](environment_selection.md#quarantining-a-test-for-a-specific-environment). |
+| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate Docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify the GitLab configuration (for example, Staging). |
+| `:quarantine` | The test has been [quarantined](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests), runs in a separate job that only includes quarantined tests, and is allowed to fail. The test is skipped in its regular job so that if it fails it doesn't hold up the pipeline. Note that you can also [quarantine a test only when it runs against a specific environment](environment_selection.md#quarantining-a-test-for-a-specific-environment). |
| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/guidelines/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
| `:requires_admin` | The test requires an admin account. Tests with the tag are excluded when run against Canary and Production environments. |
-| `:runner` | The test depends on and will set up a GitLab Runner instance, typically to run a pipeline. |
-| `:skip_live_env` | The test will be excluded when run against live deployed environments such as Staging, Canary, and Production. |
+| `:runner` | The test depends on and sets up a GitLab Runner instance, typically to run a pipeline. |
+| `:skip_live_env` | The test is excluded when run against live deployed environments such as Staging, Canary, and Production. |
| `:testcase` | The link to the test case issue in the [Quality Testcases project](https://gitlab.com/gitlab-org/quality/testcases/). |
| `:mattermost` | The test requires a GitLab Mattermost service on the GitLab instance. |
| `:ldap_no_server` | The test requires a GitLab instance to be configured to use LDAP. To be used with the `:orchestrated` tag. It does not spin up an LDAP server at orchestration time. Instead, it creates the LDAP server at runtime. |
@@ -33,7 +33,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:smtp` | The test requires a GitLab instance to be configured to use an SMTP server. Tests SMTP notification email delivery from GitLab by using MailHog. |
| `:group_saml` | The test requires a GitLab instance that has SAML SSO enabled at the group level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:instance_saml` | The test requires a GitLab instance that has SAML SSO enabled at the instance level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
-| `:skip_signup_disabled` | The test uses UI to sign up a new user and will be skipped in any environment that does not allow new user registration via the UI. |
+| `:skip_signup_disabled` | The test uses the UI to sign up a new user and is skipped in any environment that does not allow new user registration via the UI. |
| `:smoke` | The test belongs to the test suite which verifies basic functionality of a GitLab instance.|
| `:github` | The test requires a GitHub personal access token. |
| `:repository_storage` | The test requires a GitLab instance to be configured to use multiple [repository storage paths](../../../administration/repository_storage_paths.md). Paired with the `:orchestrated` tag. |
diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
index df4b833526c..8a49c333f9f 100644
--- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
+++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Running tests that require special setup
@@ -10,7 +10,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
The [`jenkins_build_status_spec`](https://gitlab.com/gitlab-org/gitlab/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb) spins up a Jenkins instance in a Docker container based on an image stored in the [GitLab-QA container registry](https://gitlab.com/gitlab-org/gitlab-qa/container_registry).
The Docker image it uses is preconfigured with some base data and plugins.
-The test then configures the GitLab plugin in Jenkins with a URL of the GitLab instance that will be used
+The test then configures the GitLab plugin in Jenkins with a URL of the GitLab instance that is used
to run the tests. Unfortunately, the GitLab Jenkins plugin does not accept ports so `http://localhost:3000` would
not be accepted. Therefore, this requires us to run GitLab on port 80 or inside a Docker container.
@@ -30,7 +30,7 @@ To run the tests from the `/qa` directory:
CHROME_HEADLESS=false bin/qa Test::Instance::All http://localhost -- qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb
```
-The test will automatically spin up a Docker container for Jenkins and tear down once the test completes.
+The test automatically spins up a Docker container for Jenkins and tears it down once the test completes.
However, if you need to run Jenkins manually outside of the tests, use this command:
@@ -43,7 +43,7 @@ docker run \
registry.gitlab.com/gitlab-org/gitlab-qa/jenkins-gitlab:version1
```
-Jenkins will be available on `http://localhost:8080`.
+Jenkins is available on `http://localhost:8080`.
Admin username is `admin` and password is `password`.
@@ -59,19 +59,19 @@ the Docker Engine.
The tests tagged `:gitaly_ha` are orchestrated tests that can only be run against a set of Docker containers as configured and started by [the `Test::Integration::GitalyCluster` GitLab QA scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#testintegrationgitalycluster-ceeefull-image-address).
-As described in the documentation about the scenario noted above, the following command will run the tests:
+As described in the documentation about the scenario noted above, the following command runs the tests:
```shell
gitlab-qa Test::Integration::GitalyCluster EE
```
-However, that will remove the containers after it finishes running the tests. If you would like to do further testing, for example, if you would like to run a single test via a debugger, you can use [the `--no-tests` option](https://gitlab.com/gitlab-org/gitlab-qa#command-line-options) to make `gitlab-qa` skip running the tests, and to leave the containers running so that you can continue to use them.
+However, that removes the containers after it finishes running the tests. If you would like to do further testing, for example, if you would like to run a single test via a debugger, you can use [the `--no-tests` option](https://gitlab.com/gitlab-org/gitlab-qa#command-line-options) to make `gitlab-qa` skip running the tests, and to leave the containers running so that you can continue to use them.
```shell
gitlab-qa Test::Integration::GitalyCluster EE --no-tests
```
-When all the containers are running you will see the output of the `docker ps` command, showing on which ports the GitLab container can be accessed. For example:
+When all the containers are running, the output of the `docker ps` command shows which ports the GitLab container can be accessed on. For example:
```plaintext
CONTAINER ID ... PORTS NAMES
@@ -82,7 +82,7 @@ That shows that the GitLab instance running in the `gitlab-gitaly-ha` container
Another option is to use NGINX.
-In both cases you will need to configure your machine to translate `gitlab-gitlab-ha.test` into an appropriate IP address:
+In both cases you must configure your machine to translate `gitlab-gitaly-ha.test` into an appropriate IP address:
```shell
echo '127.0.0.1 gitlab-gitaly-ha.test' | sudo tee -a /etc/hosts
@@ -205,7 +205,7 @@ You can now search through the logs for *Job log*, which matches delimited secti
A Job log is a subsection within these logs, related to app deployment. We use two jobs: `build` and `production`.
You can find the root causes of deployment failures in these logs, which can compromise the entire test.
-If a `build` job fails, the `production` job won't run, and the test fails.
+If a `build` job fails, the `production` job doesn't run, and the test fails.
The long test setup does not take screenshots of failures, which is a known [issue](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/270).
However, if the spec fails (after a successful deployment) then you should be able to find screenshots which display the feature failure.
@@ -286,7 +286,7 @@ CHROME_HEADLESS=false bundle exec bin/qa QA::EE::Scenario::Test::Geo --primary-a
You can use [GitLab-QA Orchestrator](https://gitlab.com/gitlab-org/gitlab-qa) to orchestrate two GitLab containers and configure them as a Geo setup.
-Geo requires an EE license. To visit the Geo sites in your browser, you will need a reverse proxy server (for example, [NGINX](https://www.nginx.com/)).
+Geo requires an EE license. To visit the Geo sites in your browser, you need a reverse proxy server (for example, [NGINX](https://www.nginx.com/)).
1. Export your EE license
@@ -296,7 +296,7 @@ Geo requires an EE license. To visit the Geo sites in your browser, you will nee
1. (Optional) Pull the GitLab image
- This step is optional because pulling the Docker image is part of the [`Test::Integration::Geo` orchestrated scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/d8c5c40607c2be0eda58bbca1b9f534b00889a0b/lib/gitlab/qa/scenario/test/integration/geo.rb). However, it's easier to monitor the download progress if you pull the image first, and the scenario will skip this step after checking that the image is up to date.
+ This step is optional because pulling the Docker image is part of the [`Test::Integration::Geo` orchestrated scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/d8c5c40607c2be0eda58bbca1b9f534b00889a0b/lib/gitlab/qa/scenario/test/integration/geo.rb). However, it's easier to monitor the download progress if you pull the image first, and the scenario skips this step after checking that the image is up to date.
```shell
# For the most recent nightly image
@@ -309,7 +309,7 @@ Geo requires an EE license. To visit the Geo sites in your browser, you will nee
docker pull registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:examplesha123456789
```
-1. Run the [`Test::Integration::Geo` orchestrated scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/d8c5c40607c2be0eda58bbca1b9f534b00889a0b/lib/gitlab/qa/scenario/test/integration/geo.rb) with the `--no-teardown` option to build the GitLab containers, configure the Geo setup, and run Geo end-to-end tests. Running the tests after the Geo setup is complete is optional; the containers will keep running after you stop the tests.
+1. Run the [`Test::Integration::Geo` orchestrated scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/d8c5c40607c2be0eda58bbca1b9f534b00889a0b/lib/gitlab/qa/scenario/test/integration/geo.rb) with the `--no-teardown` option to build the GitLab containers, configure the Geo setup, and run Geo end-to-end tests. Running the tests after the Geo setup is complete is optional; the containers keep running after you stop the tests.
```shell
# Using the most recent nightly image
diff --git a/doc/development/testing_guide/end_to_end/style_guide.md b/doc/development/testing_guide/end_to_end/style_guide.md
index e6c96bca93e..ac4d26df794 100644
--- a/doc/development/testing_guide/end_to_end/style_guide.md
+++ b/doc/development/testing_guide/end_to_end/style_guide.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Style guide for writing end-to-end tests
@@ -74,7 +74,8 @@ We follow a simple formula roughly based on Hungarian notation.
- `_tab`
- `_menu_item`
-*Note: If none of the listed types are suitable, please open a merge request to add an appropriate type to the list.*
+NOTE:
+If none of the listed types are suitable, please open a merge request to add an appropriate type to the list.
### Examples
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index 47ed11d76a2..126d8725c21 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Flaky tests
@@ -26,14 +26,14 @@ it 'should succeed', quarantine: 'https://gitlab.com/gitlab-org/gitlab/-/issues/
end
```
-This means it will be skipped unless run with `--tag quarantine`:
+This means it is skipped unless run with `--tag quarantine`:
```shell
bin/rspec --tag quarantine
```
**Before putting a test in quarantine, you should make sure that a
-~"master:broken" issue exists for it so it won't stay in quarantine forever.**
+~"master:broken" issue exists for it so it doesn't stay in quarantine forever.**
Once a test is in quarantine, there are 3 choices:
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 28fe63f1fb4..d83d58d14dd 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -1,12 +1,12 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Frontend testing standards and style guidelines
-There are two types of test suites you'll encounter while developing frontend code
+There are two types of test suites you encounter while developing frontend code
at GitLab. We use Karma with Jasmine and Jest for JavaScript unit and integration testing,
and RSpec feature tests with Capybara for e2e (end-to-end) integration testing.
@@ -25,17 +25,19 @@ If you are looking for a guide on Vue component testing, you can jump right away
## Jest
-We have started to migrate frontend tests to the [Jest](https://jestjs.io) testing framework (see also the corresponding
-[epic](https://gitlab.com/groups/gitlab-org/-/epics/895)).
-
+We use Jest to write frontend unit and integration tests.
Jest tests can be found in `/spec/frontend` and `/ee/spec/frontend` in EE.
-Most examples have a Jest and Karma example. See the Karma examples only as explanation to what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the one you should follow.
-
## Karma test suite
-While GitLab is switching over to [Jest](https://jestjs.io) you'll still find Karma tests in our application. [Karma](http://karma-runner.github.io/) is a test runner which uses [Jasmine](https://jasmine.github.io/) as its test
-framework. Jest also uses Jasmine as foundation, that's why it's looking quite similar.
+While GitLab has switched over to [Jest](https://jestjs.io), Karma tests still exist in our
+application because some of our specs require a browser and can't be easily migrated to Jest.
+We intend to eventually migrate those specs to either Jest or RSpec and drop Karma. You can track this migration
+in the [related epic](https://gitlab.com/groups/gitlab-org/-/epics/4900).
+
+[Karma](http://karma-runner.github.io/) is a test runner which uses
+[Jasmine](https://jasmine.github.io/) as its test framework. Jest also uses Jasmine as its foundation,
+which is why the two look quite similar.
Karma tests live in `spec/javascripts/` and `/ee/spec/javascripts` in EE.
@@ -43,23 +45,10 @@ Karma tests live in `spec/javascripts/` and `/ee/spec/javascripts` in EE.
might have a corresponding `spec/javascripts/behaviors/autosize_spec.js` file.
Keep in mind that in a CI environment, these tests are run in a headless
-browser and you will not have access to certain APIs, such as
+browser and you don't have access to certain APIs, such as
[`Notification`](https://developer.mozilla.org/en-US/docs/Web/API/notification),
which have to be stubbed.
-### When should I use Jest over Karma?
-
-If you need to update an existing Karma test file (found in `spec/javascripts`), you do not
-need to migrate the whole spec to Jest. Simply updating the Karma spec to test your change
-is fine. It is probably more appropriate to migrate to Jest in a separate merge request.
-
-If you create a new test file, it needs to be created in Jest. This will
-help support our migration and we think you'll love using Jest.
-
-As always, please use discretion. Jest solves a lot of issues we experienced in Karma and
-provides a better developer experience, however there are potentially unexpected issues
-which could arise (especially with testing against browser specific features).
-
### Differences to Karma
- Jest runs in a Node.js environment, not in a browser. Support for running Jest tests in a browser [is planned](https://gitlab.com/gitlab-org/gitlab/-/issues/26982).
@@ -71,7 +60,7 @@ which could arise (especially with testing against browser specific features).
- No [context object](https://jasmine.github.io/tutorials/your_first_suite#section-The_%3Ccode%3Ethis%3C/code%3E_keyword) is passed to tests in Jest.
This means sharing `this.something` between `beforeEach()` and `it()` for example does not work.
Instead you should declare shared variables in the context that they are needed (via `const` / `let`).
-- The following will cause tests to fail in Jest:
+- The following cause tests to fail in Jest:
- Unmocked requests.
- Unhandled Promise rejections.
- Calls to `console.warn`, including warnings from libraries like Vue.
@@ -89,14 +78,14 @@ See also the issue for [support running Jest tests in browsers](https://gitlab.c
### Debugging Jest tests
-Running `yarn jest-debug` will run Jest in debug mode, allowing you to debug/inspect as described in the [Jest docs](https://jestjs.io/docs/en/troubleshooting#tests-are-failing-and-you-don-t-know-why).
+Running `yarn jest-debug` runs Jest in debug mode, allowing you to debug/inspect as described in the [Jest docs](https://jestjs.io/docs/en/troubleshooting#tests-are-failing-and-you-don-t-know-why).
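+For example, a sketch of debugging a single spec file (the path is illustrative):
+```shell
+yarn jest-debug spec/frontend/path/to/the_subject_spec.js
+```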
### Timeout error
The default timeout for Jest is set in
[`/spec/frontend/test_setup.js`](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/frontend/test_setup.js).
-If your test exceeds that time, it will fail.
+If your test exceeds that time, it fails.
If you cannot improve the performance of the tests, you can increase the timeout
for a specific test using
@@ -115,37 +104,6 @@ describe('Component', () => {
Remember that the performance of each test depends on the environment.
-### Timout error due to async components
-
-If your component is fetching some other components asynchroneously based on some conditions, it might happen so that your Jest suite for this component will become flaky timing out from time to time.
-
-```javascript
-// ide.vue
-export default {
- components: {
- 'error-message': () => import('./error_message.vue'),
- 'gl-button': () => import('@gitlab/ui/src/components/base/button/button.vue'),
- ...
-};
-```
-
-To address this issue, you can "help" Jest by stubbing the async components so that Jest would not need to fetch those asynchroneously at the run-time.
-
-```javascript
-// ide_spec.js
-import { GlButton } from '@gitlab/ui';
-import ErrorMessage from '~/ide/components/error_message.vue';
-...
-return shallowMount(ide, {
- ...
- stubs: {
- ErrorMessage,
- GlButton,
- ...
- },
-})
-```
-
## What and how to test
Before jumping into more gritty details about Jest-specific workflows like mocks and spies, we should briefly cover what to test with Jest.
@@ -227,15 +185,15 @@ For example, it's better to use the generated markup to trigger a button click a
## Common practices
-Following you'll find some general common practices you will find as part of our test suite. Should you stumble over something not following this guide, ideally fix it right away. πŸŽ‰
+These are some common practices used throughout our test suite. Should you stumble over something not following this guide, ideally fix it right away. πŸŽ‰
### How to query DOM elements
When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target
the element.
-Preferentially, this is done by targeting what the user actually sees using [DOM Testing Library](https://testing-library.com/docs/dom-testing-library/intro).
-When selecting by text it is best to use [`getByRole` or `findByRole`](https://testing-library.com/docs/dom-testing-library/api-queries#byrole)
+Preferentially, this is done by targeting what the user actually sees using [DOM Testing Library](https://testing-library.com/docs/dom-testing-library/intro/).
+When selecting by text it is best to use [`getByRole` or `findByRole`](https://testing-library.com/docs/dom-testing-library/api-queries/#byrole)
as these enforce accessibility best practices as well. The examples below demonstrate the order of preference.
When writing Vue component unit tests, it can be wise to query children by component, so that the unit test can focus on comprehensive value coverage
@@ -246,6 +204,7 @@ possible selectors include:
- A semantic attribute like `name` (also verifies that `name` was setup properly)
- A `data-testid` attribute ([recommended by maintainers of `@vue/test-utils`](https://github.com/vuejs/vue-test-utils/issues/1498#issuecomment-610133465))
+ optionally combined with [`findByTestId`](#extendedwrapper-and-findbytestid)
- a Vue `ref` (if using `@vue/test-utils`)
```javascript
@@ -262,7 +221,8 @@ it('exists', () => {
// Good (especially for unit tests)
wrapper.find(FooComponent);
wrapper.find('input[name=foo]');
- wrapper.find('[data-testid="foo"]');
+ wrapper.find('[data-testid="my-foo-id"]');
+ wrapper.findByTestId('my-foo-id'); // with the extendedWrapper utility – check below
wrapper.find({ ref: 'foo'});
// Bad
@@ -273,6 +233,8 @@ it('exists', () => {
});
```
+It is recommended to use `kebab-case` for the `data-testid` attribute.
+
It is not recommended that you add `.js-*` classes just for testing purposes. Only do this if there are no other feasible options available.
Do not use a `.qa-*` class or `data-qa-selector` attribute for any tests other than QA end-to-end testing.
@@ -389,7 +351,7 @@ it('tests a promise rejection', () => {
### Manipulating Time
-Sometimes we have to test time-sensitive code. For example, recurring events that run every X amount of seconds or similar. Here you'll find some strategies to deal with that:
+Sometimes we have to test time-sensitive code. For example, recurring events that run every X amount of seconds or similar. Here are some strategies to deal with that:
#### `setTimeout()` / `setInterval()` in application
@@ -435,7 +397,7 @@ it('does something', () => {
Sometimes a test needs to wait for something to happen in the application before it continues.
Avoid using [`setTimeout`](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout)
-because it makes the reason for waiting unclear and if used within Karma with a time larger than zero it will slow down our test suite.
+because it makes the reason for waiting unclear, and if used within Karma with a time larger than zero, it slows down our test suite.
Instead use one of the following approaches.
#### Promises and Ajax calls
@@ -599,7 +561,7 @@ Jest has [`toBe`](https://jestjs.io/docs/en/expect#tobevalue) and
As [`toBe`](https://jestjs.io/docs/en/expect#tobevalue) uses
[`Object.is`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is)
to compare values, it's faster (by default) than using `toEqual`.
-While the latter will eventually fallback to leverage [`Object.is`](https://github.com/facebook/jest/blob/master/packages/expect/src/jasmineUtils.ts#L91),
+While the latter eventually falls back to leverage [`Object.is`](https://github.com/facebook/jest/blob/master/packages/expect/src/jasmineUtils.ts#L91),
for primitive values, it should only be used when complex objects need a comparison.
Examples:
@@ -745,10 +707,10 @@ a [Jest mock for the package `monaco-editor`](https://gitlab.com/gitlab-org/gitl
If a manual mock is needed for a CE module, please place it in `spec/frontend/mocks/ce`.
-- Files in `spec/frontend/mocks/ce` will mock the corresponding CE module from `app/assets/javascripts`, mirroring the source module's path.
- - Example: `spec/frontend/mocks/ce/lib/utils/axios_utils` will mock the module `~/lib/utils/axios_utils`.
+- Files in `spec/frontend/mocks/ce` mock the corresponding CE module from `app/assets/javascripts`, mirroring the source module's path.
+ - Example: `spec/frontend/mocks/ce/lib/utils/axios_utils` mocks the module `~/lib/utils/axios_utils`.
- We don't support mocking EE modules yet.
-- If a mock is found for which a source module doesn't exist, the test suite will fail. 'Virtual' mocks, or mocks that don't have a 1-to-1 association with a source module, are not supported yet.
+- If a mock is found for which a source module doesn't exist, the test suite fails. 'Virtual' mocks, or mocks that don't have a 1-to-1 association with a source module, are not supported yet.
#### Manual mock examples
@@ -776,11 +738,10 @@ Please consult the [official Jest docs](https://jestjs.io/docs/en/jest-object#mo
For running the frontend tests, you need the following commands:
-- `rake frontend:fixtures` (re-)generates [fixtures](#frontend-test-fixtures).
-- `yarn test` executes the tests.
-- `yarn jest` executes only the Jest tests.
-
-As long as the fixtures don't change, `yarn test` is sufficient (and saves you some time).
+- `rake frontend:fixtures` (re-)generates [fixtures](#frontend-test-fixtures). Make sure that
+ fixtures are up-to-date before running tests that require them.
+- `yarn jest` runs Jest tests.
+- `yarn karma` runs Karma tests.
### Live testing and focused testing -- Jest
@@ -809,12 +770,12 @@ yarn jest term
Karma allows something similar, but it's way more costly.
-Running Karma with `yarn run karma-start` will compile the JavaScript
-assets and run a server at `http://localhost:9876/` where it will automatically
-run the tests on any browser which connects to it. You can enter that URL on
+Running Karma with `yarn run karma-start` compiles the JavaScript
+assets and runs a server at `http://localhost:9876/` where it automatically
+runs the tests on any browser which connects to it. You can enter that URL on
multiple browsers at once to have it run the tests on each in parallel.
-While Karma is running, any changes you make will instantly trigger a recompile
+While Karma is running, any changes you make instantly trigger a recompile
and retest of the **entire test suite**, so you can see instantly if you've broken
a test with your changes. You can use [Jasmine focused](https://jasmine.github.io/2.5/focused_specs.html) or
excluded tests (with `fdescribe` or `xdescribe`) to get Karma to run only the
@@ -866,8 +827,8 @@ You can find generated fixtures are in `tmp/tests/frontend/fixtures-ee`.
#### Creating new fixtures
For each fixture, you can find the content of the `response` variable in the output file.
-For example, test named `"merge_requests/diff_discussion.json"` in `spec/frontend/fixtures/merge_requests.rb`
-will produce output file `tmp/tests/frontend/fixtures-ee/merge_requests/diff_discussion.json`.
+For example, a test named `"merge_requests/diff_discussion.json"` in `spec/frontend/fixtures/merge_requests.rb`
+produces an output file `tmp/tests/frontend/fixtures-ee/merge_requests/diff_discussion.json`.
The `response` variable gets automatically set if the test is marked as `type: :request` or `type: :controller`.
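+A hedged sketch of what such a fixture spec can look like (the controller and fixture helpers are omitted; names are illustrative):
+```ruby
+RSpec.describe Projects::MergeRequestsController, '(JavaScript fixtures)', type: :controller do
+  render_views
+  it 'merge_requests/diff_discussion.json' do
+    # Perform the request here; because the example is `type: :controller`,
+    # the rendered `response` becomes the content of
+    # tmp/tests/frontend/fixtures-ee/merge_requests/diff_discussion.json.
+  end
+end
+```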
When creating a new fixture, it often makes sense to take a look at the corresponding tests for the
@@ -977,12 +938,12 @@ describe.each`
### RSpec errors due to JavaScript
-By default RSpec unit tests will not run JavaScript in the headless browser
-and will simply rely on inspecting the HTML generated by rails.
+By default RSpec unit tests don't run JavaScript in the headless browser
+and rely on inspecting the HTML generated by Rails.
If an integration test depends on JavaScript to run correctly, you need to make
sure the spec is configured to enable JavaScript when the tests are run. If you
-don't do this you'll see vague error messages from the spec runner.
+don't do this, the spec runner displays vague error messages.
To enable a JavaScript driver in an `rspec` test, add `:js` to the
individual spec or the context block containing multiple specs that need
@@ -1004,6 +965,45 @@ describe "Admin::AbuseReports", :js do
end
```
+### Jest test timeout due to async imports
+
+If a module asynchronously imports some other modules at runtime, these modules must be
+transpiled by the Jest loaders at runtime. It's possible that this can cause [Jest to timeout](https://gitlab.com/gitlab-org/gitlab/-/issues/280809).
+
+If you run into this issue, consider eager importing the module so that Jest compiles
+and caches it at compile-time, fixing the runtime timeout.
+
+Consider the following example:
+
+```javascript
+// the_subject.js
+
+export default {
+ components: {
+ // Async import Thing because it is large and isn't always needed.
+ Thing: () => import(/* webpackChunkName: 'thing' */ './path/to/thing.vue'),
+ }
+};
+```
+
+Jest doesn't automatically transpile the `thing.vue` module, and depending on its size, could
+cause Jest to time out. We can force Jest to transpile and cache this module by eagerly importing
+it like so:
+
+```javascript
+// the_subject_spec.js
+
+import Subject from '~/feature/the_subject.vue';
+
+// Force Jest to transpile and cache
+// eslint-disable-next-line import/order, no-unused-vars
+import _Thing from '~/feature/path/to/thing.vue';
+```
+
+**PLEASE NOTE:** Do not simply disregard test timeouts. This could be a sign that there's
+actually a production problem. Use this opportunity to analyze the production webpack bundles and
+chunks and confirm that there is not a production issue with the async imports.
+
## Overview of Frontend Testing Levels
Main information on frontend testing levels can be found in the [Testing Levels page](testing_levels.md).
@@ -1016,7 +1016,7 @@ Tests relevant for frontend development can be found at the following places:
RSpec runs complete [feature tests](testing_levels.md#frontend-feature-tests), while the Jest and Karma directories contain [frontend unit tests](testing_levels.md#frontend-unit-tests), [frontend component tests](testing_levels.md#frontend-component-tests), and [frontend integration tests](testing_levels.md#frontend-integration-tests).
-All tests in `spec/javascripts/` will eventually be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52483)).
+All tests in `spec/javascripts/` are intended to be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/52483)).
Before May 2018, `features/` also contained feature tests run by Spinach. These tests were removed from the codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/23036)).
@@ -1045,7 +1045,7 @@ testAction(
);
```
-Check an example in [`spec/javascripts/ide/stores/actions_spec.jsspec/javascripts/ide/stores/actions_spec.js`](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/javascripts/ide/stores/actions_spec.js).
+Check an example in [`spec/frontend/ide/stores/actions_spec.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/fdc7197609dfa7caeb1d962042a26248e49f27da/spec/frontend/ide/stores/actions_spec.js#L392).
### Wait until Axios requests finish
@@ -1057,6 +1057,29 @@ These are very useful if you don't have a handle to the request's Promise, for e
Both functions run `callback` on the next tick after the requests finish (using `setImmediate()`), to allow any `.then()` or `.catch()` handlers to run.
+### `extendedWrapper` and `findByTestId`
+
+Using `data-testid` is one of the [recommended ways to query DOM elements](#how-to-query-dom-elements).
+You can use the `extendedWrapper` utility on the `wrapper` returned by `shallowMount`/`mount`.
+By doing so, the `wrapper` provides you with the ability to perform a `findByTestId`,
+which is a shortcut for the more verbose `wrapper.find('[data-testid="my-test-id"]')`.
+
+```javascript
+import { extendedWrapper } from 'jest/helpers/vue_test_utils_helper';
+
+describe('FooComponent', () => {
+ const wrapper = extendedWrapper(shallowMount({
+ template: `<div data-testid="my-test-id"></div>`,
+ }));
+
+ it('exists', () => {
+ expect(wrapper.findByTestId('my-test-id').exists()).toBe(true);
+ });
+});
+```
+
+Check an example in [`spec/frontend/alert_management/components/alert_details_spec.js`](https://gitlab.com/gitlab-org/gitlab/-/blob/ac1c9fa4c5b3b45f9566147b1c88fd1339cd7c25/spec/frontend/alert_management/components/alert_details_spec.js#L32).
+
## Testing with older browsers
Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or BrowserStack using the following steps:
@@ -1065,7 +1088,7 @@ Some regressions only affect a specific browser version. We can install and test
[BrowserStack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers.
You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access.
-Sign in to BrowserStack with the credentials saved in the **Engineering** vault of GitLab's
+Sign in to BrowserStack with the credentials saved in the **Engineering** vault of the GitLab
[shared 1Password account](https://about.gitlab.com/handbook/security/#1password-guide).
### Firefox
@@ -1076,7 +1099,7 @@ You can download any older version of Firefox from the releases FTP server, <htt
1. From the website, select a version, in this case `50.0.1`.
1. Go to the mac folder.
-1. Select your preferred language, you will find the DMG package inside, download it.
+1. Select your preferred language. The DMG package is inside. Download it.
1. Drag and drop the application to any other folder but the `Applications` folder.
1. Rename the application to something like `Firefox_Old`.
1. Move the application to the `Applications` folder.
diff --git a/doc/development/testing_guide/index.md b/doc/development/testing_guide/index.md
index 840c8c9206c..68326879dd0 100644
--- a/doc/development/testing_guide/index.md
+++ b/doc/development/testing_guide/index.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Testing standards and style guidelines
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index 5dcc7e7091e..a5294be40a9 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Review Apps
@@ -102,8 +102,8 @@ subgraph "CNG-mirror pipeline"
- The manual `review-stop` can be used to
stop a Review App manually, and is also started by GitLab once a merge
request's branch is deleted after being merged.
-- The Kubernetes cluster is connected to the `gitlab` projects using
- [GitLab's Kubernetes integration](../../user/project/clusters/index.md). This basically
+- The Kubernetes cluster is connected to the `gitlab` projects using the
+ [GitLab Kubernetes integration](../../user/project/clusters/index.md). This basically
allows to have a link to the Review App directly from the merge request widget.
### Auto-stopping of Review Apps
@@ -164,7 +164,7 @@ used by the `review-deploy` and `review-stop` jobs.
You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new)
for the `gcp-review-apps-dev` GCP group and role.
-This will grant you the following permissions for:
+This grants you the following permissions for:
- [Retrieving pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
- [Running a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.pods.exec`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles).
@@ -317,7 +317,7 @@ kubectl get cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-' | grep
### Using K9s
-[K9s](https://github.com/derailed/k9s) is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/base-config.yaml). Kubernetes will schedule pods to nodes based on resource requests and allow for CPU usage up to the limits.
+[K9s](https://github.com/derailed/k9s) is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/base-config.yaml). Kubernetes schedules pods to nodes based on resource requests and allows CPU usage up to the limits.
- In K9s you can sort or add filters by typing the `/` character
- `-lrelease=<review-app-slug>` - filters down to all pods for a release. This aids in determining what is having issues in a single deployment
@@ -395,8 +395,8 @@ helm ls -d | grep "Jun 4" | cut -f1 | xargs helm delete --purge
#### Mitigation steps taken to avoid this problem in the future
-We've created a new node pool with smaller machines so that it's less likely
-that a machine will hit the "too many mount points" problem in the future.
+We've created a new node pool with smaller machines to reduce the risk
+that a machine hits the "too many mount points" problem in the future.
## Frequently Asked Questions
diff --git a/doc/development/testing_guide/smoke.md b/doc/development/testing_guide/smoke.md
index 76484dd193b..4fd2729a4d3 100644
--- a/doc/development/testing_guide/smoke.md
+++ b/doc/development/testing_guide/smoke.md
@@ -1,17 +1,17 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Smoke Tests
It is imperative in any testing suite that we have Smoke Tests. In short, smoke
-tests will run quick sanity end-to-end functional tests from GitLab QA and are
+tests run quick end-to-end functional tests from GitLab QA and are
designed to run against the specified environment to ensure that basic
functionality is working.
-Currently, our suite consists of this basic functionality coverage:
+Our suite consists of this basic functionality coverage:
- User standard authentication
- SSH Key creation and addition to a user
diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md
index d9e7edfa0c8..14d4ee82f75 100644
--- a/doc/development/testing_guide/testing_levels.md
+++ b/doc/development/testing_guide/testing_levels.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Testing levels
@@ -129,14 +129,14 @@ graph RL
- **All server requests**:
When running frontend unit tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- **Asynchronous background operations**:
- Background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
+ Background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
#### What *not* to mock in unit tests
- **Non-exported functions or classes**:
- Everything that is not exported can be considered private to the module, and will be implicitly tested through the exported classes and functions.
+ Everything that is not exported can be considered private to the module, and is implicitly tested through the exported classes and functions.
- **Methods of the class under test**:
- By mocking methods of the class under test, the mocks will be tested and not the real methods.
+ By mocking methods of the class under test, the mocks are tested and not the real methods.
- **Utility functions (pure functions, or those that only modify parameters)**:
If a function has no side effects because it has no state, it is safe to not mock it in tests.
- **Full HTML pages**:
@@ -206,7 +206,7 @@ graph RL
- **All server requests**:
Similar to unit tests, when running component tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- **Asynchronous background operations**:
- Similar to unit tests, background operations cannot be stopped or waited on. This means they will continue running in the following tests and cause side effects.
+ Similar to unit tests, background operations cannot be stopped or waited on. This means they continue running in the following tests and cause side effects.
- **Child components**:
Every component is tested individually, so child components are mocked.
See also [`shallowMount()`](https://vue-test-utils.vuejs.org/api/#shallowmount)
@@ -214,7 +214,7 @@ graph RL
#### What *not* to mock in component tests
- **Methods or computed properties of the component under test**:
- By mocking part of the component under test, the mocks will be tested and not the real component.
+ By mocking part of the component under test, the mocks are tested and not the real component.
- **Functions and classes independent from Vue**:
All plain JavaScript code is already covered by unit tests and does not need to be mocked in component tests.
@@ -295,7 +295,7 @@ graph RL
Similar to unit and component tests, when running integration tests, the backend may not be reachable, so all outgoing requests must be mocked.
- **Asynchronous background operations that are not perceivable on the page**:
Background operations that affect the page must be tested on this level.
- All other background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects.
+ All other background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
#### What *not* to mock in integration tests
@@ -360,7 +360,7 @@ possible).
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
-| `spec/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) | If your test has the `:js` metadata, the browser driver will be [Poltergeist](https://github.com/teamcapybara/capybara#poltergeist), otherwise it's using [RackTest](https://github.com/teamcapybara/capybara#racktest). |
+| `spec/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) | If your test has the `:js` metadata, the browser driver is [Poltergeist](https://github.com/teamcapybara/capybara#poltergeist), otherwise it uses [RackTest](https://github.com/teamcapybara/capybara#racktest). |
### Frontend feature tests
@@ -453,13 +453,13 @@ should take care of not introducing too many (slow and duplicated) tests.
The reasons why we should follow these best practices are as follows:
-- System tests are slow to run since they spin up the entire application stack
+- System tests are slow to run because they spin up the entire application stack
in a headless browser, and even slower when they integrate a JS driver
- When system tests run with a JavaScript driver, the tests are run in a
different thread than the application. This means it does not share a
- database connection and your test will have to commit the transactions in
+ database connection and your test must commit the transactions in
order for the running application to see the data (and vice-versa). In that
- case we need to truncate the database after each spec instead of simply
+ case we need to truncate the database after each spec instead of
rolling back a transaction (the faster strategy that's in use for other kinds
of tests). This is slower than transactions, however, so we want to use
truncation only when necessary.
@@ -488,7 +488,7 @@ Every new feature should come with a [test plan](https://gitlab.com/gitlab-org/g
| Tests path | Testing engine | Notes |
| ---------- | -------------- | ----- |
-| `qa/qa/specs/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) + Custom QA framework | Tests should be placed under their corresponding [Product category](https://about.gitlab.com/handbook/product/product-categories/) |
+| `qa/qa/specs/features/` | [Capybara](https://github.com/teamcapybara/capybara) + [RSpec](https://github.com/rspec/rspec-rails#feature-specs) + Custom QA framework | Tests should be placed under their corresponding [Product category](https://about.gitlab.com/handbook/product/categories/) |
> See [end-to-end tests](end_to_end/index.md) for more information.
diff --git a/doc/development/testing_guide/testing_migrations_guide.md b/doc/development/testing_guide/testing_migrations_guide.md
index b944c7ed406..31054d0ffb2 100644
--- a/doc/development/testing_guide/testing_migrations_guide.md
+++ b/doc/development/testing_guide/testing_migrations_guide.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: reference
---
@@ -24,15 +24,15 @@ Adding a `:migration` tag to a test signature enables some custom RSpec
[`spec/support/migration.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/f81fa6ab1dd788b70ef44b85aaba1f31ffafae7d/spec/support/migration.rb)
to run.
-A `before` hook will revert all migrations to the point that a migration
+A `before` hook reverts all migrations to the point that a migration
under test is not yet migrated.
-In other words, our custom RSpec hooks will find a previous migration, and
+In other words, our custom RSpec hooks find a previous migration, and
migrate the database **down** to the previous migration version.
With this approach you can test a migration against a database schema.
-An `after` hook will migrate the database **up** and reinstitute the latest
+An `after` hook migrates the database **up** and reinstitutes the latest
schema version, so that the process does not affect subsequent specs and
ensures proper isolation.
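As a rough sketch (the migration class name below is hypothetical), a spec opts into these hooks by tagging its top-level `describe` with `:migration`:

```ruby
# Hypothetical migration class, for illustration only.
RSpec.describe BackfillWidgetFlags, :migration do
  # The custom :migration hooks migrate the database down to just before
  # this migration in a `before` hook, and back up in an `after` hook,
  # so each example runs against the pre-migration schema.
  it 'runs against the pre-migration schema' do
    # assertions about the data before and after `migrate!` go here
  end
end
```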
@@ -40,7 +40,7 @@ ensures proper isolation.
To test an `ActiveRecord::Migration` class (i.e., a
regular migration `db/migrate` or a post-migration `db/post_migrate`), you
-will need to load the migration file by using the `require_migration!` helper
+must load the migration file by using the `require_migration!` helper
method because it is not autoloaded by Rails.
Example:
@@ -57,15 +57,15 @@ RSpec.describe ...
#### `require_migration!`
-Since the migration files are not autoloaded by Rails, you will need to manually
+Since the migration files are not autoloaded by Rails, you must manually
load the migration file. To do so, you can use the `require_migration!` helper method
-which can automatically load the correct migration file based on the spec file name.
+which can automatically load the correct migration file based on the spec filename.
For example, if your spec file is named as `populate_foo_column_spec.rb` then the
-helper method will try to load `${schema_version}_populate_foo_column.rb` migration file.
+helper method tries to load the `${schema_version}_populate_foo_column.rb` migration file.
In case there is no pattern between your spec file and the actual migration file,
-you can provide the migration file name without the schema version, like so:
+you can provide the migration filename without the schema version, like so:
```ruby
require_migration!('populate_foo_column')
@@ -75,8 +75,9 @@ require_migration!('populate_foo_column')
Use the `table` helper to create a temporary `ActiveRecord::Base`-derived model
for a table. [FactoryBot](best_practices.md#factories)
-**should not** be used to create data for migration specs. For example, to
-create a record in the `projects` table:
+**should not** be used to create data for migration specs because it relies on
+application code which can change after the migration has run, causing the test
+to fail. For example, to create a record in the `projects` table:
```ruby
project = table(:projects).create!(id: 1, name: 'gitlab1', path: 'gitlab1')
@@ -84,8 +85,8 @@ project = table(:projects).create!(id: 1, name: 'gitlab1', path: 'gitlab1')
#### `migrate!`
-Use the `migrate!` helper to run the migration that is under test. It will not only
-run the migration, but will also bump the schema version in the `schema_migrations`
+Use the `migrate!` helper to run the migration that is under test. It
+runs the migration and bumps the schema version in the `schema_migrations`
table. It is necessary because in the `after` hook we trigger the rest of
the migrations, and we need to know where to start. Example:
@@ -102,7 +103,7 @@ end
#### `reversible_migration`
Use the `reversible_migration` helper to test migrations with either a
-`change` or both `up` and `down` hooks. This will test that the state of
+`change` or both `up` and `down` hooks. This tests that the state of
the application and its data after the migration is reversed is the
same as it was before the migration ran in the first place. The helper:
@@ -183,7 +184,7 @@ end
## Testing a non-`ActiveRecord::Migration` class
To test a non-`ActiveRecord::Migration` test (a background migration),
-you will need to manually provide a required schema version. Please add a
+you must manually provide a required schema version. Please add a
`schema` tag to a context that you want to switch the database schema within.
If not set, `schema` defaults to `:latest`.
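As a hedged sketch (the class name, schema version, table, and arguments below are hypothetical), the `schema` tag is attached to the `describe` or `context` block like this:

```ruby
# Hypothetical background migration spec; the class, schema version, and
# arguments to #perform are illustrative only.
RSpec.describe Gitlab::BackgroundMigration::BackfillWidgetFlags, schema: 20201201000000 do
  let(:widgets) { table(:widgets) }

  it 'backfills the flag' do
    widget = widgets.create!(name: 'w1')

    described_class.new.perform(widget.id, widget.id)

    expect(widget.reload.flag).to eq(true)
  end
end
```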
diff --git a/doc/development/testing_guide/testing_rake_tasks.md b/doc/development/testing_guide/testing_rake_tasks.md
index 384739d1adb..dc754721e24 100644
--- a/doc/development/testing_guide/testing_rake_tasks.md
+++ b/doc/development/testing_guide/testing_rake_tasks.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Testing Rake tasks
@@ -11,7 +11,7 @@ in lieu of the standard Spec helper. Instead of `require 'spec_helper'`, use
`require 'rake_helper'`. The helper includes `spec_helper` for you, and configures
a few other things to make testing Rake tasks easier.
-At a minimum, requiring the Rake helper will redirect `stdout`, include the
+At a minimum, requiring the Rake helper redirects `stdout`, includes the
runtime task helpers, and includes the `RakeHelpers` Spec support module.
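For illustration only (the task name and path are hypothetical), a Rake task spec might look like this:

```ruby
# frozen_string_literal: true

require 'rake_helper'

RSpec.describe 'gitlab:example rake task' do
  before do
    # Load the Rake task file under test; the path is illustrative.
    Rake.application.rake_require 'tasks/gitlab/example'
  end

  it 'runs without raising' do
    expect { run_rake_task('gitlab:example') }.not_to raise_error
  end
end
```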
The `RakeHelpers` module exposes a `run_rake_task(<task>)` method to make
diff --git a/doc/development/understanding_explain_plans.md b/doc/development/understanding_explain_plans.md
index 896b34a4ab4..3f596e89e29 100644
--- a/doc/development/understanding_explain_plans.md
+++ b/doc/development/understanding_explain_plans.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Understanding EXPLAIN plans
diff --git a/doc/development/uploads.md b/doc/development/uploads.md
index 0307ab76cd1..7ffa9014240 100644
--- a/doc/development/uploads.md
+++ b/doc/development/uploads.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Uploads development documentation
@@ -46,7 +46,7 @@ We have three challenges here: performance, availability, and scalability.
### Performance
-Rails process are expensive in terms of both CPU and memory. Ruby [global interpreter lock](https://en.wikipedia.org/wiki/Global_interpreter_lock) adds to cost too because the Ruby process will spend time on I/O operations on step 3 causing incoming requests to pile up.
+Rails processes are expensive in terms of both CPU and memory. The Ruby [global interpreter lock](https://en.wikipedia.org/wiki/Global_interpreter_lock) adds to the cost too, because the Ruby process spends time on I/O operations during step 3, causing incoming requests to pile up.
In order to improve this, [disk buffered upload](#disk-buffered-upload) was implemented. With this, Rails no longer deals with writing uploaded files to disk.
@@ -88,7 +88,7 @@ To address this problem an HA object storage can be used and it's supported by [
Scaling NFS is outside of our support scope, and NFS is not a part of cloud native installations.
-All features that require Sidekiq and do not use direct upload won't work without NFS. In Kubernetes, machine boundaries translate to PODs, and in this case the uploaded file will be written into the POD private disk. Since Sidekiq POD cannot reach into other pods, the operation will fail to read it.
+All features that require Sidekiq and do not use direct upload do not work without NFS. In Kubernetes, machine boundaries translate to pods, and in this case the uploaded file is written to the pod's private disk. Since the Sidekiq pod cannot reach into other pods, the operation fails to read it.
## How to select the proper level of acceleration?
@@ -96,7 +96,7 @@ Selecting the proper acceleration is a tradeoff between speed of development and
We can identify three major use-cases for an upload:
-1. **storage:** if we are uploading for storing a file (i.e. artifacts, packages, discussion attachments). In this case [direct upload](#direct-upload) is the proper level as it's the less resource-intensive operation. Additional information can be found on [File Storage in GitLab](file_storage.md).
+1. **storage:** if we are uploading for storing a file (like artifacts, packages, or discussion attachments). In this case [direct upload](#direct-upload) is the proper level as it's the least resource-intensive operation. Additional information can be found in [File Storage in GitLab](file_storage.md).
1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
1. **Sidekiq/asynchronous processing:** Asynchronous processing must implement [direct upload](#direct-upload), the reason being that it's the only way to support Cloud Native deployments without a shared NFS.
@@ -120,7 +120,7 @@ We have three kinds of file encoding in our uploads:
1. <i class="fa fa-check-circle"></i> **multipart**: `multipart/form-data` is the most common; a file is encoded as a part of a multipart encoded request.
1. <i class="fa fa-check-circle"></i> **body**: some APIs upload files as the whole request body.
-1. <i class="fa fa-times-circle"></i> **JSON**: some JSON API uploads files as base64 encoded strings. This will require a change to GitLab Workhorse, which [is planned](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/226).
+1. <i class="fa fa-times-circle"></i> **JSON**: some JSON APIs upload files as base64-encoded strings. This requires a change to GitLab Workhorse, which [is planned](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/226).
## Uploading technologies
@@ -166,7 +166,7 @@ is replaced with the path to the corresponding file before it is forwarded to
Rails.
To prevent abuse of this feature, Workhorse signs the modified request with a
-special header, stating which entries it modified. Rails will ignore any
+special header, stating which entries it modified. Rails ignores any
unsigned path entries.
```mermaid
@@ -220,8 +220,8 @@ In this setup, an extra Rails route must be implemented in order to handle autho
and [its routes](https://gitlab.com/gitlab-org/gitlab/blob/cc723071ad337573e0360a879cbf99bc4fb7adb9/config/routes/git_http.rb#L31-32).
- [API endpoints for uploading packages](packages.md#file-uploads).
-This will fallback to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../administration/uploads.md#object-storage-settings).
-The answer to the `/authorize` call will only contain a file system path.
+This falls back to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../administration/uploads.md#object-storage-settings).
+The response to the `/authorize` call contains only a file system path.
```mermaid
sequenceDiagram
@@ -272,7 +272,7 @@ sequenceDiagram
## How to add a new upload route
-In this section, we'll describe how to add a new upload route [accelerated](#uploading-technologies) by Workhorse for [body and multipart](#upload-encodings) encoded uploads.
+In this section, we describe how to add a new upload route [accelerated](#uploading-technologies) by Workhorse for [body and multipart](#upload-encodings) encoded uploads.
Upload routes belong to one of these categories:
@@ -280,7 +280,7 @@ Uploads routes belong to one of these categories:
1. Grape API: uploads handled by a Grape API endpoint.
1. GraphQL API: uploads handled by a GraphQL resolve function.
-CAUTION: **Warning:**
+WARNING:
GraphQL uploads do not support [direct upload](#direct-upload) yet. Depending on the use case, the feature may not work on installations without NFS (like GitLab.com or Kubernetes installations). Uploading to object storage inside the GraphQL resolve function may result in timeout errors. For more details please follow [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819).
### Update Workhorse for the new route
@@ -310,7 +310,7 @@ few things to do:
1. Generally speaking, it's a good idea to check if the instance is from the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) class. For example, see how we checked
[that the parameter is indeed an `UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/commit/ea30fe8a71bf16ba07f1050ab4820607b5658719#51c0cc7a17b7f12c32bc41cfab3649ff2739b0eb_79_77).
-CAUTION: **Caution:**
+WARNING:
**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
instance using `UploadedFile#from_params`! This method can be unsafe to use depending on the `params`
passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
@@ -339,7 +339,7 @@ use `requires :file, type: ::API::Validations::Types::WorkhorseFile`.
- The remaining code of the processing. This is where the code must be reading the parameter (for
our example, it would be `params[:file]`).
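As a hedged, fragmentary sketch of how these pieces fit together inside a hypothetical Grape endpoint (the route, parameter handling, and service object are illustrative only):

```ruby
# Fragment of a hypothetical Grape API class; names are illustrative.
params do
  # Workhorse rewrites this parameter into an UploadedFile instance.
  requires :file, type: ::API::Validations::Types::WorkhorseFile
end
post 'example_uploads' do
  upload = params[:file] # already an UploadedFile; do not rebuild it
                         # with UploadedFile#from_params

  # Hand the UploadedFile to an uploader or service object as-is.
  ::ExampleUploadService.new(upload).execute # hypothetical service
end
```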
-CAUTION: **Caution:**
+WARNING:
**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
object using `UploadedFile#from_params`! This method can be unsafe to use depending on the `params`
passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb)
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index aca14507fcd..2d347df0559 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# GitLab utilities
@@ -100,7 +100,7 @@ Refer to [`override.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gi
end
```
- Note that the check will only happen when either:
+ Note that the check only happens when either:
- The overriding method is defined in a class, or:
- The overriding method is defined in a module, and it's prepended to
diff --git a/doc/development/ux_guide/users.md b/doc/development/ux_guide/users.md
index 3aecbbffd73..a01aa4bcf35 100644
--- a/doc/development/ux_guide/users.md
+++ b/doc/development/ux_guide/users.md
@@ -1,5 +1,8 @@
---
-redirect_to: 'https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/'
+redirect_to: 'https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/'
---
-This document was moved to [another location](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/).
+This document was moved to [another location](https://about.gitlab.com/handbook/marketing/strategic-marketing/roles-personas/).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/development/value_stream_analytics.md b/doc/development/value_stream_analytics.md
index 4560a115562..b3692bd1d2c 100644
--- a/doc/development/value_stream_analytics.md
+++ b/doc/development/value_stream_analytics.md
@@ -1,7 +1,7 @@
---
stage: Manage
group: Optimize
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Value Stream Analytics development guide
@@ -32,7 +32,7 @@ These events play a key role in the duration calculation.
Formula: `duration = end_event_time - start_event_time`
-To make the duration calculation flexible, each `Event` is implemented as a separate class. They're responsible for defining a timestamp expression that will be used in the calculation query.
+To make the duration calculation flexible, each `Event` is implemented as a separate class. They're responsible for defining a timestamp expression that is used in the calculation query.
#### Implementing an `Event` class
@@ -41,12 +41,12 @@ There are a few methods that are required to be implemented, the `StageEvent` ba
- `object_type`
- `timestamp_projection`
-The `object_type` method defines which domain object will be queried for the calculation. Currently two models are allowed:
+The `object_type` method defines which domain object is queried for the calculation. Currently two models are allowed:
- `Issue`
- `MergeRequest`
-For the duration calculation the `timestamp_projection` method will be used.
+For the duration calculation the `timestamp_projection` method is used.
```ruby
def timestamp_projection
@@ -92,7 +92,7 @@ Some start/end event pairs are not "compatible" with each other. For example:
The `StageEvents` module describes the allowed `start_event` and `end_event` pairings (`PAIRING_RULES` constant). If a new event is added, it needs to be registered in this module.
To add a new event:​
-1. Add an entry in `ENUM_MAPPING` with a unique number, it'll be used in the `Stage` model as `enum`.
+1. Add an entry in `ENUM_MAPPING` with a unique number, which is used in the `Stage` model as `enum`.
1. Define which events are compatible with the event in the `PAIRING_RULES` hash.
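A very rough, hypothetical sketch of such a registration (the event classes, number, and pairing below are invented; check the `StageEvents` module for the real constants and structure):

```ruby
# Hypothetical entries, shown only to illustrate the shape of the mappings.
ENUM_MAPPING = {
  # ...existing events...
  StageEvents::MyNewStartEvent => 42 # unique number, used as the `Stage` enum value
}.freeze

PAIRING_RULES = {
  # ...existing pairings...
  StageEvents::MyNewStartEvent => [
    StageEvents::MyNewEndEvent # end events that may be paired with the new start event
  ]
}.freeze
```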
Supported start/end event pairings:
@@ -185,19 +185,19 @@ Currently supported parents:
1. User navigates to the value stream analytics page.
1. User selects a group.
1. Backend loads the defined stages for the selected group.
-1. Additions and modifications to the stages will be persisted within the selected group only.
+1. Additions and modifications to the stages are persisted within the selected group only.
### Default stages
The [original implementation](https://gitlab.com/gitlab-org/gitlab/-/issues/847) of value stream analytics defined 7 stages. These stages are always available for each parent; however, altering these stages is not possible.
-To make things efficient and reduce the number of records created, the default stages are expressed as in-memory objects (not persisted). When the user creates a custom stage for the first time, all the stages will be persisted. This behavior is implemented in the value stream analytics service objects.
+To make things efficient and reduce the number of records created, the default stages are expressed as in-memory objects (not persisted). When the user creates a custom stage for the first time, all the stages are persisted. This behavior is implemented in the value stream analytics service objects.
The reason for this was that we'd like to add the ability to hide and order stages later on.
## Data Collector
-`DataCollector` is the central point where the data will be queried from the database. The class always operates on a single stage and consists of the following components:
+`DataCollector` is the central point where the data is queried from the database. The class always operates on a single stage and consists of the following components:
- `BaseQueryBuilder`:
- Responsible for composing the initial query.
@@ -238,7 +238,7 @@ SELECT (START_EVENT_TIME-END_EVENT_TIME) as duration, END_EVENT.timestamp
## High-level overview
- Rails Controller (`Analytics::CycleAnalytics` module): Value stream analytics exposes its data via JSON endpoints, implemented within the `analytics` workspace. Configuring the stages is also implemented as JSON endpoints (CRUD).
-- Services (`Analytics::CycleAnalytics` module): All `Stage` related actions will be delegated to respective service objects.
+- Services (`Analytics::CycleAnalytics` module): All `Stage` related actions are delegated to respective service objects.
- Models (`Analytics::CycleAnalytics` module): Models are used to persist the `Stage` objects `ProjectStage` and `GroupStage`.
- Feature classes (`Gitlab::Analytics::CycleAnalytics` module):
- Responsible for composing queries and defining feature-specific business logic.
@@ -248,7 +248,7 @@ SELECT (START_EVENT_TIME-END_EVENT_TIME) as duration, END_EVENT.timestamp
Since we have a lot of events and possible pairings, testing each pairing is not possible. The rule is to have at least one test case using an `Event` class.
-Writing a test case for a stage using a new `Event` can be challenging since data must be created for both events. To make this a bit simpler, each test case must be implemented in the `data_collector_spec.rb` where the stage is tested through the `DataCollector`. Each test case will be turned into multiple tests, covering the following cases:
+Writing a test case for a stage using a new `Event` can be challenging since data must be created for both events. To make this a bit simpler, each test case must be implemented in the `data_collector_spec.rb` where the stage is tested through the `DataCollector`. Each test case is turned into multiple tests, covering the following cases:
- Different parents: `Group` or `Project`
- Different calculations: `Median`, `RecordsFetcher` or `DataForDurationChart`
diff --git a/doc/development/verifying_database_capabilities.md b/doc/development/verifying_database_capabilities.md
index fe60ec11bce..195e8c5c77e 100644
--- a/doc/development/verifying_database_capabilities.md
+++ b/doc/development/verifying_database_capabilities.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# Verifying Database Capabilities
diff --git a/doc/development/what_requires_downtime.md b/doc/development/what_requires_downtime.md
index 18a2e17967a..63a7ea4dfbd 100644
--- a/doc/development/what_requires_downtime.md
+++ b/doc/development/what_requires_downtime.md
@@ -1,7 +1,7 @@
---
stage: Enablement
group: Database
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# What requires downtime?
@@ -51,7 +51,7 @@ We require indication of when it is safe to remove the column ignore with:
- `remove_with`: set to a GitLab release typically two releases (M+2) after adding the
column ignore.
- `remove_after`: set to a date after which we consider it safe to remove the column
- ignore, typically last date of the development cycle of release M+2 - namely the release date.
+ ignore, typically after the M+1 release date, during the M+2 development cycle.
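As a hedged illustration (the model, column, and dates below are hypothetical), a column ignore carrying these attributes typically looks like:

```ruby
class Widget < ApplicationRecord
  include IgnorableColumns

  # Hypothetical column dropped in 13.7 (M): remove the ignore rule in
  # 13.9 (M+2), any time after the 13.8 (M+1) release date.
  ignore_column :legacy_token, remove_with: '13.9', remove_after: '2021-01-22'
end
```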
This information allows us to reason better about column ignores and makes sure we
don't remove column ignores too early for both regular releases and deployments to GitLab.com. For
diff --git a/doc/development/wikis.md b/doc/development/wikis.md
index f8bcaac2ec0..a2da4843eac 100644
--- a/doc/development/wikis.md
+++ b/doc/development/wikis.md
@@ -2,7 +2,7 @@
type: reference, dev
stage: Create
group: Knowledge
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
description: "GitLab's development guidelines for Wikis"
---
diff --git a/doc/development/windows.md b/doc/development/windows.md
index 2ca99508746..08ff29a4e58 100644
--- a/doc/development/windows.md
+++ b/doc/development/windows.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
type: reference, howto
---
@@ -13,9 +13,9 @@ This is a guide for how to get a Windows development virtual machine on Google C
## Why Windows in Google Cloud?
-Use of Microsoft Windows operating systems on company laptops is banned under GitLab's [Approved Operating Systems policy](https://about.gitlab.com/handbook/security/approved_os.html#windows).
+Use of Microsoft Windows operating systems on company laptops is banned under the GitLab [Approved Operating Systems policy](https://about.gitlab.com/handbook/security/approved_os.html#windows).
-This can make it difficult to develop features for the Windows platforms. Using GCP will allow us to have a temporary Windows machine that can be removed once we're done with it.
+This can make it difficult to develop features for the Windows platform. Using GCP allows us to have a temporary Windows machine that can be removed once we're done with it.
## Shared Windows runners
@@ -74,7 +74,7 @@ Build a Google Cloud image with the above shared runners repository by doing the
1. Click **Set Windows password**.
1. Optional: Set a username or use default.
1. Click **Next**.
-1. Copy and save the password as it won't be shown again.
+1. Copy and save the password as it is not shown again.
1. Click **RDP** down arrow.
1. Click **Download the RDP file**.
1. Open the downloaded RDP file with the Windows remote desktop app (<https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients>).
@@ -98,8 +98,8 @@ Here are a few tips on GCP and Windows.
### GCP cost savings
-To minimise the cost of your GCP VM instance, stop it when you're not using it.
-If you do, you'll need to re-download the RDP file from the console as the IP
+To minimize the cost of your GCP VM instance, stop it when you're not using it.
+If you do, you must download the RDP file again from the console as the IP
address changes every time you stop and start it.
### chocolatey