Diffstat (limited to 'doc/development')
116 files changed, 3536 insertions, 2056 deletions
diff --git a/doc/development/README.md b/doc/development/README.md index 1566173992a..3912a828dec 100644 --- a/doc/development/README.md +++ b/doc/development/README.md @@ -3,9 +3,9 @@ comments: false description: 'Learn how to contribute to GitLab.' --- -# GitLab development guides +# Contributor and Development Docs -## Get started! +## Get started - Set up GitLab's development environment with [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/README.md) - [GitLab contributing guide](contributing/index.md) @@ -17,12 +17,13 @@ description: 'Learn how to contribute to GitLab.' - [GitLab core team & GitLab Inc. contribution process](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/PROCESS.md) - [Generate a changelog entry with `bin/changelog`](changelog.md) - [Code review guidelines](code_review.md) for reviewing code and having code reviewed +- [Database review guidelines](database_review.md) for reviewing database-related changes and complex SQL queries - [Automatic CE->EE merge](automatic_ce_ee_merge.md) - [Guidelines for implementing Enterprise Edition features](ee_features.md) - [Security process for developers](https://gitlab.com/gitlab-org/release/docs/blob/master/general/security/developer.md#security-releases-critical-non-critical-as-a-developer) - [Requesting access to Chatops on GitLab.com](chatops_on_gitlabcom.md#requesting-access) (for GitLabbers) -## UX and frontend guides +## UX and Frontend guides - [GitLab Design System](https://design.gitlab.com/) for building GitLab with existing CSS styles and elements - [Frontend guidelines](fe_guide/index.md) @@ -63,6 +64,8 @@ description: 'Learn how to contribute to GitLab.' - [Routing](routing.md) - [Repository mirroring](repository_mirroring.md) - [Git LFS](lfs.md) +- [Developing against interacting components or features](interacting_components.md) +- [File uploads](uploads.md) ## Performance guides @@ -111,6 +114,11 @@ description: 'Learn how to contribute to GitLab.' - [Database helper modules](database_helpers.md) - [Code comments](code_comments.md) +## Case studies + +- [Database case study: Filtering by label](filtering_by_label.md) +- [Database case study: Namespaces storage statistics](namespaces_storage_statistics.md) + ## Integration guides - [Jira Connect app](integrations/jira_connect.md) @@ -144,6 +152,10 @@ description: 'Learn how to contribute to GitLab.' - [Go Guidelines](go_guide/index.md) +## Shell Scripting guides + +- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md) + ## Other GitLab Development Kit (GDK) guides - [Run full Auto DevOps cycle in a GDK instance](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/auto_devops.md) diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md index c83a0427c98..7569ccc04c1 100644 --- a/doc/development/api_graphql_styleguide.md +++ b/doc/development/api_graphql_styleguide.md @@ -424,12 +424,8 @@ Will generate a field called `mergeRequestSetWip` that ### Authorizing resources -To authorize resources inside a mutation, we can include the -`Gitlab::Graphql::Authorize::AuthorizeResource` concern in the -mutation. 
- -This allows us to provide the required abilities on the mutation like -this: +To authorize resources inside a mutation, we first provide the required + abilities on the mutation like this: ```ruby module Mutations diff --git a/doc/development/api_styleguide.md b/doc/development/api_styleguide.md index 0866d3baeeb..61576236c96 100644 --- a/doc/development/api_styleguide.md +++ b/doc/development/api_styleguide.md @@ -51,7 +51,7 @@ allowed. – <https://github.com/ruby-grape/grape#declared> -### Exclude params from parent namespaces! +### Exclude params from parent namespaces > By default `declared(params)`includes parameters that were defined in all parent namespaces. @@ -64,7 +64,7 @@ In most cases you will want to exclude params from the parent namespaces: declared(params, include_parent_namespaces: false) ``` -### When to use `declared(params)`? +### When to use `declared(params)` You should always use `declared(params)` when you pass the params hash as arguments to a method call. diff --git a/doc/development/architecture.md b/doc/development/architecture.md index b645a72567c..2adca2dae28 100644 --- a/doc/development/architecture.md +++ b/doc/development/architecture.md @@ -12,21 +12,21 @@ New versions of GitLab are released in stable branches and the master branch is For information, see the [GitLab Release Process](https://gitlab.com/gitlab-org/release/docs/tree/master#gitlab-release-process). -Both EE and CE require some add-on components called gitlab-shell and Gitaly. These components are available from the [gitlab-shell](https://gitlab.com/gitlab-org/gitlab-shell/tree/master) and [gitaly](https://gitlab.com/gitlab-org/gitaly/tree/master) repositories respectively. New versions are usually tags but staying on the master branch will give you the latest stable version. New releases are generally around the same time as GitLab CE releases with exception for informal security updates deemed critical. +Both EE and CE require some add-on components called GitLab Shell and Gitaly. These components are available from the [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell/tree/master) and [Gitaly](https://gitlab.com/gitlab-org/gitaly/tree/master) repositories respectively. New versions are usually tags but staying on the master branch will give you the latest stable version. New releases are generally around the same time as GitLab CE releases with exception for informal security updates deemed critical. ## Components A typical install of GitLab will be on GNU/Linux. It uses Nginx or Apache as a web front end to proxypass the Unicorn web server. By default, communication between Unicorn and the front end is via a Unix domain socket but forwarding requests via TCP is also supported. The web front end accesses `/home/git/gitlab/public` bypassing the Unicorn server to serve static pages, uploads (e.g. avatar images or attachments), and precompiled assets. GitLab serves web pages and a [GitLab API](https://gitlab.com/gitlab-org/gitlab-ce/tree/master/doc/api) using the Unicorn web server. It uses Sidekiq as a job queue which, in turn, uses redis as a non-persistent database backend for job information, meta data, and incoming jobs. -We also support deploying GitLab on Kubernetes using our [gitlab Helm chart](https://docs.gitlab.com/charts/). +We also support deploying GitLab on Kubernetes using our [GitLab Helm chart](https://docs.gitlab.com/charts/). -The GitLab web app uses MySQL or PostgreSQL for persistent database information (e.g. users, permissions, issues, other meta data). 
GitLab stores the bare git repositories it serves in `/home/git/repositories` by default. It also keeps default branch and hook information with the bare repository. +The GitLab web app uses PostgreSQL for persistent database information (e.g. users, permissions, issues, other meta data). GitLab stores the bare Git repositories it serves in `/home/git/repositories` by default. It also keeps default branch and hook information with the bare repository. -When serving repositories over HTTP/HTTPS GitLab utilizes the GitLab API to resolve authorization and access as well as serving git objects. +When serving repositories over HTTP/HTTPS GitLab utilizes the GitLab API to resolve authorization and access as well as serving Git objects. -The add-on component gitlab-shell serves repositories over SSH. It manages the SSH keys within `/home/git/.ssh/authorized_keys` which should not be manually edited. gitlab-shell accesses the bare repositories through Gitaly to serve git objects and communicates with redis to submit jobs to Sidekiq for GitLab to process. gitlab-shell queries the GitLab API to determine authorization and access. +The add-on component GitLab Shell serves repositories over SSH. It manages the SSH keys within `/home/git/.ssh/authorized_keys` which should not be manually edited. GitLab Shell accesses the bare repositories through Gitaly to serve Git objects and communicates with redis to submit jobs to Sidekiq for GitLab to process. GitLab Shell queries the GitLab API to determine authorization and access. -Gitaly executes git operations from gitlab-shell and the GitLab web app, and provides an API to the GitLab web app to get attributes from git (e.g. title, branches, tags, other meta data), and to get blobs (e.g. diffs, commits, files). +Gitaly executes Git operations from GitLab Shell and the GitLab web app, and provides an API to the GitLab web app to get attributes from Git (e.g. title, branches, tags, other meta data), and to get blobs (e.g. diffs, commits, files). You may also be interested in the [production architecture of GitLab.com](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/). @@ -130,7 +130,7 @@ Component statuses are linked to configuration documentation for each component. 
| [NGINX](#nginx) | Routes requests to appropriate components, terminates SSL | [✅][nginx-omnibus] | [✅][nginx-charts] | [⚙][nginx-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [⤓][nginx-source] | ❌ | CE & EE | | [Unicorn (GitLab Rails)](#unicorn) | Handles requests for the web interface and API | [✅][unicorn-omnibus] | [✅][unicorn-charts] | [✅][unicorn-charts] | [✅](../user/gitlab_com/index.md#unicorn) | [⚙][unicorn-source] | [✅][gitlab-yml] | CE & EE | | [Sidekiq](#sidekiq) | Background jobs processor | [✅][sidekiq-omnibus] | [✅][sidekiq-charts] | [✅](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/index.html) | [✅](../user/gitlab_com/index.md#sidekiq) | [✅][gitlab-yml] | [✅][gitlab-yml] | CE & EE | -| [Gitaly](#gitaly) | Git RPC service for handling all git calls made by GitLab | [✅][gitaly-omnibus] | [✅][gitaly-charts] | [✅][gitaly-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [⚙][gitaly-source] | ✅ | CE & EE | +| [Gitaly](#gitaly) | Git RPC service for handling all Git calls made by GitLab | [✅][gitaly-omnibus] | [✅][gitaly-charts] | [✅][gitaly-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [⚙][gitaly-source] | ✅ | CE & EE | | [GitLab Workhorse](#gitlab-workhorse) | Smart reverse proxy, handles large HTTP requests | [✅][workhorse-omnibus] | [✅][workhorse-charts] | [✅][workhorse-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [⚙][workhorse-source] | ✅ | CE & EE | | [GitLab Shell](#gitlab-shell) | Handles `git` over SSH sessions | [✅][shell-omnibus] | [✅][shell-charts] | [✅][shell-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [⚙][shell-source] | [✅][gitlab-yml] | CE & EE | | [GitLab Pages](#gitlab-pages) | Hosts static websites | [⚙][pages-omnibus] | [❌][pages-charts] | [❌][pages-charts] | [✅](../user/gitlab_com/index.md#gitlab-pages) | [⚙][pages-source] | [⚙][pages-gdk] | CE & EE | @@ -147,7 +147,7 @@ Component statuses are linked to configuration documentation for each component. 
| [Redis Exporter](#redis-exporter) | Prometheus endpoint with Redis metrics | [✅][redis-exporter-omnibus] | [✅][redis-exporter-charts] | [✅][redis-exporter-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | | [Postgres Exporter](#postgres-exporter) | Prometheus endpoint with PostgreSQL metrics | [✅][postgres-exporter-omnibus] | [✅][postgres-exporter-charts] | [✅][postgres-exporter-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | | [PgBouncer Exporter](#pgbouncer-exporter) | Prometheus endpoint with PgBouncer metrics | [⚙][pgbouncer-exporter-omnibus] | [❌][pgbouncer-exporter-charts] | [❌][pgbouncer-exporter-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | -| [GitLab Monitor](#gitlab-monitor) | Generates a variety of GitLab metrics | [✅][gitlab-monitor-omnibus] | [✅][gitab-monitor-charts] | [✅][gitab-monitor-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | +| [GitLab Monitor](#gitlab-monitor) | Generates a variety of GitLab metrics | [✅][gitlab-monitor-omnibus] | [✅][gitlab-monitor-charts] | [✅][gitlab-monitor-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | | [Node Exporter](#node-exporter) | Prometheus endpoint with system metrics | [✅][node-exporter-omnibus] | [❌][node-exporter-charts] | [❌][node-exporter-charts] | [✅](https://about.gitlab.com/handbook/engineering/monitoring/) | ❌ | ❌ | CE & EE | | [Mattermost](#mattermost) | Open-source Slack alternative | [⚙][mattermost-omnibus] | [⤓][mattermost-charts] | [⤓][mattermost-charts] | [⤓](../user/project/integrations/mattermost.md) | ❌ | ❌ | CE & EE | | [MinIO](#minio) | Object storage service | [⤓][minio-omnibus] | [✅][minio-charts] | [✅][minio-charts] | [✅](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#storage-architecture) | ❌ | [⚙][minio-gdk] | CE & EE | @@ -185,7 +185,7 @@ GitLab can be considered to have two layers from a process perspective: - Layer: Monitoring - Process: `alertmanager` -[Alert manager](https://prometheus.io/docs/alerting/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue gitlab-ce#45740](https://gitlab.com/gitlab-org/gitlab-ce/issues/45740) about what we will be alerting on. +[Alert manager](https://prometheus.io/docs/alerting/alertmanager/) is a tool provided by Prometheus that _"handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts."_ You can read more in [issue #45740](https://gitlab.com/gitlab-org/gitlab-ce/issues/45740) about what we will be alerting on. #### Certificate management @@ -223,12 +223,12 @@ Elasticsearch is a distributed RESTful search engine built for the cloud. Gitaly is a service designed by GitLab to remove our need for NFS for Git storage in distributed deployments of GitLab (think GitLab.com or High Availability Deployments). As of 11.3.0, this service handles all Git level access in GitLab. 
You can read more about the project [in the project's readme](https://gitlab.com/gitlab-org/gitaly). -#### Gitlab Geo +#### GitLab Geo - Configuration: [Omnibus][geo-omnibus], [Charts][geo-charts], [GDK][geo-gdk] - Layer: Core Service (Processor) -#### Gitlab Monitor +#### GitLab Monitor - [Project page](https://gitlab.com/gitlab-org/gitlab-monitor) - Configuration: [Omnibus][gitlab-monitor-omnibus], [Charts][gitlab-monitor-charts] @@ -237,7 +237,7 @@ Gitaly is a service designed by GitLab to remove our need for NFS for Git storag GitLab Monitor is a process designed in house that allows us to export metrics about GitLab application internals to Prometheus. You can read more [in the project's readme](https://gitlab.com/gitlab-org/gitlab-monitor). -#### Gitlab Pages +#### GitLab Pages - Configuration: [Omnibus][pages-omnibus], [Charts][pages-charts], [Source][pages-source], [GDK][pages-gdk] - Layer: Core Service (Processor) @@ -246,7 +246,7 @@ GitLab Pages is a feature that allows you to publish static websites directly fr You can use it either for personal or business websites, such as portfolios, documentation, manifestos, and business presentations. You can also attribute any license to your content. -#### Gitlab Runner +#### GitLab Runner - [Project page](https://gitlab.com/gitlab-org/gitlab-runner/blob/master/README.md) - Configuration: [Omnibus][runner-omnibus], [Charts][runner-charts], [Source][runner-source], [GDK][runner-gdk] @@ -256,7 +256,7 @@ GitLab Runner runs tests and sends the results to GitLab. GitLab CI is the open-source continuous integration service included with GitLab that coordinates the testing. The old name of this project was GitLab CI Multi Runner but please use "GitLab Runner" (without CI) from now on. -#### Gitlab Shell +#### GitLab Shell - [Project page](https://gitlab.com/gitlab-org/gitlab-shell/blob/master/README.md) - Configuration: [Omnibus][shell-omnibus], [Charts][shell-charts], [Source][shell-source], [GDK][gitlab-yml] @@ -264,7 +264,7 @@ GitLab CI is the open-source continuous integration service included with GitLab [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell) is a program designed at GitLab to handle ssh-based `git` sessions, and modifies the list of authorized keys. GitLab Shell is not a Unix shell nor a replacement for Bash or Zsh. -#### Gitlab Workhorse +#### GitLab Workhorse - [Project page](https://gitlab.com/gitlab-org/gitlab-workhorse/blob/master/README.md) - Configuration: [Omnibus][workhorse-omnibus], [Charts][workhorse-charts], [Source][workhorse-source] @@ -475,7 +475,7 @@ It's important to understand the distinction as some processes are used in both When making a request to an HTTP Endpoint (think `/users/sign_in`) the request will take the following path through the GitLab Service: - nginx - Acts as our first line reverse proxy. -- gitlab-workhorse - This determines if it needs to go to the Rails application or somewhere else to reduce load on Unicorn. +- GitLab Workhorse - This determines if it needs to go to the Rails application or somewhere else to reduce load on Unicorn. - unicorn - Since this is a web request, and it needs to access the application it will go to Unicorn. - Postgres/Gitaly/Redis - Depending on the type of request, it may hit these services to store or retrieve data. @@ -493,13 +493,13 @@ TODO ## System Layout -When referring to `~git` in the pictures it means the home directory of the git user which is typically `/home/git`. 
+When referring to `~git` in the pictures it means the home directory of the Git user which is typically `/home/git`. GitLab is primarily installed within the `/home/git` user home directory as `git` user. Within the home directory is where the gitlabhq server software resides as well as the repositories (though the repository location is configurable). The bare repositories are located in `/home/git/repositories`. GitLab is a ruby on rails application so the particulars of the inner workings can be learned by studying how a ruby on rails application works. -To serve repositories over SSH there's an add-on application called gitlab-shell which is installed in `/home/git/gitlab-shell`. +To serve repositories over SSH there's an add-on application called GitLab Shell which is installed in `/home/git/gitlab-shell`. ### Installation Folder Summary @@ -511,11 +511,19 @@ To summarize here's the [directory structure of the `git` user home directory](. ps aux | grep '^git' ``` -GitLab has several components to operate. As a system user (i.e. any user that is not the `git` user) it requires a persistent database (MySQL/PostreSQL) and redis database. It also uses Apache httpd or Nginx to proxypass Unicorn. As the `git` user it starts Sidekiq and Unicorn (a simple ruby HTTP server running on port `8080` by default). Under the GitLab user there are normally 4 processes: `unicorn_rails master` (1 process), `unicorn_rails worker` (2 processes), `sidekiq` (1 process). +GitLab has several components to operate. It requires a persistent database +(PostgreSQL) and redis database, and uses Apache httpd or Nginx to proxypass +Unicorn. All these components should run as different system users to GitLab +(e.g., `postgres`, `redis` and `www-data`, instead of `git`). + +As the `git` user it starts Sidekiq and Unicorn (a simple ruby HTTP server +running on port `8080` by default). Under the GitLab user there are normally 4 +processes: `unicorn_rails master` (1 process), `unicorn_rails worker` +(2 processes), `sidekiq` (1 process). ### Repository access -Repositories get accessed via HTTP or SSH. HTTP cloning/push/pull utilizes the GitLab API and SSH cloning is handled by gitlab-shell (previously explained). +Repositories get accessed via HTTP or SSH. HTTP cloning/push/pull utilizes the GitLab API and SSH cloning is handled by GitLab Shell (previously explained). ## Troubleshooting @@ -523,28 +531,28 @@ See the README for more information. ### Init scripts of the services -The GitLab init script starts and stops Unicorn and Sidekiq. 
+The GitLab init script starts and stops Unicorn and Sidekiq: ``` /etc/init.d/gitlab Usage: service gitlab {start|stop|restart|reload|status} ``` -Redis (key-value store/non-persistent database) +Redis (key-value store/non-persistent database): ``` /etc/init.d/redis Usage: /etc/init.d/redis {start|stop|status|restart|condrestart|try-restart} ``` -SSH daemon +SSH daemon: ``` /etc/init.d/sshd Usage: /etc/init.d/sshd {start|stop|restart|reload|force-reload|condrestart|try-restart|status} ``` -Web server (one of the following) +Web server (one of the following): ``` /etc/init.d/httpd @@ -554,54 +562,46 @@ $ /etc/init.d/nginx Usage: nginx {start|stop|restart|reload|force-reload|status|configtest} ``` -Persistent database (one of the following) +Persistent database: ``` -/etc/init.d/mysqld -Usage: /etc/init.d/mysqld {start|stop|status|restart|condrestart|try-restart|reload|force-reload} - $ /etc/init.d/postgresql Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..] ``` ### Log locations of the services -gitlabhq (includes Unicorn and Sidekiq logs) +gitlabhq (includes Unicorn and Sidekiq logs): -- `/home/git/gitlab/log/` contains `application.log`, `production.log`, `sidekiq.log`, `unicorn.stdout.log`, `githost.log` and `unicorn.stderr.log` normally. +- `/home/git/gitlab/log/` contains `application.log`, `production.log`, `sidekiq.log`, `unicorn.stdout.log`, `git_json.log` and `unicorn.stderr.log` normally. -gitlab-shell +GitLab Shell: - `/home/git/gitlab-shell/gitlab-shell.log` -ssh +SSH: - `/var/log/auth.log` auth log (on Ubuntu). - `/var/log/secure` auth log (on RHEL). -nginx +nginx: - `/var/log/nginx/` contains error and access logs. -Apache httpd +Apache httpd: - [Explanation of Apache logs](https://httpd.apache.org/docs/2.2/logs.html). - `/var/log/apache2/` contains error and output logs (on Ubuntu). - `/var/log/httpd/` contains error and output logs (on RHEL). -redis +Redis: - `/var/log/redis/redis.log` there are also log-rotated logs there. -PostgreSQL +PostgreSQL: - `/var/log/postgresql/*` -MySQL - -- `/var/log/mysql/*` -- `/var/log/mysql.*` - ### GitLab specific config files GitLab has configuration files located in `/home/git/gitlab/config/*`. Commonly referenced config files include: @@ -610,7 +610,7 @@ GitLab has configuration files located in `/home/git/gitlab/config/*`. Commonly - `unicorn.rb` - Unicorn web server settings. - `database.yml` - Database connection settings. -gitlab-shell has a configuration file at `/home/git/gitlab-shell/config.yml`. +GitLab Shell has a configuration file at `/home/git/gitlab-shell/config.yml`. ### Maintenance Tasks diff --git a/doc/development/automatic_ce_ee_merge.md b/doc/development/automatic_ce_ee_merge.md index 423b35a9e3a..c2700461467 100644 --- a/doc/development/automatic_ce_ee_merge.md +++ b/doc/development/automatic_ce_ee_merge.md @@ -171,6 +171,19 @@ Now, every time you create an MR for CE and EE: job failed, you are required to submit the EE MR so that you can fix the conflicts in EE before merging your changes into CE. +## How we run the Automatic CE->EE merge at GitLab + +At GitLab, we use the [Merge Train](https://gitlab.com/gitlab-org/merge-train) +project to keep our [GitLab EE](https://gitlab.com/gitlab-org/gitlab-ee) +repository updated with commits from +[GitLab CE](https://gitlab.com/gitlab-org/gitlab-ce). 
+ +We have a mirror of the [Merge Train](https://gitlab.com/gitlab-org/merge-train) +project [configured](https://ops.gitlab.net/gitlab-org/merge-train) to run an +automatic CE->EE merge job every twenty minutes as a scheduled CI job. The +[configured](https://ops.gitlab.net/gitlab-org/merge-train) Merge Train project +is only accessible to authorized GitLab staff. + ## FAQ ### How does automatic merging work? @@ -187,7 +200,7 @@ code. ### Why merge automatically? As we work towards continuous deployments and a single repository for both CE -and EE, we need to first make sure that all CE changes make their way into CE as +and EE, we need to first make sure that all CE changes make their way into EE as fast as possible. Past experiences and data have shown that periodic CE to EE merge requests do not scale, and often take a very long time to complete. For example, [in this diff --git a/doc/development/background_migrations.md b/doc/development/background_migrations.md index 642dac614c7..3fd95537eaa 100644 --- a/doc/development/background_migrations.md +++ b/doc/development/background_migrations.md @@ -294,7 +294,7 @@ to migrate you database down and up, which can result in other background migrations being called. That means that using `spy` test doubles with `have_received` is encouraged, instead of using regular test doubles, because your expectations defined in a `it` block can conflict with what is being -called in RSpec hooks. See [gitlab-org/gitlab-ce#35351][issue-rspec-hooks] +called in RSpec hooks. See [issue #35351][issue-rspec-hooks] for more details. ## Best practices diff --git a/doc/development/build_test_package.md b/doc/development/build_test_package.md index c5f6adfeaeb..21891f70d73 100644 --- a/doc/development/build_test_package.md +++ b/doc/development/build_test_package.md @@ -3,7 +3,7 @@ While developing a new feature or modifying an existing one, it is helpful if an installable package (or a docker image) containing those changes is available for testing. For this very purpose, a manual job is provided in the GitLab CI/CD -pipeline that can be used to trigger a pipeline in the omnibus-gitlab repository +pipeline that can be used to trigger a pipeline in the Omnibus GitLab repository that will create: - A deb package for Ubuntu 16.04, available as a build artifact, and @@ -12,7 +12,7 @@ that will create: (images titled `gitlab-ce` and `gitlab-ee` respectively and image tag is the commit which triggered the pipeline). -When you push a commit to either the gitlab-ce or gitlab-ee project, the +When you push a commit to either the GitLab CE or GitLab EE project, the pipeline for that commit will have a `build-package` manual action you can trigger. @@ -30,9 +30,9 @@ branch `0-1-stable`, modify the content of `GITALY_SERVER_VERSION` to `0-1-stable` and push the commit. This will create a manual job that can be used to trigger the build. -## Specifying the branch in omnibus-gitlab repository +## Specifying the branch in Omnibus GitLab repository -In scenarios where a configuration change is to be introduced and omnibus-gitlab +In scenarios where a configuration change is to be introduced and Omnibus GitLab repository already has the necessary changes in a specific branch, you can build a package against that branch through an environment variable named `OMNIBUS_BRANCH`. 
To do this, specify that environment variable with the name of diff --git a/doc/development/changelog.md b/doc/development/changelog.md index 3ed586f07e9..814624c7586 100644 --- a/doc/development/changelog.md +++ b/doc/development/changelog.md @@ -35,6 +35,7 @@ the `author` field. GitLab team members **should not**. - Any user-facing change **should** have a changelog entry. Example: "GitLab now uses system fonts for all text." +- Any docs-only changes **should not** have a changelog entry. - Any change behind a feature flag **should not** have a changelog entry. The entry should be added [in the merge request removing the feature flags](feature_flags/development.md). - A fix for a regression introduced and then fixed in the same release (i.e., fixing a bug introduced during a monthly release candidate) **should not** @@ -129,6 +130,7 @@ merge_request: author: type: ``` + If you're working on the GitLab EE repository, the entry will be added to `ee/changelogs/unreleased/` instead. diff --git a/doc/development/chaos_endpoints.md b/doc/development/chaos_endpoints.md index 403a5b21827..961520db7d8 100644 --- a/doc/development/chaos_endpoints.md +++ b/doc/development/chaos_endpoints.md @@ -15,23 +15,19 @@ Currently, there are four endpoints for simulating the following conditions: ## Enabling chaos endpoints -For obvious reasons, these endpoints are not enabled by default. They can be enabled by setting the `GITLAB_ENABLE_CHAOS_ENDPOINTS` environment variable to `1`. - -For example, if you're using the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) this can be done with the following command: - -```bash -GITLAB_ENABLE_CHAOS_ENDPOINTS=1 gdk run -``` - -## Securing the chaos endpoints +For obvious reasons, these endpoints are not enabled by default on `production`. +They are enabled by default on **development** environments. DANGER: **Danger:** -It is highly recommended that you secure access to the chaos endpoints using a secret token. This is recommended when enabling these endpoints locally and essential when running in a staging or other shared environment. You should not enable them in production unless you absolutely know what you're doing. +It is required that you secure access to the chaos endpoints using a secret token. +You should not enable them in production unless you absolutely know what you're doing. -A secret token can be set through the `GITLAB_CHAOS_SECRET` environment variable. For example, when using the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) this can be done with the following command: +A secret token can be set through the `GITLAB_CHAOS_SECRET` environment variable. +For example, when using the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit) +this can be done with the following command: ```bash -GITLAB_ENABLE_CHAOS_ENDPOINTS=1 GITLAB_CHAOS_SECRET=secret gdk run +GITLAB_CHAOS_SECRET=secret gdk run ``` Replace `secret` with your own secret token. @@ -40,6 +36,10 @@ Replace `secret` with your own secret token. Once you have enabled the chaos endpoints and restarted the application, you can start testing using the endpoints. +By default, when invoking a chaos endpoint, the web worker process which receives the request will handle it. This means, for example, that if the Kill +operation is invoked, the Puma or Unicorn worker process handling the request will be killed. To test these operations in Sidekiq, the `async` parameter on +each endpoint can be set to `true`. This will run the chaos process in a Sidekiq worker. 
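For example, a minimal sketch of running a chaos operation in Sidekiq, assuming a local GDK instance on port 3000 and a chaos secret token of `secret` (the same assumptions as the other examples in this file):

```bash
# Ask a Sidekiq background worker (rather than the Puma/Unicorn web worker
# handling the request) to kill itself, by passing async=true.
curl "http://localhost:3000/-/chaos/kill?async=true" --header 'X-Chaos-Secret: secret'
```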
+ ## Memory leaks To simulate a memory leak in your application, use the `/-/chaos/leakmem` endpoint. @@ -51,54 +51,88 @@ The memory is not retained after the request finishes. Once the request has comp GET /-/chaos/leakmem GET /-/chaos/leakmem?memory_mb=1024 GET /-/chaos/leakmem?memory_mb=1024&duration_s=50 +GET /-/chaos/leakmem?memory_mb=1024&duration_s=50&async=true ``` -| Attribute | Type | Required | Description | -| ------------ | ------- | -------- | ---------------------------------------------------------------------------------- | -| `memory_mb` | integer | no | How much memory, in MB, should be leaked. Defaults to 100MB. | -| `duration_s` | integer | no | Minimum duration, in seconds, that the memory should be retained. Defaults to 30s. | +| Attribute | Type | Required | Description | +| ------------ | ------- | -------- | ------------------------------------------------------------------------------------ | +| `memory_mb` | integer | no | How much memory, in MB, should be leaked. Defaults to 100MB. | +| `duration_s` | integer | no | Minimum duration_s, in seconds, that the memory should be retained. Defaults to 30s. | +| `async` | boolean | no | Set to true to leak memory in a Sidekiq background worker process | ```bash curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10 --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10&token=secret ``` ## CPU spin This endpoint attempts to fully utilise a single core, at 100%, for the given period. -Depending on your rack server setup, your request may timeout after a predermined period (normally 60 seconds). +Depending on your rack server setup, your request may timeout after a predetermined period (normally 60 seconds). If you're using Unicorn, this is done by killing the worker process. ``` -GET /-/chaos/cpuspin -GET /-/chaos/cpuspin?duration_s=50 +GET /-/chaos/cpu_spin +GET /-/chaos/cpu_spin?duration_s=50 +GET /-/chaos/cpu_spin?duration_s=50&async=true ``` | Attribute | Type | Required | Description | | ------------ | ------- | -------- | --------------------------------------------------------------------- | -| `duration_s` | integer | no | Duration, in seconds, that the core will be utilised. Defaults to 30s | +| `duration_s` | integer | no | Duration, in seconds, that the core will be utilized. Defaults to 30s | +| `async` | boolean | no | Set to true to consume CPU in a Sidekiq background worker process | + +```bash +curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60 --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60&token=secret +``` + +## DB spin + +This endpoint attempts to fully utilise a single core, and interleave it with DB request, for the given period. +This endpoint can be used to model yielding execution to another threads when running concurrently. + +Depending on your rack server setup, your request may timeout after a predetermined period (normally 60 seconds). +If you're using Unicorn, this is done by killing the worker process. + +``` +GET /-/chaos/db_spin +GET /-/chaos/db_spin?duration_s=50 +GET /-/chaos/db_spin?duration_s=50&async=true +``` + +| Attribute | Type | Required | Description | +| ------------ | ------- | -------- | --------------------------------------------------------------------------- | +| `interval_s` | float | no | Interval, in seconds, for every DB request. Defaults to 1s | +| `duration_s` | integer | no | Duration, in seconds, that the core will be utilized. 
Defaults to 30s | +| `async` | boolean | no | Set to true to perform the operation in a Sidekiq background worker process | ```bash -curl http://localhost:3000/-/chaos/cpuspin?duration_s=60 --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60 --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60&token=secret ``` ## Sleep -This endpoint is similar to the CPU Spin endpoint but simulates off-processor activity, such as network calls to backend services. It will sleep for a given duration. +This endpoint is similar to the CPU Spin endpoint but simulates off-processor activity, such as network calls to backend services. It will sleep for a given duration_s. -As with the CPU Spin endpoint, this may lead to your request timing out if duration exceeds the configured limit. +As with the CPU Spin endpoint, this may lead to your request timing out if duration_s exceeds the configured limit. ``` GET /-/chaos/sleep GET /-/chaos/sleep?duration_s=50 +GET /-/chaos/sleep?duration_s=50&async=true ``` | Attribute | Type | Required | Description | | ------------ | ------- | -------- | ---------------------------------------------------------------------- | | `duration_s` | integer | no | Duration, in seconds, that the request will sleep for. Defaults to 30s | +| `async` | boolean | no | Set to true to sleep in a Sidekiq background worker process | ```bash curl http://localhost:3000/-/chaos/sleep?duration_s=60 --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/sleep?duration_s=60&token=secret ``` ## Kill @@ -110,8 +144,14 @@ Since this endpoint uses the `KILL` signal, the worker is not given a chance to ``` GET /-/chaos/kill +GET /-/chaos/kill?async=true ``` +| Attribute | Type | Required | Description | +| ------------ | ------- | -------- | ---------------------------------------------------------------------- | +| `async` | boolean | no | Set to true to kill a Sidekiq background worker process | + ```bash curl http://localhost:3000/-/chaos/kill --header 'X-Chaos-Secret: secret' +curl http://localhost:3000/-/chaos/kill?token=secret ``` diff --git a/doc/development/code_review.md b/doc/development/code_review.md index e60800f1ab7..b7d74b17eb3 100644 --- a/doc/development/code_review.md +++ b/doc/development/code_review.md @@ -45,9 +45,9 @@ page, with these behaviours: 1. It will not pick people whose [GitLab status](../user/profile/index.md#current-status) contains the string 'OOO'. -2. [Trainee maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#trainee-maintainer) +1. [Trainee maintainers](https://about.gitlab.com/handbook/engineering/workflow/code-review/#trainee-maintainer) are three times as likely to be picked as other reviewers. -3. It always picks the same reviewers and maintainers for the same +1. It always picks the same reviewers and maintainers for the same branch name (unless their OOO status changes, as in point 1). It removes leading `ce-` and `ee-`, and trailing `-ce` and `-ee`, so that it can be stable for backport branches. @@ -58,20 +58,21 @@ As described in the section on the responsibility of the maintainer below, you are recommended to get your merge request approved and merged by maintainer(s) from teams other than your own. - 1. If your merge request includes backend changes [^1], it must be - **approved by a [backend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_backend)**. - 1. 
If your merge request includes database migrations or changes to expensive queries [^2], it must be - **approved by a [database maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_database)**. - 1. If your merge request includes frontend changes [^1], it must be - **approved by a [frontend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_frontend)**. - 1. If your merge request includes UX changes [^1], it must be - **approved by a [UX team member][team]**. - 1. If your merge request includes adding a new JavaScript library [^1], it must be - **approved by a [frontend lead][team]**. - 1. If your merge request includes adding a new UI/UX paradigm [^1], it must be - **approved by a [UX lead][team]**. - 1. If your merge request includes a new dependency or a filesystem change, it must be - **approved by a [Distribution team member][team]**. See how to work with the [Distribution team](https://about.gitlab.com/handbook/engineering/dev-backend/distribution/) for more details. +1. If your merge request includes backend changes [^1], it must be + **approved by a [backend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_backend)**. +1. If your merge request includes database migrations or changes to expensive queries [^2], it must be + **approved by a [database maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_database)**. + Read the [database review guidelines](database_review.md) for more details. +1. If your merge request includes frontend changes [^1], it must be + **approved by a [frontend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce_maintainers_frontend)**. +1. If your merge request includes UX changes [^1], it must be + **approved by a [UX team member][team]**. +1. If your merge request includes adding a new JavaScript library [^1], it must be + **approved by a [frontend lead][team]**. +1. If your merge request includes adding a new UI/UX paradigm [^1], it must be + **approved by a [UX lead][team]**. +1. If your merge request includes a new dependency or a filesystem change, it must be + **approved by a [Distribution team member][team]**. See how to work with the [Distribution team](https://about.gitlab.com/handbook/engineering/dev-backend/distribution/) for more details. #### Security requirements @@ -207,9 +208,9 @@ first time. - Extract unrelated changes and refactorings into future merge requests/issues. - Seek to understand the reviewer's perspective. - Try to respond to every comment. -- The merge request author resolves only the discussions they have fully - addressed. If there's an open reply, an open discussion, a suggestion, - a question, or anything else, the discussion should be left to be resolved +- The merge request author resolves only the threads they have fully + addressed. If there's an open reply, an open thread, a suggestion, + a question, or anything else, the thread should be left to be resolved by the reviewer. - Push commits based on earlier rounds of feedback as isolated commits to the branch. Do not squash until the branch is ready to merge. Reviewers should be @@ -341,8 +342,7 @@ Enterprise Edition instance. This has some implications: - [Background migrations](background_migrations.md) run in Sidekiq, and should only be done for migrations that would take an extreme amount of time at GitLab.com scale. -1. 
**Sidekiq workers** - [cannot change in a backwards-incompatible way](sidekiq_style_guide.md#removing-or-renaming-queues): +1. **Sidekiq workers** [cannot change in a backwards-incompatible way](sidekiq_style_guide.md#removing-or-renaming-queues): 1. Sidekiq queues are not drained before a deploy happens, so there will be workers in the queue from the previous version of GitLab. 1. If you need to change a method signature, try to do so across two releases, @@ -365,6 +365,31 @@ Enterprise Edition instance. This has some implications: 1. **Filesystem access** can be slow, so try to avoid [shared files](shared_files.md) when an alternative solution is available. +## Examples + +How code reviews are conducted can surprise new contributors. Here are some examples of code reviews that should help to orient you as to what to expect. + +**["Modify `DiffNote` to reuse it for Designs"](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/13703):** +It contained everything from nitpicks around newlines to reasoning +about what versions for designs are, how we should compare them +if there was no previous version of a certain file (parent vs. +blank `sha` vs empty tree). + +**["Support multi-line suggestions"](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25211)**: +The MR itself consists of a collaboration between FE and BE, +and documenting comments from the author for the reviewer. +There's some nitpicks, some questions for information, and +towards the end, a security vulnerability. + +**["Allow multiple repositories per project"](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10251)**: +ZJ referred to the other projects (workhorse) this might impact, +suggested some improvements for consistency. And James' comments +helped us with overall code quality (using delegation, `&.` those +types of things), and making the code more robust. + +**["Support multiple assignees for merge requests"](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/10161)**: +A good example of collaboration on an MR touching multiple parts of the codebase. Nick pointed out interesting edge cases, James Lopes also joined in raising concerns on import/export feature. + ### Credits Largely based on the [thoughtbot code review guide]. diff --git a/doc/development/contributing/community_roles.md b/doc/development/contributing/community_roles.md index 3296cb173d7..7d2d1b77a0e 100644 --- a/doc/development/contributing/community_roles.md +++ b/doc/development/contributing/community_roles.md @@ -1,4 +1,4 @@ -### Community members & roles +# Community members & roles GitLab community members and their privileges/responsibilities. diff --git a/doc/development/contributing/issue_workflow.md b/doc/development/contributing/issue_workflow.md index 910f9f4bf7a..b2e3ef7bf63 100644 --- a/doc/development/contributing/issue_workflow.md +++ b/doc/development/contributing/issue_workflow.md @@ -58,78 +58,33 @@ issue is labeled with a subject label corresponding to your expertise. Subject labels are always all-lowercase. -## Team labels +## Team labels -**Important**: Most of the team labels will be soon deprecated in favor of [Group labels](#group-labels). +**Important**: Most of the historical team labels (e.g. Manage, Plan etc.) are +now deprecated in favor of [Group labels](#group-labels) and [Stage labels](#stage-labels). Team labels specify what team is responsible for this issue. Assigning a team label makes sure issues get the attention of the appropriate people. 
-The team labels planned for deprecation are: - -- ~Configure -- ~Create -- ~Defend -- ~Distribution -- ~Ecosystem -- ~Geo -- ~Gitaly -- ~Growth -- ~Manage -- ~Memory -- ~Monitor -- ~Plan -- ~Release -- ~Secure -- ~Verify - -The following team labels are **true** teams per our [organization structure](https://about.gitlab.com/company/team/structure/#organizational-structure) which will remain post deprecation. +The current team labels are: - ~Delivery - ~Documentation - -The descriptions on the [labels page](https://gitlab.com/gitlab-org/gitlab-ce/-/labels) explain what falls under the -responsibility of each team. - -Within those team labels, we also have the ~backend and ~frontend labels to -indicate if an issue needs backend work, frontend work, or both. +- ~Quality Team labels are always capitalized so that they show up as the first label for any issue. - ## Stage labels Stage labels specify which [DevOps stage][devops-stages] the issue belongs to. -The current stage labels are: - -- ~"devops::manage" -- ~"devops::plan" -- ~"devops::create" -- ~"devops::verify" -- ~"devops::package" -- ~"devops::release" -- ~"devops::configure" -- ~"devops::monitor" -- ~"devops::secure" -- ~"devops::defend" -- ~"devops::growth" -- ~"devops::enablement" +The current stage labels can be found by [searching the labels list for `devops::`](https://gitlab.com/groups/gitlab-org/-/labels?search=devops%3A%3A). These labels are [scoped labels](../../user/project/labels.md#scoped-labels-premium) and thus are mutually exclusive. -They differ from the [Team labels](#team-labels) because teams may work on -issues outside their stage. - -Normally there is a 1:1 relationship between Stage labels and Team labels, but -any issue can be picked up by any team, depending on current priorities. -So, an issue labeled ~"devops:create" may be scheduled by the ~Plan team, for -example. In such cases, it's usual to include both team labels so each team can -be aware of the progress. - The Stage labels are used to generate the [direction pages][direction-pages] automatically. [devops-stages]: https://about.gitlab.com/direction/#devops-stages @@ -139,51 +94,21 @@ The Stage labels are used to generate the [direction pages][direction-pages] aut Group labels specify which [groups][structure-groups] the issue belongs to. -The current group labels are: - -* ~"group::access" -* ~"group::measure" -* ~"group::source code" -* ~"group::knowledge" -* ~"group::editor" -* ~"group::gitaly" -* ~"group::gitter" -* ~"group::team planning" -* ~"group::enterprise planning" -* ~"group::certify" -* ~"group::ci and runner" -* ~"group::testing" -* ~"group::package" -* ~"group::progressive delivery" -* ~"group::release management" -* ~"group::autodevops and kubernetes" -* ~"group::serverless and paas" -* ~"group::apm" -* ~"group::health" -* ~"group::static analysis" -* ~"group::dynamic analysis" -* ~"group::software composition analysis" -* ~"group::runtime application security" -* ~"group::threat management" -* ~"group::application infrastructure security" -* ~"group::activation" -* ~"group::adoption" -* ~"group::upsell" -* ~"group::retention" -* ~"group::fulfillment" -* ~"group::telemetry" -* ~"group::distribution" -* ~"group::geo" -* ~"group::memory" -* ~"group::ecosystem" - +The current group labels can be found by [searching the labels list for `group::`](https://gitlab.com/groups/gitlab-org/-/labels?search=group%3A%3A). These labels are [scoped labels](../../user/project/labels.md#scoped-labels-premium) and thus are mutually exclusive. 
-Groups are nested beneath a particular stage, so only one stage label and one group label -can be applied to a single issue. You can find the groups listed in the -[Product Categories pages][product-categories]. +You can find the groups listed in the [Product Stages, Groups, and Categories][product-categories] page. + +We use the term group to map down product requirements from our product stages. +As a team needs some way to collect the work their members are planning to be assigned to, we use the `~group::` labels to do so. + +Normally there is a 1:1 relationship between Stage labels and Group labels. In the spirit of "Everyone can contribute", +any issue can be picked up by any group, depending on current priorities. For example, an issue labeled ~"devops::create" may be picked up by the ~"group::access" group. + +We also use stage and group labels to help quantify our [throughput](https://about.gitlab.com/handbook/engineering/management/throughput). +Please read [Stage and Group labels in Throughtput](https://about.gitlab.com/handbook/engineering/management/throughput/#stage-and-group-labels-in-throughput) for more information on how the labels are used in this context. [structure-groups]: https://about.gitlab.com/company/team/structure/#groups [product-categories]: https://about.gitlab.com/handbook/product/categories/ @@ -192,15 +117,15 @@ can be applied to a single issue. You can find the groups listed in the The current department labels are: -* ~UX -* ~Quality +- ~UX +- ~Quality ## Specialization labels These labels narrow the [specialization](https://about.gitlab.com/company/team/structure/#specialist) on a unit of work. -* ~frontend -* ~backend +- ~frontend +- ~backend ## Release Scoping labels @@ -248,10 +173,10 @@ There can be multiple facets of the impact. The below is a guideline. If a bug seems to fall between two severity labels, assign it to the higher-severity label. - Example(s) of ~S1 - - Data corruption/loss. + - Data corruption/loss. - Security breach. - - Unable to create an issue or merge request. - - Unable to add a comment or discussion to the issue or merge request. + - Unable to create an issue or merge request. + - Unable to add a comment or thread to the issue or merge request. - Example(s) of ~S2 - Cannot submit changes through the web IDE but the commandline works. - A status widget on the merge request page is not working but information can be seen in the test pipeline page. @@ -261,7 +186,7 @@ If a bug seems to fall between two severity labels, assign it to the higher-seve - Example(s) of ~S4 - Label colors are incorrect. - UI elements are not fully aligned. - + ## Label for community contributors Issues that are beneficial to our users, 'nice to haves', that we currently do @@ -296,7 +221,7 @@ know how difficult the issue is. 
Additionally: as suitable for people that have never contributed to GitLab before on the [Up For Grabs campaign](http://up-for-grabs.net) - We encourage people that have never contributed to any open source project to - look for [`Accepting merge requests` issues with a weight of 1][firt-timers] + look for [`Accepting merge requests` issues with a weight of 1][first-timers] If you've decided that you would like to work on an issue, please @-mention the [appropriate product manager](https://about.gitlab.com/handbook/product/#who-to-talk-to-for-what) @@ -309,8 +234,8 @@ GitLab team members who apply the ~"Accepting merge requests" label to an issue should update the issue description with a responsible product manager, inviting any potential community contributor to @-mention per above. -[up-for-grabs]: https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=0&sort=weight -[firt-timers]: https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=0&sort=weight&weight=1 +[up-for-grabs]: https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight +[first-timers]: https://gitlab.com/groups/gitlab-org/-/issues?state=opened&label_name[]=Accepting+merge+requests&assignee_id=None&sort=weight&weight=1 ## Issue triaging diff --git a/doc/development/contributing/merge_request_workflow.md b/doc/development/contributing/merge_request_workflow.md index 3f61ad7cb13..4e9c5c81379 100644 --- a/doc/development/contributing/merge_request_workflow.md +++ b/doc/development/contributing/merge_request_workflow.md @@ -74,10 +74,10 @@ request is as follows: can be found by running `grep css-class ./app -R`. 1. Be prepared to answer questions and incorporate feedback into your MR with new commits. Once you have fully addressed a suggestion from a reviewer, click the - "Resolve discussion" button beneath it to mark it resolved. - 1. The merge request author resolves only the discussions they have fully addressed. - If there's an open reply or discussion, a suggestion, a question, or anything else, - the discussion should be left to be resolved by the reviewer. + "Resolve thread" button beneath it to mark it resolved. + 1. The merge request author resolves only the threads they have fully addressed. + If there's an open reply or thread, a suggestion, a question, or anything else, + the thread should be left to be resolved by the reviewer. 1. If your MR touches code that executes shell commands, reads or opens files, or handles paths to files on disk, make sure it adheres to the [shell command guidelines](../shell_commands.md) @@ -103,7 +103,8 @@ If you would like quick feedback on your merge request feel free to mention some from the [core team](https://about.gitlab.com/community/core-team/) or one of the [merge request coaches](https://about.gitlab.com/team/). When having your code reviewed and when reviewing merge requests, please keep the [code review guidelines](../code_review.md) -in mind. +in mind. And if your code also makes changes to the database, or does expensive queries, +check the [database review guidelines](../database_review.md). ### Keep it simple @@ -142,6 +143,37 @@ If the guidelines are not met, the MR will not pass the [Danger checks](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/danger/commit_messages/Dangerfile). For more information see [How to Write a Git Commit Message](https://chris.beams.io/posts/git-commit/). 
+Example commit message template that can be used on your machine that embodies the above (guide for [how to apply template](https://codeinthehole.com/tips/a-useful-template-for-commit-messages/)): + +```text +# (If applied, this commit will...) <subject> (Max 50 char) +# |<---- Using a Maximum Of 50 Characters ---->| + + +# Explain why this change is being made +# |<---- Try To Limit Each Line to a Maximum Of 72 Characters ---->| + +# Provide links or keys to any relevant tickets, articles or other resources +# Use issues and merge requests' full URLs instead of short references, +# as they are displayed as plain text outside of GitLab + +# --- COMMIT END --- +# -------------------- +# Remember to +# Capitalize the subject line +# Use the imperative mood in the subject line +# Do not end the subject line with a period +# Subject must contain at least 3 words +# Separate subject from body with a blank line +# Commits that change 30 or more lines across at least 3 files must +# describe these changes in the commit body +# Do not use Emojis +# Use the body to explain what and why vs. how +# Can use multiple lines with "-" for bullet points in body +# For more information: https://chris.beams.io/posts/git-commit/ +# -------------------- +``` + ## Contribution acceptance criteria To make sure that your merge request can be approved, please ensure that it meets diff --git a/doc/development/contributing/style_guides.md b/doc/development/contributing/style_guides.md index 5c6ea1f469d..7832850a9f0 100644 --- a/doc/development/contributing/style_guides.md +++ b/doc/development/contributing/style_guides.md @@ -23,6 +23,7 @@ 1. Code should be written in [US English][us-english] 1. [Go](../go_guide/index.md) 1. [Python](../python_guide/index.md) +1. [Shell scripting](../shell_scripting_guide/index.md) This is also the style used by linting tools such as [RuboCop](https://github.com/rubocop-hq/rubocop) and [Hound CI](https://houndci.com). diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md index 0311eda1ff1..6c9fa983c96 100644 --- a/doc/development/database_debugging.md +++ b/doc/development/database_debugging.md @@ -9,31 +9,31 @@ An easy first step is to search for your error in Slack or google "GitLab (my er Available `RAILS_ENV` - - `production` (generally not for your main GDK db, but you may need this for e.g. omnibus) - - `development` (this is your main GDK db) - - `test` (used for tests like rspec) +- `production` (generally not for your main GDK db, but you may need this for e.g. Omnibus) +- `development` (this is your main GDK db) +- `test` (used for tests like rspec) ## Nuke everything and start over If you just want to delete everything and start over with an empty DB (~1 minute): - - `bundle exec rake db:reset RAILS_ENV=development` +- `bundle exec rake db:reset RAILS_ENV=development` If you just want to delete everything and start over with dummy data (~40 minutes). 
This also does `db:reset` and runs DB-specific migrations: - - `bundle exec rake dev:setup RAILS_ENV=development` +- `bundle exec rake dev:setup RAILS_ENV=development` If your test DB is giving you problems, it is safe to nuke it because it doesn't contain important data: - - `bundle exec rake db:reset RAILS_ENV=test` +- `bundle exec rake db:reset RAILS_ENV=test` ## Migration wrangling - - `bundle exec rake db:migrate RAILS_ENV=development`: Execute any pending migrations that you may have picked up from a MR - - `bundle exec rake db:migrate:status RAILS_ENV=development`: Check if all migrations are `up` or `down` - - `bundle exec rake db:migrate:down VERSION=20170926203418 RAILS_ENV=development`: Tear down a migration - - `bundle exec rake db:migrate:up VERSION=20170926203418 RAILS_ENV=development`: Set up a migration - - `bundle exec rake db:migrate:redo VERSION=20170926203418 RAILS_ENV=development`: Re-run a specific migration +- `bundle exec rake db:migrate RAILS_ENV=development`: Execute any pending migrations that you may have picked up from a MR +- `bundle exec rake db:migrate:status RAILS_ENV=development`: Check if all migrations are `up` or `down` +- `bundle exec rake db:migrate:down VERSION=20170926203418 RAILS_ENV=development`: Tear down a migration +- `bundle exec rake db:migrate:up VERSION=20170926203418 RAILS_ENV=development`: Set up a migration +- `bundle exec rake db:migrate:redo VERSION=20170926203418 RAILS_ENV=development`: Re-run a specific migration ## Manually access the database @@ -45,12 +45,12 @@ bundle exec rails dbconsole RAILS_ENV=development bundle exec rails db RAILS_ENV=development ``` - - `\q`: Quit/exit - - `\dt`: List all tables - - `\d+ issues`: List columns for `issues` table - - `CREATE TABLE board_labels();`: Create a table called `board_labels` - - `SELECT * FROM schema_migrations WHERE version = '20170926203418';`: Check if a migration was run - - `DELETE FROM schema_migrations WHERE version = '20170926203418';`: Manually remove a migration +- `\q`: Quit/exit +- `\dt`: List all tables +- `\d+ issues`: List columns for `issues` table +- `CREATE TABLE board_labels();`: Create a table called `board_labels` +- `SELECT * FROM schema_migrations WHERE version = '20170926203418';`: Check if a migration was run +- `DELETE FROM schema_migrations WHERE version = '20170926203418';`: Manually remove a migration ## FAQ diff --git a/doc/development/database_review.md b/doc/development/database_review.md new file mode 100644 index 00000000000..367a481ee11 --- /dev/null +++ b/doc/development/database_review.md @@ -0,0 +1,134 @@ +# Database Review Guidelines + +This page is specific to database reviews. Please refer to our +[code review guide](code_review.md) for broader advice and best +practices for code review in general. + +## General process + +A database review is required for: + +- Changes that touch the database schema or perform data migrations, + including files in: + - `db/` + - `lib/gitlab/background_migration/` +- Changes to the database tooling, e.g.: + - migration or ActiveRecord helpers in `lib/gitlab/database/` + - load balancing +- Changes that produce SQL queries that are beyond the obvious. It is + generally up to the author of a merge request to decide whether or + not complex queries are being introduced and if they require a + database review. + +A database reviewer is expected to look out for obviously complex +queries in the change and review those closer. 
If the author does not +point out specific queries for review and there are no obviously +complex queries, it is enough to concentrate on reviewing the +migration only. + +It is preferable to review queries in SQL form and generally accepted +to ask the author to translate any ActiveRecord queries into SQL form +for review. + +### Roles and process + +A Merge Request author's role is to: + +- Decide whether a database review is needed. +- If database review is needed, add the ~database label. +- Use the [database changes](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.gitlab/merge_request_templates/Database%20changes.md) + merge request template, or include the appropriate items in the MR description. + +A database **reviewer**'s role is to: + +- Perform a first-pass review on the MR and suggest improvements to the author. +- Once satisfied, relabel the MR with ~"database::reviewed", approve it, and + reassign the MR to the database **maintainer** suggested by Reviewer + Roulette. + +A database **maintainer**'s role is to: + +- Perform the final database review on the MR. +- Discuss further improvements or other relevant changes with the + database reviewer and the MR author. +- Finally approve the MR and relabel the MR with ~"database::approved". +- Merge the MR if no other approvals are pending or pass it on to + other maintainers as required (frontend, backend, docs). + +### Distributing review workload + +Review workload is distributed using [reviewer roulette](code_review.md#reviewer-roulette) +([example](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/25181#note_147551725)). +The MR author should then co-assign the suggested database +**reviewer**. When they give their sign-off, they will hand over to +the suggested database **maintainer**. + +If reviewer roulette didn't suggest a database reviewer & maintainer, +make sure you have applied the ~database label and rerun the +`danger-review` CI job, or pick someone from the +[`@gl-database` team](https://gitlab.com/groups/gl-database/-/group_members). + +### How to prepare for speedy database reviews + +In order to make reviewing easier and therefore faster, please consider preparing a comment +and details for a database reviewer: + +- Provide queries in SQL form rather than ActiveRecord (see the sketch after this list). +- Format any queries with a SQL query formatter, for example with [sqlformat.darold.net](http://sqlformat.darold.net). +- Consider providing query plans via a link to [explain.depesz.com](https://explain.depesz.com) or another tool instead of textual form. +- For query changes, it is best to provide the SQL query along with a plan *before* and *after* the change. This helps to spot differences quickly. +- When providing query plans, make sure to use good parameter values, so that the query executed is a good example and also hits enough data. Usually, the `gitlab-org` namespace (`namespace_id = 9970`) and the `gitlab-org/gitlab-ce` project (`project_id = 13083`) provide enough data to serve as a good example.
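As a minimal sketch of the first point above, assuming only that some ActiveRecord model is available in a Rails console of your GDK checkout (the `Issue` model and the filter used here are purely illustrative), an author can extract both the raw SQL and a quick plan like this:

```ruby
# Rails console (`bundle exec rails console`): turn the ActiveRecord relation
# from your change into material a database reviewer can work with.
relation = Issue.where(project_id: 13083).order(created_at: :desc).limit(20)

puts relation.to_sql   # raw SQL to format and paste into the MR description
puts relation.explain  # quick EXPLAIN output; for real reviews, link a full plan on explain.depesz.com
```

The exact model and parameters do not matter; the point is to hand the reviewer the SQL text plus a plan generated against realistically sized data, as described in the list above.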
+ +### How to review for database + +- Check migrations + - Review relational modeling and design choices + - Review that migrations follow the [database migration style guide](migration_style_guide.md), + for example + - [Check ordering of columns](ordering_table_columns.md) + - [Check indexes are present for foreign keys](migration_style_guide.md#adding-foreign-key-constraints) + - Ensure that migrations execute in a transaction or only contain + concurrent index/foreign key helpers (with transactions disabled) + - Check consistency with `db/schema.rb` and that migrations are [reversible](migration_style_guide.md#reversibility) + - Check query timing (if any): queries executed in a migration + need to fit comfortably within `15s` - preferably much less than that - on GitLab.com. +- Check [background migrations](background_migrations.md): + - For data migrations, establish a time estimate for execution + - They should only be used when migrating data in larger tables. + - If a single `update` takes less than `1s`, the query can be placed + directly in a regular migration (inside `db/migrate`). + - Review queries (for example, make sure batch sizes are fine) + - Establish a time estimate for execution + - Because execution time can be longer than for a regular migration, + it's suggested to treat background migrations as post migrations: + place them in `db/post_migrate` instead of `db/migrate`. Keep in mind + that post migrations are executed post-deployment in production. +- Check [timing guidelines for migrations](#timing-guidelines-for-migrations) +- Query performance + - Check for any obviously complex queries and queries the author specifically + points out for review (if any) + - If not present yet, ask the author to provide SQL queries and query plans + (e.g. by using [chatops](understanding_explain_plans.md#chatops) or direct + database access) + - For given queries, review parameters regarding data distribution + - [Check query plans](understanding_explain_plans.md) and suggest improvements + to queries (changing the query, schema or adding indexes and similar) + - The general guideline is for queries to come in below 100ms execution time + - If queries rely on prior migrations that are not present yet on production + (e.g. indexes, columns), you can use a [one-off instance from the restore + pipeline](https://ops.gitlab.net/gitlab-com/gl-infra/gitlab-restore/postgres-gprd) + in order to establish a proper testing environment. + +### Timing guidelines for migrations + +In general, migrations for a single deploy shouldn't take longer than +1 hour for GitLab.com. The following guidelines are not hard rules; they were +estimated to keep migration timing to a minimum. + +NOTE: **Note:** Keep in mind that all runtimes should be measured against GitLab.com. + +| Migration Type | Execution Time Recommended | Notes | +|----|----|---| +| Regular migrations on `db/migrate` | `3 minutes` | A valid exception is index creation, as this can take a long time. | +| Post migrations on `db/post_migrate` | `10 minutes` | | +| Background migrations | --- | Since these are suitable for larger tables, it's not possible to set a precise timing guideline; however, any query must stay well below `10s` of execution time.
| diff --git a/doc/development/diffs.md index 5655398c886..ac0b8555360 100644 --- a/doc/development/diffs.md +++ b/doc/development/diffs.md @@ -133,4 +133,4 @@ File diff will be suppressed (technically different from collapsed, but behaves Diff Viewers, which can be found on `models/diff_viewer/*`, are classes used to map metadata about each type of Diff File. They hold information on whether the file is a binary, which partial should be used to render it, and which file extensions the class accounts for. -`DiffViewer::Base` validates _blobs_ (old and new versions) content, extension and file type in order to check if it can be rendered.
\ No newline at end of file +`DiffViewer::Base` validates _blobs_ (old and new versions) content, extension and file type in order to check if it can be rendered. diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md index bfce7488a8d..4776c8348d4 100644 --- a/doc/development/distributed_tracing.md +++ b/doc/development/distributed_tracing.md @@ -179,4 +179,3 @@ By default, the Jaeger search UI is available at <http://localhost:16686/search> TIP: **Tip:** Don't forget that you will need to generate traces by using the application before they appear in the Jaeger UI. - diff --git a/doc/development/documentation/feature-change-workflow.md b/doc/development/documentation/feature-change-workflow.md index ca29353ecbe..00c76fe0f1b 100644 --- a/doc/development/documentation/feature-change-workflow.md +++ b/doc/development/documentation/feature-change-workflow.md @@ -69,7 +69,7 @@ To follow a consistent workflow every month, documentation changes involve the Product Managers, the developer who shipped the feature, and the technical writer for the DevOps stage. Each role is described below. -The Documentation items in the GitLab CE/EE [Feature Proposal issue template](https://gitlab.com/gitlab-org/gitlab-ce/raw/template-improvements-for-documentation/.gitlab/issue_templates/Feature%20proposal.md) +The Documentation items in the GitLab CE/EE [Feature Proposal issue template](https://gitlab.com/gitlab-org/gitlab-ce/raw/master/.gitlab/issue_templates/Feature%20proposal.md) and default merge request template will assist you with following this process. ### Product Manager role @@ -121,27 +121,27 @@ All reviewers can help ensure accuracy, clarity, completeness, and adherence to - **Prior to merging**, documentation changes committed by the developer must be reviewed by: - 1. **The code reviewer** for the MR, to confirm accuracy, clarity, and completeness. - 1. Optionally: Others involved in the work, such as other devs or the PM. - 1. Optionally: The technical writer for the DevOps stage. If not prior to merging, the technical writer will review after the merge. - This helps us ensure that the developer has time to merge good content by the freeze, and that it can be further refined by the release, if needed. - - To decide whether to request this review before the merge, consider the amount of time left before the code freeze, the size of the change, - and your degree of confidence in having users of an RC use your docs as written. - - Pre-merge tech writer reviews should be most common when the code is complete well in advance of the freeze and/or for larger documentation changes. - - You can request a review and if there is not sufficient time to complete it prior to the freeze, - the maintainer can merge the current doc changes (if complete) and create a follow-up doc review issue. - - The technical writer can also help decide what docs to merge before the freeze and whether to work on further changes in a follow up MR. - - **To request a pre-merge technical writer review**, assign the writer listed for the applicable [DevOps stage](https://about.gitlab.com/handbook/product/categories/#devops-stages). - - **To request a post-merge technical writer review**, [create an issue for one using the Doc Review template](https://gitlab.com/gitlab-org/gitlab-ce/issues/new?issuable_template=Doc%20Review) and link it from the MR that makes the doc change. - 1. 
**The maintainer** who is assigned to merge the MR, to verify clarity, completeness, and quality, to the best of their ability. + 1. **The code reviewer** for the MR, to confirm accuracy, clarity, and completeness. + 1. Optionally: Others involved in the work, such as other devs or the PM. + 1. Optionally: The technical writer for the DevOps stage. If not prior to merging, the technical writer will review after the merge. + This helps us ensure that the developer has time to merge good content by the freeze, and that it can be further refined by the release, if needed. + - To decide whether to request this review before the merge, consider the amount of time left before the code freeze, the size of the change, + and your degree of confidence in having users of an RC use your docs as written. + - Pre-merge tech writer reviews should be most common when the code is complete well in advance of the freeze and/or for larger documentation changes. + - You can request a review and if there is not sufficient time to complete it prior to the freeze, + the maintainer can merge the current doc changes (if complete) and create a follow-up doc review issue. + - The technical writer can also help decide what docs to merge before the freeze and whether to work on further changes in a follow up MR. + - **To request a pre-merge technical writer review**, assign the writer listed for the applicable [DevOps stage](https://about.gitlab.com/handbook/product/categories/#devops-stages). + - **To request a post-merge technical writer review**, [create an issue for one using the Doc Review template](https://gitlab.com/gitlab-org/gitlab-ce/issues/new?issuable_template=Doc%20Review) and link it from the MR that makes the doc change. + 1. **The maintainer** who is assigned to merge the MR, to verify clarity, completeness, and quality, to the best of their ability. - Upon merging, if a technical writer review has not been performed and there is not yet a linked issue for a follow-up review, the maintainer should [create an issue using the Doc Review template](https://gitlab.com/gitlab-org/gitlab-ce/issues/new?issuable_template=Doc%20Review), link it from the MR, and mention the original MR author in the new issue. Alternatively, the maintainer can ask the MR author to create and link this issue before the MR is merged. - After merging, documentation changes are reviewed by: - 1. The technical writer--**if** their review was not performed prior to the merge. - 2. Optionally: by the PM (for accuracy and to ensure it's consistent with the vision for how the product will be used). + 1. The technical writer -- **if** their review was not performed prior to the merge. + 1. Optionally: by the PM (for accuracy and to ensure it's consistent with the vision for how the product will be used). Any party can raise the item to the PM for review at any point: the dev, the technical writer, or the PM, who can request/plan a review at the outset. ### Technical Writer role diff --git a/doc/development/documentation/improvement-workflow.md b/doc/development/documentation/improvement-workflow.md index a12c3d5ea7b..80fbd4b6427 100644 --- a/doc/development/documentation/improvement-workflow.md +++ b/doc/development/documentation/improvement-workflow.md @@ -52,9 +52,9 @@ To request a post-merge review, [create an issue for one using the Doc Review te **3. Maintainer** 1. Review by assigned maintainer, who can always request/require the above reviews. Maintainer review can occur before or after a technical writer review. -2. 
Ensure a release milestone of the format XX.Y is set. If the freeze for that release has begun, add the label `pick into <XX.Y>` unless this change is not required for the release. In that case, simply change the milestone. -3. If EE and CE MRs exist, merge the EE MR first, then the CE MR. -4. After merging, if there has not been a technical writer review and an issue for a follow-up review was not already created and linked from the MR, [create the issue using the Doc Review template](https://gitlab.com/gitlab-org/gitlab-ce/issues/new?issuable_template=Doc%20Review) and link it from the MR. +1. Ensure a release milestone of the format XX.Y is set. If the freeze for that release has begun, add the label `pick into <XX.Y>` unless this change is not required for the release. In that case, simply change the milestone. +1. If EE and CE MRs exist, merge the EE MR first, then the CE MR. +1. After merging, if there has not been a technical writer review and an issue for a follow-up review was not already created and linked from the MR, [create the issue using the Doc Review template](https://gitlab.com/gitlab-org/gitlab-ce/issues/new?issuable_template=Doc%20Review) and link it from the MR. ## Other ways to help diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md index cbdc0a3a174..c9ae00d148a 100644 --- a/doc/development/documentation/index.md +++ b/doc/development/documentation/index.md @@ -43,7 +43,7 @@ Meanwhile, anyone can contribute [documentation improvements](improvement-workfl ## Markdown and styles -[GitLab docs](https://gitlab.com/gitlab-com/gitlab-docs) uses [GitLab Kramdown](https://gitlab.com/gitlab-org/gitlab_kramdown) +[GitLab docs](https://gitlab.com/gitlab-org/gitlab-docs) uses [GitLab Kramdown](https://gitlab.com/gitlab-org/gitlab_kramdown) as its markdown rendering engine. See the [GitLab Markdown Guide](https://about.gitlab.com/handbook/product/technical-writing/markdown-guide/) for a complete Kramdown reference. Adhere to the [Documentation Style Guide](styleguide.md). If a style standard is missing, you are welcome to suggest one via a merge request. @@ -57,35 +57,22 @@ See the [Structure](styleguide.md#structure) section of the [Documentation Style We currently maintain two sets of docs: one in the [gitlab-ce](https://gitlab.com/gitlab-org/gitlab-ce/tree/master/doc) repo and one in [gitlab-ee](https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc). -They are similar, and most pages are identical, but they are different repositories. -With the single codebase effort, we want to make those two sets identical, so when the -time comes to have only one codebase, we'll be ready. 
- -Here are some links to get you up to speed with the current effort: - -- [CE/EE codebases blueprint](https://about.gitlab.com/handbook/engineering/infrastructure/blueprint/ce-ee-codebases/) -- [CE/EE codebases merge design](https://about.gitlab.com/handbook/engineering/infrastructure/design/merge-ce-ee-codebases/) -- [Single docs codebase epic](https://gitlab.com/groups/gitlab-org/-/epics/199) -- [Issue board of related issues](https://gitlab.com/groups/gitlab-org/-/boards/981090?&label_name[]=Documentation&label_name[]=single%20codebase) -- [Related merge requests](https://gitlab.com/groups/gitlab-org/-/merge_requests?scope=all&utf8=%E2%9C%93&state=all&label_name[]=Documentation&label_name[]=single%20codebase) -- [Visualize the existing diffs](https://leipert-projects.gitlab.io/is-gitlab-pretty-yet/diff/?search=%5Edoc) +They are identical, but they are different repositories. When the +time comes to have only one codebase for the GitLab project, we'll be ready. ### CE first -After a given documentation path is aligned across CE and EE, all merge requests -affecting that path must be submitted to CE, regardless of the content it has. -This means that: +All merge requests for documentation must be submitted to CE, regardless of the content +it has. This means that: -- For **EE-only docs changes**, you only have to submit a CE MR. +- For **EE-only docs changes**, you only have to submit an MR in the CE project. - For **EE-only features** that touch both the code and the docs, you have to submit - an EE MR containing all changes, and a CE MR containing only the docs changes + an EE MR containing all code changes, and a CE MR containing only the docs changes and without a changelog entry. This might seem like a duplicate effort, but it's only for the short term. -A list of the already aligned docs can be found in -[the epic description](https://gitlab.com/groups/gitlab-org/-/epics/199#ee-specific-lines-check). -Since the docs will be combined, it's crucial to add the relevant +Since the CE and EE docs are combined, it's crucial to add the relevant [product badges](styleguide.md#product-badges) for all EE documentation, so that we can discern which features belong to which tier. @@ -94,19 +81,9 @@ we can discern which features belong to which tier. There's a special test in place ([`ee_specific_check.rb`](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/scripts/ee_specific_check/ee_specific_check.rb)), which, among others, checks and prevents creating/editing new files and directories -in EE under `doc/`. - -We have a long list of documentation paths that are either whitelisted or not. -Paths in the whitelist (not commented out) will not be subject to the test, -which means you are allowed to create/change docs content in EE for the time -being. The goal is to not have any doc whitelisted. - -At the time of this writing, the only items left to be aligned are the API docs: - -- `doc/api/*` ([issue](https://gitlab.com/gitlab-org/gitlab-ce/issues/60045) / [merge request](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/27491)) - -Eventually, once all docs are aligned, we'll remove any doc reference from that -script, so it catches everything. +in EE under `doc/`. This should fail when changes to anything in `/doc` are submitted +in an EE MR. To pass the test, simply remove the docs changes from the EE MR, and +[submit them in CE](#ce-first). ## Changing document location @@ -134,18 +111,18 @@ For example, if you were to move `doc/workflow/lfs/lfs_administration.md` to 1. 
Copy `doc/workflow/lfs/lfs_administration.md` to `doc/administration/lfs.md` 1. Replace the contents of `doc/workflow/lfs/lfs_administration.md` with: - ```md - This document was moved to [another location](../../administration/lfs.md). - ``` + ```md + This document was moved to [another location](../../administration/lfs.md). + ``` 1. Find and replace any occurrences of the old location with the new one. A quick way to find them is to use `git grep`. First go to the root directory where you cloned the `gitlab-ce` repository and then do: - ```sh - git grep -n "workflow/lfs/lfs_administration" - git grep -n "lfs/lfs_administration" - ``` + ```sh + git grep -n "workflow/lfs/lfs_administration" + git grep -n "lfs/lfs_administration" + ``` NOTE: **Note:** If the document being moved has any Disqus comments on it, there are extra steps @@ -193,7 +170,7 @@ Disqus uses an identifier per page, and for docs.gitlab.com, the page identifier is configured to be the page URL. Therefore, when we change the document location, we need to preserve the old URL as the same Disqus identifier. -To do that, add to the frontmatter the variable `redirect_from`, +To do that, add to the frontmatter the variable `disqus_identifier`, using the old URL as value. For example, let's say I moved the document available under `https://docs.gitlab.com/my-old-location/README.html` to a new location, `https://docs.gitlab.com/my-new-location/index.html`. @@ -202,11 +179,11 @@ Into the **new document** frontmatter add the following: ```yaml --- -redirect_from: 'https://docs.gitlab.com/my-old-location/README.html' +disqus_identifier: 'https://docs.gitlab.com/my-old-location/README.html' --- ``` -Note: it is necessary to include the file name in the `redirect_from` URL, +Note: it is necessary to include the file name in the `disqus_identifier` URL, even if it's `index.html` or `README.html`. ## Branch naming @@ -319,45 +296,45 @@ You can combine one or more of the following: 1. **Linking to an anchor link.** Use `anchor` as part of the `help_page_path` method: - ```haml - = link_to 'Help page', help_page_path('user/permissions', anchor: 'anchor-link') - ``` + ```haml + = link_to 'Help page', help_page_path('user/permissions', anchor: 'anchor-link') + ``` 1. **Opening links in a new tab.** This should be the default behavior: - ```haml - = link_to 'Help page', help_page_path('user/permissions'), target: '_blank' - ``` + ```haml + = link_to 'Help page', help_page_path('user/permissions'), target: '_blank' + ``` 1. **Linking to a circle icon.** Usually used in settings where a long description cannot be used, like near checkboxes. You can basically use any font awesome icon, but prefer the `question-circle`: - ```haml - = link_to icon('question-circle'), help_page_path('user/permissions') - ``` + ```haml + = link_to icon('question-circle'), help_page_path('user/permissions') + ``` 1. **Using a button link.** Useful in places where text would be out of context with the rest of the page layout: - ```haml - = link_to 'Help page', help_page_path('user/permissions'), class: 'btn btn-info' - ``` + ```haml + = link_to 'Help page', help_page_path('user/permissions'), class: 'btn btn-info' + ``` 1. **Using links inline of some text.** - ```haml - Description to #{link_to 'Help page', help_page_path('user/permissions')}. - ``` + ```haml + Description to #{link_to 'Help page', help_page_path('user/permissions')}. + ``` 1. 
**Adding a period at the end of the sentence.** Useful when you don't want the period to be part of the link: - ```haml - = succeed '.' do - Learn more in the - = link_to 'Help page', help_page_path('user/permissions') - ``` + ```haml + = succeed '.' do + Learn more in the + = link_to 'Help page', help_page_path('user/permissions') + ``` ### GitLab `/help` tests @@ -384,7 +361,7 @@ on how the left-side navigation menu is built and updated. NOTE: **Note:** To preview your changes to documentation locally, follow this -[development guide](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/README.md#development-when-contributing-to-gitlab-documentation) or [these instructions for GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/gitlab_docs.md). +[development guide](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/README.md#development-when-contributing-to-gitlab-documentation) or [these instructions for GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/gitlab_docs.md). The live preview is currently enabled for the following projects: @@ -408,7 +385,7 @@ You will need to push a branch to those repositories, it doesn't work for forks. The `review-docs-deploy*` job will: -1. Create a new branch in the [gitlab-docs](https://gitlab.com/gitlab-com/gitlab-docs) +1. Create a new branch in the [gitlab-docs](https://gitlab.com/gitlab-org/gitlab-docs) project named after the scheme: `$DOCS_GITLAB_REPO_SUFFIX-$CI_ENVIRONMENT_SLUG`, where `DOCS_GITLAB_REPO_SUFFIX` is the suffix for each product, e.g, `ce` for CE, etc. @@ -464,7 +441,7 @@ If you want to know the in-depth details, here's what's really happening: 1. The preview URL is shown both at the job output and in the merge request widget. You also get the link to the remote pipeline. 1. In the docs project, the pipeline is created and it - [skips the test jobs](https://gitlab.com/gitlab-com/gitlab-docs/blob/8d5d5c750c602a835614b02f9db42ead1c4b2f5e/.gitlab-ci.yml#L50-55) + [skips the test jobs](https://gitlab.com/gitlab-org/gitlab-docs/blob/8d5d5c750c602a835614b02f9db42ead1c4b2f5e/.gitlab-ci.yml#L50-55) to lower the build time. 1. Once the docs site is built, the HTML files are uploaded as artifacts. 1. A specific Runner tied only to the docs project, runs the Review App job @@ -488,15 +465,17 @@ Currently, the following tests are in place: that all cURL examples in API docs use the full switches. It's recommended to [check locally](#previewing-the-changes-live) before pushing to GitLab by executing the command `bundle exec nanoc check internal_links` on your local - [`gitlab-docs`](https://gitlab.com/gitlab-com/gitlab-docs) directory. + [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) directory. In addition, + `docs-lint` also runs [markdownlint](styleguide.md#markdown-rules) to ensure the + markdown is consistent across all documentation. 1. [`ee_compat_check`](../automatic_ce_ee_merge.md#avoiding-ce-ee-merge-conflicts-beforehand) (runs on CE only): - When you submit a merge request to GitLab Community Edition (CE), - there is this additional job that runs against Enterprise Edition (EE) - and checks if your changes can apply cleanly to the EE codebase. - If that job fails, read the instructions in the job log for what to do next. - As CE is merged into EE once a day, it's important to avoid merge conflicts. - Submitting an EE-equivalent merge request cherry-picking all commits from CE to EE is - essential to avoid them. 
+ When you submit a merge request to GitLab Community Edition (CE), + there is this additional job that runs against Enterprise Edition (EE) + and checks if your changes can apply cleanly to the EE codebase. + If that job fails, read the instructions in the job log for what to do next. + As CE is merged into EE once a day, it's important to avoid merge conflicts. + Submitting an EE-equivalent merge request cherry-picking all commits from CE to EE is + essential to avoid them. 1. [`ee-files-location-check`/`ee-specific-lines-check`](#ee-specific-lines-check) (runs on EE only): This test ensures that no new files/directories are created/changed in EE. All docs should be submitted in CE instead, regardless the tier they are on. @@ -559,15 +538,16 @@ A file with `proselint` configuration must be placed in a #### `markdownlint` `markdownlint` checks that certain rules ([example](https://github.com/DavidAnson/markdownlint/blob/master/README.md#rules--aliases)) - are followed for Markdown syntax. - Our [Documentation Style Guide](styleguide.md) and [Markdown Guide](https://about.gitlab.com/handbook/product/technical-writing/markdown-guide/) - elaborate on which choices must be made when selecting Markdown syntax for - GitLab documentation. This tool helps catch deviations from those guidelines. +are followed for Markdown syntax. Our [Documentation Style Guide](styleguide.md) and +[Markdown Guide](https://about.gitlab.com/handbook/product/technical-writing/markdown-guide/) +elaborate on which choices must be made when selecting Markdown syntax for GitLab +documentation. This tool helps catch deviations from those guidelines, and matches the +tests run on the documentation by [`docs-lint`](#testing). `markdownlint` can be used [on the command line](https://github.com/igorshubovych/markdownlint-cli#markdownlint-cli--), - either on a single Markdown file or on all Markdown files in a project. For example, to run - `markdownlint` on all documentation in the [`gitlab-ce` project](https://gitlab.com/gitlab-org/gitlab-ce), - run the following commands from within the `gitlab-ce` project: +either on a single Markdown file or on all Markdown files in a project. 
For example, to run +`markdownlint` on all documentation in the [`gitlab-ce` project](https://gitlab.com/gitlab-org/gitlab-ce), +run the following commands from within the `gitlab-ce` project: ```sh cd doc @@ -597,7 +577,7 @@ The following sample `markdownlint` configuration modifies the available default "line-length": false, "no-trailing-punctuation": false, "ol-prefix": { "style": "one" }, - "blanks-around-fences": false, + "blanks-around-fences": true, "no-inline-html": { "allowed_elements": [ "table", @@ -612,11 +592,15 @@ The following sample `markdownlint` configuration modifies the available default "a", "strong", "i", - "div" + "div", + "b" ] }, "hr-style": { "style": "---" }, - "fenced-code-language": false + "code-block-style": { "style": "fenced" }, + "fenced-code-language": false, + "no-duplicate-header": { "allow_different_nesting": true }, + "commands-show-output": false } ``` diff --git a/doc/development/documentation/site_architecture/global_nav.md b/doc/development/documentation/site_architecture/global_nav.md index 20eeebf444f..753a636a779 100644 --- a/doc/development/documentation/site_architecture/global_nav.md +++ b/doc/development/documentation/site_architecture/global_nav.md @@ -4,9 +4,9 @@ description: "Learn how GitLab docs' global navigation works and how to add new # Global navigation -> - [Introduced](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/362) +> - [Introduced](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/362) in GitLab 11.6. -> - [Updated](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/482) in GitLab 12.1. +> - [Updated](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/482) in GitLab 12.1. The global nav adds to the left sidebar the ability to navigate and explore the contents of GitLab's documentation. @@ -25,7 +25,7 @@ To add a new doc to the nav, first and foremost, check with the technical writin Once you get their approval and their guidance in regards to the position on the nav, read trhough this page to understand how it works, and submit a merge request to the docs site, adding the doc you wish to include in the nav into the -[global nav data file](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/content/_data/global-nav.yaml). +[global nav data file](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/content/_data/global-nav.yaml). Don't forget to ask a technical writer to review your changes before merging. @@ -70,7 +70,7 @@ the data among the nav in containers properly [styled](#css-classes). ### Data file -The [data file](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/content/_data/global-nav.yaml) +The [data file](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/content/_data/global-nav.yaml) is structured in three components: sections, categories, and docs. #### Sections @@ -248,9 +248,9 @@ Examples: ### Layout file (logic) -The [layout](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/layouts/global_nav.html) +The [layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/global_nav.html) is fed by the [data file](#data-file), builds the global nav, and is rendered by the -[default](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/layouts/default.html) layout. +[default](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/default.html) layout. 
There are three main considerations on the logic built for the nav: diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md index 6dd12b5efa7..1aef0ed855c 100644 --- a/doc/development/documentation/site_architecture/index.md +++ b/doc/development/documentation/site_architecture/index.md @@ -4,14 +4,14 @@ description: "Learn how GitLab's documentation website is architectured." # Documentation site architecture -Learn how we build and architecture [`gitlab-docs`](https://gitlab.com/gitlab-com/gitlab-docs) +Learn how we build and architecture [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs) and deploy it to <https://docs.gitlab.com>. ## Repository While the source of the documentation content is stored in GitLab's respective product repositories, the source that is used to build the documentation site _from that content_ -is located at <https://gitlab.com/gitlab-com/gitlab-docs>. +is located at <https://gitlab.com/gitlab-org/gitlab-docs>. The following diagram illustrates the relationship between the repositories from where content is sourced, the `gitlab-docs` project, and the published output. @@ -43,7 +43,7 @@ from where content is sourced, the `gitlab-docs` project, and the published outp G --> L ``` -See the [README there](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/README.md) +See the [README there](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/README.md) for detailed information. ## Assets @@ -76,7 +76,7 @@ read through the [global navigation](global_nav.md) doc. The docs site is deployed to production with GitLab Pages, and previewed in merge requests with Review Apps. -The deployment aspects will be soon transferred from the [original document](https://gitlab.com/gitlab-com/gitlab-docs/blob/master/README.md) +The deployment aspects will be soon transferred from the [original document](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/README.md) to this page. <!-- diff --git a/doc/development/documentation/structure.md b/doc/development/documentation/structure.md index fe676efa94d..025a946da0e 100644 --- a/doc/development/documentation/structure.md +++ b/doc/development/documentation/structure.md @@ -127,7 +127,7 @@ Notes: ## Help and feedback section -The "help and feedback" section (introduced by [!319](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/319)) displayed at the end of each document +The "help and feedback" section (introduced by [!319](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/319)) displayed at the end of each document can be omitted from the doc by adding a key into the its frontmatter: ```yaml @@ -142,7 +142,7 @@ you must check with a technical writer before doing so. ### Disqus We also have integrated the docs site with Disqus (introduced by -[!151](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/151)), +[!151](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/151)), allowing our users to post comments. To omit only the comments from the feedback section, use the following diff --git a/doc/development/documentation/styleguide.md b/doc/development/documentation/styleguide.md index d9cea0614c3..c1e3eb9680b 100644 --- a/doc/development/documentation/styleguide.md +++ b/doc/development/documentation/styleguide.md @@ -36,17 +36,17 @@ For the Troubleshooting sections, people in GitLab Support can merge additions t Include any media types/sources if the content is relevant to readers. 
You can freely include or link presentations, diagrams, videos, etc.; no matter who it was originally composed for, if it is helpful to any of our audiences, we can include it. - - If you use an image that has a separate source file (for example, a vector or diagram format), link the image to the source file so that it may be reused or updated by anyone. - - Do not copy and paste content from other sources unless it is a limited quotation with the source cited. Typically it is better to either rephrase relevant information in your own words or link out to the other source. +- If you use an image that has a separate source file (for example, a vector or diagram format), link the image to the source file so that it may be reused or updated by anyone. +- Do not copy and paste content from other sources unless it is a limited quotation with the source cited. Typically it is better to either rephrase relevant information in your own words or link out to the other source. ### No special types -In the software industry, it is a best practice to organize documentatioin in different types. For example, [Divio recommends](https://www.divio.com/blog/documentation/): +In the software industry, it is a best practice to organize documentation in different types. For example, [Divio recommends](https://www.divio.com/blog/documentation/): 1. Tutorials -2. How-to guides -3. Explanation -4. Reference (for example, a glossary) +1. How-to guides +1. Explanation +1. Reference (for example, a glossary) At GitLab, we have so many product changes in our monthly releases that we can't afford to continually update multiple types of information. If we have multiple types, the information will become outdated. Therefore, we have a [single template](structure.md) for documentation. @@ -107,6 +107,22 @@ Hard-coded HTML is valid, although it's discouraged to be used while we have `/h - Special styling is required. - Reviewed and approved by a technical writer. +### Markdown Rules + +GitLab ensures that the Markdown used across all documentation is consistent, as +well as easy to review and maintain, by testing documentation changes with +[Markdownlint (mdl)](https://github.com/markdownlint/markdownlint). This lint test +checks many common problems with Markdown, and fails when any document has an issue +with Markdown formatting that may cause the page to render incorrectly within GitLab. +It will also fail when a document is using non-standard Markdown (which may render +correctly, but is not the current standard in GitLab documentation). + +Each formatting issue that mdl checks has an associated [rule](https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md), +and the rules that are currently enabled for GitLab documentation are listed in the +[`.mdlrc.style`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.mdlrc.style) file. +Configuration options are set in the [`.mdlrc`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.mdlrc.style) +file. + ## Structure ### Organize by topic, not by type @@ -142,9 +158,9 @@ The table below shows what kind of documentation goes where. ### Working with directories and files 1. When you create a new directory, always start with an `index.md` file. - Do not use another file name and **do not** create `README.md` files. + Do not use another file name and **do not** create `README.md` files. 1. **Do not** use special characters and spaces, or capital letters in file names, - directory names, branch names, and anything that generates a path. 
+ directory names, branch names, and anything that generates a path. 1. When creating a new document and it has more than one word in its name, make sure to use underscores instead of spaces or dashes (`-`). For example, a proper naming would be `import_projects_from_github.md`. The same rule @@ -221,14 +237,14 @@ Do not include the same information in multiple places. [Link to a SSOT instead. - Use sentence case for titles, headings, labels, menu items, and buttons. - Insert an empty line between different markups (e.g., after every paragraph, header, list, etc). Example: - ```md - ## Header + ```md + ## Header - Paragraph. + Paragraph. - - List item 1 - - List item 2 - ``` + - List item 1 + - List item 2 + ``` ### Tables overlapping the TOC @@ -267,32 +283,52 @@ Check specific punctuation rules for [list items](#list-items) below. ## List items -- Always start list items with a capital letter. +- Always start list items with a capital letter, unless they are parameters or commands + that are in backticks, or similar. - Always leave a blank line before and after a list. -- Begin a line with spaces (not tabs) to denote a subitem. -- To nest subitems, indent them with two spaces. -- To nest code blocks, indent them with four spaces. -- Only use ordered lists when their items describe a sequence of steps to follow. +- Begin a line with spaces (not tabs) to denote a [nested subitem](#nesting-inside-a-list-item). +- Only use ordered lists when their items describe a sequence of steps to follow: + + Do: + + These are the steps to do something: + + 1. First, do step 1 + 1. Then, do step 2 + 1. Finally, do step 3 + + Don't: + + This is a list of different features: + + 1. Feature 1 + 1. Feature 2 + 1. Feature 3 **Markup:** - Use dashes (`-`) for unordered lists instead of asterisks (`*`). -- Use the number one (`1`) for each item in an ordered list. - When rendered, the list items will appear with sequential numbering. +- Prefix `1.` to each item in an ordered list. + When rendered, the list items will appear with sequential numbering automatically. **Punctuation:** -- Do not add commas (`,`) or semicolons (`;`) to the end of a list item. -- Only add periods to the end of a list item if the item consists of a complete sentence. The [definition of full sentence](https://www2.le.ac.uk/offices/ld/resources/writing/grammar/grammar-guides/sentence) is: _"a complete sentence always contains a verb, expresses a complete idea, and makes sense standing alone"_. -- Be consistent throughout the list: if the majority of the items do not end in a period, do not end any of the items in a period, even if they consist of a complete sentence. The opposite is also valid: if the majority of the items end with a period, end all with a period. +- Do not add commas (`,`) or semicolons (`;`) to the end of list items. +- Only add periods to the end of a list item if the item consists of a complete sentence. + The [definition of full sentence](https://www2.le.ac.uk/offices/ld/resources/writing/grammar/grammar-guides/sentence) + is: _"a complete sentence always contains a verb, expresses a complete idea, and makes sense standing alone"_. +- Be consistent throughout the list: if the majority of the items do not end in a period, + do not end any of the items in a period, even if they consist of a complete sentence. + The opposite is also valid: if the majority of the items end with a period, end + all with a period. - Separate list items from explanatory text with a colon (`:`). 
For example: - ```md - The list is as follows: + ```md + The list is as follows: - - First item: this explains the first item. - - Second item: this explains the second item. - ``` + - First item: this explains the first item. + - Second item: this explains the second item. + ``` **Examples:** @@ -314,12 +350,86 @@ Do: - Let's say this is also a complete sentence. - Not a complete sentence. -Don't: +Don't (third item should have a `.` to match the first and second items): - Let's say this is a complete sentence. - Let's say this is also a complete sentence. - Not a complete sentence +### Nesting inside a list item + +It is possible to nest items under a list item, so that they render with the same indentation +as the list item. This can be done with: + +- [Code blocks](#code-blocks) +- [Blockquotes](#blockquotes) +- [Alert boxes](#alert-boxes) +- [Images](#images) + +Items nested in lists should always align with the first character of the list item. +In unordered lists (using `-`), this means two spaces for each level of indentation: + +~~~md +- Unordered list item 1 + + A line nested using 2 spaces to align with the `U` above. + +- Unordered list item 2 + + > A quote block that will nest + > inside list item 2. + +- Unordered list item 3 + + ```text + a codeblock that will next inside list item 3 + ``` + +- Unordered list item 4 + +  +~~~ + +For ordered lists, use three spaces for each level of indentation: + +~~~md +1. Ordered list item 1 + + A line nested using 3 spaces to align with the `O` above. + +1. Ordered list item 2 + + > A quote block that will nest + > inside list item 2. + +1. Ordered list item 3 + + ```text + a codeblock that will next inside list item 3 + ``` + +1. Ordered list item 4 + +  +~~~ + +You can nest full lists inside other lists using the same rules as above. If you wish +to mix types, that is also possible, as long as you don't mix items at the same level: + +``` +1. Ordered list item one. +1. Ordered list item two. + - Nested unordered list item one. + - Nested unordered list item two. +1. Ordered list item three. + +- Unordered list item one. +- Unordered list item two. + 1. Nested ordered list item one. + 1. Nested ordered list item two. +- Unordered list item three. +``` + ## Quotes Valid for markdown content only, not for frontmatter entries: @@ -340,7 +450,7 @@ For other punctuation rules, please refer to the links shift too, which eventually leads to dead links. If you think it is compelling to add numbers in headings, make sure to at least discuss it with someone in the Merge Request. -- [Avoid using symbols and special chars](https://gitlab.com/gitlab-com/gitlab-docs/issues/84) +- [Avoid using symbols and special chars](https://gitlab.com/gitlab-org/gitlab-docs/issues/84) in headers. Whenever possible, they should be plain and short text. - Avoid adding things that show ephemeral statuses. For example, if a feature is considered beta or experimental, put this info in a note, not in the heading. @@ -488,7 +598,7 @@ You can link any up-to-date video that is useful to the GitLab user. ### Embed videos -> [Introduced](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/472) in GitLab 12.1. +> [Introduced](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/472) in GitLab 12.1. GitLab docs (docs.gitlab.com) support embedded videos. @@ -504,16 +614,16 @@ To embed a video, follow the instructions below and make sure you have your MR reviewed and approved by a technical writer. 1. Copy the code below and paste it into your markdown file. 
- Leave a blank line above and below it. Do NOT edit the code - (don't remove or add any spaces, etc). + Leave a blank line above and below it. Do NOT edit the code + (don't remove or add any spaces, etc). 1. On YouTube, visit the video URL you want to display. Copy - the regular URL from your browser (`https://www.youtube.com/watch?v=VIDEO-ID`) - and replace the video title and link in the line under `<div class="video-fallback">`. + the regular URL from your browser (`https://www.youtube.com/watch?v=VIDEO-ID`) + and replace the video title and link in the line under `<div class="video-fallback">`. 1. On YouTube, click **Share**, then **Embed**. 1. Copy the `<iframe>` source (`src`) **URL only** - (`https://www.youtube.com/embed/VIDEO-ID`), - and paste it, replacing the content of the `src` field in the - `iframe` tag. + (`https://www.youtube.com/embed/VIDEO-ID`), + and paste it, replacing the content of the `src` field in the + `iframe` tag. ```html leave a blank line here @@ -545,15 +655,16 @@ nicely on different mobile devices. ## Code blocks -- Always wrap code added to a sentence in inline code blocks (``` ` ```). +- Always wrap code added to a sentence in inline code blocks (`` ` ``). E.g., `.gitlab-ci.yml`, `git add .`, `CODEOWNERS`, `only: master`. File names, commands, entries, and anything that refers to code should be added to code blocks. To make things easier for the user, always add a full code block for things that can be useful to copy and paste, as they can easily do it with the button on code blocks. +- Add a blank line above and below code blocks. - For regular code blocks, always use a highlighting class corresponding to the language for better readability. Examples: - ````md + ~~~md ```ruby Ruby code ``` @@ -563,16 +674,17 @@ nicely on different mobile devices. ``` ```md - Markdown code + [Markdown code example](example.md) ``` ```text - Code for which no specific highlighting class is available. + Code or text for which no specific highlighting class is available. ``` - ```` + ~~~ -- To display raw markdown instead of rendered markdown, use four backticks on their own lines around the - markdown to display. See [example](https://gitlab.com/gitlab-org/gitlab-ce/blob/8c1991b9bb7e3b8d606481fdea316d633cfa5eb7/doc/development/documentation/styleguide.md#L275-287). +- To display raw markdown instead of rendered markdown, you can use triple backticks + with `md`, like the `Markdown code` example above, unless you want to include triple + backticks in the code block as well. In that case, use triple tildes (`~~~`) instead. - For a complete reference on code blocks, check the [Kramdown guide](https://about.gitlab.com/handbook/product/technical-writing/markdown-guide/#code-blocks). ## Alert boxes @@ -595,7 +707,7 @@ In most cases, content considered for a note should be included: #### When to use Use a note when there is a reason that most or all readers who browse the -section should see the content. That is, if missed, it’s likely to cause +section should see the content. That is, if missed, it’s likely to cause major trouble for a minority of users or significant trouble for a majority of users. @@ -731,24 +843,24 @@ a helpful link back to how the feature was developed. - For features that need to declare the GitLab version that the feature was introduced. Text similar to the following should be added immediately below the heading as a blockquote: - ```md - > Introduced in GitLab 11.3. - ``` + ```md + > Introduced in GitLab 11.3. 
+ ``` - Whenever possible, version text should have a link to the issue, merge request, or epic that introduced the feature. An issue is preferred over a merge request, and a merge request is preferred over an epic. For example: - ```md - > [Introduced](<link-to-issue>) in GitLab 11.3. - ``` + ```md + > [Introduced](<link-to-issue>) in GitLab 11.3. + ``` - If the feature is only available in GitLab Enterprise Edition, mention the [paid tier](https://about.gitlab.com/handbook/marketing/product-marketing/#tiers) the feature is available in: - ```md - > [Introduced](<link-to-issue>) in [GitLab Starter](https://about.gitlab.com/pricing/) 11.3. - ``` + ```md + > [Introduced](<link-to-issue>) in [GitLab Starter](https://about.gitlab.com/pricing/) 11.3. + ``` ### Removing version text @@ -767,10 +879,10 @@ Other text includes deprecation notices and version-specific how-to information. When a feature is available in EE-only tiers, add the corresponding tier according to the feature availability: +- For GitLab Core and GitLab.com Free: `**(CORE)**`. - For GitLab Starter and GitLab.com Bronze: `**(STARTER)**`. - For GitLab Premium and GitLab.com Silver: `**(PREMIUM)**`. - For GitLab Ultimate and GitLab.com Gold: `**(ULTIMATE)**`. -- For GitLab Core and GitLab.com Free: `**(CORE)**`. To exclude GitLab.com tiers (when the feature is not available in GitLab.com), add the keyword "only": @@ -782,6 +894,7 @@ keyword "only": For GitLab.com only tiers (when the feature is not available for self-hosted instances): +- For GitLab Free and higher tiers: `**(FREE ONLY)**`. - For GitLab Bronze and higher tiers: `**(BRONZE ONLY)**`. - For GitLab Silver and higher tiers: `**(SILVER ONLY)**`. - For GitLab Gold: `**(GOLD ONLY)**`. @@ -801,7 +914,7 @@ GitLab.com Free, and all higher tiers. ### How it works -Introduced by [!244](https://gitlab.com/gitlab-com/gitlab-docs/merge_requests/244), +Introduced by [!244](https://gitlab.com/gitlab-org/gitlab-docs/merge_requests/244), the special markup `**(STARTER)**` will generate a `span` element to trigger the badges and tooltips (`<span class="badge-trigger starter">`). When the keyword "only" is added, the corresponding GitLab.com badge will not be displayed. @@ -855,14 +968,14 @@ When there is a list of steps to perform, usually that entails editing the configuration file and reconfiguring/restarting GitLab. In such case, follow the style below as a guide: -```md +````md **For Omnibus installations** 1. Edit `/etc/gitlab/gitlab.rb`: - ```ruby - external_url "https://gitlab.example.com" - ``` + ```ruby + external_url "https://gitlab.example.com" + ``` 1. Save the file and [reconfigure] GitLab for the changes to take effect. @@ -872,17 +985,16 @@ the style below as a guide: 1. Edit `config/gitlab.yml`: - ```yaml - gitlab: - host: "gitlab.example.com" - ``` + ```yaml + gitlab: + host: "gitlab.example.com" + ``` 1. Save the file and [restart] GitLab for the changes to take effect. - [reconfigure]: path/to/administration/restart_gitlab.md#omnibus-gitlab-reconfigure [restart]: path/to/administration/restart_gitlab.md#installations-from-source -``` +```` In this case: @@ -901,9 +1013,9 @@ on this document. Further explanation is given below. - Every method must have the REST API request. For example: - ``` - GET /projects/:id/repository/branches - ``` + ``` + GET /projects/:id/repository/branches + ``` - Every method must have a detailed [description of the parameters](#method-description). @@ -914,7 +1026,7 @@ on this document. Further explanation is given below. 
The following can be used as a template to get started: -````md +~~~md ## Descriptive title One or two sentence description of what endpoint does. @@ -942,7 +1054,7 @@ Example response: } ] ``` -```` +~~~ ### Fake tokens @@ -955,7 +1067,7 @@ You can use the following fake tokens as examples. | Token type | Token value | |:----------------------|:-------------------------------------------------------------------| -| Private user token | `<your_access_token>` | +| Private user token | `<your_access_token>` | | Personal access token | `n671WNGecHugsdEDPsyo` | | Application ID | `2fcb195768c39e9a94cec2c2e32c59c0aad7a3365c10892e8116b5d83d4096b6` | | Application secret | `04f294d1eaca42b8692017b426d53bbc8fe75f827734f0260710b83a556082df` | @@ -970,7 +1082,7 @@ You can use the following fake tokens as examples. ### Method description Use the following table headers to describe the methods. Attributes should -always be in code blocks using backticks (``` ` ```). +always be in code blocks using backticks (`` ` ``). ```md | Attribute | Type | Required | Description | @@ -1076,6 +1188,6 @@ curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "domain_ [cURL]: http://curl.haxx.se/ "cURL website" [single spaces]: http://www.slate.com/articles/technology/technology/2011/01/space_invaders.html -[gfm]: https://docs.gitlab.com/ce/user/markdown.html#newlines "GitLab flavored markdown documentation" +[gfm]: ../../user/markdown.md#newlines "GitLab flavored markdown documentation" [ce-1242]: https://gitlab.com/gitlab-org/gitlab-ce/issues/1242 [doc-restart]: ../../administration/restart_gitlab.md "GitLab restart documentation" diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md index 0abfe4b82a4..9f488fac7d0 100644 --- a/doc/development/documentation/workflow.md +++ b/doc/development/documentation/workflow.md @@ -6,5 +6,5 @@ description: Learn the processes for contributing to GitLab's documentation. Documentation workflows at GitLab differ depending on the reason for the change: -- [Documentation process for feature changes](feature-change-workflow.md) - The documentation is being created or updated as part of the development and release of a new or enhanced feature. This process involves the developer of the feature (who includes new/updated documentation files as part of the same merge request containing the feature's code) and also involves the product manager and technical writer who are listed for the feature's [DevOps stage](https://about.gitlab.com/handbook/product/categories/#devops-stages). +- [Documentation process for feature changes](feature-change-workflow.md) - The documentation is being created or updated as part of the development and release of a new or enhanced feature. This process involves the developer of the feature (who includes new/updated documentation files as part of the same merge request containing the feature's code) and also involves the product manager and technical writer who are listed for the feature's [DevOps stage](https://about.gitlab.com/handbook/product/categories/#devops-stages). - [Documentation improvement workflow](improvement-workflow.md) - All documentation additions not associated with a feature release. Documentation is being created or updated to improve accuracy, completeness, ease of use, or any reason other than a feature change. Anyone (and everyone) can contribute a merge request for this type of change at any time. 
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md index 7131b717353..1358851f3cd 100644 --- a/doc/development/ee_features.md +++ b/doc/development/ee_features.md @@ -125,20 +125,24 @@ This also applies to views. ### EE features based on CE features For features that build on existing CE features, write a module in the `EE` -namespace and `prepend` it in the CE class, on the last line of the file that -the class resides in. This makes conflicts less likely to happen during CE to EE -merges because only one line is added to the CE class - the `prepend` line. For -example, to prepend a module into the `User` class you would use the following -approach: +namespace and inject it in the CE class, on the last line of the file that the +class resides in. This makes conflicts less likely to happen during CE to EE +merges because only one line is added to the CE class - the line that injects +the module. For example, to prepend a module into the `User` class you would use +the following approach: ```ruby class User < ActiveRecord::Base # ... lots of code here ... end -User.prepend(EE::User) +User.prepend_if_ee('EE::User') ``` +Do not use methods such as `prepend`, `extend`, and `include`. Instead, use +`prepend_if_ee`, `extend_if_ee`, or `include_if_ee`. These methods take a +_String_ containing the full module name as the argument, not the module itself. + Since the module would require an `EE` namespace, the file should also be put in an `ee/` sub-directory. For example, we want to extend the user model in EE, so we have a module called `::EE::User` put inside @@ -255,7 +259,7 @@ class ApplicationController < ActionController::Base # ... end -ApplicationController.prepend(EE::ApplicationController) +ApplicationController.prepend_if_ee('EE::ApplicationController') ``` And create a new file in the `ee/` sub-directory with the altered @@ -504,9 +508,9 @@ EE-specific LDAP classes in `ee/lib/ee/gitlab/ldap`. ### Code in `lib/api/` -It can be very tricky to extend EE features by a single line of `prepend`, -and for each different [Grape](https://github.com/ruby-grape/grape) feature, -we might need different strategies to extend it. To apply different strategies +It can be very tricky to extend EE features by a single line of `prepend_if_ee`, +and for each different [Grape](https://github.com/ruby-grape/grape) feature, we +might need different strategies to extend it. To apply different strategies easily, we would use `extend ActiveSupport::Concern` in the EE module. Put the EE module files following @@ -543,12 +547,12 @@ constants. We can define `params` and utilize `use` in another `params` definition to include params defined in EE. However, we need to define the "interface" first in CE in order for EE to override it. We don't have to do this in other places -due to `prepend`, but Grape is complex internally and we couldn't easily do -that, so we'll follow regular object-oriented practices that we define the +due to `prepend_if_ee`, but Grape is complex internally and we couldn't easily +do that, so we'll follow regular object-oriented practices that we define the interface first here. For example, suppose we have a few more optional params for EE. We can move the -params out of the `Grape::API` class to a helper module, so we can `prepend` it +params out of the `Grape::API` class to a helper module, so we can inject it before it would be used in the class. 
```ruby @@ -583,7 +587,7 @@ module API end end -API::Helpers::ProjectsHelpers.prepend(EE::API::Helpers::ProjectsHelpers) +API::Helpers::ProjectsHelpers.prepend_if_ee('EE::API::Helpers::ProjectsHelpers') ``` We could override it in EE module: @@ -624,7 +628,7 @@ module API end end -API::JobArtifacts.prepend(EE::API::JobArtifacts) +API::JobArtifacts.prepend_if_ee('EE::API::JobArtifacts') ``` And then we can follow regular object-oriented practices to override it: @@ -677,7 +681,7 @@ module API end end -API::MergeRequests.prepend(EE::API::MergeRequests) +API::MergeRequests.prepend_if_ee('EE::API::MergeRequests') ``` Note that `update_merge_request_ee` doesn't do anything in CE, but @@ -717,8 +721,8 @@ Sometimes we need to use different arguments for a particular API route, and we can't easily extend it with an EE module because Grape has different context in different blocks. In order to overcome this, we need to move the data to a class method that resides in a separate module or class. This allows us to extend that -module or class before its data is used, without having to place a `prepend` in -the middle of CE code. +module or class before its data is used, without having to place a +`prepend_if_ee` in the middle of CE code. For example, in one place we need to pass an extra argument to `at_least_one_of` so that the API could consider an EE-only argument as the @@ -739,7 +743,7 @@ module API end end -API::MergeRequests::Parameters.prepend(EE::API::MergeRequests::Parameters) +API::MergeRequests::Parameters.prepend_if_ee('EE::API::MergeRequests::Parameters') # api/merge_requests.rb module API @@ -789,7 +793,7 @@ class Identity < ActiveRecord::Base [:provider] end - prepend EE::Identity + prepend_if_ee('EE::Identity') validates :extern_uid, allow_blank: true, @@ -841,7 +845,7 @@ class Identity < ActiveRecord::Base end end -Identity::UniquenessScopes.prepend(EE::Identity::UniquenessScopes) +Identity::UniquenessScopes.prepend_if_ee('EE::Identity::UniquenessScopes') # app/models/identity.rb class Identity < ActiveRecord::Base @@ -906,7 +910,7 @@ import bundle from 'ee_else_ce/protected_branches/protected_branches_bundle.js'; ``` See the frontend guide [performance section](fe_guide/performance.md) for -information on managing page-specific javascript within EE. +information on managing page-specific JavaScript within EE. ## Vue code in `assets/javascript` @@ -941,7 +945,7 @@ export default { - Since we [can't async load a mixin](https://github.com/vuejs/vue-loader/issues/418#issuecomment-254032223) we will use the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) alias we already have for webpack. - This means all the EE specific props, computed properties, methods, etc that are EE only should be in a mixin in the `ee/` folder and we need to create a CE counterpart of the mixin -##### Example: +##### Example ```javascript import mixin from 'ee_else_ce/path/mixin'; @@ -972,7 +976,7 @@ For regular JS files, the approach is similar. 1. An EE file should be created with the EE only code, and it should extend the CE counterpart. 1. For code inside functions that can't be extended, the code should be moved into a new file and we should use `ee_else_ce` helper: -#### Example: +#### Example ```javascript import eeCode from 'ee_else_ce/ee_code'; @@ -1053,7 +1057,7 @@ Here is a workflow to make sure those changes end up backported safely into CE t **Note:** regarding SCSS, make sure the files living outside `/ee/` don't diverge between CE and EE projects. 
-## gitlab-svgs +## GitLab-svgs Conflicts in `app/assets/images/icons.json` or `app/assets/images/icons.svg` can be resolved simply by regenerating those assets with diff --git a/doc/development/elasticsearch.md b/doc/development/elasticsearch.md index 0965db29557..f2412c249c1 100644 --- a/doc/development/elasticsearch.md +++ b/doc/development/elasticsearch.md @@ -40,9 +40,11 @@ There is no need to install any plugins If you're interested on working with the new beta repo indexer, all you need to do is: -- git clone git@gitlab.com:gitlab-org/gitlab-elasticsearch-indexer.git -- make -- make install +```sh +git clone git@gitlab.com:gitlab-org/gitlab-elasticsearch-indexer.git +make +make install +``` this adds `gitlab-elasticsearch-indexer` to `$GOPATH/bin`, please make sure that is in your `$PATH`. After that GitLab will find it and you'll be able to enable it in the admin settings area. @@ -148,6 +150,59 @@ Uses an [Edge NGram token filter](https://www.elastic.co/guide/en/elasticsearch/ - Searches can have their own analyzers. Remember to check when editing analyzers - `Character` filters (as opposed to token filters) always replace the original character, so they're not a good choice as they can hinder exact searches +## Zero downtime reindexing with multiple indices + +Currently GitLab can only handle a single version of setting. Any setting/schema changes would require reindexing everything from scratch. Since reindexing can take a long time, this can cause search functionality downtime. + +To avoid downtime, GitLab is working to support multiple indices that +can function at the same time. Whenever the schema changes, the admin +will be able to create a new index and reindex to it, while searches +continue to go to the older, stable index. Any data updates will be +forwarded to both indices. Once the new index is ready, an admin can +mark it active, which will direct all searches to it, and remove the old +index. + +This is also helpful for migrating to new servers, e.g. moving to/from AWS. + +Currently we are on the process of migrating to this new design. Everything is hardwired to work with one single version for now. + +### Architecture + +The traditional setup, provided by `elasticsearch-rails`, is to communicate through its internal proxy classes. Developers would write model-specific logic in a module for the model to include in (e.g. `SnippetsSearch`). The `__elasticsearch__` methods would return a proxy object, e.g.: + +- `Issue.__elasticsearch__` returns an instance of `Elasticsearch::Model::Proxy::ClassMethodsProxy` +- `Issue.first.__elasticsearch__` returns an instance of `Elasticsearch::Model::Proxy::InstanceMethodsProxy`. + +These proxy objects would talk to Elasticsearch server directly (see top half of the diagram). + + + +In the planned new design, each model would have a pair of corresponding subclassed proxy objects, in which model-specific logic is located. For example, `Snippet` would have `SnippetClassProxy` and `SnippetInstanceProxy` (being subclass of `Elasticsearch::Model::Proxy::ClassMethodsProxy` and `Elasticsearch::Model::Proxy::InstanceMethodsProxy`, respectively). + +`__elasticsearch__` would represent another layer of proxy object, keeping track of multiple actual proxy objects. It would forward method calls to the appropriate index. For example: + +- `model.__elasticsearch__.search` would be forwarded to the one stable index, since it is a read operation. 
+- `model.__elasticsearch__.update_document` would be forwarded to all indices, to keep all indices up-to-date. + +The global configurations per version are now in the `Elastic::(Version)::Config` class. You can change mappings there. + +### Creating new version of schema + +NOTE: **Note:** this is not applicable yet as multiple indices functionality is not fully implemented. + +Folders like `ee/lib/elastic/v12p1` contain snapshots of search logic from different versions. To keep a continuous Git history, the latest version lives under `ee/lib/elastic/latest`, but its classes are aliased under an actual version (e.g. `ee/lib/elastic/v12p3`). When referencing these classes, never use the `Latest` namespace directly, but use the actual version (e.g. `V12p3`). + +The version name basically follows GitLab's release version. If setting is changed in 12.3, we will create a new namespace called `V12p3` (p stands for "point"). Raise an issue if there is a need to name a version differently. + +If the current version is `v12p1`, and we need to create a new version for `v12p3`, the steps are as follows: + +1. Copy the entire folder of `v12p1` as `v12p3` +1. Change the namespace for files under `v12p3` folder from `V12p1` to `V12p3` (which are still aliased to `Latest`) +1. Delete `v12p1` folder +1. Copy the entire folder of `latest` as `v12p1` +1. Change the namespace for files under `v12p1` folder from `Latest` to `V12p1` +1. Make changes to files under the `latest` folder as needed + ## Troubleshooting ### Getting `flood stage disk watermark [95%] exceeded` diff --git a/doc/development/emails.md b/doc/development/emails.md index e6af075a282..5676c3b32f4 100644 --- a/doc/development/emails.md +++ b/doc/development/emails.md @@ -5,6 +5,10 @@ To view rendered emails "sent" in your development instance, visit [`/rails/letter_opener`](http://localhost:3000/rails/letter_opener). +Please note that [S/MIME signed](../administration/smime_signing_email.md) emails +[cannot be currently previewed](https://github.com/fgrehm/letter_opener_web/issues/96) with +`letter_opener`. + ## Mailer previews Rails provides a way to preview our mailer templates in HTML and plaintext using diff --git a/doc/development/fe_guide/architecture.md b/doc/development/fe_guide/architecture.md index 49b74b5ebcf..3d27f67a8a6 100644 --- a/doc/development/fe_guide/architecture.md +++ b/doc/development/fe_guide/architecture.md @@ -11,7 +11,7 @@ Architectural decisions should be accessible to everyone, so please document them in the relevant Merge Request discussion or by updating our documentation when appropriate. -You can find the Frontend Architecture experts on the [team page](https://about.gitlab.com/company/team). +You can find the Frontend Architecture experts on the [team page](https://about.gitlab.com/company/team/). ## Examples diff --git a/doc/development/fe_guide/axios.md b/doc/development/fe_guide/axios.md index 0d9397c3bd5..6e7cf523f36 100644 --- a/doc/development/fe_guide/axios.md +++ b/doc/development/fe_guide/axios.md @@ -1,15 +1,18 @@ # Axios + We use [axios][axios] to communicate with the server in Vue applications and most new code. In order to guarantee all defaults are set you *should not use `axios` directly*, you should import `axios` from `axios_utils`. ## CSRF token + All our request require a CSRF token. To guarantee this token is set, we are importing [axios][axios], setting the token, and exporting `axios` . This exported module should be used instead of directly using `axios` to ensure the token is set. 
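As an illustration, here is a minimal sketch of the wrapper pattern described above, assuming the token is exposed through the standard Rails `csrf-token` meta tag and sent back in the conventional `X-CSRF-Token` header; the real `axios_utils` module may differ in its details:

```javascript
// Hypothetical sketch, not the actual axios_utils source: read the CSRF token
// Rails renders into the page head, set it as a default header on a shared
// axios instance, and export that instance for application code to import.
import axios from 'axios';

const csrfTokenMeta = document.querySelector('meta[name="csrf-token"]');

if (csrfTokenMeta) {
  axios.defaults.headers.common['X-CSRF-Token'] = csrfTokenMeta.getAttribute('content');
}

export default axios;
```

Because application code imports this module rather than `axios` itself, the header is configured in exactly one place.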
## Usage + ```javascript import axios from './lib/utils/axios_utils'; @@ -35,7 +38,7 @@ Advantages over [`spyOn()`]: - no need to create response objects - does not allow call through (which we want to avoid) -- simple API to test error cases +- simple API to test error cases - provides `replyOnce()` to allow for different responses We have also decided against using [axios interceptors] because they are not suitable for mocking. diff --git a/doc/development/fe_guide/components.md b/doc/development/fe_guide/components.md index 52462a4bec9..096ce8ca25a 100644 --- a/doc/development/fe_guide/components.md +++ b/doc/development/fe_guide/components.md @@ -14,28 +14,29 @@ See also the [corresponding UX guide](https://design.gitlab.com/#/components/dro 1. Use the HTML structure provided by the [docs][bootstrap-dropdowns] 1. Add a specific class to the top level `.dropdown` element - ```Haml - .dropdown.my-dropdown - %button{ type: 'button', data: { toggle: 'dropdown' }, 'aria-haspopup': true, 'aria-expanded': false } - %span.dropdown-toggle-text - Toggle Dropdown - = icon('chevron-down') - - %ul.dropdown-menu - %li - %a - item! - ``` - - Or use the helpers - ```Haml - .dropdown.my-dropdown - = dropdown_toggle('Toogle!', { toggle: 'dropdown' }) - = dropdown_content - %li - %a - item! - ``` + ```Haml + .dropdown.my-dropdown + %button{ type: 'button', data: { toggle: 'dropdown' }, 'aria-haspopup': true, 'aria-expanded': false } + %span.dropdown-toggle-text + Toggle Dropdown + = icon('chevron-down') + + %ul.dropdown-menu + %li + %a + item! + ``` + + Or use the helpers + + ```Haml + .dropdown.my-dropdown + = dropdown_toggle('Toogle!', { toggle: 'dropdown' }) + = dropdown_content + %li + %a + item! + ``` [bootstrap-dropdowns]: https://getbootstrap.com/docs/3.3/javascript/#dropdowns diff --git a/doc/development/fe_guide/design_patterns.md b/doc/development/fe_guide/design_patterns.md index 0342d16a87c..2f372f783f5 100644 --- a/doc/development/fe_guide/design_patterns.md +++ b/doc/development/fe_guide/design_patterns.md @@ -53,6 +53,7 @@ When writing a class that needs to manipulate the DOM guarantee a container opti This is useful when we need that class to be instantiated more than once in the same page. Bad: + ```javascript class Foo { constructor() { @@ -63,6 +64,7 @@ new Foo(); ``` Good: + ```javascript class Foo { constructor(opts) { @@ -72,6 +74,7 @@ class Foo { new Foo({ container: '.my-element' }); ``` + You can find an example of the above in this [class][container-class-example]; [container-class-example]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/assets/javascripts/mini_pipeline_graph_dropdown.js diff --git a/doc/development/fe_guide/droplab/droplab.md b/doc/development/fe_guide/droplab/droplab.md index 2f8c79abde1..1c6d895b3ab 100644 --- a/doc/development/fe_guide/droplab/droplab.md +++ b/doc/development/fe_guide/droplab/droplab.md @@ -25,6 +25,7 @@ If you do not provide any arguments, it will globally query and instantiate all <!-- ... --> <ul> ``` + ```js const droplab = new DropLab(); droplab.init(); @@ -45,6 +46,7 @@ You can add static list items. <li>Static value 2</li> <ul> ``` + ```js const droplab = new DropLab(); droplab.init(); @@ -62,6 +64,7 @@ a non-global instance of DropLab using the `DropLab.prototype.init` method. <!-- ... --> <ul> ``` + ```js const trigger = document.getElementById('trigger'); const list = document.getElementById('list'); @@ -79,6 +82,7 @@ You can also add hooks to an existing DropLab instance using `DropLab.prototype. 
<a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a> <ul id="list" data-dropdown><!-- ... --><ul> ``` + ```js const droplab = new DropLab(); @@ -109,6 +113,7 @@ for all `data-dynamic` dropdown lists tracked by that DropLab instance. <li><a href="#" data-id="{{id}}">{{text}}</a></li> </ul> ``` + ```js const droplab = new DropLab(); @@ -131,6 +136,7 @@ the data as the second argument and the `id` of the trigger element as the first <li><a href="#" data-id="{{id}}">{{text}}</a></li> </ul> ``` + ```js const droplab = new DropLab(); @@ -160,6 +166,7 @@ dropdown lists, one of which is dynamic. </ul> </div> ``` + ```js const droplab = new DropLab(); @@ -216,6 +223,7 @@ Some plugins require configuration values, the config object can be passed as th <a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a> <ul id="list" data-dropdown><!-- ... --><ul> ``` + ```js const droplab = new DropLab(); diff --git a/doc/development/fe_guide/droplab/plugins/ajax.md b/doc/development/fe_guide/droplab/plugins/ajax.md index b6a883ce6c4..4b76b207d88 100644 --- a/doc/development/fe_guide/droplab/plugins/ajax.md +++ b/doc/development/fe_guide/droplab/plugins/ajax.md @@ -17,18 +17,19 @@ Add the `Ajax` object to the plugins array of a `DropLab.prototype.init` or `Dro <a href="#" id="trigger" data-dropdown-trigger="#list">Toggle</a> <ul id="list" data-dropdown><!-- ... --><ul> ``` + ```js - const droplab = new DropLab(); +const droplab = new DropLab(); - const trigger = document.getElementById('trigger'); - const list = document.getElementById('list'); +const trigger = document.getElementById('trigger'); +const list = document.getElementById('list'); - droplab.addHook(trigger, list, [Ajax], { - Ajax: { - endpoint: '/some-endpoint', - method: 'setData', - }, - }); +droplab.addHook(trigger, list, [Ajax], { + Ajax: { + endpoint: '/some-endpoint', + method: 'setData', + }, +}); ``` Optionally you can set `loadingTemplate` to a HTML string. This HTML string will diff --git a/doc/development/fe_guide/droplab/plugins/filter.md b/doc/development/fe_guide/droplab/plugins/filter.md index 1f188c64fe4..b867394a241 100644 --- a/doc/development/fe_guide/droplab/plugins/filter.md +++ b/doc/development/fe_guide/droplab/plugins/filter.md @@ -17,25 +17,26 @@ Add the `Filter` object to the plugins array of a `DropLab.prototype.init` or `D <li><a href="#" data-id="{{id}}">{{text}}</a></li> <ul> ``` + ```js - const droplab = new DropLab(); - - const trigger = document.getElementById('trigger'); - const list = document.getElementById('list'); - - droplab.init(trigger, list, [Filter], { - Filter: { - template: 'text', - }, - }); - - droplab.addData('trigger', [{ - id: 0, - text: 'Jacob', - }, { - id: 1, - text: 'Jeff', - }]); +const droplab = new DropLab(); + +const trigger = document.getElementById('trigger'); +const list = document.getElementById('list'); + +droplab.init(trigger, list, [Filter], { + Filter: { + template: 'text', + }, +}); + +droplab.addData('trigger', [{ + id: 0, + text: 'Jacob', +}, { + id: 1, + text: 'Jeff', +}]); ``` Above, the input string will be compared against the `test` key of the passed data objects. 
diff --git a/doc/development/fe_guide/droplab/plugins/input_setter.md b/doc/development/fe_guide/droplab/plugins/input_setter.md index e4050213869..db492da478a 100644 --- a/doc/development/fe_guide/droplab/plugins/input_setter.md +++ b/doc/development/fe_guide/droplab/plugins/input_setter.md @@ -22,33 +22,34 @@ You can also set the `InputSetter` config to an array of objects, which will all <li><a href="#" data-id="{{id}}">{{text}}</a></li> <ul> ``` + ```js - const droplab = new DropLab(); - - const trigger = document.getElementById('trigger'); - const list = document.getElementById('list'); - - const input = document.getElementById('input'); - const div = document.getElementById('div'); - - droplab.init(trigger, list, [InputSetter], { - InputSetter: [{ - input: input, - valueAttribute: 'data-id', - } { - input: div, - valueAttribute: 'data-id', - inputAttribute: 'data-selected-id', - }], - }); - - droplab.addData('trigger', [{ - id: 0, - text: 'Jacob', - }, { - id: 1, - text: 'Jeff', - }]); +const droplab = new DropLab(); + +const trigger = document.getElementById('trigger'); +const list = document.getElementById('list'); + +const input = document.getElementById('input'); +const div = document.getElementById('div'); + +droplab.init(trigger, list, [InputSetter], { + InputSetter: [{ + input: input, + valueAttribute: 'data-id', + } { + input: div, + valueAttribute: 'data-id', + inputAttribute: 'data-selected-id', + }], +}); + +droplab.addData('trigger', [{ + id: 0, + text: 'Jacob', +}, { + id: 1, + text: 'Jeff', +}]); ``` Above, if the second list item was clicked, it would update the `#input` element diff --git a/doc/development/fe_guide/event_tracking.md b/doc/development/fe_guide/event_tracking.md index 6ab3fa4acf3..1b417d4c8c2 100644 --- a/doc/development/fe_guide/event_tracking.md +++ b/doc/development/fe_guide/event_tracking.md @@ -1,79 +1,76 @@ -# Event Tracking +# Event tracking -We use [Snowplow](https://github.com/snowplow/snowplow) for tracking custom events (available in GitLab [Enterprise Edition](https://about.gitlab.com/pricing/) only). +GitLab provides `Tracking`, an interface that wraps +[Snowplow](https://github.com/snowplow/snowplow) for tracking custom events. +It uses Snowplow's custom event tracking functions. -## Generic tracking function - -In addition to Snowplow's built-in method for tracking page views, we use a generic tracking function which enables us to selectively apply listeners to events. - -The generic tracking function can be imported in EE-specific JS files as follows: +The tracking interface can be imported in JS files as follows: ```javascript -import { trackEvent } from `ee/stats`; +import Tracking from `~/tracking`; ``` -This gives the user access to the `trackEvent` method, which takes the following parameters: +## Tracking in HAML or Vue templates -| parameter | type | description | required | -| ---------------- | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -| `category` | string | Describes the page that you're capturing click events on. Unless infeasible, please use the Rails page attribute `document.body.dataset.page` by default. | true | -| `eventName` | string | Describes the action the user is taking. 
The first word should always describe the action. For example, clicks should be `click` and activations should be `activate`. Use underscores to describe what was acted on. For example, activating a form field would be `activate_form_input`. Clicking on a dropdown is `click_dropdown`. | true | -| `additionalData` | object | Additional data such as `label`, `property`, and `value` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). | false | +To avoid having to do create a bunch of custom javascript event handlers, when working within HAML or Vue templates, we can add `data-track-*` attributes to elements of interest. This way, all elements that have a `data-track-event` attribute to automatically have event tracking bound. -Read more about instrumentation and the taxonomy in the [Product Handbook](https://about.gitlab.com/handbook/product/feature-instrumentation). - -### Tracking in `.js` and `.vue` files +Below is an example of `data-track-*` attributes assigned to a button in HAML: -The most simple use case is to add tracking programmatically to an event of interest in Javascript. +```haml +%button.btn{ data: { track_event: "click_button", track_label: "template_preview", track_property: "my-template", track_value: "" } } +``` -The following example demonstrates how to track a click on a button in Javascript by calling the `trackEvent` method explicitly: +We can then setup tracking for large sections of a page, or an entire page by telling the Tracking interface to bind to it. ```javascript -import { trackEvent } from `ee/stats`; +import Tracking from '~/tracking'; -trackEvent('dashboard:projects:index', 'click_button', { - label: 'create_from_template', - property: 'template_preview', - value: 'rails', +// for the entire document +new Tracking().bind(); + +// for a container element +document.addEventListener('DOMContentLoaded', () => { + new Tracking('my_category').bind(document.getElementById('my-container')); }); + ``` -### Tracking in HAML templates +When you instantiate a Tracking instance you can provide a category. If none is provided, `document.body.dataset.page` will be used. When you bind the Tracking instance you can provide an element. If no element is provided to bind to, the `document` is assumed. -Sometimes we want to track clicks for multiple elements on a page. Creating event handlers for all elements could soon turn into a tedious task. +Below is a list of supported `data-track-*` attributes: -There's a more convenient solution to this problem. When working with HAML templates, we can add `data-track-*` attributes to elements of interest. This way, all elements that have both `data-track-label` and `data-track-event` attributes assigned get marked for event tracking. All we have to do is call the `bindTrackableContainer` method on a container which allows for better scoping. +| attribute | required | description | +|:----------------------|:---------|:------------| +| `data-track-event` | true | Action the user is taking. Clicks should be `click` and activations should be `activate`, so for example, focusing a form field would be `activate_form_input`, and clicking a button would be `click_button`. | +| `data-track-label` | false | The `label` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). 
| +| `data-track-property` | false | The `property` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). | +| `data-track-value` | false | The `value` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). If omitted, this will be the elements `value` property or an empty string. For checkboxes, the default value will be the element's checked attribute or `false` when unchecked. | -Below is an example of `data-track-*` attributes assigned to a button in HAML: +## Tracking in raw Javascript -```ruby -%button.btn{ data: { track_label: "create_from_template", track_property: "template_preview", track_event: "click_button", track_value: "my-template" } } -``` +Custom events can be tracked by directly calling the `Tracking.event` static function, which accepts the following arguments: + +| argument | type | default value | description | +|:-----------|:-------|:---------------------------|:------------| +| `category` | string | document.body.dataset.page | Page or subsection of a page that events are being captured within. | +| `event` | string | 'generic' | Action the user is taking. Clicks should be `click` and activations should be `activate`, so for example, focusing a form field would be `activate_form_input`, and clicking a button would be `click_button`. | +| `data` | object | {} | Additional data such as `label`, `property`, and `value` as described [in our Feature Instrumentation taxonomy](https://about.gitlab.com/handbook/product/feature-instrumentation/#taxonomy). These will be set as empty strings if you don't provide them. | -By calling `bindTrackableContainer('.my-container')`, click handlers get bound to all elements located in `.my-container` provided that they have the necessary `data-track-*` attributes assigned to them. +Tracking can be programmatically added to an event of interest in Javascript, and the following example demonstrates tracking a click on a button by calling `Tracking.event` manually. ```javascript -import Stats from 'ee/stats'; +import Tracking from `~/tracking`; -document.addEventListener('DOMContentLoaded', () => { - Stats.bindTrackableContainer('.my-container', 'category'); -}); +document.getElementById('my_button').addEventListener('click', () => { + Tracking.event('dashboard:projects:index', 'click_button', { + label: 'create_from_template', + property: 'template_preview', + value: 'rails', + }); +}) ``` -The second parameter in `bindTrackableContainer` is optional. If omitted, the value of `document.body.dataset.page` will be used as category instead. - -Below is a list of supported `data-track-*` attributes: - -| attribute | description | required | -| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -| `data-track-label` | The `label` in `trackEvent` | true | -| `data-track-event` | The `eventName` in `trackEvent` | true | -| `data-track-property` | The `property` in `trackEvent`. If omitted, an empty string will be used as a default value. | false | -| `data-track-value` | The `value` in `trackEvent`. If omitted, this will be `target.value` or empty string. For checkboxes, the default value being tracked will be the element's checked attribute if `data-track-value` is omitted. 
| false | - -Since Snowplow is an Enterprise Edition feature, it's necessary to create a CE backport when adding `data-track-*` attributes to HAML templates in most cases. - -## Testing +## Toggling tracking on or off Snowplow can be enabled by navigating to: diff --git a/doc/development/fe_guide/frontend_faq.md b/doc/development/fe_guide/frontend_faq.md index e4225f2bc39..0d2aeffeac0 100644 --- a/doc/development/fe_guide/frontend_faq.md +++ b/doc/development/fe_guide/frontend_faq.md @@ -5,12 +5,12 @@ 1. **You talk about Frontend FAQ.** Please share links to it whenever applicable, so more eyes catch when content gets outdated. -2. **Keep it short and simple.** +1. **Keep it short and simple.** Whenever an answer needs more than two sentences it does not belong here. -3. **Provide background when possible.** +1. **Provide background when possible.** Linking to relevant source code, issue / epic, or other documentation helps to understand the answer. -4. **If you see something, do something.** +1. **If you see something, do something.** Please remove or update any content that is outdated as soon as you see it. ## FAQ diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md index 55b719227e5..4fc5dfc8c3d 100644 --- a/doc/development/fe_guide/graphql.md +++ b/doc/development/fe_guide/graphql.md @@ -47,7 +47,7 @@ new Vue({ }); ``` -Read more about [Vue Apollo][vue-apollo] in the [Vue Apollo documentation][vue-apollo-docs]. +Read more about [Vue Apollo][vue-apollo] in the [Vue Apollo documentation](https://vue-apollo.netlify.com/guide/). ### Local state with Apollo @@ -118,7 +118,6 @@ Read more about the [Apollo] client in the [Apollo documentation](https://www.ap [Apollo]: https://www.apollographql.com/ [vue-apollo]: https://github.com/Akryum/vue-apollo/ -[vue-apollo-docs]: https://akryum.github.io/vue-apollo/ [feature-flags]: ../feature_flags.md [default-client]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/assets/javascripts/lib/graphql.js [vue-test-utils]: https://vue-test-utils.vuejs.org/ diff --git a/doc/development/fe_guide/icons.md b/doc/development/fe_guide/icons.md index 533e2001300..4f687d8642e 100644 --- a/doc/development/fe_guide/icons.md +++ b/doc/development/fe_guide/icons.md @@ -21,10 +21,10 @@ To use a sprite Icon in HAML or Rails we use a specific helper function : sprite_icon(icon_name, size: nil, css_class: '') ``` -- **icon_name** Use the icon_name that you can find in the SVG Sprite - ([Overview is available here][svg-preview]). -- **size (optional)** Use one of the following sizes : 16, 24, 32, 48, 72 (this will be translated into a `s16` class) -- **css_class (optional)** If you want to add additional css classes +- **icon_name** Use the icon_name that you can find in the SVG Sprite + ([Overview is available here][svg-preview]). +- **size (optional)** Use one of the following sizes : 16, 24, 32, 48, 72 (this will be translated into a `s16` class) +- **css_class (optional)** If you want to add additional css classes **Example** @@ -65,10 +65,10 @@ export default { </template> ``` -- **name** Name of the Icon in the SVG Sprite ([Overview is available here][svg-preview]). -- **size (optional)** Number value for the size which is then mapped to a specific CSS class - (Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` css classes) -- **css-classes (optional)** Additional CSS Classes to add to the svg tag. +- **name** Name of the Icon in the SVG Sprite ([Overview is available here][svg-preview]). 
+- **size (optional)** Number value for the size which is then mapped to a specific CSS class + (Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` css classes) +- **css-classes (optional)** Additional CSS Classes to add to the svg tag. ### Usage in HTML/JS diff --git a/doc/development/fe_guide/index.md b/doc/development/fe_guide/index.md index 36d5e4ab96b..deaef8e768b 100644 --- a/doc/development/fe_guide/index.md +++ b/doc/development/fe_guide/index.md @@ -17,7 +17,7 @@ Working with our frontend assets requires Node (v8.10.0 or greater) and Yarn For our currently-supported browsers, see our [requirements][requirements]. ---- +Use [BrowserStack](https://www.browserstack.com/) to test with our supported browsers. Login to BrowserStack with the credentials saved in GitLab's [shared 1Password account](https://about.gitlab.com/handbook/security/#1password-for-teams). ## Initiatives @@ -77,8 +77,6 @@ How we use Snowplow to track custom events. Read the [frontend's FAQ](frontend_faq.md) for common small pieces of helpful information. ---- - ## Style Guides ### [JavaScript Style Guide](style_guide_js.md) @@ -91,20 +89,14 @@ changes. Our SCSS conventions which are enforced through [scss-lint][scss-lint]. ---- - ## [Performance](performance.md) Best practices for monitoring and maximizing frontend performance. ---- - ## [Security](security.md) Frontend security practices. ---- - ## [Accessibility](accessibility.md) Our accessibility standards and resources. diff --git a/doc/development/fe_guide/performance.md b/doc/development/fe_guide/performance.md index 2628e95dbc1..676bce32998 100644 --- a/doc/development/fe_guide/performance.md +++ b/doc/development/fe_guide/performance.md @@ -30,8 +30,8 @@ To improve the time to first render we are using lazy loading for images. This w the actual image source on the `data-src` attribute. After the HTML is rendered and JavaScript is loaded, the value of `data-src` will be moved to `src` automatically if the image is in the current viewport. -- Prepare images in HTML for lazy loading by renaming the `src` attribute to `data-src` AND adding the class `lazy`. -- If you are using the Rails `image_tag` helper, all images will be lazy-loaded by default unless `lazy: false` is provided. +- Prepare images in HTML for lazy loading by renaming the `src` attribute to `data-src` AND adding the class `lazy`. +- If you are using the Rails `image_tag` helper, all images will be lazy-loaded by default unless `lazy: false` is provided. If you are asynchronously adding content which contains lazy images then you need to call the function `gl.lazyLoader.searchLazyImages()` which will search for lazy images and load them if needed. @@ -96,26 +96,26 @@ bundle and included on the page. DOM has loaded, you should attach an event handler to the `DOMContentLoaded` event with: - ```javascript - import initMyWidget from './my_widget'; + ```javascript + import initMyWidget from './my_widget'; - document.addEventListener('DOMContentLoaded', () => { - initMyWidget(); - }); - ``` + document.addEventListener('DOMContentLoaded', () => { + initMyWidget(); + }); + ``` - **Supporting Module Placement:** - - If a class or a module is _specific to a particular route_, try to locate - it close to the entry point it will be used. For instance, if - `my_widget.js` is only imported within `pages/widget/show/index.js`, you - should place the module at `pages/widget/show/my_widget.js` and import it - with a relative path (e.g. `import initMyWidget from './my_widget';`). 
- - If a class or module is _used by multiple routes_, place it within a - shared directory at the closest common parent directory for the entry - points that import it. For example, if `my_widget.js` is imported within - both `pages/widget/show/index.js` and `pages/widget/run/index.js`, then - place the module at `pages/widget/shared/my_widget.js` and import it with - a relative path if possible (e.g. `../shared/my_widget`). + - If a class or a module is _specific to a particular route_, try to locate + it close to the entry point it will be used. For instance, if + `my_widget.js` is only imported within `pages/widget/show/index.js`, you + should place the module at `pages/widget/show/my_widget.js` and import it + with a relative path (e.g. `import initMyWidget from './my_widget';`). + - If a class or module is _used by multiple routes_, place it within a + shared directory at the closest common parent directory for the entry + points that import it. For example, if `my_widget.js` is imported within + both `pages/widget/show/index.js` and `pages/widget/run/index.js`, then + place the module at `pages/widget/shared/my_widget.js` and import it with + a relative path if possible (e.g. `../shared/my_widget`). - **Enterprise Edition Caveats:** For GitLab Enterprise Edition, page-specific entry points will override their @@ -161,7 +161,7 @@ General tips: - Use code-splitting dynamic imports wherever possible to lazy-load code that is not needed initially. - [High Performance Animations][high-perf-animations] -------- +--- ## Additional Resources diff --git a/doc/development/fe_guide/style_guide_js.md b/doc/development/fe_guide/style_guide_js.md index b50159c2b75..125b11afcd0 100644 --- a/doc/development/fe_guide/style_guide_js.md +++ b/doc/development/fe_guide/style_guide_js.md @@ -20,148 +20,148 @@ See [our current .eslintrc](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/ 1. **Never Ever EVER** disable eslint globally for a file - ```javascript - // bad - /* eslint-disable */ + ```javascript + // bad + /* eslint-disable */ - // better - /* eslint-disable some-rule, some-other-rule */ + // better + /* eslint-disable some-rule, some-other-rule */ - // best - // nothing :) - ``` + // best + // nothing :) + ``` 1. If you do need to disable a rule for a single violation, try to do it as locally as possible - ```javascript - // bad - /* eslint-disable no-new */ + ```javascript + // bad + /* eslint-disable no-new */ - import Foo from 'foo'; + import Foo from 'foo'; - new Foo(); + new Foo(); - // better - import Foo from 'foo'; + // better + import Foo from 'foo'; - // eslint-disable-next-line no-new - new Foo(); - ``` + // eslint-disable-next-line no-new + new Foo(); + ``` 1. There are few rules that we need to disable due to technical debt. Which are: - 1. [no-new][eslint-new] - 1. [class-methods-use-this][eslint-this] + 1. [no-new](https://eslint.org/docs/rules/no-new) + 1. [class-methods-use-this](https://eslint.org/docs/rules/class-methods-use-this) 1. When they are needed _always_ place ESlint directive comment blocks on the first line of a script, followed by any global declarations, then a blank newline prior to any imports or code. 
- ```javascript - // bad - /* global Foo */ - /* eslint-disable no-new */ - import Bar from './bar'; + ```javascript + // bad + /* global Foo */ + /* eslint-disable no-new */ + import Bar from './bar'; - // good - /* eslint-disable no-new */ - /* global Foo */ + // good + /* eslint-disable no-new */ + /* global Foo */ - import Bar from './bar'; - ``` + import Bar from './bar'; + ``` 1. **Never** disable the `no-undef` rule. Declare globals with `/* global Foo */` instead. 1. When declaring multiple globals, always use one `/* global [name] */` line per variable. - ```javascript - // bad - /* globals Flash, Cookies, jQuery */ + ```javascript + // bad + /* globals Flash, Cookies, jQuery */ - // good - /* global Flash */ - /* global Cookies */ - /* global jQuery */ - ``` + // good + /* global Flash */ + /* global Cookies */ + /* global jQuery */ + ``` 1. Use up to 3 parameters for a function or class. If you need more accept an Object instead. - ```javascript - // bad - fn(p1, p2, p3, p4) {} + ```javascript + // bad + fn(p1, p2, p3, p4) {} - // good - fn(options) {} - ``` + // good + fn(options) {} + ``` #### Modules, Imports, and Exports 1. Use ES module syntax to import modules - ```javascript - // bad - const SomeClass = require('some_class'); + ```javascript + // bad + const SomeClass = require('some_class'); - // good - import SomeClass from 'some_class'; + // good + import SomeClass from 'some_class'; - // bad - module.exports = SomeClass; + // bad + module.exports = SomeClass; - // good - export default SomeClass; - ``` + // good + export default SomeClass; + ``` - Import statements are following usual naming guidelines, for example object literals use camel case: + Import statements are following usual naming guidelines, for example object literals use camel case: - ```javascript - // some_object file - export default { - key: 'value', - }; + ```javascript + // some_object file + export default { + key: 'value', + }; - // bad - import ObjectLiteral from 'some_object'; + // bad + import ObjectLiteral from 'some_object'; - // good - import objectLiteral from 'some_object'; - ``` + // good + import objectLiteral from 'some_object'; + ``` 1. Relative paths: when importing a module in the same directory, a child directory, or an immediate parent directory prefer relative paths. When importing a module which is two or more levels up, prefer either `~/` or `ee/`. 
- In **app/assets/javascripts/my-feature/subdir**: + In **app/assets/javascripts/my-feature/subdir**: - ```javascript - // bad - import Foo from '~/my-feature/foo'; - import Bar from '~/my-feature/subdir/bar'; - import Bin from '~/my-feature/subdir/lib/bin'; + ```javascript + // bad + import Foo from '~/my-feature/foo'; + import Bar from '~/my-feature/subdir/bar'; + import Bin from '~/my-feature/subdir/lib/bin'; - // good - import Foo from '../foo'; - import Bar from './bar'; - import Bin from './lib/bin'; - ``` + // good + import Foo from '../foo'; + import Bar from './bar'; + import Bin from './lib/bin'; + ``` - In **spec/javascripts**: + In **spec/javascripts**: - ```javascript - // bad - import Foo from '../../app/assets/javascripts/my-feature/foo'; + ```javascript + // bad + import Foo from '../../app/assets/javascripts/my-feature/foo'; - // good - import Foo from '~/my-feature/foo'; - ``` + // good + import Foo from '~/my-feature/foo'; + ``` - When referencing an **EE component**: + When referencing an **EE component**: - ```javascript - // bad - import Foo from '../../../../../ee/app/assets/javascripts/my-feature/ee-foo'; + ```javascript + // bad + import Foo from '../../../../../ee/app/assets/javascripts/my-feature/ee-foo'; - // good - import Foo from 'ee/my-feature/foo'; - ``` + // good + import Foo from 'ee/my-feature/foo'; + ``` 1. Avoid using IIFE. Although we have a lot of examples of files which wrap their contents in IIFEs (immediately-invoked function expressions), @@ -170,136 +170,136 @@ See [our current .eslintrc](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/ 1. Avoid adding to the global namespace. - ```javascript - // bad - window.MyClass = class { /* ... */ }; + ```javascript + // bad + window.MyClass = class { /* ... */ }; - // good - export default class MyClass { /* ... */ } - ``` + // good + export default class MyClass { /* ... */ } + ``` 1. Side effects are forbidden in any script which contains export - ```javascript - // bad - export default class MyClass { /* ... */ } + ```javascript + // bad + export default class MyClass { /* ... */ } - document.addEventListener("DOMContentLoaded", function(event) { - new MyClass(); - } - ``` + document.addEventListener("DOMContentLoaded", function(event) { + new MyClass(); + } + ``` #### Data Mutation and Pure functions 1. Strive to write many small pure functions, and minimize where mutations occur. - ```javascript - // bad - const values = {foo: 1}; + ```javascript + // bad + const values = {foo: 1}; - function impureFunction(items) { - const bar = 1; + function impureFunction(items) { + const bar = 1; - items.foo = items.a * bar + 2; + items.foo = items.a * bar + 2; - return items.a; - } + return items.a; + } - const c = impureFunction(values); + const c = impureFunction(values); - // good - var values = {foo: 1}; + // good + var values = {foo: 1}; - function pureFunction (foo) { - var bar = 1; + function pureFunction (foo) { + var bar = 1; - foo = foo * bar + 2; + foo = foo * bar + 2; - return foo; - } + return foo; + } - var c = pureFunction(values.foo); + var c = pureFunction(values.foo); ``` 1. Avoid constructors with side-effects. Although we aim for code without side-effects we need some side-effects for our code to run. - If the class won't do anything if we only instantiate it, it's ok to add side effects into the constructor (_Note:_ The following is just an example. If the only purpose of the class is to add an event listener and handle the callback a function will be more suitable.) 
- - ```javascript - // Bad - export class Foo { - constructor() { - this.init(); - } - init() { - document.addEventListener('click', this.handleCallback) - }, - handleCallback() { - - } - } - - // Good - export class Foo { - constructor() { - document.addEventListener() - } - handleCallback() { - } - } - ``` - - On the other hand, if a class only needs to extend a third party/add event listeners in some specific cases, they should be initialized outside of the constructor. + If the class won't do anything if we only instantiate it, it's ok to add side effects into the constructor (_Note:_ The following is just an example. If the only purpose of the class is to add an event listener and handle the callback a function will be more suitable.) + + ```javascript + // Bad + export class Foo { + constructor() { + this.init(); + } + init() { + document.addEventListener('click', this.handleCallback) + }, + handleCallback() { + + } + } + + // Good + export class Foo { + constructor() { + document.addEventListener() + } + handleCallback() { + } + } + ``` + + On the other hand, if a class only needs to extend a third party/add event listeners in some specific cases, they should be initialized outside of the constructor. 1. Prefer `.map`, `.reduce` or `.filter` over `.forEach` A forEach will most likely cause side effects, it will be mutating the array being iterated. Prefer using `.map`, `.reduce` or `.filter` - ```javascript - const users = [ { name: 'Foo' }, { name: 'Bar' } ]; + ```javascript + const users = [ { name: 'Foo' }, { name: 'Bar' } ]; - // bad - users.forEach((user, index) => { - user.id = index; - }); + // bad + users.forEach((user, index) => { + user.id = index; + }); - // good - const usersWithId = users.map((user, index) => { - return Object.assign({}, user, { id: index }); - }); - ``` + // good + const usersWithId = users.map((user, index) => { + return Object.assign({}, user, { id: index }); + }); + ``` #### Parse Strings into Numbers 1. `parseInt()` is preferable over `Number()` or `+` - ```javascript - // bad - +'10' // 10 + ```javascript + // bad + +'10' // 10 - // good - Number('10') // 10 + // good + Number('10') // 10 - // better - parseInt('10', 10); - ``` + // better + parseInt('10', 10); + ``` #### CSS classes used for JavaScript 1. If the class is being used in Javascript it needs to be prepend with `js-` - ```html - // bad - <button class="add-user"> - Add User - </button> + ```html + // bad + <button class="add-user"> + Add User + </button> - // good - <button class="js-add-user"> - Add User - </button> - ``` + // good + <button class="js-add-user"> + Add User + </button> + ``` ### Vue.js @@ -314,43 +314,44 @@ Please check this [rules][eslint-plugin-vue-rules] for more documentation. 1. The store has it's own file 1. Use a function in the bundle file to instantiate the Vue component: - ```javascript - // bad - class { - init() { - new Component({}) - } - } - - // good - document.addEventListener('DOMContentLoaded', () => new Vue({ - el: '#element', - components: { - componentName - }, - render: createElement => createElement('component-name'), - })); - ``` + ```javascript + // bad + class { + init() { + new Component({}) + } + } + + // good + document.addEventListener('DOMContentLoaded', () => new Vue({ + el: '#element', + components: { + componentName + }, + render: createElement => createElement('component-name'), + })); + ``` 1. 
Do not use a singleton for the service or the store - ```javascript - // bad - class Store { - constructor() { - if (!this.prototype.singleton) { - // do something - } - } - } + ```javascript + // bad + class Store { + constructor() { + if (!this.prototype.singleton) { + // do something + } + } + } + + // good + class Store { + constructor() { + // do something + } + } + ``` - // good - class Store { - constructor() { - // do something - } - } - ``` 1. Use `.vue` for Vue templates. Do not use `%template` in HAML. #### Naming @@ -358,38 +359,38 @@ Please check this [rules][eslint-plugin-vue-rules] for more documentation. 1. **Extensions**: Use `.vue` extension for Vue components. Do not use `.js` as file extension ([#34371]). 1. **Reference Naming**: Use PascalCase for their instances: - ```javascript - // bad - import cardBoard from 'cardBoard.vue' + ```javascript + // bad + import cardBoard from 'cardBoard.vue' - components: { - cardBoard, - }; + components: { + cardBoard, + }; - // good - import CardBoard from 'cardBoard.vue' + // good + import CardBoard from 'cardBoard.vue' - components: { - CardBoard, - }; - ``` + components: { + CardBoard, + }; + ``` 1. **Props Naming:** Avoid using DOM component prop names. 1. **Props Naming:** Use kebab-case instead of camelCase to provide props in templates. - ```javascript - // bad - <component class="btn"> + ```javascript + // bad + <component class="btn"> - // good - <component css-class="btn"> + // good + <component css-class="btn"> - // bad - <component myProp="prop" /> + // bad + <component myProp="prop" /> - // good - <component my-prop="prop" /> - ``` + // good + <component my-prop="prop" /> + ``` [#34371]: https://gitlab.com/gitlab-org/gitlab-ce/issues/34371 @@ -399,205 +400,205 @@ Please check this [rules][eslint-plugin-vue-rules] for more documentation. 1. With more than one attribute, all attributes should be on a new line: - ```javascript - // bad - <component v-if="bar" - param="baz" /> + ```javascript + // bad + <component v-if="bar" + param="baz" /> - <button class="btn">Click me</button> + <button class="btn">Click me</button> - // good - <component - v-if="bar" - param="baz" - /> + // good + <component + v-if="bar" + param="baz" + /> - <button class="btn"> - Click me - </button> - ``` + <button class="btn"> + Click me + </button> + ``` 1. The tag can be inline if there is only one attribute: - ```javascript - // good - <component bar="bar" /> + ```javascript + // good + <component bar="bar" /> - // good - <component - bar="bar" - /> + // good + <component + bar="bar" + /> - // bad - <component - bar="bar" /> - ``` + // bad + <component + bar="bar" /> + ``` #### Quotes 1. Always use double quotes `"` inside templates and single quotes `'` for all other JS. - ```javascript - // bad - template: ` - <button :class='style'>Button</button> - ` + ```javascript + // bad + template: ` + <button :class='style'>Button</button> + ` - // good - template: ` - <button :class="style">Button</button> - ` - ``` + // good + template: ` + <button :class="style">Button</button> + ` + ``` #### Props 1. Props should be declared as an object - ```javascript - // bad - props: ['foo'] + ```javascript + // bad + props: ['foo'] - // good - props: { - foo: { - type: String, - required: false, - default: 'bar' - } - } - ``` + // good + props: { + foo: { + type: String, + required: false, + default: 'bar' + } + } + ``` 1. 
Required key should always be provided when declaring a prop - ```javascript - // bad - props: { - foo: { - type: String, - } - } - - // good - props: { - foo: { - type: String, - required: false, - default: 'bar' - } - } - ``` + ```javascript + // bad + props: { + foo: { + type: String, + } + } + + // good + props: { + foo: { + type: String, + required: false, + default: 'bar' + } + } + ``` 1. Default key should be provided if the prop is not required. _Note:_ There are some scenarios where we need to check for the existence of the property. On those a default key should not be provided. - ```javascript - // good - props: { - foo: { - type: String, - required: false, - } - } - - // good - props: { - foo: { - type: String, - required: false, - default: 'bar' - } - } - - // good - props: { - foo: { - type: String, - required: true - } - } - ``` + ```javascript + // good + props: { + foo: { + type: String, + required: false, + } + } + + // good + props: { + foo: { + type: String, + required: false, + default: 'bar' + } + } + + // good + props: { + foo: { + type: String, + required: true + } + } + ``` #### Data 1. `data` method should always be a function - ```javascript - // bad - data: { - foo: 'foo' - } + ```javascript + // bad + data: { + foo: 'foo' + } - // good - data() { - return { - foo: 'foo' - }; - } - ``` + // good + data() { + return { + foo: 'foo' + }; + } + ``` #### Directives 1. Shorthand `@` is preferable over `v-on` - ```javascript - // bad - <component v-on:click="eventHandler"/> + ```javascript + // bad + <component v-on:click="eventHandler"/> - // good - <component @click="eventHandler"/> - ``` + // good + <component @click="eventHandler"/> + ``` -2. Shorthand `:` is preferable over `v-bind` +1. Shorthand `:` is preferable over `v-bind` - ```javascript - // bad - <component v-bind:class="btn"/> + ```javascript + // bad + <component v-bind:class="btn"/> - // good - <component :class="btn"/> - ``` + // good + <component :class="btn"/> + ``` -3. Shorthand `#` is preferable over `v-slot` +1. Shorthand `#` is preferable over `v-slot` - ```javascript - // bad - <template v-slot:header></template> + ```javascript + // bad + <template v-slot:header></template> - // good - <template #header></template> - ``` + // good + <template #header></template> + ``` #### Closing tags 1. Prefer self closing component tags - ```javascript - // bad - <component></component> + ```javascript + // bad + <component></component> - // good - <component /> - ``` + // good + <component /> + ``` #### Ordering 1. Tag order in `.vue` file - ``` - <script> - // ... - </script> - - <template> - // ... - </template> - - // We don't use scoped styles but there are few instances of this - <style> - // ... - </style> - ``` + ``` + <script> + // ... + </script> + + <template> + // ... + </template> + + // We don't use scoped styles but there are few instances of this + <style> + // ... + </style> + ``` 1. Properties in a Vue Component: Check [order of properties in components rule][vue-order]. @@ -608,50 +609,50 @@ When using `v-for` you need to provide a *unique* `:key` attribute for each item 1. If the elements of the array being iterated have an unique `id` it is advised to use it: - ```html - <div - v-for="item in items" - :key="item.id" - > - <!-- content --> - </div> - ``` + ```html + <div + v-for="item in items" + :key="item.id" + > + <!-- content --> + </div> + ``` 1. 
When the elements being iterated don't have a unique id, you can use the array index as the `:key` attribute - ```html - <div - v-for="(item, index) in items" - :key="index" - > - <!-- content --> - </div> - ``` + ```html + <div + v-for="(item, index) in items" + :key="index" + > + <!-- content --> + </div> + ``` 1. When using `v-for` with `template` and there is more than one child element, the `:key` values must be unique. It's advised to use `kebab-case` namespaces. - ```html - <template v-for="(item, index) in items"> - <span :key="`span-${index}`"></span> - <button :key="`button-${index}`"></button> - </template> - ``` + ```html + <template v-for="(item, index) in items"> + <span :key="`span-${index}`"></span> + <button :key="`button-${index}`"></button> + </template> + ``` 1. When dealing with nested `v-for` use the same guidelines as above. - ```html - <div - v-for="item in items" - :key="item.id" - > - <span - v-for="element in array" - :key="element.id" - > - <!-- content --> - </span> - </div> - ``` + ```html + <div + v-for="item in items" + :key="item.id" + > + <span + v-for="element in array" + :key="element.id" + > + <!-- content --> + </span> + </div> + ``` Useful links: @@ -662,35 +663,35 @@ Useful links: 1. Tooltips: Do not rely on `has-tooltip` class name for Vue components - ```javascript - // bad - <span - class="has-tooltip" - title="Some tooltip text"> - Text - </span> - - // good - <span - v-tooltip - title="Some tooltip text"> - Text - </span> - ``` + ```javascript + // bad + <span + class="has-tooltip" + title="Some tooltip text"> + Text + </span> + + // good + <span + v-tooltip + title="Some tooltip text"> + Text + </span> + ``` 1. Tooltips: When using a tooltip, include the tooltip directive, `./app/assets/javascripts/vue_shared/directives/tooltip.js` 1. Don't change `data-original-title`. - ```javascript - // bad - <span data-original-title="tooltip text">Foo</span> + ```javascript + // bad + <span data-original-title="tooltip text">Foo</span> - // good - <span title="tooltip text">Foo</span> + // good + <span title="tooltip text">Foo</span> - $('span').tooltip('_fixTitle'); - ``` + $('span').tooltip('_fixTitle'); + ``` ### The Javascript/Vue Accord @@ -713,8 +714,6 @@ The goal of this accord is to make sure we are all on the same page. [airbnb-js-style-guide]: https://github.com/airbnb/javascript [eslintrc]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.eslintrc -[eslint-this]: http://eslint.org/docs/rules/class-methods-use-this -[eslint-new]: http://eslint.org/docs/rules/no-new [eslint-plugin-vue]: https://github.com/vuejs/eslint-plugin-vue [eslint-plugin-vue-rules]: https://github.com/vuejs/eslint-plugin-vue#bulb-rules [vue-order]: https://github.com/vuejs/eslint-plugin-vue/blob/master/docs/rules/order-in-components.md diff --git a/doc/development/fe_guide/style_guide_scss.md b/doc/development/fe_guide/style_guide_scss.md index 5220c9eeea3..95c4a094c04 100644 --- a/doc/development/fe_guide/style_guide_scss.md +++ b/doc/development/fe_guide/style_guide_scss.md @@ -35,7 +35,7 @@ New utility classes should be added to [`utilities.scss`](https://gitlab.com/git We recommend a "utility-first" approach. 1. Start with utility classes. -2. If composing utility classes into a component class removes code duplication and encapsulates a clear responsibility, do it. +1. If composing utility classes into a component class removes code duplication and encapsulates a clear responsibility, do it. 
This encourages an organic growth of component classes and prevents the creation of one-off unreusable classes. Also, the kind of classes that emerge from "utility-first" tend to be design-centered (e.g. `.button`, `.alert`, `.card`) rather than domain-centered (e.g. `.security-report-widget`, `.commit-header-icon`). @@ -212,6 +212,7 @@ selectors are intended for use only with JavaScript to allow for removal or renaming without breaking styling. ### IDs + Don't use ID selectors in CSS. ```scss diff --git a/doc/development/fe_guide/vue.md b/doc/development/fe_guide/vue.md index 6c7572352ec..421b7265613 100644 --- a/doc/development/fe_guide/vue.md +++ b/doc/development/fe_guide/vue.md @@ -34,6 +34,7 @@ new_feature │ └── new_feature_store.js ├── index.js ``` + _For consistency purposes, we recommend you to follow the same structure._ Let's look into each of them: diff --git a/doc/development/fe_guide/vuex.md b/doc/development/fe_guide/vuex.md index bf248b7f8af..557d3132d71 100644 --- a/doc/development/fe_guide/vuex.md +++ b/doc/development/fe_guide/vuex.md @@ -1,15 +1,18 @@ # Vuex + To manage the state of an application you should use [Vuex][vuex-docs]. _Note:_ All of the below is explained in more detail in the official [Vuex documentation][vuex-docs]. ## Separation of concerns + Vuex is composed of State, Getters, Mutations, Actions and Modules. When a user clicks on an action, we need to `dispatch` it. This action will `commit` a mutation that will change the state. _Note:_ The action itself will not update the state, only a mutation should update the state. ## File structure + When using Vuex at GitLab, separate this concerns into different files to improve readability: ``` @@ -21,10 +24,12 @@ When using Vuex at GitLab, separate this concerns into different files to improv ├── state.js # state └── mutation_types.js # mutation types ``` + The following example shows an application that lists and adds users to the state. (For a more complex example implementation take a look at the security applications store in [here](https://gitlab.com/gitlab-org/gitlab-ee/tree/master/ee/app/assets/javascripts/vue_shared/security_reports/store)) ### `index.js` + This is the entry point for our store. You can use the following as a guide: ```javascript @@ -47,6 +52,7 @@ export default createStore(); ``` ### `state.js` + The first thing you should do before writing any code is to design the state. Often we need to provide data from haml to our Vue application. Let's store it in the state for better access. @@ -66,9 +72,11 @@ Often we need to provide data from haml to our Vue application. Let's store it i ``` #### Access `state` properties + You can use `mapState` to access state properties in the components. ### `actions.js` + An action is a payload of information to send data from our application to our store. An action is usually composed by a `type` and a `payload` and they describe what happened. @@ -110,6 +118,7 @@ In this file, we will write the actions that will call the respective mutations: ``` #### Actions Pattern: `request` and `receive` namespaces + When a request is made we often want to show a loading state to the user. Instead of creating an action to toggle the loading state and dispatch it in the component, @@ -136,6 +145,7 @@ By following this pattern we guarantee: 1. 
Actions are simple and straightforward #### Dispatching actions + To dispatch an action from a component, use the `mapActions` helper: ```javascript @@ -154,6 +164,7 @@ import { mapActions } from 'vuex'; ``` ### `mutations.js` + The mutations specify how the application state changes in response to actions sent to the store. The only way to change state in a Vuex store should be by committing a mutation. @@ -193,6 +204,7 @@ Remember that actions only describe that something happened, they don't describe ``` ### `getters.js` + Sometimes we may need to get derived state based on store state, like filtering for a specific prop. Using a getter will also cache the result based on dependencies due to [how computed props work](https://vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods) This can be done through the `getters`: @@ -219,6 +231,7 @@ import { mapGetters } from 'vuex'; ``` ### `mutation_types.js` + From [vuex mutations docs][vuex-mutations]: > It is a commonly seen pattern to use constants for mutation types in various Flux implementations. This allows the code to take advantage of tooling like linters, and putting all constants in a single file allows your collaborators to get an at-a-glance view of what mutations are possible in the entire application. @@ -227,6 +240,7 @@ export const ADD_USER = 'ADD_USER'; ``` ### How to include the store in your application + The store should be included in the main component of your application: ```javascript @@ -241,6 +255,7 @@ The store should be included in the main component of your application: ``` ### Communicating with the Store + ```javascript <script> import { mapActions, mapState, mapGetters } from 'vuex'; @@ -298,29 +313,33 @@ export default { 1. Do not call a mutation directly. Always use an action to commit a mutation. Doing so will keep consistency throughout the application. From Vuex docs: - > why don't we just call store.commit('action') directly? Well, remember that mutations must be synchronous? Actions aren't. We can perform asynchronous operations inside an action. + > Why don't we just call store.commit('action') directly? Well, remember that mutations must be synchronous? Actions aren't. We can perform asynchronous operations inside an action. - ```javascript - // component.vue + ```javascript + // component.vue - // bad - created() { - this.$store.commit('mutation'); - } + // bad + created() { + this.$store.commit('mutation'); + } + + // good + created() { + this.$store.dispatch('action'); + } + ``` - // good - created() { - this.$store.dispatch('action'); - } - ``` 1. Use mutation types instead of hardcoding strings. It will be less error prone. 1. The State will be accessible in all components descending from the use where the store is instantiated. ### Testing Vuex + #### Testing Vuex concerns + Refer to [vuex docs][vuex-testing] regarding testing Actions, Getters and Mutations. #### Testing components that need a store + Smaller components might use `store` properties to access the data. 
In order to write unit tests for those components, we need to include the store and provide the correct state: @@ -363,6 +382,7 @@ describe('component', () => { ``` #### Testing Vuex actions and getters + Because we're currently using [`babel-plugin-rewire`](https://github.com/speedskater/babel-plugin-rewire), you may encounter the following error when testing your Vuex actions and getters: `[vuex] actions should be function or object with "handler" function` diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md index 56872f8c075..f1374b9e280 100644 --- a/doc/development/feature_flags/index.md +++ b/doc/development/feature_flags/index.md @@ -8,5 +8,5 @@ disable those changes, without having to revert an entire release. Before using feature flags for GitLab's development, read through the following: - [Process for using features flags](process.md). -- [Developing with feature flags documentation](development.md). -- [Controlling feature flags documentation](controls.md). +- [Developing with feature flags](development.md). +- [Controlling feature flags](controls.md). diff --git a/doc/development/file_storage.md b/doc/development/file_storage.md index 02874d18a30..44af2b020a4 100644 --- a/doc/development/file_storage.md +++ b/doc/development/file_storage.md @@ -2,6 +2,8 @@ We use the [CarrierWave] gem to handle file upload, store and retrieval. +File uploads should be accelerated by workhorse, for details please refer to [uploads development documentation](uploads.md). + There are many places where file uploading is used, according to contexts: - System @@ -92,8 +94,8 @@ in your uploader, you need to either 1) include `RecordsUpload::Concern` and pre The `CarrierWave::Uploader#store_dir` is overridden to - - `GitlabUploader.base_dir` + `GitlabUploader.dynamic_segment` when the store is LOCAL - - `GitlabUploader.dynamic_segment` when the store is REMOTE (the bucket name is used to namespace) +- `GitlabUploader.base_dir` + `GitlabUploader.dynamic_segment` when the store is LOCAL +- `GitlabUploader.dynamic_segment` when the store is REMOTE (the bucket name is used to namespace) ### Using `ObjectStorage::Extension::RecordsUploads` diff --git a/doc/development/filtering_by_label.md b/doc/development/filtering_by_label.md new file mode 100644 index 00000000000..dd8944ff1c8 --- /dev/null +++ b/doc/development/filtering_by_label.md @@ -0,0 +1,162 @@ +# Filtering by label + +## Introduction + +GitLab has [labels](../user/project/labels.md) that can be assigned to issues, +merge requests, and epics. Labels on those objects are a many-to-many relation +through the polymorphic `label_links` table. + +To filter these objects by multiple labels - for instance, 'all open +issues with the label ~Plan and the label ~backend' - we generate a +query containing a `GROUP BY` clause. In a simple form, this looks like: + +```sql +SELECT + issues.* +FROM + issues + INNER JOIN label_links ON label_links.target_id = issues.id + AND label_links.target_type = 'Issue' + INNER JOIN labels ON labels.id = label_links.label_id +WHERE + issues.project_id = 13083 + AND (issues.state IN ('opened')) + AND labels.title IN ('Plan', + 'backend') +GROUP BY + issues.id +HAVING (COUNT(DISTINCT labels.title) = 2) +ORDER BY + issues.updated_at DESC, + issues.id DESC +LIMIT 20 OFFSET 0 +``` + +In particular, note that: + +1. We `GROUP BY issues.id` so that we can ... +1. Use the `HAVING (COUNT(DISTINCT labels.title) = 2)` condition to ensure that + all matched issues have both labels. 
+ +This is more complicated than is ideal. It makes the query construction more +prone to errors (such as +[issue #15557](https://gitlab.com/gitlab-org/gitlab-ce/issues/15557)). + +## Attempt A: WHERE EXISTS + +### Attempt A1: use multiple subqueries with WHERE EXISTS + +In [issue #37137](https://gitlab.com/gitlab-org/gitlab-ce/issues/37137) +and its associated [merge request](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/14022), +we tried to replace the `GROUP BY` with multiple uses of `WHERE EXISTS`. For the +example above, this would give: + +```sql +WHERE (EXISTS ( + SELECT + TRUE + FROM + label_links + INNER JOIN labels ON labels.id = label_links.label_id + WHERE + labels.title = 'Plan' + AND target_type = 'Issue' + AND target_id = issues.id)) +AND (EXISTS ( + SELECT + TRUE + FROM + label_links + INNER JOIN labels ON labels.id = label_links.label_id + WHERE + labels.title = 'backend' + AND target_type = 'Issue' + AND target_id = issues.id)) +``` + +While this worked without schema changes, and did improve readability somewhat, +it did not improve query performance. + +## Attempt B: Denormalize using an array column + +Having [removed MySQL support in GitLab 12.1](https://about.gitlab.com/2019/06/27/removing-mysql-support/), +using [Postgres's arrays](https://www.postgresql.org/docs/9.6/arrays.html) became more +tractable as we didn't have to support two databases. We discussed denormalizing +the `label_links` table for querying in +[issue #49651](https://gitlab.com/gitlab-org/gitlab-ce/issues/49651), +with two options: label IDs and titles. + +We can think of both of those as array columns on `issues`, `merge_requests`, +and `epics`: `issues.label_ids` would be an array column of label IDs, and +`issues.label_titles` would be an array of label titles. + +These array columns can be complemented with [GIN +indexes](https://www.postgresql.org/docs/9.6/gin-intro.html) to improve +matching. + +### Attempt B1: store label IDs for each object + +This has some strong advantages over titles: + +1. Unless a label is deleted, or a project is moved, we never need to + bulk-update the denormalized column. +1. It uses less storage than the titles. + +Unfortunately, our application design makes this hard. If we were able to query +just by label ID easily, we wouldn't need the `INNER JOIN labels` in the initial +query at the start of this document. GitLab allows users to filter by label +title across projects and even across groups, so a filter by the label ~Plan may +include labels with multiple distinct IDs. + +We do not want users to have to know about the different IDs, which means that +given this data set: + +| Project | ~Plan label ID | ~backend label ID | +| ------- | -------------- | ----------------- | +| A | 11 | 12 | +| B | 21 | 22 | +| C | 31 | 32 | + +We would need something like: + +```sql +WHERE + label_ids @> ARRAY[11, 12] + OR label_ids @> ARRAY[21, 22] + OR label_ids @> ARRAY[31, 32] +``` + +This can get even more complicated when we consider that in some cases, there +might be two ~backend labels - with different IDs - that could apply to the same +object, so the number of combinations would balloon further. + +### Attempt B2: store label titles for each object + +From the perspective of updating the labelable object, this is the worst +option. We have to bulk update the objects when: + +1. The objects are moved from one project to another. +1. The project is moved from one group to another. +1. The label is renamed. +1. The label is deleted. 
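+
+To make this option concrete, the denormalized column and its GIN index could
+be declared roughly as follows. This is an illustrative sketch only; the column
+and index names are invented here rather than taken from an actual migration:
+
+```sql
+-- Hypothetical sketch: add a denormalized array of label titles to issues.
+ALTER TABLE issues ADD COLUMN label_titles text[] NOT NULL DEFAULT '{}';
+
+-- A GIN index lets the planner answer @> (contains) queries on the array
+-- without scanning every row.
+CREATE INDEX index_issues_on_label_titles ON issues USING GIN (label_titles);
+```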
+ +It also uses much more storage. Querying is simple, though: + +```sql +WHERE + label_titles @> ARRAY['Plan', 'backend'] +``` + +And our [tests in issue #49651](https://gitlab.com/gitlab-org/gitlab-ce/issues/49651#note_188777346) +showed that this could be fast. + +However, at present, the disadvantages outweigh the advantages. + +## Conclusion + +We have yet to find a method that is demonstratably better than the current +method, when considering: + +1. Query performance. +1. Readability. +1. Ease of maintaining schema consistency. diff --git a/doc/development/gemfile.md b/doc/development/gemfile.md index ec9718cea71..8d93c52e7bc 100644 --- a/doc/development/gemfile.md +++ b/doc/development/gemfile.md @@ -3,9 +3,9 @@ When adding a new entry to `Gemfile` or upgrading an existing dependency pay attention to the following rules. -## No gems fetched from git repositories +## No gems fetched from Git repositories -We do not allow gems that are fetched from git repositories. All gems have +We do not allow gems that are fetched from Git repositories. All gems have to be available in the RubyGems index. We want to minimize external build dependencies and build times. diff --git a/doc/development/geo.md b/doc/development/geo.md index 685d4e44ad3..cc3e2d1ccc5 100644 --- a/doc/development/geo.md +++ b/doc/development/geo.md @@ -170,7 +170,7 @@ while `pull` requests will continue to be served by the **secondary** node for m HTTPS and SSH requests are handled differently: - With HTTPS, we will give the user a `HTTP 302 Redirect` pointing to the project on the **primary** node. - The git client is wise enough to understand that status code and process the redirection. + The Git client is wise enough to understand that status code and process the redirection. - With SSH, because there is no equivalent way to perform a redirect, we have to proxy the request. This is done inside [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell), by first translating the request to the HTTP protocol, and then proxying it to the **primary** node. @@ -341,9 +341,9 @@ not used, so sessions etc. aren't shared between nodes. GitLab can optionally use Object Storage to store data it would otherwise store on disk. These things can be: - - LFS Objects - - CI Job Artifacts - - Uploads +- LFS Objects +- CI Job Artifacts +- Uploads Objects that are stored in object storage, are not handled by Geo. Geo ignores items in object storage. Either: @@ -412,15 +412,15 @@ The Geo **primary** stores events in the `geo_event_log` table. Each entry in the log contains a specific type of event. These type of events include: - - Repository Deleted event - - Repository Renamed event - - Repositories Changed event - - Repository Created event - - Hashed Storage Migrated event - - Lfs Object Deleted event - - Hashed Storage Attachments event - - Job Artifact Deleted event - - Upload Deleted event +- Repository Deleted event +- Repository Renamed event +- Repositories Changed event +- Repository Created event +- Hashed Storage Migrated event +- Lfs Object Deleted event +- Hashed Storage Attachments event +- Job Artifact Deleted event +- Upload Deleted event ### Geo Log Cursor @@ -526,4 +526,4 @@ old method: - Replication is synchronous and we preserve the order of events. - Replication of the events happen at the same time as the changes in the - database. + database. 
diff --git a/doc/development/git_object_deduplication.md b/doc/development/git_object_deduplication.md index c103a4527ff..4dd1edf9b5a 100644 --- a/doc/development/git_object_deduplication.md +++ b/doc/development/git_object_deduplication.md @@ -8,30 +8,6 @@ storage disk use. To counteract this problem, we are adding Git object deduplication for forks to GitLab. In this document, we will describe how GitLab implements Git object deduplication. -## Enabling Git object deduplication via feature flags - -As of GitLab 12.0, Git object deduplication in GitLab is still behind a -feature flag. In this document, you can read about the effects of -enabling the feature. Also, note that Git object deduplication is -limited to forks of public projects on hashed repository storage. - -You can enable deduplication globally by setting the `object_pools` -feature flag to `true`: - -``` {.ruby} -Feature.enable(:object_pools) -``` - -Or just for forks of a specific project: - -``` {.ruby} -fork_parent = Project.find(MY_PROJECT_ID) -Feature.enable(:object_pools, fork_parent) -``` - -To check if a project uses Git object deduplication, look in a Rails -console if `project.pool_repository` is present. - ## Pool repositories ### Understanding Git alternates @@ -79,11 +55,11 @@ at the Rails application level in SQL. In conclusion, we need three things for effective object deduplication across a collection of GitLab project repositories at the Git level: -1. A pool repository must exist. -2. The participating project repositories must be linked to the pool - repository via their respective `objects/info/alternates` files. -3. The pool repository must contain Git object data common to the - participating project repositories. +1. A pool repository must exist. +1. The participating project repositories must be linked to the pool + repository via their respective `objects/info/alternates` files. +1. The pool repository must contain Git object data common to the + participating project repositories. ### Deduplication factor @@ -105,71 +81,71 @@ With pool repositories we made a fresh start. These live in their own `pool_repositories` SQL table. The relations between these two tables are as follows: -- a `Project` belongs to at most one `PoolRepository` - (`project.pool_repository`) -- as an automatic consequence of the above, a `PoolRepository` has - many `Project`s -- a `PoolRepository` has exactly one "source `Project`" - (`pool.source_project`) +- a `Project` belongs to at most one `PoolRepository` + (`project.pool_repository`) +- as an automatic consequence of the above, a `PoolRepository` has + many `Project`s +- a `PoolRepository` has exactly one "source `Project`" + (`pool.source_project`) > TODO Fix invalid SQL data for pools created prior to GitLab 11.11 > <https://gitlab.com/gitlab-org/gitaly/issues/1653>. ### Assumptions -- All repositories in a pool must use [hashed - storage](../administration/repository_storage_types.md). This is so - that we don't have to ever worry about updating paths in - `object/info/alternates` files. -- All repositories in a pool must be on the same Gitaly storage shard. - The Git alternates mechanism relies on direct disk access across - multiple repositories, and we can only assume direct disk access to - be possible within a Gitaly storage shard. -- The only two ways to remove a member project from a pool are (1) to - delete the project or (2) to move the project to another Gitaly - storage shard. 
+- All repositories in a pool must use [hashed + storage](../administration/repository_storage_types.md). This is so + that we don't have to ever worry about updating paths in + `object/info/alternates` files. +- All repositories in a pool must be on the same Gitaly storage shard. + The Git alternates mechanism relies on direct disk access across + multiple repositories, and we can only assume direct disk access to + be possible within a Gitaly storage shard. +- The only two ways to remove a member project from a pool are (1) to + delete the project or (2) to move the project to another Gitaly + storage shard. ### Creating pools and pool memberships -- When a pool gets created, it must have a source project. The initial - contents of the pool repository are a Git clone of the source - project repository. -- The occasion for creating a pool is when an existing eligible - (public, hashed storage, non-forked) GitLab project gets forked and - this project does not belong to a pool repository yet. The fork - parent project becomes the source project of the new pool, and both - the fork parent and the fork child project become members of the new - pool. -- Once project A has become the source project of a pool, all future - eligible forks of A will become pool members. -- If the fork source is itself a fork, the resulting repository will - neither join the repository nor will a new pool repository be - seeded. - - eg: - - Suppose fork A is part of a pool repository, any forks created off - of fork A *will not* be a part of the pool repository that fork A is - a part of. - - Suppose B is a fork of A, and A does not belong to an object pool. - Now C gets created as a fork of B. C will not be part of a pool - repository. +- When a pool gets created, it must have a source project. The initial + contents of the pool repository are a Git clone of the source + project repository. +- The occasion for creating a pool is when an existing eligible + (public, hashed storage, non-forked) GitLab project gets forked and + this project does not belong to a pool repository yet. The fork + parent project becomes the source project of the new pool, and both + the fork parent and the fork child project become members of the new + pool. +- Once project A has become the source project of a pool, all future + eligible forks of A will become pool members. +- If the fork source is itself a fork, the resulting repository will + neither join the repository nor will a new pool repository be + seeded. + + eg: + + Suppose fork A is part of a pool repository, any forks created off + of fork A *will not* be a part of the pool repository that fork A is + a part of. + + Suppose B is a fork of A, and A does not belong to an object pool. + Now C gets created as a fork of B. C will not be part of a pool + repository. > TODO should forks of forks be deduplicated? > <https://gitlab.com/gitlab-org/gitaly/issues/1532> ### Consequences -- If a normal Project participating in a pool gets moved to another - Gitaly storage shard, its "belongs to PoolRepository" relation will - be broken. Because of the way moving repositories between shard is - implemented, we will automatically get a fresh self-contained copy - of the project's repository on the new storage shard. -- If the source project of a pool gets moved to another Gitaly storage - shard or is deleted the "source project" relation is not broken. - However, as of GitLab 12.0 a pool will not fetch from a source - unless the source is on the same Gitaly shard. 
+- If a normal Project participating in a pool gets moved to another + Gitaly storage shard, its "belongs to PoolRepository" relation will + be broken. Because of the way moving repositories between shard is + implemented, we will automatically get a fresh self-contained copy + of the project's repository on the new storage shard. +- If the source project of a pool gets moved to another Gitaly storage + shard or is deleted the "source project" relation is not broken. + However, as of GitLab 12.0 a pool will not fetch from a source + unless the source is on the same Gitaly shard. ## Consistency between the SQL pool relation and Gitaly @@ -193,7 +169,7 @@ There are three different things that can go wrong here. In this case, we miss out on disk space savings but all RPC's on A itself will function fine. The next time garbage collection runs on A, the alternates connection gets established in Gitaly. This is done by -`Projects::GitDeduplicationService` in gitlab-rails. +`Projects::GitDeduplicationService` in GitLab Rails. #### 2. SQL says repo A belongs to pool P1 but Gitaly says A has alternate objects in pool P2 diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md index 5552d5d37b4..592fc13873b 100644 --- a/doc/development/gitaly.md +++ b/doc/development/gitaly.md @@ -45,13 +45,13 @@ The process for adding new Gitaly features is: - release a new version of gitaly-proto - write implementation and tests for the RPC [in Gitaly](https://gitlab.com/gitlab-org/gitaly), in Go or Ruby - release a new version of Gitaly -- write client code in gitlab-ce/ee, gitlab-workhorse or gitlab-shell that calls the new Gitaly RPC +- write client code in GitLab CE/EE, GitLab Workhorse or GitLab Shell that calls the new Gitaly RPC These steps often overlap. It is possible to use an unreleased version of Gitaly and gitaly-proto during testing and development. - See the [Gitaly repo](https://gitlab.com/gitlab-org/gitaly/blob/master/CONTRIBUTING.md#development-and-testing-with-a-custom-gitaly-proto) for instructions on writing server side code with an unreleased protocol. -- See [below](#running-tests-with-a-locally-modified-version-of-gitaly) for instructions on running gitlab-ce tests with a modified version of Gitaly. +- See [below](#running-tests-with-a-locally-modified-version-of-gitaly) for instructions on running GitLab CE tests with a modified version of Gitaly. - In GDK run `gdk install` and restart `gdk run` (or `gdk run app`) to use a locally modified Gitaly version for development ### Gitaly-ruby @@ -146,7 +146,7 @@ Once the code is wrapped in this block, this code-path will be excluded from n+1 ## Request counts -Commits and other git data, is now fetched through Gitaly. These fetches can, +Commits and other Git data, is now fetched through Gitaly. These fetches can, much like with a database, be batched. This improves performance for the client and for Gitaly itself and therefore for the users too. To keep performance stable and guard performance regressions, Gitaly calls can be counted and the call count @@ -164,10 +164,10 @@ end ## Running tests with a locally modified version of Gitaly -Normally, gitlab-ce/ee tests use a local clone of Gitaly in +Normally, GitLab CE/EE tests use a local clone of Gitaly in `tmp/tests/gitaly` pinned at the version specified in `GITALY_SERVER_VERSION`. The `GITALY_SERVER_VERSION` file supports -`=my-branch` syntax to use a custom branch in gitlab-org/gitaly. If +`=my-branch` syntax to use a custom branch in <https://gitlab.com/gitlab-org/gitaly>. 
If you want to run tests locally against a modified version of Gitaly you can replace `tmp/tests/gitaly` with a symlink. This is much faster because the `=my-branch` syntax forces a Gitaly re-install each time @@ -237,24 +237,23 @@ Here are the steps to gate a new feature in Gitaly behind a feature flag. 1. Create prometheus metrics: ```go - var findAllTagsRequests = prometheus.NewCounterVec( - prometheus.CounterOpts{ - Name: "gitaly_find_all_tags_requests_total", - Help: "Counter of go vs ruby implementation of FindAllTags", - }, - []string{"implementation"}, - ) + var findAllTagsRequests = prometheus.NewCounterVec( + prometheus.CounterOpts{ + Name: "gitaly_find_all_tags_requests_total", + Help: "Counter of go vs ruby implementation of FindAllTags", + }, + []string{"implementation"}, ) func init() { - prometheus.Register(findAllTagsRequests) + prometheus.Register(findAllTagsRequests) } if featureflag.IsEnabled(ctx, findAllTagsFeatureFlag) { - findAllTagsRequests.WithLabelValues("go").Inc() + findAllTagsRequests.WithLabelValues("go").Inc() // go implementation } else { - findAllTagsRequests.WithLabelValues("ruby").Inc() + findAllTagsRequests.WithLabelValues("ruby").Inc() // ruby implementation } ``` @@ -277,9 +276,9 @@ Here are the steps to gate a new feature in Gitaly behind a feature flag. require.NoError(t, err) ``` -### Gitlab-Rails +### GitLab Rails -1. Add feature flag to `lib/gitlab/gitaly_client.rb` (in gitlab-rails): +1. Add feature flag to `lib/gitlab/gitaly_client.rb` (in GitLab Rails): ```ruby SERVER_FEATURE_FLAGS = %w[go-find-all-tags].freeze diff --git a/doc/development/go_guide/index.md b/doc/development/go_guide/index.md index f09339eb3a4..83444093f9c 100644 --- a/doc/development/go_guide/index.md +++ b/doc/development/go_guide/index.md @@ -107,6 +107,32 @@ Modules](https://github.com/golang/go/wiki/Modules). It provides a way to define and lock dependencies for reproducible builds. It should be used whenever possible. +When Go Modules are in use, there should not be a `vendor/` directory. Instead, +Go will automatically download dependencies when they are needed to build the +project. This is in line with how dependencies are handled with Bundler in Ruby +projects, and makes merge requests easier to review. + +In some cases, such as building a Go project for it to act as a dependency of a +CI run for another project, removing the `vendor/` directory means the code must +be downloaded repeatedly, which can lead to intermittent problems due to rate +limiting or network failures. In these circumstances, you should cache the +downloaded code between runs with a `.gitlab-ci.yml` snippet like this: + +```yaml +.go-cache: + variables: + GOPATH: $CI_PROJECT_DIR/.go + before_script: + - mkdir -p .go + cache: + paths: + - .go/pkg/mod/ + +test: + extends: .go-cache + # ... +``` + There was a [bug on modules checksums](https://github.com/golang/go/issues/29278) in Go < v1.11.4, so make sure to use at least this version to avoid `checksum mismatch` errors. @@ -129,17 +155,89 @@ deploy a new pod, migrating the data automatically. ## Testing +### Testing frameworks + We should not use any specific library or framework for testing, as the [standard library](https://golang.org/pkg/) provides already everything to get -started. For example, some external dependencies might be worth considering in -case we decide to use a specific library or framework: +started. 
If there is a need for more sophisticated testing tools, the following +external dependencies might be worth considering in case we decide to use a specific +library or framework: - [Testify](https://github.com/stretchr/testify) - [httpexpect](https://github.com/gavv/httpexpect) +### Subtests + Use [subtests](https://blog.golang.org/subtests) whenever possible to improve code readability and test output. +### Better output in tests + +When comparing expected and actual values in tests, use +[testify/require.Equal](https://godoc.org/github.com/stretchr/testify/require#Equal), +[testify/require.EqualError](https://godoc.org/github.com/stretchr/testify/require#EqualError), +[testify/require.EqualValues](https://godoc.org/github.com/stretchr/testify/require#EqualValues), +and others to improve readability when comparing structs, errors, +large portions of text, or JSON documents: + +```go +type TestData struct { + // ... +} + +func FuncUnderTest() TestData { + // ... +} + +func Test(t *testing.T) { + t.Run("FuncUnderTest", func(t *testing.T) { + want := TestData{} + got := FuncUnderTest() + + require.Equal(t, want, got) // note that expected value comes first, then comes the actual one ("diff" semantics) + }) +} +``` + +### Table-Driven Tests + +Using [Table-Driven Tests](https://github.com/golang/go/wiki/TableDrivenTests) +is generally good practice when you have multiple entries of +inputs/outputs for the same function. Below are some guidelines one can +follow when writing table-driven test. These guidelines are mostly +extracted from Go standard library source code. Keep in mind it's OK not +to follow these guidelines when it makes sense. + +#### Defining test cases + +Each table entry is a complete test case with inputs and expected +results, and sometimes with additional information such as a test name +to make the test output easily readable. + +- [Define a slice of anonymous struct](https://github.com/golang/go/blob/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/src/encoding/csv/reader_test.go#L16) + inside of the test. +- [Define a slice of anonymous struct](https://github.com/golang/go/blob/55d31e16c12c38d36811bdee65ac1f7772148250/src/cmd/go/internal/module/module_test.go#L9-L66) + outside of the test. +- [Named structs](https://github.com/golang/go/blob/2e0cd2aef5924e48e1ceb74e3d52e76c56dd34cc/src/cmd/go/internal/modfetch/coderepo_test.go#L54-L69) + for code reuse. +- [Using `map[string]struct{}`](https://github.com/golang/go/blob/6d5caf38e37bf9aeba3291f1f0b0081f934b1187/src/cmd/trace/annotations_test.go#L180-L235). + +#### Contents of the test case + +- Ideally, each test case should have a field with a unique identifier + to use for naming subtests. In the Go standard library, this is commonly the + `name string` field. +- Use `want`/`expect`/`actual` when you are specifcing something in the + test case that will be used for assertion. + +#### Variable names + +- Each table-driven test map/slice of struct can be named `tests`. +- When looping through `tests` the anonymous struct can be referred + to as `tt` or `tc`. +- The description of the test can be referred to as + `name`/`testName`/`tn`. 
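+
+Putting these guidelines together, a minimal table-driven test might look like
+the sketch below. `Add` is only a stand-in function under test, defined here to
+keep the example self-contained:
+
+```go
+package example
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+// Add is a trivial stand-in for the function under test.
+func Add(a, b int) int {
+	return a + b
+}
+
+func TestAdd(t *testing.T) {
+	// Each entry is a complete test case: a name for the subtest plus the
+	// inputs and the expected result.
+	tests := []struct {
+		name string
+		a, b int
+		want int
+	}{
+		{name: "both positive", a: 1, b: 2, want: 3},
+		{name: "mixed signs", a: -1, b: 5, want: 4},
+		{name: "both zero", a: 0, b: 0, want: 0},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			got := Add(tt.a, tt.b)
+
+			// Expected value comes first ("diff" semantics).
+			require.Equal(t, tt.want, got)
+		})
+	}
+}
+```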
+ ### Benchmarks Programs handling a lot of IO or complex operations should always include diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md index 13dda17bb7d..941eea2609e 100644 --- a/doc/development/gotchas.md +++ b/doc/development/gotchas.md @@ -53,7 +53,7 @@ When run, this spec doesn't do what we might expect: (compared using ==) ``` -That's because FactoryBot sequences are not reseted for each example. +This is because FactoryBot sequences are not reset for each example. Please remember that sequence-generated values exist only to avoid having to explicitly set attributes that have a uniqueness constraint when using a factory. diff --git a/doc/development/hash_indexes.md b/doc/development/hash_indexes.md index e6c1b3590b1..417ea18e22f 100644 --- a/doc/development/hash_indexes.md +++ b/doc/development/hash_indexes.md @@ -1,6 +1,6 @@ # Hash Indexes -Both PostgreSQL and MySQL support hash indexes besides the regular btree +PostgreSQL supports hash indexes besides the regular btree indexes. Hash indexes however are to be avoided at all costs. While they may _sometimes_ provide better performance the cost of rehashing can be very high. More importantly: at least until PostgreSQL 10.0 hash indexes are not diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md index 17462887162..141f5a8d6d9 100644 --- a/doc/development/i18n/externalization.md +++ b/doc/development/i18n/externalization.md @@ -21,18 +21,18 @@ The following tools are used: 1. [`gettext_i18n_rails`](https://github.com/grosser/gettext_i18n_rails): this gem allow us to translate content from models, views and controllers. Also it gives us access to the following raketasks: - - `rake gettext:find`: Parses almost all the files from the - Rails application looking for content that has been marked for - translation. Finally, it updates the PO files with the new content that - it has found. - - `rake gettext:pack`: Processes the PO files and generates the - MO files that are binary and are finally used by the application. + - `rake gettext:find`: Parses almost all the files from the + Rails application looking for content that has been marked for + translation. Finally, it updates the PO files with the new content that + it has found. + - `rake gettext:pack`: Processes the PO files and generates the + MO files that are binary and are finally used by the application. 1. [`gettext_i18n_rails_js`](https://github.com/webhippie/gettext_i18n_rails_js): this gem is useful to make the translations available in JavaScript. It provides the following raketask: - - `rake gettext:po_to_json`: Reads the contents from the PO files and - generates JSON files containing all the available translations. + - `rake gettext:po_to_json`: Reads the contents from the PO files and + generates JSON files containing all the available translations. 1. PO editor: there are multiple applications that can help us to work with PO files, a good option is [Poedit](https://poedit.net/download) which is @@ -139,60 +139,61 @@ For example use `%{created_at}` in Ruby but `%{createdAt}` in JavaScript. 
Make s - In Ruby/HAML: - ```ruby - _("Hello %{name}") % { name: 'Joe' } => 'Hello Joe' - ``` + ```ruby + _("Hello %{name}") % { name: 'Joe' } => 'Hello Joe' + ``` - In JavaScript: - ```js - import { __, sprintf } from '~/locale'; + ```js + import { __, sprintf } from '~/locale'; - sprintf(__('Hello %{username}'), { username: 'Joe' }); // => 'Hello Joe' - ``` + sprintf(__('Hello %{username}'), { username: 'Joe' }); // => 'Hello Joe' + ``` - By default, `sprintf` escapes the placeholder values. - If you want to take care of that yourself, you can pass `false` as third argument. + By default, `sprintf` escapes the placeholder values. + If you want to take care of that yourself, you can pass `false` as third argument. - ```js - import { __, sprintf } from '~/locale'; + ```js + import { __, sprintf } from '~/locale'; - sprintf(__('This is %{value}'), { value: '<strong>bold</strong>' }); // => 'This is <strong>bold</strong>' - sprintf(__('This is %{value}'), { value: '<strong>bold</strong>' }, false); // => 'This is <strong>bold</strong>' - ``` + sprintf(__('This is %{value}'), { value: '<strong>bold</strong>' }); // => 'This is <strong>bold</strong>' + sprintf(__('This is %{value}'), { value: '<strong>bold</strong>' }, false); // => 'This is <strong>bold</strong>' + ``` ### Plurals - In Ruby/HAML: - ```ruby - n_('Apple', 'Apples', 3) - # => 'Apples' - ``` + ```ruby + n_('Apple', 'Apples', 3) + # => 'Apples' + ``` - Using interpolation: - ```ruby - n_("There is a mouse.", "There are %d mice.", size) % size - # => When size == 1: 'There is a mouse.' - # => When size == 2: 'There are 2 mice.' - ``` + Using interpolation: - Avoid using `%d` or count variables in singular strings. This allows more natural translation in some languages. + ```ruby + n_("There is a mouse.", "There are %d mice.", size) % size + # => When size == 1: 'There is a mouse.' + # => When size == 2: 'There are 2 mice.' + ``` + + Avoid using `%d` or count variables in singular strings. This allows more natural translation in some languages. - In JavaScript: - ```js - n__('Apple', 'Apples', 3) - // => 'Apples' - ``` + ```js + n__('Apple', 'Apples', 3) + // => 'Apples' + ``` - Using interpolation: + Using interpolation: - ```js - n__('Last day', 'Last %d days', x) - // => When x == 1: 'Last day' - // => When x == 2: 'Last 2 days' - ``` + ```js + n__('Last day', 'Last %d days', x) + // => When x == 1: 'Last day' + // => When x == 2: 'Last 2 days' + ``` ### Namespaces @@ -202,17 +203,17 @@ Namespaces should be PascalCase. - In Ruby/HAML: - ```ruby - s_('OpenedNDaysAgo|Opened') - ``` + ```ruby + s_('OpenedNDaysAgo|Opened') + ``` - In case the translation is not found it will return `Opened`. + In case the translation is not found it will return `Opened`. - In JavaScript: - ```js - s__('OpenedNDaysAgo|Opened') - ``` + ```js + s__('OpenedNDaysAgo|Opened') + ``` Note: The namespace should be removed from the translation. See the [translation guidelines for more details](translation.md#namespaced-strings). @@ -235,12 +236,12 @@ This makes use of [`Intl.DateTimeFormat`]. - In Ruby/HAML, we have two ways of adding format to dates and times: 1. **Through the `l` helper**, i.e. `l(active_session.created_at, format: :short)`. We have some predefined formats for - [dates](https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.7.0/config/locales/en.yml#L54) and [times](https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.7.0/config/locales/en.yml#L261). 
- If you need to add a new format, because other parts of the code could benefit from it, - you'll need to add it to [en.yml](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/locales/en.yml) file. + [dates](https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.7.0/config/locales/en.yml#L54) and [times](https://gitlab.com/gitlab-org/gitlab-ce/blob/v11.7.0/config/locales/en.yml#L261). + If you need to add a new format, because other parts of the code could benefit from it, + you'll need to add it to [en.yml](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/locales/en.yml) file. 1. **Through `strftime`**, i.e. `milestone.start_date.strftime('%b %-d')`. We use `strftime` in case none of the formats - defined on [en.yml](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/locales/en.yml) matches the date/time - specifications we need, and if there is no need to add it as a new format because is very particular (i.e. it's only used in a single view). + defined on [en.yml](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/locales/en.yml) matches the date/time + specifications we need, and if there is no need to add it as a new format because is very particular (i.e. it's only used in a single view). ## Best practices @@ -268,40 +269,40 @@ should be externalized as follows: This also applies when using links in between translated sentences, otherwise these texts are not translatable in certain languages. - In Ruby/HAML, instead of: - - ```haml - - zones_link = link_to(s_('ClusterIntegration|zones'), 'https://cloud.google.com/compute/docs/regions-zones/regions-zones', target: '_blank', rel: 'noopener noreferrer') - = s_('ClusterIntegration|Learn more about %{zones_link}').html_safe % { zones_link: zones_link } - ``` - - Set the link starting and ending HTML fragments as variables like so: - - ```haml - - zones_link_url = 'https://cloud.google.com/compute/docs/regions-zones/regions-zones' - - zones_link_start = '<a href="%{url}" target="_blank" rel="noopener noreferrer">'.html_safe % { url: zones_link_url } - = s_('ClusterIntegration|Learn more about %{zones_link_start}zones%{zones_link_end}').html_safe % { zones_link_start: zones_link_start, zones_link_end: '</a>'.html_safe } - ``` + + ```haml + - zones_link = link_to(s_('ClusterIntegration|zones'), 'https://cloud.google.com/compute/docs/regions-zones/regions-zones', target: '_blank', rel: 'noopener noreferrer') + = s_('ClusterIntegration|Learn more about %{zones_link}').html_safe % { zones_link: zones_link } + ``` + + Set the link starting and ending HTML fragments as variables like so: + + ```haml + - zones_link_url = 'https://cloud.google.com/compute/docs/regions-zones/regions-zones' + - zones_link_start = '<a href="%{url}" target="_blank" rel="noopener noreferrer">'.html_safe % { url: zones_link_url } + = s_('ClusterIntegration|Learn more about %{zones_link_start}zones%{zones_link_end}').html_safe % { zones_link_start: zones_link_start, zones_link_end: '</a>'.html_safe } + ``` - In JavaScript, instead of: - ```js - {{ - sprintf(s__("ClusterIntegration|Learn more about %{link}"), { - link: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" rel="noopener noreferrer">zones</a>' - }) - }} - ``` - - Set the link starting and ending HTML fragments as variables like so: - - ```js - {{ - sprintf(s__("ClusterIntegration|Learn more about %{linkStart}zones%{linkEnd}"), { - linkStart: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" 
rel="noopener noreferrer">' - linkEnd: '</a>', - }) - }} - ``` + ```js + {{ + sprintf(s__("ClusterIntegration|Learn more about %{link}"), { + link: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" rel="noopener noreferrer">zones</a>' + }) + }} + ``` + + Set the link starting and ending HTML fragments as variables like so: + + ```js + {{ + sprintf(s__("ClusterIntegration|Learn more about %{linkStart}zones%{linkEnd}"), { + linkStart: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" rel="noopener noreferrer">' + linkEnd: '</a>', + }) + }} + ``` The reasoning behind this is that in some languages words change depending on context. For example in Japanese は is added to the subject of a sentence and を to the object. This is impossible to translate correctly if we extract individual words from the sentence. @@ -374,29 +375,29 @@ Let's suppose you want to add translations for a new language, let's say French. 1. The first step is to register the new language in `lib/gitlab/i18n.rb`: - ```ruby - ... - AVAILABLE_LANGUAGES = { - ..., - 'fr' => 'Français' - }.freeze - ... - ``` + ```ruby + ... + AVAILABLE_LANGUAGES = { + ..., + 'fr' => 'Français' + }.freeze + ... + ``` 1. Next, you need to add the language: - ```sh - bin/rake gettext:add_language[fr] - ``` + ```sh + bin/rake gettext:add_language[fr] + ``` - If you want to add a new language for a specific region, the command is similar, - you just need to separate the region with an underscore (`_`). For example: + If you want to add a new language for a specific region, the command is similar, + you just need to separate the region with an underscore (`_`). For example: - ```sh - bin/rake gettext:add_language[en_GB] - ``` + ```sh + bin/rake gettext:add_language[en_GB] + ``` - Please note that you need to specify the region part in capitals. + Please note that you need to specify the region part in capitals. 1. Now that the language is added, a new directory has been created under the path: `locale/fr/`. You can now start using your PO editor to edit the PO file @@ -406,9 +407,9 @@ Let's suppose you want to add translations for a new language, let's say French. in order to generate the binary MO files and finally update the JSON files containing the translations: - ```sh - bin/rake gettext:compile - ``` + ```sh + bin/rake gettext:compile + ``` 1. In order to see the translated content we need to change our preferred language which can be found under the user's **Settings** (`/profile`). @@ -416,7 +417,7 @@ Let's suppose you want to add translations for a new language, let's say French. 1. After checking that the changes are ok, you can proceed to commit the new files. For example: - ```sh - git add locale/fr/ app/assets/javascripts/locale/fr/ - git commit -m "Add French translations for Cycle Analytics page" - ``` + ```sh + git add locale/fr/ app/assets/javascripts/locale/fr/ + git commit -m "Add French translations for Cycle Analytics page" + ``` diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md index 35c5b155594..492e3d48164 100644 --- a/doc/development/i18n/proofreader.md +++ b/doc/development/i18n/proofreader.md @@ -80,6 +80,7 @@ are very appreciative of the work done by translators and proofreaders! 
- Russian - Nikita Grylov - [GitLab](https://gitlab.com/nixel2007), [Crowdin](https://crowdin.com/profile/nixel2007) - Alexy Lustin - [GitLab](https://gitlab.com/allustin), [Crowdin](https://crowdin.com/profile/lustin) + - Mark Minakou - [GitLab](https://gitlab.com/sandzhaj), [Crowdin](https://crowdin.com/profile/sandzhaj) - NickVolynkin - [Crowdin](https://crowdin.com/profile/NickVolynkin) - Serbian (Cyrillic) - Proofreaders needed. @@ -106,32 +107,31 @@ are very appreciative of the work done by translators and proofreaders! 1. Contribute translations to GitLab. See instructions for [translating GitLab](translation.md). - Translating GitLab is a community effort that requires team work and - attention to detail. Proofreaders play an important role helping new - contributors, and ensuring the consistency and quality of translations. - Your conduct and contributions as a translator should reflect this before - requesting to be a proofreader. + Translating GitLab is a community effort that requires team work and + attention to detail. Proofreaders play an important role helping new + contributors, and ensuring the consistency and quality of translations. + Your conduct and contributions as a translator should reflect this before + requesting to be a proofreader. 1. Request proofreader permissions by opening a merge request to add yourself to the list of proofreaders. - Open the [proofreader.md source file][proofreader-src] and click **Edit**. + Open the [proofreader.md source file][proofreader-src] and click **Edit**. - Add your language in alphabetical order, and add yourself to the list - including: - - name - - link to your GitLab profile - - link to your CrowdIn profile + Add your language in alphabetical order, and add yourself to the list + including: + - name + - link to your GitLab profile + - link to your CrowdIn profile - In the merge request description, please include links to any projects you - have previously translated. + In the merge request description, please include links to any projects you + have previously translated. 1. Your request to become a proofreader will be considered on the merits of your previous translations by [GitLab team members](https://about.gitlab.com/team/) or [Core team members](https://about.gitlab.com/core-team/) who are fluent in the language or current proofreaders. - When a request is made for the first proofreader for a language and there are no [GitLab team members](https://about.gitlab.com/team/) - or [Core team members](https://about.gitlab.com/core-team/) who speak the language, we will request links to previous translation work in other communities or projects. - + or [Core team members](https://about.gitlab.com/core-team/) who speak the language, we will request links to previous translation work in other communities or projects. 
[proofreader-src]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/development/i18n/proofreader.md diff --git a/doc/development/img/architecture_simplified.png b/doc/development/img/architecture_simplified.png Binary files differindex 1698c167c5e..1ad57b65468 100644 --- a/doc/development/img/architecture_simplified.png +++ b/doc/development/img/architecture_simplified.png diff --git a/doc/development/img/distributed_tracing_jaeger_ui.png b/doc/development/img/distributed_tracing_jaeger_ui.png Binary files differindex 57517dacced..dcd18b1ec9f 100644 --- a/doc/development/img/distributed_tracing_jaeger_ui.png +++ b/doc/development/img/distributed_tracing_jaeger_ui.png diff --git a/doc/development/img/distributed_tracing_performance_bar.png b/doc/development/img/distributed_tracing_performance_bar.png Binary files differindex c9998cedd2d..8c819045104 100644 --- a/doc/development/img/distributed_tracing_performance_bar.png +++ b/doc/development/img/distributed_tracing_performance_bar.png diff --git a/doc/development/img/elasticsearch_architecture.svg b/doc/development/img/elasticsearch_architecture.svg new file mode 100644 index 00000000000..2f38f9b04ee --- /dev/null +++ b/doc/development/img/elasticsearch_architecture.svg @@ -0,0 +1 @@ +<svg version="1.2" width="210mm" height="297mm" viewBox="0 0 21000 29700" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><defs class="ClipPathGroup"><clipPath id="a" clipPathUnits="userSpaceOnUse"><path d="M0 0h21000v29700H0z"/></clipPath></defs><g class="SlideGroup"><g class="Slide" clip-path="url(#a)"><g class="Page"><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1975 5575h3051v1651H1975z"/><path fill="#FFF" d="M3500 7200H2000V5600h3000v1600H3500z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3500 7200H2000V5600h3000v1600H3500z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="2778" y="6311"><tspan>Snippet</tspan></tspan><tspan class="TextPosition" x="2099" y="6785"><tspan>(ActiveRecord)</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1475 3975h4051v3551H1475z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3500 7500H1500V4000h4000v3500H3500z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="1788" y="5048"><tspan>ApplicationSearch</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M5975 4675h8051v701H5975z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M6000 5350h4000v-650h4000"/></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M5975 5325h8051v1101H5975z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M6000 5350h4000v1050h4000"/></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1075 2875h4951v4951H1075z"/><path fill="none" stroke="#F33" stroke-width="50" d="M3550 7800H1100V2900h4900v4900H3550z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="1946" y="3514"><tspan fill="#C9211E">SnippetsSearch</tspan></tspan></tspan></text></g><g 
class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1975 12175h3051v1651H1975z"/><path fill="#FFF" d="M3500 13800H2000v-1600h3000v1600H3500z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3500 13800H2000v-1600h3000v1600H3500z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="2778" y="12911"><tspan>Snippet</tspan></tspan><tspan class="TextPosition" x="2099" y="13385"><tspan>(ActiveRecord)</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1075 10775h4951v3251H1075z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3550 14000H1100v-3200h4900v3200H3550z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="2511" y="11461"><tspan>Application</tspan></tspan><tspan class="TextPosition" x="1933" y="11935"><tspan>VersionedSearch</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M3525 13975h4501v7451H3525z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3550 14000v7400h4450"/></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M14008 14075h4985v851h-4985z"/><path fill="none" stroke="#999" stroke-width="50" d="M16500 14900h-2467v-800h4934v800h-2467z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14720" y="14648"><tspan fill="gray">ClassMethodProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13375 13075h6251v2151h-6251z"/><path fill="none" stroke="#F33" stroke-width="50" d="M16500 15200h-3100v-2100h6200v2100h-3100z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="13799" y="13731"><tspan fill="#C9211E">V12p1::SnippetClassProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M7975 14575h3051v1851H7975z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M9500 16400H8000v-1800h3000v1800H9500z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="8277" y="15411"><tspan>MultiVersion-</tspan></tspan><tspan class="TextPosition" x="8429" y="15885"><tspan>ClassProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M14008 16875h4985v851h-4985z"/><path fill="none" stroke="#999" stroke-width="50" d="M16500 17700h-2467v-800h4934v800h-2467z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14720" y="17448"><tspan fill="gray">ClassMethodProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13375 15875h6251v2151h-6251z"/><path fill="none" stroke="#F33" stroke-width="50" d="M16500 18000h-3100v-2100h6200v2100h-3100z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="13799" y="16531"><tspan 
fill="#C9211E">V12p2::SnippetClassProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10975 14125h2451v1401h-2451z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M11000 15500h1463v-1350h937"/></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10975 15475h2451v1501h-2451z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M11000 15500h1463v1450h937"/></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M3525 13975h4501v1551H3525z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M3550 14000v1500h4450"/></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M14008 19975h4985v851h-4985z"/><path fill="none" stroke="#999" stroke-width="50" d="M16500 20800h-2467v-800h4934v800h-2467z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14445" y="20548"><tspan fill="gray">InstanceMethodProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13375 18975h6251v2151h-6251z"/><path fill="none" stroke="#F33" stroke-width="50" d="M16500 21100h-3100v-2100h6200v2100h-3100z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="13505" y="19631"><tspan fill="#C9211E">V12p1::SnippetInstanceProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M7975 20275h3051v2251H7975z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M9500 22500H8000v-2200h3000v2200H9500z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="8277" y="21311"><tspan>MultiVersion-</tspan></tspan><tspan class="TextPosition" x="8154" y="21785"><tspan>InstanceProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M14008 22775h4985v851h-4985z"/><path fill="none" stroke="#999" stroke-width="50" d="M16500 23600h-2467v-800h4934v800h-2467z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14445" y="23348"><tspan fill="gray">InstanceMethodProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13375 21775h6251v2151h-6251z"/><path fill="none" stroke="#F33" stroke-width="50" d="M16500 23900h-3100v-2100h6200v2100h-3100z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="13505" y="22431"><tspan fill="#C9211E">V12p2::SnippetInstanceProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10975 20025h2451v1401h-2451z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M11000 21400h1463v-1350h937"/></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10975 21375h2451v1501h-2451z"/><path fill="none" stroke="#3465A4" stroke-width="50" d="M11000 21400h1463v1450h937"/></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M900 
1600h10697v879H900z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="564" font-weight="400"><tspan class="TextPosition" x="1150" y="2233"><tspan>Standard elasticsearch-rails setup</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M900 9300h7683v879H900z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="564" font-weight="400"><tspan class="TextPosition" x="1150" y="9933"><tspan>GitLab multi-indices setup</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M3400 21300h4821v1197H3400z"/><text class="TextShape"><tspan class="TextParagraph" font-size="388" font-weight="400"><tspan class="TextPosition" x="4250" y="21840"><tspan fill="gray">(instance method)</tspan></tspan><tspan class="TextPosition" x="3651" y="22264"><tspan font-family="Courier" font-size="423">__elasticsearch__</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M3380 15400h4821v1197H3380z"/><text class="TextShape"><tspan class="TextParagraph" font-size="388" font-weight="400"><tspan class="TextPosition" x="4512" y="15940"><tspan fill="gray">(class method)</tspan></tspan><tspan class="TextPosition" x="3631" y="16364"><tspan font-family="Courier" font-size="423">__elasticsearch__</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M9000 3500h4821v1197H9000z"/><text class="TextShape"><tspan class="TextParagraph" font-size="388" font-weight="400"><tspan class="TextPosition" x="10132" y="4040"><tspan fill="gray">(class method)</tspan></tspan><tspan class="TextPosition" x="9251" y="4464"><tspan font-family="Courier" font-size="423">__elasticsearch__</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M9000 6400h4821v1197H9000z"/><text class="TextShape"><tspan class="TextParagraph" font-size="388" font-weight="400"><tspan class="TextPosition" x="9850" y="6940"><tspan fill="gray">(instance method)</tspan></tspan><tspan class="TextPosition" x="9251" y="7364"><tspan font-family="Courier" font-size="423">__elasticsearch__</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M1975 25175h2051v851H1975z"/><path fill="none" stroke="#999" stroke-width="50" d="M3000 26000H2000v-800h2000v800H3000z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="2634" y="25748"><tspan fill="gray">Foo</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M4400 25200h7101v726H4400z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="4650" y="25710"><tspan>elasticsearch-rails’ internal class</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M4400 26400h8601v1200H4400z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="4650" y="26910"><tspan>where model-specific logic is</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path 
class="BoundingBox" fill="none" d="M1975 26275h2051v851H1975z"/><path fill="none" stroke="#F33" stroke-width="50" d="M3000 27100H2000v-800h2000v800H3000z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="700"><tspan class="TextPosition" x="2613" y="26848"><tspan fill="#C9211E">Foo</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M4900 17289h5901v2312H4900z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="370" font-weight="400"><tspan class="TextPosition" x="7236" y="17748"><tspan fill="gray">Write operations like </tspan></tspan><tspan class="TextPosition" x="5323" y="18159"><tspan fill="gray">indexing/updating are forwarded </tspan></tspan><tspan class="TextPosition" x="8024" y="18570"><tspan fill="gray">to all instances.</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="370" font-weight="400"><tspan class="TextPosition" x="5501" y="18981"><tspan fill="gray">Read operations are forwarded </tspan></tspan><tspan class="TextPosition" x="7126" y="19392"><tspan fill="gray">to specified instance.</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10785 15769h1422v2691h-1422z"/><path fill="none" stroke="#999" stroke-width="30" d="M10800 18444c1429 0 934-1618 1119-2337"/><path fill="#999" d="M12206 15769l-460 293 267 217 193-510z"/></g><g class="com.sun.star.drawing.ConnectorShape"><path class="BoundingBox" fill="none" d="M10785 18429h1528v2862h-1528z"/><path fill="none" stroke="#999" stroke-width="30" d="M10800 18444c1509 0 970 1782 1200 2526"/><path fill="#999" d="M12312 21290l-227-496-252 235 479 261z"/></g><g class="com.sun.star.drawing.TextShape"><path class="BoundingBox" fill="none" d="M1800 24000h7101v807H1800z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="494" font-weight="700"><tspan class="TextPosition" x="2050" y="24574"><tspan>Legend</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13975 4275h5085v851h-5085z"/><path fill="none" stroke="#999" stroke-width="50" d="M16517 5100h-2517v-800h5034v800h-2517z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14737" y="4848"><tspan fill="gray">ClassMethodProxy</tspan></tspan></tspan></text></g><g class="com.sun.star.drawing.CustomShape"><path class="BoundingBox" fill="none" d="M13975 5975h5085v851h-5085z"/><path fill="none" stroke="#999" stroke-width="50" d="M16517 6800h-2517v-800h5034v800h-2517z"/><text class="TextShape"><tspan class="TextParagraph" font-family="Arial, sans-serif" font-size="423" font-weight="400"><tspan class="TextPosition" x="14462" y="6548"><tspan fill="gray">InstanceMethodProxy</tspan></tspan></tspan></text></g></g></g></g></svg>
\ No newline at end of file diff --git a/doc/development/import_export.md b/doc/development/import_export.md index 64c91f151c5..0c343bc22e4 100644 --- a/doc/development/import_export.md +++ b/doc/development/import_export.md @@ -19,6 +19,7 @@ Project.find_by_full_path('group/project').import_state.slice(:jid, :status, :la grep JID /var/log/gitlab/sidekiq/current grep "Import/Export error" /var/log/gitlab/sidekiq/current grep "Import/Export backtrace" /var/log/gitlab/sidekiq/current +tail /var/log/gitlab/gitlab-rails/importer.log ``` ## Troubleshooting performance issues @@ -229,8 +230,8 @@ meaning that we want to bump the version up in the next version (or patch releas For example: 1. Add rename to `RelationRenameService` in X.Y -2. Remove it from `RelationRenameService` in X.Y + 1 -3. Bump Import/Export version in X.Y + 1 +1. Remove it from `RelationRenameService` in X.Y + 1 +1. Bump Import/Export version in X.Y + 1 ```ruby module Gitlab diff --git a/doc/development/instrumentation.md b/doc/development/instrumentation.md index 5f95cf3707c..777d372ec60 100644 --- a/doc/development/instrumentation.md +++ b/doc/development/instrumentation.md @@ -1,6 +1,6 @@ # Instrumenting Ruby Code -GitLab Performance Monitoring allows instrumenting of both methods and custom +[GitLab Performance Monitoring](../administration/monitoring/performance/index.md) allows instrumenting of both methods and custom blocks of Ruby code. Method instrumentation is the primary form of instrumentation with block-based instrumentation only being used when we want to drill down to specific regions of code within a method. diff --git a/doc/development/integrations/jira_connect.md b/doc/development/integrations/jira_connect.md index 9ba3b922fd8..e1350b02262 100644 --- a/doc/development/integrations/jira_connect.md +++ b/doc/development/integrations/jira_connect.md @@ -30,9 +30,11 @@ The following are required to install and test the app: 1. In the **From this URL** field, provide a link to the app descriptor. The host and port must point to your GitLab instance. For example: + ``` https://xxxx.serveo.net/-/jira_connect/app_descriptor.json ``` + 1. Click **Upload**. If the install was successful, you should see the **GitLab for Jira** app under **Manage apps**. diff --git a/doc/development/interacting_components.md b/doc/development/interacting_components.md new file mode 100644 index 00000000000..5e6dc8d460a --- /dev/null +++ b/doc/development/interacting_components.md @@ -0,0 +1,29 @@ +# Developing against interacting components or features + +It's not uncommon for a single code change to interact with multiple parts of the GitLab +codebase. Furthermore, an existing feature might have an underlying integration or behavior that +can go unnoticed even by reviewers and maintainers. + +The goal of this section is to briefly list interacting pieces to think about +when making _backend_ changes that might involve multiple features or [components](architecture.md#components). + +## Uploads + +GitLab supports uploads to [object storage]. That means every feature and +change that affects uploads should also be tested against [object storage], +which is _not_ enabled by default in [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit). + +When working on a related feature, make sure to enable and test it +against [Minio](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/object_storage.md). + +See also [File Storage in GitLab](file_storage.md).
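
When in doubt, it can also help to run the relevant specs with the remote store stubbed on, in addition to the default local storage. The sketch below is only an illustration: the `stub_object_storage_uploader` spec helper and the use of `FileUploader` are assumptions, and the exact helper names available in the codebase may differ:

```ruby
# spec/uploaders/file_uploader_object_storage_spec.rb (illustrative sketch only)
require 'spec_helper'

describe FileUploader do
  context 'when object storage is enabled' do
    before do
      # Assumed helper: routes uploads for this uploader to the (Minio-backed)
      # object store instead of the local disk.
      stub_object_storage_uploader(config: Gitlab.config.uploads.object_store,
                                   uploader: described_class)
    end

    it 'persists the file to the remote store' do
      # Exercise the upload path and assert against the remote store here.
    end
  end
end
```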
+ +## Merge requests + +### Forks + +GitLab supports a large number of features for [merge requests](../user/project/merge_requests/index.md). One +of them is the ability to create merge requests from and to [forks](../gitlab-basics/fork-project.md), +which should also be carefully considered and tested during the development phase. + +[object storage]: https://docs.gitlab.com/charts/advanced/external-object-storage/ diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md index 4b2d48903ac..f4528667814 100644 --- a/doc/development/kubernetes.md +++ b/doc/development/kubernetes.md @@ -107,7 +107,7 @@ Mitigation strategies include: ## Debugging Logs related to the Kubernetes integration can be found in -[kubernetes.log](../administration/logs.md#kuberneteslog). On a local +[`kubernetes.log`](../administration/logs.md#kuberneteslog). On a local GDK install, this will be present in `log/kubernetes.log`. Some services such as diff --git a/doc/development/lfs.md b/doc/development/lfs.md index 8c3408eb6e2..cb4c2d8967b 100644 --- a/doc/development/lfs.md +++ b/doc/development/lfs.md @@ -8,4 +8,4 @@ In April 2019, Francisco Javier López hosted a [Deep Dive] on GitLab's [Git LFS [Git LFS]: ../workflow/lfs/manage_large_binaries_with_git_lfs.html [recording on YouTube]: https://www.youtube.com/watch?v=Yyxwcksr0Qc [Google Slides]: https://docs.google.com/presentation/d/1E-aw6-z0rYd0346YhIWE7E9A65zISL9iIMAOq2zaw9E/edit -[PDF]: https://gitlab.com/gitlab-org/create-stage/uploads/07a89257a140db067bdfb484aecd35e1/Git_LFS_Deep_Dive__Create_.pdf
\ No newline at end of file +[PDF]: https://gitlab.com/gitlab-org/create-stage/uploads/07a89257a140db067bdfb484aecd35e1/Git_LFS_Deep_Dive__Create_.pdf diff --git a/doc/development/licensed_feature_availability.md b/doc/development/licensed_feature_availability.md index 80ec7b8c0cf..29e4ace157b 100644 --- a/doc/development/licensed_feature_availability.md +++ b/doc/development/licensed_feature_availability.md @@ -14,7 +14,7 @@ it should be restricted on namespace scope. 1. Add the feature symbol on `EES_FEATURES`, `EEP_FEATURES` or `EEU_FEATURES` constants in `ee/app/models/license.rb`. Note on `ee/app/models/ee/namespace.rb` that _Bronze_ GitLab.com features maps to on-premise _EES_, _Silver_ to _EEP_ and _Gold_ to _EEU_. -2. Check using: +1. Check using: ```ruby project.feature_available?(:feature_symbol) @@ -29,8 +29,8 @@ the instance license. 1. Add the feature symbol on `EES_FEATURES`, `EEP_FEATURES` or `EEU_FEATURES` constants in `ee/app/models/license.rb`. -2. Add the same feature symbol to `GLOBAL_FEATURES` -3. Check using: +1. Add the same feature symbol to `GLOBAL_FEATURES` +1. Check using: ```ruby License.feature_available?(:feature_symbol) diff --git a/doc/development/logging.md b/doc/development/logging.md index 4f63c84fc0e..b43f1029cc6 100644 --- a/doc/development/logging.md +++ b/doc/development/logging.md @@ -133,7 +133,7 @@ importer progresses. Here's what to do: logs in `/var/log/gitlab/gitlab-rails/*.log` every hour and [keep at most 30 compressed files](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate). On GitLab.com, that setting is only 6 compressed files. These settings should suffice - for most users, but you may need to tweak them in [omnibus-gitlab](https://gitlab.com/gitlab-org/omnibus-gitlab). + for most users, but you may need to tweak them in [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab). 1. If you add a new file, submit an issue to the [production tracker](https://gitlab.com/gitlab-com/gl-infra/production/issues) or diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md index 0c7601b415e..4740cf4de7b 100644 --- a/doc/development/migration_style_guide.md +++ b/doc/development/migration_style_guide.md @@ -1,18 +1,16 @@ # Migration Style Guide When writing migrations for GitLab, you have to take into account that -these will be ran by hundreds of thousands of organizations of all sizes, some with +these will be run by hundreds of thousands of organizations of all sizes, some with many years of data in their database. In addition, having to take a server offline for an upgrade small or big is a -big burden for most organizations. For this reason it is important that your -migrations are written carefully, can be applied online and adhere to the style +big burden for most organizations. For this reason, it is important that your +migrations are written carefully, can be applied online, and adhere to the style guide below. Migrations are **not** allowed to require GitLab installations to be taken -offline unless _absolutely necessary_. Downtime assumptions should be based on -the behaviour of a migration when performed using PostgreSQL, as various -operations in MySQL may require downtime without there being alternatives. +offline unless _absolutely necessary_. When downtime is necessary the migration has to be approved by: @@ -87,7 +85,38 @@ be possible to downgrade in case of a vulnerability or bugs. In your migration, add a comment describing how the reversibility of the migration was tested. 
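
As a brief sketch of the guidance above (the `foo_bars` table and column are hypothetical), a migration can document its reversibility check in a comment and define explicit `up` and `down` methods:

```ruby
# frozen_string_literal: true

class AddExternalIdToFooBars < ActiveRecord::Migration[4.2]
  DOWNTIME = false

  # Reversibility check: ran `bin/rake db:migrate` followed by
  # `bin/rake db:rollback` locally and confirmed the column was
  # added and removed cleanly.
  def up
    add_column :foo_bars, :external_id, :integer
  end

  def down
    remove_column :foo_bars, :external_id
  end
end
```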
-## Multi Threading +## Atomicity + +By default, migrations are single transaction. That is, a transaction is opened +at the beginning of the migration, and committed after all steps are processed. + +Running migrations in a single transaction makes sure that if one of the steps fails, +none of the steps will be executed, leaving the database in valid state. +Therefore, either: + +- Put all migrations in one single-transaction migration. +- If necessary, put most actions in one migration and create a separate migration + for the steps that cannot be done in a single transaction. + +For example, if you create an empty table and need to build an index for it, +it is recommended to use a regular single-transaction migration and the default +rails schema statement: [`add_index`](https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_index). +This is a blocking operation, but it won't cause problems because the table is not yet used, +and therefore it does not have any records yet. + +## Heavy operations in a single transaction + +When using a single-transaction migration, a transaction will hold on a database connection +for the duration of the migration, so you must make sure the actions in the migration +do not take too much time: In general, queries executed in a migration need to fit comfortably +within `15s` on GitLab.com. + +In case you need to insert, update, or delete a significant amount of data, you: + +- Must disable the single transaction with `disable_ddl_transaction!`. +- Should consider doing it in a [Background Migration](background_migrations.md). + +## Multi-Threading Sometimes a migration might need to use multiple Ruby threads to speed up a migration. For this to work your migration needs to include the module @@ -124,16 +153,16 @@ pool. This ensures each thread has its own connection object, and won't time out when trying to obtain one. **NOTE:** PostgreSQL has a maximum amount of connections that it allows. This -limit can vary from installation to installation. As a result it's recommended -you do not use more than 32 threads in a single migration. Usually 4-8 threads +limit can vary from installation to installation. As a result, it's recommended +you do not use more than 32 threads in a single migration. Usually, 4-8 threads should be more than enough. ## Removing indexes -When removing an index make sure to use the method `remove_concurrent_index` instead -of the regular `remove_index` method. The `remove_concurrent_index` method -automatically drops concurrent indexes when using PostgreSQL, removing the -need for downtime. To use this method you must disable single-transaction mode +If the table is not empty when removing an index, make sure to use the method +`remove_concurrent_index` instead of the regular `remove_index` method. +The `remove_concurrent_index` method drops indexes concurrently, so no locking is required, +and there is no need for downtime. To use this method, you must disable single-transaction mode by calling the method `disable_ddl_transaction!` in the body of your migration class like so: @@ -151,19 +180,25 @@ end Note that it is not necessary to check if the index exists prior to removing it. +For a small table (such as an empty one or one with less than `1,000` records), +it is recommended to use `remove_index` in a single-transaction migration, +combining it with other operations that don't require `disable_ddl_transaction!`. 
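
For instance, a minimal sketch of such a small-table migration (the `foo_bars` table and its columns are hypothetical), where the index removal stays inside the default single transaction alongside another schema change:

```ruby
# frozen_string_literal: true

class CleanUpFooBars < ActiveRecord::Migration[4.2]
  DOWNTIME = false

  def change
    # `foo_bars` is tiny, so dropping the index inside the default
    # migration transaction is acceptable and keeps the migration atomic.
    remove_index :foo_bars, column: :external_id
    remove_column :foo_bars, :legacy_flag, :boolean
  end
end
```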
+ ## Adding indexes -If you need to add a unique index please keep in mind there is the possibility +If you need to add a unique index, please keep in mind there is the possibility of existing duplicates being present in the database. This means that should always _first_ add a migration that removes any duplicates, before adding the unique index. -When adding an index make sure to use the method `add_concurrent_index` instead -of the regular `add_index` method. The `add_concurrent_index` method -automatically creates concurrent indexes when using PostgreSQL, removing the -need for downtime. To use this method you must disable transactions by calling -the method `disable_ddl_transaction!` in the body of your migration class like -so: +When adding an index to a non-empty table make sure to use the method +`add_concurrent_index` instead of the regular `add_index` method. +The `add_concurrent_index` method automatically creates concurrent indexes +when using PostgreSQL, removing the need for downtime. + +To use this method, you must disable single-transactions mode +by calling the method `disable_ddl_transaction!` in the body of your migration +class like so: ```ruby class MyMigration < ActiveRecord::Migration[4.2] @@ -181,16 +216,20 @@ class MyMigration < ActiveRecord::Migration[4.2] end ``` +For a small table (such as an empty one or one with less than `1,000` records), +it is recommended to use `add_index` in a single-transaction migration, combining it with other +operations that don't require `disable_ddl_transaction!`. + ## Adding foreign-key constraints -When adding a foreign-key constraint to either an existing or new -column remember to also add a index on the column. +When adding a foreign-key constraint to either an existing or a new column also +remember to add an index on the column. This is **required** for all foreign-keys, e.g., to support efficient cascading deleting: when a lot of rows in a table get deleted, the referenced records need to be deleted too. The database has to look for corresponding records in the referenced table. Without an index, this will result in a sequential scan on the -table which can take a long time. +table, which can take a long time. Here's an example where we add a new column with a foreign key constraint. Note it includes `index: true` to create an index for it. @@ -204,13 +243,17 @@ class Migration < ActiveRecord::Migration[4.2] end ``` -When adding a foreign-key constraint to an existing column, we -have to employ `add_concurrent_foreign_key` and `add_concurrent_index` +When adding a foreign-key constraint to an existing column in a non-empty table, +we have to employ `add_concurrent_foreign_key` and `add_concurrent_index` instead of `add_reference`. +For an empty table (such as a fresh one), it is recommended to use +`add_reference` in a single-transaction migration, combining it with other +operations that don't require `disable_ddl_transaction!`. + ## Adding Columns With Default Values -When adding columns with default values you must use the method +When adding columns with default values to non-empty tables, you must use `add_column_with_default`. This method ensures the table is updated without requiring downtime. This method is not reversible so you must manually define the `up` and `down` methods in your migration class. @@ -234,10 +277,14 @@ end ``` Keep in mind that this operation can easily take 10-15 minutes to complete on -larger installations (e.g. GitLab.com). 
As a result you should only add default -values if absolutely necessary. There is a RuboCop cop that will fail if this -method is used on some tables that are very large on GitLab.com, which would -cause other issues. +larger installations (e.g. GitLab.com). As a result, you should only add +default values if absolutely necessary. There is a RuboCop cop that will fail if +this method is used on some tables that are very large on GitLab.com, which +would cause other issues. + +For a small table (such as an empty one or one with less than `1,000` records), +use `add_column` and `change_column_default` in a single-transaction migration, +combining it with other operations that don't require `disable_ddl_transaction!`. ## Updating an existing column @@ -255,8 +302,10 @@ update_column_in_batches(:projects, :foo, 10) do |table, query| end ``` -To perform a computed update, the value can be wrapped in `Arel.sql`, so Arel -treats it as an SQL literal. The below example is the same as the one above, but +If a computed update is needed, the value can be wrapped in `Arel.sql`, so Arel +treats it as an SQL literal. It's also a required deprecation for [Rails 6](https://gitlab.com/gitlab-org/gitlab-ce/issues/61451). + +The below example is the same as the one above, but the value is set to the product of the `bar` and `baz` columns: ```ruby @@ -277,12 +326,12 @@ staging environment - or asking someone else to do so for you - beforehand. By default, an integer column can hold up to a 4-byte (32-bit) number. That is a max value of 2,147,483,647. Be aware of this when creating a column that will -hold file sizes in byte units. If you are tracking file size in bytes this +hold file sizes in byte units. If you are tracking file size in bytes, this restricts the maximum file size to just over 2GB. To allow an integer column to hold up to an 8-byte (64-bit) number, explicitly set the limit to 8-bytes. This will allow the column to hold a value up to -9,223,372,036,854,775,807. +`9,223,372,036,854,775,807`. Rails migration example: @@ -296,9 +345,11 @@ add_column(:projects, :foo, :integer, default: 10, limit: 8) ## Timestamp column type -By default, Rails uses the `timestamp` data type that stores timestamp data without timezone information. -The `timestamp` data type is used by calling either the `add_timestamps` or the `timestamps` method. -Also Rails converts the `:datetime` data type to the `timestamp` one. +By default, Rails uses the `timestamp` data type that stores timestamp data +without timezone information. The `timestamp` data type is used by calling +either the `add_timestamps` or the `timestamps` method. + +Also, Rails converts the `:datetime` data type to the `timestamp` one. Example: @@ -319,14 +370,16 @@ def up end ``` -Instead of using these methods one should use the following methods to store timestamps with timezones: +Instead of using these methods, one should use the following methods to store +timestamps with timezones: - `add_timestamps_with_timezone` - `timestamps_with_timezone` -This ensures all timestamps have a time zone specified. This in turn means existing timestamps won't -suddenly use a different timezone when the system's timezone changes. It also makes it very clear which -timezone was used in the first place. +This ensures all timestamps have a time zone specified. This, in turn, means +existing timestamps won't suddenly use a different timezone when the system's +timezone changes. It also makes it very clear which timezone was used in the +first place. 
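
As a short sketch (the `foo_bars` table is hypothetical, and the helper is assumed to come from `Gitlab::Database::MigrationHelpers`), a migration using the timezone-aware helper could look like this:

```ruby
# frozen_string_literal: true

class AddTimestampsToFooBars < ActiveRecord::Migration[4.2]
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  def up
    # Adds `created_at` and `updated_at` as timezone-aware (timestamptz) columns.
    add_timestamps_with_timezone(:foo_bars)
  end

  def down
    remove_columns :foo_bars, :created_at, :updated_at
  end
end
```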
## Storing JSON in database @@ -343,10 +396,7 @@ class AddOptionsToBuildMetadata < ActiveRecord::Migration[5.0] end ``` -On MySQL the `JSON` and `JSONB` is translated to `TEXT 1MB`, as `JSONB` is PostgreSQL only feature. - -For above reason you have to use a serializer to provide a translation layer -in order to support PostgreSQL and MySQL seamlessly: +You have to use a serializer to provide a translation layer: ```ruby class BuildMetadata @@ -356,7 +406,7 @@ end ## Testing -Make sure that your migration works with MySQL and PostgreSQL with data. An +Make sure that your migration works for databases with data. An empty database does not guarantee that your migration is correct. Make sure your migration can be reversed. @@ -364,7 +414,7 @@ Make sure your migration can be reversed. ## Data migration Please prefer Arel and plain SQL over usual ActiveRecord syntax. In case of -using plain SQL you need to quote all input manually with `quote_string` helper. +using plain SQL, you need to quote all input manually with `quote_string` helper. Example with Arel: @@ -389,7 +439,7 @@ select_all("SELECT name, COUNT(id) as cnt FROM tags GROUP BY name HAVING COUNT(i end ``` -If you need more complex logic you can define and use models local to a +If you need more complex logic, you can define and use models local to a migration. For example: ```ruby @@ -400,13 +450,13 @@ class MyMigration < ActiveRecord::Migration[4.2] end ``` -When doing so be sure to explicitly set the model's table name so it's not +When doing so be sure to explicitly set the model's table name, so it's not derived from the class name or namespace. ### Renaming reserved paths -When a new route for projects is introduced that could conflict with any -existing records. The path for this records should be renamed, and the +When a new route for projects is introduced, it could conflict with any +existing records. The path for these records should be renamed, and the related data should be moved on disk. Since we had to do this a few times already, there are now some helpers to help diff --git a/doc/development/module_with_instance_variables.md b/doc/development/module_with_instance_variables.md index 7bdfa04fc57..443eee0b62c 100644 --- a/doc/development/module_with_instance_variables.md +++ b/doc/development/module_with_instance_variables.md @@ -1,6 +1,6 @@ -## Modules with instance variables could be considered harmful +# Modules with instance variables could be considered harmful -### Background +## Background Rails somehow encourages people using modules and instance variables everywhere. For example, using instance variables in the controllers, @@ -9,7 +9,7 @@ helpers, and views. They're also encouraging the use of saving everything in a giant, single object, and people could access everything in that one giant object. -### The problems +## The problems Of course this is convenient to develop, because we just have everything within reach. However this has a number of downsides when that chosen object @@ -24,7 +24,7 @@ manipulated from 3 different modules. It's hard to track when those variables start giving us troubles. We don't know which module would suddenly change one of the variables. Everything could touch anything. -### Similar concerns +## Similar concerns People are saying multiple inheritance is bad. Mixing multiple modules with multiple instance variables scattering everywhere suffer from the same issue. @@ -40,7 +40,7 @@ Note that `included` doesn't solve the whole issue. 
They define the dependencies, but they still allow each modules to talk implicitly via the instance variables in the final giant object, and that's where the problem is. -### Solutions +## Solutions We should split the giant object into multiple objects, and they communicate with each other with the API, i.e. public methods. In short, composition over @@ -53,7 +53,7 @@ With clearly defined API, this would make things less coupled and much easier to debug and track, and much more extensible for other objects to use, because they communicate in a clear way, rather than implicit dependencies. -### Acceptable use +## Acceptable use However, it's not always bad to use instance variables in a module, as long as it's contained in the same module; that is, no other modules or @@ -74,7 +74,7 @@ Unfortunately it's not easy to code more complex rules into the cop, so we rely on people's best judgement. If we could find another good pattern we could easily add to the cop, we should do it. -### How to rewrite and avoid disabling this cop +## How to rewrite and avoid disabling this cop Even if we could just disable the cop, we should avoid doing so. Some code could be easily rewritten in simple form. Consider this acceptable method: @@ -181,7 +181,7 @@ rather than whatever includes the module, and those modules which were also included, making it much easier to track down any issues, and reducing the chance of having name conflicts. -### How to disable this cop +## How to disable this cop Put the disabling comment right after your code in the same line: @@ -210,14 +210,14 @@ end Note that you need to enable it at some point, otherwise everything below won't be checked. -### Things we might need to ignore right now +## Things we might need to ignore right now Because of the way Rails helpers and mailers work, we might not be able to avoid the use of instance variables there. For those cases, we could ignore them at the moment. At least we're not going to share those modules with other random objects, so they're still somewhat isolated. -### Instance variables in views +## Instance variables in views They're bad because we can't easily tell who's using the instance variables (from controller's point of view) and where we set them up (from partials' diff --git a/doc/development/namespaces_storage_statistics.md b/doc/development/namespaces_storage_statistics.md new file mode 100644 index 00000000000..2c7e5935435 --- /dev/null +++ b/doc/development/namespaces_storage_statistics.md @@ -0,0 +1,178 @@ +# Database case study: Namespaces storage statistics + +## Introduction + +On [Storage and limits management for groups](https://gitlab.com/groups/gitlab-org/-/epics/886), +we want to facilitate a method for easily viewing the amount of +storage consumed by a group, and allow easy management. + +## Proposal + +1. Create a new ActiveRecord model to hold the namespaces' statistics in an aggregated form (only for root namespaces). +1. Refresh the statistics in this model every time a project belonging to this namespace is changed. + +## Problem + +In GitLab, we update the project storage statistics through a +[callback](https://gitlab.com/gitlab-org/gitlab-ce/blob/v12.2.0.pre/app/models/project.rb#L90) +every time the project is saved. + +The summary of those statistics per namespace is then retrieved +by [`Namespaces#with_statistics`](https://gitlab.com/gitlab-org/gitlab-ce/blob/v12.2.0.pre/app/models/namespace.rb#L70) scope. 
Analyzing this query we noticed that: + +- It takes up to `1.2` seconds for namespaces with over `15k` projects. +- It can't be analyzed with [ChatOps](chatops_on_gitlabcom.md), as it times out. + +Additionally, the pattern that is currently used to update the project statistics +(the callback) doesn't scale adequately. It is currently one of the largest +[database queries transactions on production](https://gitlab.com/gitlab-org/gitlab-ce/issues/62488) +that takes the most time overall. We can't add one more query to it as +it will increase the transaction's length. + +Because of all of the above, we can't apply the same pattern to store +and update the namespaces statistics, as the `namespaces` table is one +of the largest tables on GitLab.com. Therefore we needed to find a performant and +alternative method. + +## Attempts + +### Attempt A: PostgreSQL materialized view + +Model can be updated through a refresh strategy based on a project routes SQL and a [materialized view](https://www.postgresql.org/docs/9.6/rules-materializedviews.html): + +```sql +SELECT split_part("rs".path, '/', 1) as root_path, + COALESCE(SUM(ps.storage_size), 0) AS storage_size, + COALESCE(SUM(ps.repository_size), 0) AS repository_size, + COALESCE(SUM(ps.wiki_size), 0) AS wiki_size, + COALESCE(SUM(ps.lfs_objects_size), 0) AS lfs_objects_size, + COALESCE(SUM(ps.build_artifacts_size), 0) AS build_artifacts_size, + COALESCE(SUM(ps.packages_size), 0) AS packages_size +FROM "projects" + INNER JOIN routes rs ON rs.source_id = projects.id AND rs.source_type = 'Project' + INNER JOIN project_statistics ps ON ps.project_id = projects.id +GROUP BY root_path +``` + +We could then execute the query with: + +```sql +REFRESH MATERIALIZED VIEW root_namespace_storage_statistics; +``` + +While this implied a single query update (and probably a fast one), it has some downsides: + +- Materialized views syntax varies from PostgreSQL and MySQL. While this feature was worked on, MySQL was still supported by GitLab. +- Rails does not have native support for materialized views. We'd need to use a specialized gem to take care of the management of the database views, which implies additional work. + +### Attempt B: An update through a CTE + +Similar to Attempt A: Model update done through a refresh strategy with a [Common Table Expression](https://www.postgresql.org/docs/9.1/queries-with.html) + +```sql +WITH refresh AS ( + SELECT split_part("rs".path, '/', 1) as root_path, + COALESCE(SUM(ps.storage_size), 0) AS storage_size, + COALESCE(SUM(ps.repository_size), 0) AS repository_size, + COALESCE(SUM(ps.wiki_size), 0) AS wiki_size, + COALESCE(SUM(ps.lfs_objects_size), 0) AS lfs_objects_size, + COALESCE(SUM(ps.build_artifacts_size), 0) AS build_artifacts_size, + COALESCE(SUM(ps.packages_size), 0) AS packages_size + FROM "projects" + INNER JOIN routes rs ON rs.source_id = projects.id AND rs.source_type = 'Project' + INNER JOIN project_statistics ps ON ps.project_id = projects.id + GROUP BY root_path) +UPDATE namespace_storage_statistics +SET storage_size = refresh.storage_size, + repository_size = refresh.repository_size, + wiki_size = refresh.wiki_size, + lfs_objects_size = refresh.lfs_objects_size, + build_artifacts_size = refresh.build_artifacts_size, + packages_size = refresh.packages_size +FROM refresh + INNER JOIN routes rs ON rs.path = refresh.root_path AND rs.source_type = 'Namespace' +WHERE namespace_storage_statistics.namespace_id = rs.source_id +``` + +Same benefits and downsides as attempt A. 
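
For context, either refresh strategy would still have to be triggered from the application, most likely by executing the raw SQL from a service or scheduled worker. A minimal sketch follows; the class name and the way it would be scheduled are illustrative assumptions, not shipped code:

```ruby
module Namespaces
  # Illustrative only: runs one of the refresh queries shown above.
  class RefreshRootStorageStatisticsService
    REFRESH_SQL = 'REFRESH MATERIALIZED VIEW root_namespace_storage_statistics'

    def execute
      ActiveRecord::Base.connection.execute(REFRESH_SQL)
    end
  end
end
```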
+ +### Attempt C: Get rid of the model and store the statistics on Redis + +We could get rid of the model that stores the statistics in aggregated form and instead use a Redis Set. +This would be the [boring solution](https://about.gitlab.com/handbook/values/#boring-solutions) and the fastest one +to implement, as GitLab already includes Redis as part of its [Architecture](architecture.md#redis). + +The downside of this approach is that Redis does not provide the same persistence/consistency guarantees as PostgreSQL, +and this is information we can't afford to lose in the event of a Redis failure. + +### Attempt D: Tag the root namespace and its child namespaces + +Directly relate the root namespace to its child namespaces, so +whenever a namespace is created without a parent, this one is tagged +with the root namespace ID: + +| id | root_id | parent_id +|:---|:--------|:---------- +| 1 | 1 | NULL +| 2 | 1 | 1 +| 3 | 1 | 2 + +To aggregate the statistics inside a namespace we'd execute something like: + +```sql +SELECT COUNT(...) +FROM projects +WHERE namespace_id IN ( + SELECT id + FROM namespaces + WHERE root_id = X +) +``` + +Even though this approach would make aggregating much easier, it has some major downsides: + +- We'd have to migrate **all namespaces** by adding and filling a new column. Because of the size of the table, the time and cost of doing so would be significant. The background migration would take approximately `153h`, see <https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/29772>. +- The background migration has to be shipped one release before, delaying the functionality by another milestone. + +### Attempt E (final): Update the namespace storage statistics in an async way + +This approach consists of keeping the incremental statistics updates we already have, +but refreshing them through Sidekiq jobs and in separate transactions: + +1. Create a second table (`namespace_aggregation_schedules`) with two columns `id` and `namespace_id`. +1. Whenever the statistics of a project change, insert a row into `namespace_aggregation_schedules`. + - We don't insert a new row if there's already one related to the root namespace. + - Keeping in mind the length of the transaction that involves updating `project_statistics` (<https://gitlab.com/gitlab-org/gitlab-ce/issues/62488>), the insertion should be done in a different transaction and through a Sidekiq job. +1. After inserting the row, we schedule another worker to be executed asynchronously at two different moments: + - One job is enqueued for immediate execution and another one is scheduled to run in `1.5` hours. + - We only schedule the jobs if we can obtain a `1.5h` lease on Redis, on a key based on the root namespace ID. + - If we can't obtain the lease, it indicates there's another aggregation already in progress, or one scheduled to run in no more than `1.5` hours. +1. This worker will: + - Update the root namespace storage statistics by querying all the namespaces through a service. + - Delete the related `namespace_aggregation_schedules` row after the update. +1. Another Sidekiq job is also included to traverse any remaining rows on the `namespace_aggregation_schedules` table and schedule jobs for every pending row. + - This job is scheduled with cron to run every night (UTC). + +This implementation has the following benefits: + +- All the updates are done asynchronously, so we're not increasing the length of the transactions for `project_statistics`. +- We're doing the update in a single SQL query. +- It is compatible with PostgreSQL and MySQL. +- No background migration required.
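
To make the lease-guarded scheduling step described above more concrete, here is a minimal sketch of how it could look. It is an illustration of the described flow rather than the exact shipped code: `Gitlab::ExclusiveLease` and Sidekiq's `perform_async`/`perform_in` are existing APIs, while the worker name and lease key are assumptions:

```ruby
# Illustrative sketch of the lease-guarded scheduling described above.
class Namespace::AggregationSchedule < ApplicationRecord
  DELAY = 1.5.hours

  def schedule_root_statistics_refresh
    # Skip if an aggregation is already running or scheduled for this root namespace.
    lease = Gitlab::ExclusiveLease.new("namespace:#{namespace_id}:aggregate_root_statistics",
                                       timeout: DELAY.to_i)
    return unless lease.try_obtain

    # One immediate refresh, and one after the lease window to catch later changes.
    Namespaces::RootStatisticsWorker.perform_async(namespace_id)
    Namespaces::RootStatisticsWorker.perform_in(DELAY, namespace_id)
  end
end
```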
+ +The only downside of this approach is that namespaces' statistics are updated up to `1.5` hours after the change is done, +which means there's a time window in which the statistics are inaccurate. Because we're still not +[enforcing storage limits](https://gitlab.com/gitlab-org/gitlab-ce/issues/30421), this is not a major problem. + +## Conclusion + +Updating the storage statistics asynchronously was the least problematic and +most performant approach to aggregating the root namespaces. + +All the details regarding this use case can be found in: + +- <https://gitlab.com/gitlab-org/gitlab-ce/issues/62214> +- The merge request with the implementation: <https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/28996> + +The performance of the namespace storage statistics was measured in staging and production (GitLab.com). All results were posted +on <https://gitlab.com/gitlab-org/gitlab-ce/issues/64092>; no problems have been reported so far. diff --git a/doc/development/new_fe_guide/dependencies.md b/doc/development/new_fe_guide/dependencies.md index 12a4f089d41..8a6930acd37 100644 --- a/doc/development/new_fe_guide/dependencies.md +++ b/doc/development/new_fe_guide/dependencies.md @@ -15,6 +15,18 @@ Exceptions are made for some tools that we require in the `gitlab:assets:compile` CI job such as `webpack-bundle-analyzer` to analyze our production assets post-compile. +To add or upgrade a dependency, run: + +```sh +yarn add <your dependency here> +``` + +This may introduce duplicate dependencies. To de-duplicate `yarn.lock`, run: + +```sh +node_modules/.bin/yarn-deduplicate --list --strategy fewer yarn.lock && yarn install +``` + +--- > TODO: Add Dependencies diff --git a/doc/development/new_fe_guide/development/accessibility.md b/doc/development/new_fe_guide/development/accessibility.md index 81a29170129..ae5c4c6a6cc 100644 --- a/doc/development/new_fe_guide/development/accessibility.md +++ b/doc/development/new_fe_guide/development/accessibility.md @@ -1,17 +1,21 @@ # Accessiblity + Using semantic HTML plays a key role when it comes to accessibility. ## Accessible Rich Internet Applications - ARIA + WAI-ARIA, the Accessible Rich Internet Applications specification, defines a way to make Web content and Web applications more accessible to people with disabilities. > Note: It is [recommended][using-aria] to use semantic elements as the primary method to achieve accessibility rather than adding aria attributes. Adding aria attributes should be seen as a secondary method for creating accessible elements. ### Role + The `role` attribute describes the role the element plays in the context of the document. Check the list of WAI-ARIA roles [here][roles] ## Icons + When using icons or images that aren't absolutely needed to understand the context, we should use `aria-hidden="true"`. On the other hand, if an icon is crucial to understand the context we should do one of the following: @@ -20,6 +24,7 @@ On the other hand, if an icon is crucial to understand the context we should do 1.
Use `aria-labelledby` to point to an element that contains the explanation for that icon ## Form inputs + In forms we should use the `for` attribute in the label statement: ``` diff --git a/doc/development/new_fe_guide/development/performance.md b/doc/development/new_fe_guide/development/performance.md index c54b8305991..d41239693bf 100644 --- a/doc/development/new_fe_guide/development/performance.md +++ b/doc/development/new_fe_guide/development/performance.md @@ -5,7 +5,7 @@ We have a performance dashboard available in one of our [grafana instances](https://dashboards.gitlab.net/d/1EBTz3Dmz/sitespeed-page-summary?orgId=1). This dashboard automatically aggregates metric data from [sitespeed.io](https://www.sitespeed.io/) every 6 hours. These changes are displayed after a set number of pages are aggregated. These pages can be found inside a text file in the gitlab-build-images [repository](https://gitlab.com/gitlab-org/gitlab-build-images) called [gitlab.txt](https://gitlab.com/gitlab-org/gitlab-build-images/blob/master/scripts/gitlab.txt) -Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing urls of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after the changes are merged into `master`. +Any frontend engineer can contribute to this dashboard. They can contribute by adding or removing urls of pages from this text file. Please have a [frontend monitoring expert](https://about.gitlab.com/company/team/) review your changes before assigning to a maintainer of the `gitlab-build-images` project. The changes will go live on the next scheduled run after the changes are merged into `master`. There are 3 recommended high impact metrics to review on each page: diff --git a/doc/development/new_fe_guide/development/testing.md b/doc/development/new_fe_guide/development/testing.md index 2b62c2a41fe..e0d413b748b 100644 --- a/doc/development/new_fe_guide/development/testing.md +++ b/doc/development/new_fe_guide/development/testing.md @@ -1,361 +1,6 @@ -# Overview of Frontend Testing +--- +redirect_to: '../../testing_guide/frontend_testing.md' +--- -Tests relevant for frontend development can be found at the following places: +This document was moved to [another location](../../testing_guide/frontend_testing.md). -- `spec/javascripts/` which are run by Karma (command: `yarn karma`) and contain - - [frontend unit tests](#frontend-unit-tests) - - [frontend component tests](#frontend-component-tests) - - [frontend integration tests](#frontend-integration-tests) -- `spec/frontend/` which are run by Jest (command: `yarn jest`) and contain - - [frontend unit tests](#frontend-unit-tests) - - [frontend component tests](#frontend-component-tests) - - [frontend integration tests](#frontend-integration-tests) -- `spec/features/` which are run by RSpec and contain - - [feature tests](#feature-tests) - -All tests in `spec/javascripts/` will eventually be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-ce/issues/52483)). - -In addition there were feature tests in `features/` run by Spinach in the past. -These have been removed from our codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-ce/issues/23036)). - -See also: - -- [Old testing guide](../../testing_guide/frontend_testing.html). 
-- [Notes on testing Vue components](../../fe_guide/vue.html#testing-vue-components). - -## Frontend unit tests - -Unit tests are on the lowest abstraction level and typically test functionality that is not directly perceivable by a user. - -### When to use unit tests - -<details> - <summary>exported functions and classes</summary> - Anything that is exported can be reused at various places in a way you have no control over. - Therefore it is necessary to document the expected behavior of the public interface with tests. -</details> - -<details> - <summary>Vuex actions</summary> - Any Vuex action needs to work in a consistent way independent of the component it is triggered from. -</details> - -<details> - <summary>Vuex mutations</summary> - For complex Vuex mutations it helps to identify the source of a problem by separating the tests from other parts of the Vuex store. -</details> - -### When *not* to use unit tests - -<details> - <summary>non-exported functions or classes</summary> - Anything that is not exported from a module can be considered private or an implementation detail and doesn't need to be tested. -</details> - -<details> - <summary>constants</summary> - Testing the value of a constant would mean to copy it. - This results in extra effort without additional confidence that the value is correct. -</details> - -<details> - <summary>Vue components</summary> - Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components and don't need to be tested. - They are implicitly covered by component tests. - The <a href="https://vue-test-utils.vuejs.org/guides/#getting-started">official Vue guidelines</a> suggest the same. -</details> - -### What to mock in unit tests - -<details> - <summary>state of the class under test</summary> - Modifying the state of the class under test directly rather than using methods of the class avoids side-effects in test setup. -</details> - -<details> - <summary>other exported classes</summary> - Every class needs to be tested in isolation to prevent test scenarios from growing exponentially. -</details> - -<details> - <summary>single DOM elements if passed as parameters</summary> - For tests that only operate on single DOM elements rather than a whole page, creating these elements is cheaper than loading a whole HTML fixture. -</details> - -<details> - <summary>all server requests</summary> - When running frontend unit tests, the backend may not be reachable. - Therefore all outgoing requests need to be mocked. -</details> - -<details> - <summary>asynchronous background operations</summary> - Background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. -</details> - -### What *not* to mock in unit tests - -<details> - <summary>non-exported functions or classes</summary> - Everything that is not exported can be considered private to the module and will be implicitly tested via the exported classes / functions. -</details> - -<details> - <summary>methods of the class under test</summary> - By mocking methods of the class under test, the mocks will be tested and not the real methods. -</details> - -<details> - <summary>utility functions (pure functions, or those that only modify parameters)</summary> - If a function has no side effects because it has no state, it is safe to not mock it in tests. 
-</details> - -<details> - <summary>full HTML pages</summary> - Loading the HTML of a full page slows down tests, so it should be avoided in unit tests. -</details> - -## Frontend component tests - -Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state. - -### When to use component tests - -- Vue components - -### When *not* to use component tests - -<details> - <summary>Vue applications</summary> - Vue applications may contain many components. - Testing them on a component level requires too much effort. - Therefore they are tested on frontend integration level. -</details> - -<details> - <summary>HAML templates</summary> - HAML templates contain only Markup and no frontend-side logic. - Therefore they are not complete components. -</details> - -### What to mock in component tests - -<details> - <summary>DOM</summary> - Operating on the real DOM is significantly slower than on the virtual DOM. -</details> - -<details> - <summary>properties and state of the component under test</summary> - Similarly to testing classes, modifying the properties directly (rather than relying on methods of the component) avoids side-effects. -</details> - -<details> - <summary>Vuex store</summary> - To avoid side effects and keep component tests simple, Vuex stores are replaced with mocks. -</details> - -<details> - <summary>all server requests</summary> - Similar to unit tests, when running component tests, the backend may not be reachable. - Therefore all outgoing requests need to be mocked. -</details> - -<details> - <summary>asynchronous background operations</summary> - Similar to unit tests, background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. -</details> - -<details> - <summary>child components</summary> - Every component is tested individually, so child components are mocked. - See also <a href="https://vue-test-utils.vuejs.org/api/#shallowmount">shallowMount()</a> -</details> - -### What *not* to mock in component tests - -<details> - <summary>methods or computed properties of the component under test</summary> - By mocking part of the component under test, the mocks will be tested and not the real component. -</details> - -<details> - <summary>functions and classes independent from Vue</summary> - All plain JavaScript code is already covered by unit tests and needs not to be mocked in component tests. -</details> - -## Frontend integration tests - -Integration tests cover the interaction between all components on a single page. -Their abstraction level is comparable to how a user would interact with the UI. - -### When to use integration tests - -<details> - <summary>page bundles (<code>index.js</code> files in <code>app/assets/javascripts/pages/</code>)</summary> - Testing the page bundles ensures the corresponding frontend components integrate well. -</details> - -<details> - <summary>Vue applications outside of page bundles</summary> - Testing Vue applications as a whole ensures the corresponding frontend components integrate well. -</details> - -### What to mock in integration tests - -<details> - <summary>HAML views (use fixtures instead)</summary> - Rendering HAML views requires a Rails environment including a running database which we cannot rely on in frontend tests. 
-</details> - -<details> - <summary>all server requests</summary> - Similar to unit and component tests, when running component tests, the backend may not be reachable. - Therefore all outgoing requests need to be mocked. -</details> - -<details> - <summary>asynchronous background operations that are not perceivable on the page</summary> - Background operations that affect the page need to be tested on this level. - All other background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. -</details> - -### What *not* to mock in integration tests - -<details> - <summary>DOM</summary> - Testing on the real DOM ensures our components work in the environment they are meant for. - Part of this will be delegated to <a href="https://gitlab.com/gitlab-org/quality/team-tasks/issues/45">cross-browser testing</a>. -</details> - -<details> - <summary>properties or state of components</summary> - On this level, all tests can only perform actions a user would do. - For example to change the state of a component, a click event would be fired. -</details> - -<details> - <summary>Vuex stores</summary> - When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well. -</details> - -## Feature tests - -In contrast to [frontend integration tests](#frontend-integration-tests), feature tests make requests against the real backend instead of using fixtures. -This also implies that database queries are executed which makes this category significantly slower. - -See also the [RSpec testing guidelines](../../testing_guide/best_practices.md#rspec). - -### When to use feature tests - -- use cases that require a backend and cannot be tested using fixtures -- behavior that is not part of a page bundle but defined globally - -### Relevant notes - -A `:js` flag is added to the test to make sure the full environment is loaded. - -``` -scenario 'successfully', :js do - sign_in(create(:admin)) -end -``` - -The steps of each test are written using capybara methods ([documentation](https://www.rubydoc.info/gems/capybara/2.15.1)). - -Bear in mind <abbr title="XMLHttpRequest">XHR</abbr> calls might require you to use `wait_for_requests` in between steps, like so: - -```rspec -find('.form-control').native.send_keys(:enter) - -wait_for_requests - -expect(page).not_to have_selector('.card') -``` - -## Test helpers - -### Vuex Helper: `testAction` - -We have a helper available to make testing actions easier, as per [official documentation](https://vuex.vuejs.org/guide/testing.html): - -``` -testAction( - actions.actionName, // action - { }, // params to be passed to action - state, // state - [ - { type: types.MUTATION}, - { type: types.MUTATION_1, payload: {}}, - ], // mutations committed - [ - { type: 'actionName', payload: {}}, - { type: 'actionName1', payload: {}}, - ] // actions dispatched - done, -); -``` - -Check an example in [spec/javascripts/ide/stores/actions_spec.jsspec/javascripts/ide/stores/actions_spec.js](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/spec/javascripts/ide/stores/actions_spec.js). - -### Vue Helper: `mountComponent` - -To make mounting a Vue component easier and more readable, we have a few helpers available in `spec/helpers/vue_mount_component_helper`. 
- -- `createComponentWithStore` -- `mountComponentWithStore` - -Examples of usage: - -``` -beforeEach(() => { - vm = createComponentWithStore(Component, store); - - vm.$store.state.currentBranchId = 'master'; - - vm.$mount(); -}, -``` - -``` -beforeEach(() => { - vm = mountComponentWithStore(Component, { - el: '#dummy-element', - store, - props: { badge }, - }); -}, -``` - -Don't forget to clean up: - -``` -afterEach(() => { - vm.$destroy(); -}); -``` - -## Testing with older browsers - -Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or Browserstack using the following steps: - -### Browserstack - -[Browserstack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers. -You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access. -You can find the credentials on 1Password, under `frontendteam@gitlab.com`. - -### Firefox - -#### macOS - -You can download any older version of Firefox from the releases FTP server, <https://ftp.mozilla.org/pub/firefox/releases/> - -1. From the website, select a version, in this case `50.0.1`. -1. Go to the mac folder. -1. Select your preferred language, you will find the dmg package inside, download it. -1. Drag and drop the application to any other folder but the `Applications` folder. -1. Rename the application to something like `Firefox_Old`. -1. Move the application to the `Applications` folder. -1. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version. -1. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version. diff --git a/doc/development/new_fe_guide/index.md b/doc/development/new_fe_guide/index.md index 0e8f5486861..227d03bd86f 100644 --- a/doc/development/new_fe_guide/index.md +++ b/doc/development/new_fe_guide/index.md @@ -19,7 +19,6 @@ Learn about all the internal JavaScript modules that make up our frontend. Style guides to keep our code consistent. - ## [Tips](tips.md) Tips from our frontend team to develop more efficiently and effectively. diff --git a/doc/development/new_fe_guide/modules/dirty_submit.md b/doc/development/new_fe_guide/modules/dirty_submit.md index 6c03958b463..217743ea395 100644 --- a/doc/development/new_fe_guide/modules/dirty_submit.md +++ b/doc/development/new_fe_guide/modules/dirty_submit.md @@ -1,7 +1,6 @@ # Dirty Submit -> [Introduced][ce-21115] in GitLab 11.3. -> [dirty_submit][dirty-submit] +> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21115) in GitLab 11.3. ## Summary @@ -9,6 +8,9 @@ Prevent submitting forms with no changes. Currently handles `input`, `textarea` and `select` elements. +Also, see [the code](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/assets/javascripts/dirty_submit/) +within the GitLab project. + ## Usage ```js @@ -18,6 +20,3 @@ new DirtySubmitForm(document.querySelector('form')); // or new DirtySubmitForm(document.querySelectorAll('form')); ``` - -[ce-21115]: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/21115 -[dirty-submit]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/assets/javascripts/dirty_submit/
\ No newline at end of file diff --git a/doc/development/new_fe_guide/style/index.md b/doc/development/new_fe_guide/style/index.md index 335d9e66240..f073dc56f1f 100644 --- a/doc/development/new_fe_guide/style/index.md +++ b/doc/development/new_fe_guide/style/index.md @@ -8,7 +8,7 @@ ## [Vue style guide](vue.md) -# Tooling +## Tooling ## [Prettier](prettier.md) diff --git a/doc/development/new_fe_guide/style/javascript.md b/doc/development/new_fe_guide/style/javascript.md index 3019eaa089c..b742d567f41 100644 --- a/doc/development/new_fe_guide/style/javascript.md +++ b/doc/development/new_fe_guide/style/javascript.md @@ -71,7 +71,6 @@ class myClass { } const instance = new myClass(); instance.makeRequest(); - ``` ## Avoid classes to handle DOM events @@ -189,8 +188,8 @@ disabled due to legacy compatibility reasons but they are in the process of bein Do not disable specific ESLint rules. Due to technical debt, you may disable the following rules only if you are invoking/instantiating existing code modules. - - [no-new](http://eslint.org/docs/rules/no-new) - - [class-method-use-this](http://eslint.org/docs/rules/class-methods-use-this) +- [no-new](http://eslint.org/docs/rules/no-new) +- [class-method-use-this](http://eslint.org/docs/rules/class-methods-use-this) > Note: Disable these rules on a per line basis. This makes it easier to refactor - in the future. E.g. use `eslint-disable-next-line` or `eslint-disable-line`. +> in the future. E.g. use `eslint-disable-next-line` or `eslint-disable-line`. diff --git a/doc/development/new_fe_guide/style/prettier.md b/doc/development/new_fe_guide/style/prettier.md index 4495f38f262..5f44c640d76 100644 --- a/doc/development/new_fe_guide/style/prettier.md +++ b/doc/development/new_fe_guide/style/prettier.md @@ -4,7 +4,7 @@ Our code is automatically formatted with [Prettier](https://prettier.io) to foll ## Editor -The easiest way to include prettier in your workflow is by setting up your preferred editor (all major editors are supported) accordingly. We suggest setting up prettier to run automatically when each file is saved. Find [here](https://prettier.io/docs/en/editors.html) the best way to set it up in your preferred editor. +The easiest way to include prettier in your workflow is by setting up your preferred editor (all major editors are supported) accordingly. We suggest setting up prettier to run automatically when each file is saved. Find [here](https://prettier.io/docs/en/editors.html) the best way to set it up in your preferred editor. Please take care that you only let Prettier format the same file types as the global Yarn script does (.js, .vue, and .scss). In VSCode by example you can easily exclude file formats in your settings file: @@ -28,6 +28,7 @@ Updates all currently staged files (based on `git diff`) with Prettier and saves ``` yarn prettier-staged ``` + Checks all currently staged files (based on `git diff`) with Prettier and log which files would need manual updating to the console. ``` diff --git a/doc/development/omnibus.md b/doc/development/omnibus.md index 0ba354d28a2..ea5c18f1a8c 100644 --- a/doc/development/omnibus.md +++ b/doc/development/omnibus.md @@ -1,32 +1,32 @@ -# What you should know about omnibus packages +# What you should know about Omnibus packages -Most users install GitLab using our omnibus packages. As a developer it can be -good to know how the omnibus packages differ from what you have on your laptop +Most users install GitLab using our Omnibus packages. 
As a developer it can be +good to know how the Omnibus packages differ from what you have on your laptop when you are coding. ## Files are owned by root by default -All the files in the Rails tree (`app/`, `config/` etc.) are owned by 'root' in -omnibus installations. This makes the installation simpler and it provides -extra security. The omnibus reconfigure script contains commands that give -write access to the 'git' user only where needed. +All the files in the Rails tree (`app/`, `config/` etc.) are owned by `root` in +Omnibus installations. This makes the installation simpler and it provides +extra security. The Omnibus reconfigure script contains commands that give +write access to the `git` user only where needed. -For example, the 'git' user is allowed to write in the `log/` directory, in +For example, the `git` user is allowed to write in the `log/` directory, in `public/uploads`, and they are allowed to rewrite the `db/schema.rb` file. In other cases, the reconfigure script tricks GitLab into not trying to write a file. For instance, GitLab will generate a `.secret` file if it cannot find one -and write it to the Rails root. In the omnibus packages, reconfigure writes the +and write it to the Rails root. In the Omnibus packages, reconfigure writes the `.secret` file first, so that GitLab never tries to write it. ## Code, data and logs are in separate directories -The omnibus design separates code (read-only, under `/opt/gitlab`) from data +The Omnibus design separates code (read-only, under `/opt/gitlab`) from data (read/write, under `/var/opt/gitlab`) and logs (read/write, under `/var/log/gitlab`). To make this happen the reconfigure script sets custom paths where it can in GitLab config files, and where there are no path settings, it uses symlinks. For example, `config/gitlab.yml` is treated as data so that file is a symlink. -The same goes for `public/uploads`. The `log/` directory is replaced by omnibus +The same goes for `public/uploads`. The `log/` directory is replaced by Omnibus with a symlink to `/var/log/gitlab/gitlab-rails`. diff --git a/doc/development/performance.md b/doc/development/performance.md index c034f4a344b..14b3f8204d2 100644 --- a/doc/development/performance.md +++ b/doc/development/performance.md @@ -246,6 +246,7 @@ irb(main):002:0> results.last.attributes.keys irb(main):003:0> results.where(status: "passed").average(:time).to_s => "0.211340155844156" ``` + These results can also be placed into a PostgreSQL database by setting the `RSPEC_PROFILING_POSTGRES_URL` variable. This is used to profile the test suite when running in the CI environment. @@ -266,7 +267,7 @@ piece of code is worth optimizing. The only two things you can do are: 1. Think about what the code does, how it's used, how many times it's called and how much time is spent in it relative to the total execution time (e.g., the total time spent in a web request). -2. Ask others (preferably in the form of an issue). +1. Ask others (preferably in the form of an issue). Some examples of changes that aren't really important/worth the effort: @@ -284,10 +285,10 @@ directly in a web request as much as possible. This has numerous benefits such as: 1. An error won't prevent the request from completing. -2. The process being slow won't affect the loading time of a page. -3. In case of a failure it's easy to re-try the process (Sidekiq takes care of +1. The process being slow won't affect the loading time of a page. +1. 
In case of a failure it's easy to re-try the process (Sidekiq takes care of
   this automatically).
-4. By isolating the code from a web request it will hopefully be easier to test
+1. By isolating the code from a web request it will hopefully be easier to test
   and maintain.
 
It's especially important to use Sidekiq as much as possible when dealing with
diff --git a/doc/development/policies.md b/doc/development/policies.md
index c4ac42bb40a..833b0acb13e 100644
--- a/doc/development/policies.md
+++ b/doc/development/policies.md
@@ -89,8 +89,8 @@ Each line represents a rule that was evaluated. There are a few things to note:
 
 1. The `-` or `+` symbol indicates whether the rule block was evaluated to be
    `false` or `true`, respectively.
-2. The number inside the brackets indicates the score.
-3. The last part of the line (e.g. `@john : Issue/1`) shows the username
+1. The number inside the brackets indicates the score.
+1. The last part of the line (e.g. `@john : Issue/1`) shows the username
   and subject for that rule.
 
 Here you can see that the first four rules were evaluated `false` for
diff --git a/doc/development/prometheus_metrics.md b/doc/development/prometheus_metrics.md
index 0511e735843..576601372a3 100644
--- a/doc/development/prometheus_metrics.md
+++ b/doc/development/prometheus_metrics.md
@@ -33,12 +33,10 @@ For example: you might be interested in migrating all dependent data to a differ
 class ImportCommonMetrics < ActiveRecord::Migration[4.2]
   include Gitlab::Database::MigrationHelpers
 
-  require Rails.root.join('db/importers/common_metrics_importer.rb')
-
   DOWNTIME = false
 
   def up
-    Importers::CommonMetricsImporter.new.execute
+    ::Gitlab::DatabaseImporters::CommonMetrics::Importer.new.execute
   end
 
   def down
diff --git a/doc/development/query_recorder.md b/doc/development/query_recorder.md
index a6b60149ea4..3787e2ef187 100644
--- a/doc/development/query_recorder.md
+++ b/doc/development/query_recorder.md
@@ -36,6 +36,13 @@ it "avoids N+1 database queries" do
 end
 ```
 
+## Use request specs instead of controller specs
+
+Use a [request spec](https://gitlab.com/gitlab-org/gitlab-ce/tree/master/spec/requests) when writing an N+1 test on the controller level.
+
+Controller specs should not be used to write N+1 tests as the controller is only initialized once per example.
+This could lead to false successes where subsequent "requests" could have queries reduced (e.g. because of memoization).
+
 ## Finding the source of the query
 
 It may be useful to identify the source of the queries by looking at the call backtrace.
diff --git a/doc/development/rake_tasks.md b/doc/development/rake_tasks.md
index c97e179910b..e9d6cfe00b2 100644
--- a/doc/development/rake_tasks.md
+++ b/doc/development/rake_tasks.md
@@ -9,7 +9,7 @@ bundle exec rake setup
 ```
 
 The `setup` task is an alias for `gitlab:setup`.
-This tasks calls `db:reset` to create the database, calls `add_limits_mysql` that adds limits to the database schema in case of a MySQL database and finally it calls `db:seed_fu` to seed the database.
+This task calls `db:reset` to create the database, and calls `db:seed_fu` to seed the database.
 
 Note: `db:setup` calls `db:seed` but this does nothing. 
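+
+For illustration only, the steps that the alias performs could also be run one at a time; this is a sketch based on the description above, not an additional supported workflow:
+
+```sh
+# Roughly what `bundle exec rake setup` does under the hood:
+bundle exec rake db:reset    # drop and recreate the database
+bundle exec rake db:seed_fu  # seed the development data
+```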
### Seeding issues for all or a given project diff --git a/doc/development/repository_mirroring.md b/doc/development/repository_mirroring.md index f8c33ff2b85..dc51bf80e92 100644 --- a/doc/development/repository_mirroring.md +++ b/doc/development/repository_mirroring.md @@ -8,4 +8,4 @@ In December 2018, Tiago Botelho hosted a [Deep Dive] on GitLab's [Pull Repositor [Pull Repository Mirroring functionality]: ../workflow/repository_mirroring.md#pulling-from-a-remote-repository-starter [recording on YouTube]: https://www.youtube.com/watch?v=sSZq0fpdY-Y [Google Slides]: https://docs.google.com/presentation/d/17BTT6M6RyNRckV4wTt-dr07nIfBvD325_xVBoLtSoPM/edit?usp=sharing -[PDF]: https://gitlab.com/gitlab-org/create-stage/uploads/8693404888a941fd851f8a8ecdec9675/Gitlab_Create_-_Pull_Mirroring_Deep_Dive.pdf
\ No newline at end of file +[PDF]: https://gitlab.com/gitlab-org/create-stage/uploads/8693404888a941fd851f8a8ecdec9675/Gitlab_Create_-_Pull_Mirroring_Deep_Dive.pdf diff --git a/doc/development/reusing_abstractions.md b/doc/development/reusing_abstractions.md index 59da02ed6fd..fce144f8dc2 100644 --- a/doc/development/reusing_abstractions.md +++ b/doc/development/reusing_abstractions.md @@ -14,13 +14,13 @@ on maintainability, the ability to easily debug problems, or even performance. An example would be to use `ProjectsFinder` in `IssuesFinder` to limit issues to those belonging to a set of projects. While initially this may seem like a good idea, both classes provide a very high level interface with very little control. -This means that `IssuesFinder` may not be able to produce a better optimised +This means that `IssuesFinder` may not be able to produce a better optimized database query, as a large portion of the query is controlled by the internals of `ProjectsFinder`. To work around this problem, you would use the same code used by `ProjectsFinder`, instead of using `ProjectsFinder` itself directly. This allows -you to compose your behaviour better, giving you more control over the behaviour +you to compose your behavior better, giving you more control over the behavior of the code. To illustrate, consider the following code from `IssuableFinder#projects`: @@ -52,7 +52,7 @@ functionality is added to this (high level) interface. Instead of _only_ affecting the cases where this is necessary, it may also start affecting `IssuableFinder` in a negative way. For example, the query produced by `GroupProjectsFinder` may include unnecessary conditions. Since we're using a -finder here, we can't easily opt-out of that behaviour. We could add options to +finder here, we can't easily opt-out of that behavior. We could add options to do so, but then we'd need as many options as we have features. Every option adds two code paths, which means that for four features we have to cover 8 different code paths. @@ -213,6 +213,5 @@ The API provided by Active Record itself, such as the `where` method, `save`, Everything in `app/workers`. -The scheduling of Sidekiq jobs using `SomeWorker.perform_async`, `perform_in`, -etc. Directly invoking a worker using `SomeWorker.new.perform` should be avoided -at all times in application code, though this is fine to use in tests. +Use `SomeWorker.perform_async` or `SomeWorker.perform_in` to schedule Sidekiq +jobs. Never directly invoke a worker using `SomeWorker.new.perform`. diff --git a/doc/development/session.md b/doc/development/session.md index 9edce3dbda0..971795d8816 100644 --- a/doc/development/session.md +++ b/doc/development/session.md @@ -17,7 +17,7 @@ When storing values in a session it is best to: - Use simple primitives and avoid storing objects to avoid marshaling complications. - Clean up after unneeded variables to keep memory usage in Redis down. -## Gitlab::Session +## GitLab::Session Sometimes you might want to persist data in the session instead of another store like the database. `Gitlab::Session` lets you access this without passing the session around extensively. For example, you could access it from within a policy without having to pass the session through to each place permissions are checked from. 
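+
+As a rough, hypothetical sketch (assuming the `Gitlab::Session.current` accessor this class exposes), usage could look like the following; the `:verified` key is made up for illustration:
+
+```ruby
+# Store a value for the current request's session...
+Gitlab::Session.current[:verified] = true
+
+# ...and read it back later, for example from within a policy,
+# without passing the session object through every method call.
+def verified_session?
+  Gitlab::Session.current && Gitlab::Session.current[:verified]
+end
+```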
diff --git a/doc/development/sha1_as_binary.md b/doc/development/sha1_as_binary.md index 3151cc29bbc..6c4252ec634 100644 --- a/doc/development/sha1_as_binary.md +++ b/doc/development/sha1_as_binary.md @@ -2,7 +2,7 @@ Storing SHA1 hashes as strings is not very space efficient. A SHA1 as a string requires at least 40 bytes, an additional byte to store the encoding, and -perhaps more space depending on the internals of PostgreSQL and MySQL. +perhaps more space depending on the internals of PostgreSQL. On the other hand, if one were to store a SHA1 as binary one would only need 20 bytes for the actual SHA1, and 1 or 4 bytes of additional space (again depending diff --git a/doc/development/shell_commands.md b/doc/development/shell_commands.md index 7bdf676be58..1300c99622e 100644 --- a/doc/development/shell_commands.md +++ b/doc/development/shell_commands.md @@ -35,7 +35,7 @@ Gitlab::Popen.popen(%W(find /some/path -not -path /some/path -mmin +120 -delete) This coding style could have prevented CVE-2013-4490. -## Always use the configurable git binary path for git commands +## Always use the configurable Git binary path for Git commands ```ruby # Wrong @@ -114,7 +114,7 @@ user = `whoami` user, exit_status = Gitlab::Popen.popen(%W(whoami)) ``` -In other repositories, such as gitlab-shell you can also use `IO.popen`. +In other repositories, such as GitLab Shell you can also use `IO.popen`. ```ruby # Safe IO.popen example diff --git a/doc/development/shell_scripting_guide/index.md b/doc/development/shell_scripting_guide/index.md new file mode 100644 index 00000000000..0809f8b1a0a --- /dev/null +++ b/doc/development/shell_scripting_guide/index.md @@ -0,0 +1,118 @@ +# Shell scripting standards and style guidelines + +## Overview + +GitLab consists of many various services and sub-projects. The majority of +their backend code is written in [Ruby](https://www.ruby-lang.org) and +[Go](https://golang.org). However, some of them use shell scripts for +automation of routine system administration tasks like deployment, +installation, etc. It's being done either for historical reasons or as an effort +to minimize the dependencies, for instance, for Docker images. + +This page aims to define and organize our shell scripting guidelines, +based on our various experiences. All shell scripts across GitLab project +should be eventually harmonized with this guide. If there are any per-project +deviations from this guide, they should be described in the +`README.md` or `PROCESS.md` file for such a project. + +### Avoid using shell scripts + +CAUTION: **Caution:** +This is a must-read section. + +Having said all of the above, we recommend staying away from shell scripts +as much as possible. A language like Ruby or Python (if required for +consistency with codebases that we leverage) is almost always a better choice. +The high-level interpreted languages have more readable syntax, offer much more +mature capabilities for unit-testing, linting, and error reporting. + +Use shell scripts only if there's a strong restriction on project's +dependencies size or any other requirements that are more important +in a particular case. 
+ +## Scope of this guide + +According to the [GitLab installation requirements](../../install/requirements.md), +this guide covers only those shells that are used by +[supported Linux distributions](../../install/requirements.md#supported-linux-distributions), +that is: + +- [POSIX Shell](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html) +- [Bash](https://www.gnu.org/software/bash/) + +## Shell language choice + +- When you need to reduce the dependencies list, use what's provided by the environment. For example, for Docker images it's `sh` from `alpine` which is the base image for most of our tool images. +- Everywhere else, use `bash` if possible. It's more powerful than `sh` but still a widespread shell. + +## Code style and format + +This section describes the tools that should be made a mandatory part of +a project's CI pipeline if it contains shell scripts. These tools +automate shell code formatting, checking for errors or vulnerabilities, etc. + +### Linting + +We're using the [ShellCheck](https://www.shellcheck.net/) utility in its default configuration to lint our +shell scripts. + +All projects with shell scripts should use this GitLab CI/CD job: + +```yaml +shell check: + image: koalaman/shellcheck-alpine + stage: test + before_script: + - shellcheck --version + script: + - shellcheck scripts/**/*.sh # path to your shell scripts +``` + +TIP: **Tip:** +By default, ShellCheck will use the [shell detection](https://github.com/koalaman/shellcheck/wiki/SC2148#rationale) +to determine the shell dialect in use. If the shell file is out of your control and ShellCheck cannot +detect the dialect, use `-s` flag to specify it: `-s sh` or `-s bash`. + +### Formatting + +It's recommended to use the [shfmt](https://github.com/mvdan/sh#shfmt) tool to maintain consistent formatting. +We format shell scripts according to the [Google Shell Style Guide](https://google.github.io/styleguide/shell.xml), +so the following `shfmt` invocation should be applied to the project's script files: + +```bash +shfmt -i 2 -ci scripts/**/*.sh +``` + +TIP: **Tip:** +By default, shfmt will use the [shell detection](https://github.com/mvdan/sh#shfmt) similar to one of ShellCheck +and ignore files starting with a period. To override this, use `-ln` flag to specify the shell dialect: +`-ln posix` or `-ln bash`. + +NOTE: **Note:** +Currently, the `shfmt` tool [is not shipped](https://github.com/mvdan/sh/issues/68) as a Docker image containing +a Linux shell. This makes it impossible to use the [official Docker image](https://hub.docker.com/r/mvdan/shfmt) +in GitLab Runner. This [may change](https://github.com/mvdan/sh/issues/68#issuecomment-507721371) in future. + +## Testing + +NOTE: **Note:** +This is a work in progress. + +It is an [ongoing effort](https://gitlab.com/gitlab-org/gitlab-ce/issues/64016) to evaluate different tools for the +automated testing of shell scripts (like [BATS](https://github.com/sstephenson/bats)). + +## Code Review + +The code review should be performed according to: + +- [ShellCheck Checks list](https://github.com/koalaman/shellcheck/wiki/Checks) +- [Google Shell Style Guide](https://google.github.io/styleguide/shell.xml) +- [Shfmt formatting caveats](https://github.com/mvdan/sh#caveats) + +However, the recommended course of action is to use the aforementioned +tools and address reported offenses. This should eliminate the need +for code review. + +--- + +[Return to Development documentation](../README.md). 
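+
+As a convenience, both tools can also be run locally before pushing. This is a sketch that assumes ShellCheck and `shfmt` are installed on the workstation and mirrors the CI configuration above:
+
+```sh
+shellcheck scripts/**/*.sh         # same invocation as the CI job
+shfmt -i 2 -ci -d scripts/**/*.sh  # -d prints a diff instead of rewriting files
+```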
diff --git a/doc/development/sql.md b/doc/development/sql.md index a256fd46c09..2584dcfb4ca 100644 --- a/doc/development/sql.md +++ b/doc/development/sql.md @@ -15,14 +15,11 @@ FROM issues WHERE title LIKE 'WIP:%'; ``` -On PostgreSQL the `LIKE` statement is case-sensitive. On MySQL this depends on -the case-sensitivity of the collation, which is usually case-insensitive. To -perform a case-insensitive `LIKE` on PostgreSQL you have to use `ILIKE` instead. -This statement in turn isn't supported on MySQL. +On PostgreSQL the `LIKE` statement is case-sensitive. To perform a case-insensitive +`LIKE` you have to use `ILIKE` instead. -To work around this problem you should write `LIKE` queries using Arel instead -of raw SQL fragments as Arel automatically uses `ILIKE` on PostgreSQL and `LIKE` -on MySQL. This means that instead of this: +To handle this automatically you should use `LIKE` queries using Arel instead +of raw SQL fragments, as Arel automatically uses `ILIKE` on PostgreSQL. ```ruby Issue.where('title LIKE ?', 'WIP:%') @@ -45,7 +42,7 @@ table = Issue.arel_table Issue.where(table[:title].matches('WIP:%').or(table[:foo].matches('WIP:%'))) ``` -For PostgreSQL this produces: +On PostgreSQL, this produces: ```sql SELECT * @@ -53,18 +50,10 @@ FROM issues WHERE (title ILIKE 'WIP:%' OR foo ILIKE 'WIP:%') ``` -In turn for MySQL this produces: - -```sql -SELECT * -FROM issues -WHERE (title LIKE 'WIP:%' OR foo LIKE 'WIP:%') -``` - ## LIKE & Indexes -Neither PostgreSQL nor MySQL use any indexes when using `LIKE` / `ILIKE` with a -wildcard at the start. For example, this will not use any indexes: +PostgreSQL won't use any indexes when using `LIKE` / `ILIKE` with a wildcard at +the start. For example, this will not use any indexes: ```sql SELECT * @@ -75,9 +64,8 @@ WHERE title ILIKE '%WIP:%'; Because the value for `ILIKE` starts with a wildcard the database is not able to use an index as it doesn't know where to start scanning the indexes. -MySQL provides no known solution to this problem. Luckily PostgreSQL _does_ -provide a solution: trigram GIN indexes. These indexes can be created as -follows: +Luckily, PostgreSQL _does_ provide a solution: trigram GIN indexes. These +indexes can be created as follows: ```sql CREATE INDEX [CONCURRENTLY] index_name_here diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md index 448d9fd01c4..f30a83a4c71 100644 --- a/doc/development/testing_guide/best_practices.md +++ b/doc/development/testing_guide/best_practices.md @@ -15,16 +15,6 @@ manifest themselves within our code. When designing our tests, take time to revi our test design. We can find some helpful heuristics documented in the Handbook in the [Test Design](https://about.gitlab.com/handbook/engineering/quality/guidelines/test-engineering/test-design/) section. 
-## Run tests against MySQL - -By default, tests are only run against PostgreSQL, but you can run them on -demand against MySQL by following one of the following conventions: - -| Convention | Valid example | -|:----------------------|:-----------------------------| -| Include `mysql` in your branch name | `enhance-mysql-support` | -| Include `[run mysql]` in your commit message | `Fix MySQL support<br><br>[run mysql]` | - ## Test speed GitLab has a massive test suite that, without [parallelization], can take hours @@ -70,6 +60,7 @@ bundle exec rspec spec/[path]/[to]/[spec].rb - On `before` and `after` hooks, prefer it scoped to `:context` over `:all` - When using `evaluate_script("$('.js-foo').testSomething()")` (or `execute_script`) which acts on a given element, use a Capyabara matcher beforehand (e.g. `find('.js-foo')`) to ensure the element actually exists. +- Use `focus: true` to isolate parts of the specs you want to run. [four-phase-test]: https://robots.thoughtbot.com/four-phase-test @@ -454,6 +445,19 @@ complexity of RSpec expectations.They should be placed under a certain type of specs only (e.g. features, requests etc.) but shouldn't be if they apply to multiple type of specs. +#### `be_like_time` + +Time returned from a database can differ in precision from time objects +in Ruby, so we need flexible tolerances when comparing in specs. We can +use `be_like_time` to compare that times are within one second of each +other. + +Example: + +```ruby +expect(metrics.merged_at).to be_like_time(time) +``` + #### `have_gitlab_http_status` Prefer `have_gitlab_http_status` over `have_http_status` because the former diff --git a/doc/development/testing_guide/ci.md b/doc/development/testing_guide/ci.md index 87d48726268..d9f66a827de 100644 --- a/doc/development/testing_guide/ci.md +++ b/doc/development/testing_guide/ci.md @@ -39,7 +39,6 @@ slowest test files and try to improve them. ## CI setup -- On CE and EE, the test suite runs both PostgreSQL and MySQL. - Rails logging to `log/test.log` is disabled by default in CI [for performance reasons][logging]. To override this setting, provide the `RAILS_ENABLE_TEST_LOG` environment variable. diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md index 59eb3ecfd7e..3ae3ce183d9 100644 --- a/doc/development/testing_guide/end_to_end/index.md +++ b/doc/development/testing_guide/end_to_end/index.md @@ -45,11 +45,11 @@ Results are reported in the `#qa-staging` Slack channel. ### Testing code in merge requests -#### Using the `package-and-qa` job +#### Using the `package-and-qa-manual` job It is possible to run end-to-end tests for a merge request, eventually being run in a pipeline in the [`gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/) project, -by triggering the `package-and-qa` manual action in the `test` stage (not +by triggering the `package-and-qa-manual` manual action in the `test` stage (not available for forks). **This runs end-to-end tests against a custom Omnibus package built from your @@ -65,28 +65,23 @@ Below you can read more about how to use it and how does it work. Currently, we are using _multi-project pipeline_-like approach to run QA pipelines. - - -<details> -<summary>Show mermaid source</summary> -<pre> +```mermaid graph LR A1 -.->|1. Triggers an omnibus-gitlab pipeline and wait for it to be done| A2 - B2[<b>`Trigger-qa` stage</b><br />`Trigger:qa-test` job] -.->|2. 
Triggers a gitlab-qa pipeline and wait for it to be done| A3 + B2[`Trigger-qa` stage<br>`Trigger:qa-test` job] -.->|2. Triggers a gitlab-qa pipeline and wait for it to be done| A3 -subgraph gitlab-ce/ee pipeline - A1[<b>`test` stage</b><br />`package-and-qa` job] +subgraph "gitlab-ce/ee pipeline" + A1[`test` stage<br>`package-and-qa-manual` job] end -subgraph omnibus-gitlab pipeline - A2[<b>`Trigger-docker` stage</b><br />`Trigger:gitlab-docker` job] -->|once done| B2 +subgraph "omnibus-gitlab pipeline" + A2[`Trigger-docker` stage<br>`Trigger:gitlab-docker` job] -->|once done| B2 end -subgraph gitlab-qa pipeline - A3>QA jobs run] -.->|3. Reports back the pipeline result to the `package-and-qa` job<br />and post the result on the original commit tested| A1 +subgraph "gitlab-qa pipeline" + A3>QA jobs run] -.->|3. Reports back the pipeline result to the `package-and-qa-manual` job<br>and post the result on the original commit tested| A1 end -</pre> -</details> +``` 1. Developer triggers a manual action, that can be found in CE / EE merge requests. This starts a chain of pipelines in multiple projects. @@ -148,7 +143,7 @@ Once you decided where to put [test environment orchestration scenarios] and the [GitLab QA orchestrator README][gitlab-qa-readme], and [the already existing instance-level scenarios][instance-level scenarios]. -Continued reading: +Continued reading: - [Quick Start Guide](quick_start_guide.md) - [Style Guide](style_guide.md) diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md index 05cb03eb4bd..850ea6b60ac 100644 --- a/doc/development/testing_guide/end_to_end/page_objects.md +++ b/doc/development/testing_guide/end_to_end/page_objects.md @@ -27,7 +27,7 @@ When someone later changes `t.text_field :login` in the view associated with this page to `t.text_field :username` it will generate a different field identifier, what would effectively break all tests. -Because we are using `Page::Main::Login.act { sign_in_using_credentials }` +Because we are using `Page::Main::Login.perform(&:sign_in_using_credentials)` everywhere, when we want to sign into GitLab, the page object is the single source of truth, and we will need to update `fill_in :user_login` to `fill_in :user_username` only in a one place. @@ -40,7 +40,7 @@ the time it would take to build packages and test everything. That is why when someone changes `t.text_field :login` to `t.text_field :username` in the _new session_ view we won't know about this change until our GitLab QA nightly pipeline fails, or until someone triggers -`package-and-qa` action in their merge request. +`package-and-qa-manual` action in their merge request. Obviously such a change would break all tests. We call this problem a _fragile tests problem_. @@ -92,20 +92,25 @@ end The `view` DSL method will correspond to the rails View, partial, or vue component that renders the elements. The `element` DSL method in turn declares an element for which a corresponding -`qa-element-name-dasherized` CSS class will need to be added to the view file. +`data-qa-selector=element_name_snaked` data attribute will need to be added to the view file. 
You can also define a value (String or Regexp) to match to the actual view code but **this is deprecated** in favor of the above method for two reasons: - Consistency: there is only one way to define an element -- Separation of concerns: QA uses dedicated CSS classes instead of reusing code +- Separation of concerns: QA uses dedicated `data-qa-*` attributes instead of reusing code or classes used by other components (e.g. `js-*` classes etc.) ```ruby view 'app/views/my/view.html.haml' do - # Implicitly require `.qa-logout-button` CSS class to be present in the view + + ### Good ### + + # Implicitly require the CSS selector `[data-qa-selector="logout_button"]` to be present in the view element :logout_button + ### Bad ### + ## This is deprecated and forbidden by the `QA/ElementWithPattern` RuboCop cop. # Require `f.submit "Sign in"` to be present in `my/view.html.haml element :my_button, 'f.submit "Sign in"' # rubocop:disable QA/ElementWithPattern @@ -129,24 +134,38 @@ view 'app/views/my/view.html.haml' do end ``` -To add these elements to the view, you must change the rails View, partial, or vue component by adding a `qa-element-descriptor` class +To add these elements to the view, you must change the rails View, partial, or vue component by adding a `data-qa-selector` attribute for each element defined. -In our case, `qa-login-field`, `qa-password-field` and `qa-sign-in-button` +In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field"` and `data-qa-selector="sign_in_button"` **app/views/my/view.html.haml** ```haml -= f.text_field :login, class: "form-control top qa-login-field", autofocus: "autofocus", autocapitalize: "off", autocorrect: "off", required: true, title: "This field is required." -= f.password_field :password, class: "form-control bottom qa-password-field", required: true, title: "This field is required." -= f.submit "Sign in", class: "btn btn-success qa-sign-in-button" += f.text_field :login, class: "form-control top", autofocus: "autofocus", autocapitalize: "off", autocorrect: "off", required: true, title: "This field is required.", data: { qa_selector: 'login_field' } += f.password_field :password, class: "form-control bottom", required: true, title: "This field is required.", data: { qa_selector: 'password_field' } += f.submit "Sign in", class: "btn btn-success", data: { qa_selector: 'sign_in_button' } ``` Things to note: -- The CSS class must be `kebab-cased` (separated with hyphens "`-`") +- The name of the element and the qa_selector must match and be snake_cased - If the element appears on the page unconditionally, add `required: true` to the element. See [Dynamic element validation](dynamic_element_validation.md) +- You may see `.qa-selector` classes in existing Page Objects. We should prefer the [`data-qa-selector`](#data-qa-selector-vs-qa-selector) + method of definition over the `.qa-selector` CSS class + +### `data-qa-selector` vs `.qa-selector` + +> Introduced in GitLab 12.1 + +There are two supported methods of defining elements within a view. + +1. `data-qa-selector` attribute +1. `.qa-selector` class + +Any existing `.qa-selector` class should be considered deprecated +and we should prefer the `data-qa-selector` method of definition. 
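+
+For quick reference, here are the two definition methods applied to the same element, reusing the sign-in button from the earlier example:
+
+```haml
+-# Deprecated: selection via a dedicated CSS class
+= f.submit "Sign in", class: "btn btn-success qa-sign-in-button"
+
+-# Preferred: selection via a data attribute
+= f.submit "Sign in", class: "btn btn-success", data: { qa_selector: 'sign_in_button' }
+```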
## Running the test locally diff --git a/doc/development/testing_guide/end_to_end/quick_start_guide.md b/doc/development/testing_guide/end_to_end/quick_start_guide.md index 064fb0e31dd..e1df8be8b6f 100644 --- a/doc/development/testing_guide/end_to_end/quick_start_guide.md +++ b/doc/development/testing_guide/end_to_end/quick_start_guide.md @@ -101,7 +101,7 @@ it 'replaces an existing label if it has the same key' do page.find('#content-body').click page.refresh - labels_block = page.find('.qa-labels-block') + labels_block = page.find(%q([data-qa-selector="labels_block"])) expect(labels_block).to have_content('animal::dolphin') expect(labels_block).not_to have_content('animal::fox') @@ -110,15 +110,15 @@ end ``` > Notice that the test itself is simple. The most challenging part is the creation of the application state, which will be covered later. - +> > The exemplified test case's MVC is not enough for the change to be merged, but it helps to build up the test logic. The reason is that we do not want to use locators directly in the tests, and tests **must** use [Page Objects] before they can be merged. This way we better separate the responsibilities, where the Page Objects encapsulate elements and methods that allow us to interact with pages, while the spec files describe the test cases in more business-related language. Below are the steps that the test covers: 1. The test finds the 'Edit' link for the labels and clicks on it. -2. Then it fills in the 'Assign labels' input field with the value 'animal::dolphin' and press enters. -3. Then it clicks in the content body to apply the label and refreshes the page. -4. Finally, the expectations check that the previous scoped label was removed and that the new one was added. +1. Then it fills in the 'Assign labels' input field with the value 'animal::dolphin' and press enters. +1. Then it clicks in the content body to apply the label and refreshes the page. +1. Finally, the expectations check that the previous scoped label was removed and that the new one was added. Let's now see how the second test case would look. @@ -130,7 +130,7 @@ it 'keeps both scoped labels when adding a label with a different key' do page.find('#content-body').click page.refresh - labels_block = page.find('.qa-labels-block') + labels_block = page.find(%q([data-qa-selector="labels_block"])) expect(labels_block).to have_content('animal::fox') expect(labels_block).to have_content('plant::orchid') @@ -139,14 +139,14 @@ it 'keeps both scoped labels when adding a label with a different key' do end ``` -> Note that elements are always located using CSS selectors, and a good practice is to add test-specific selectors (this is called adding testability to the application and we will talk more about it later.) For example, the `labels_block` element uses the selector `.qa-labels-block`, which was added specifically for testing purposes. +> Note that elements are always located using CSS selectors, and a good practice is to add test-specific selectors (this is called "testability"). For example, the `labels_block` element uses the CSS selector [`data-qa-selector="labels_block"`](page_objects.md#data-qa-selector-vs-qa-selector), which was added specifically for testing purposes. Below are the steps that the test covers: 1. The test finds the 'Edit' link for the labels and clicks on it. -2. Then it fills in the 'Assign labels' input field with the value 'plant::orchid' and press enters. -3. Then it clicks in the content body to apply the label and refreshes the page. -4. 
Finally, the expectations check that both scoped labels are present. +1. Then it fills in the 'Assign labels' input field with the value 'plant::orchid' and press enters. +1. Then it clicks in the content body to apply the label and refreshes the page. +1. Finally, the expectations check that both scoped labels are present. > Similar to the previous test, this one is also very straightforward, but there is some code duplication. Let's address it. @@ -168,7 +168,7 @@ end it 'replaces an existing label if it has the same key' do select_label_and_refresh @new_label_same_scope - labels_block = page.find('.qa-labels-block') + labels_block = page.find(%q([data-qa-selector="labels_block"])) expect(labels_block).to have_content(@new_label_same_scope) expect(labels_block).not_to have_content(@initial_label) @@ -179,7 +179,7 @@ end it 'keeps both scoped label when adding a label with a different key' do select_label_and_refresh @new_label_different_scope - labels_block = page.find('.qa-labels-block') + labels_block = page.find(%q([data-qa-selector="labels_block"])) expect(labels_blocks).to have_content(@new_label_different_scope) expect(labels_blocks).to have_content(@initial_label) @@ -211,7 +211,7 @@ A pre-condition for the entire test suite is defined in the `before :context` bl > For our test suite, due to the need of the tests being completely independent of each other, we won't use the `before :context` block. The `before :context` block would make the tests dependent on each other because the first test changes the label of the issue, and the second one depends on the `'animal::fox'` label being set. -> **Tip:** In case of a test suite with only one `it` block it's ok to use only the `before` block (see below) with all the test's pre-conditions. +TIP: **Tip:** In case of a test suite with only one `it` block it's ok to use only the `before` block (see below) with all the test's pre-conditions. #### `before` @@ -222,7 +222,7 @@ As the pre-conditions for our test suite, the things that needs to happen before - A project being created with an issue and labels already set; - The issue page being opened with only one scoped label applied to it. -> When running end-to-end tests as part of the GitLab's continuous integration process [a license is already set as an environment variable](https://gitlab.com/gitlab-org/gitlab-ee/blob/1a60d926740db10e3b5724713285780a4f470531/qa/qa/ee/strategy.rb#L20). For running tests locally you can set up such license by following the document [what tests can be run?](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#supported-remote-grid-environment-variables), based on the [supported GitLab environment variables](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#supported-gitlab-environment-variables). +> When running end-to-end tests as part of the GitLab's continuous integration process [a license is already set as an environment variable](https://gitlab.com/gitlab-org/gitlab-ee/blob/1a60d926740db10e3b5724713285780a4f470531/qa/qa/ee/strategy.rb#L20). For running tests locally you can set up such license by following the document [what tests can be run?](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md), based on the [supported GitLab environment variables](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#supported-gitlab-environment-variables). 
#### Implementation @@ -274,11 +274,11 @@ end In the `before` block we create all the application state needed for the tests to run. We do that by using the `Runtime::Browser.visit` method to go to the login page, by performing a `sign_in_using_credentials` from the `Login` Page Object, by fabricating resources via APIs (`issue`, and `Resource::Label`), and by using the `issue.visit!` to visit the issue page. > A project is created in the background by creating the `issue` resource. - +> > When creating the [Resources], notice that when calling the `fabricate_via_api` method, we pass some attribute:values, like `title`, and `labels` for the `issue` resource; and `project` and `title` for the `label` resource. - +> > What's important to understand here is that by creating the application state mostly using the public APIs we save a lot of time in the test suite setup stage. - +> > Soon we will cover the use of the already existing resources' methods and the creation of your own `fabricate_via_api` methods for resources where this is still not available, but first, let's optimize our implementation. ### 6. Optimization @@ -290,7 +290,7 @@ As already mentioned in the [best practices](best_practices.md) document, end-to Some improvements that we could make in our test suite to optimize its time to run are: 1. Having a single test case (an `it` block) that exercises both scenarios to avoid "wasting" time in the tests' pre-conditions, instead of having two different test cases. -2. Making the selection of labels more performant by allowing for the selection of more than one label in the same reusable method. +1. Making the selection of labels more performant by allowing for the selection of more than one label in the same reusable method. Let's look at a suggestion that addresses the above points, one by one: @@ -305,7 +305,7 @@ module QA it 'correctly applies scoped labels depending on if they are from the same or a different scope' do select_labels_and_refresh [@new_label_same_scope, @new_label_different_scope] - labels_block = page.all('.qa-labels-block') + labels_block = page.all(%q([data-qa-selector="labels_block"])) expect(labels_block).to have_content(@new_label_same_scope) expect(labels_block).to have_content(@new_label_different_scope) @@ -332,8 +332,8 @@ To address point 1, we changed the test implementation from two `it` blocks into > Notice that the implementation of the new and unique `it` block had to change a little bit. Below we describe in details what it does. 1. It selects two scoped labels simultaneously, one from the same scope of the one already applied in the issue during the setup phase (in the `before` block), and another one from a different scope. -2. It asserts that the correct labels are visible in the `labels_block`, and that the labels were correctly added and removed; -3. Finally, the `select_label_and_refresh` method is changed to `select_labels_and_refresh`, which accepts an array of labels instead of a single label, and it iterates on them for faster label selection (this is what is used in step 1 explained above.) +1. It asserts that the correct labels are visible in the `labels_block`, and that the labels were correctly added and removed; +1. Finally, the `select_label_and_refresh` method is changed to `select_labels_and_refresh`, which accepts an array of labels instead of a single label, and it iterates on them for faster label selection (this is what is used in step 1 explained above.) ### 7. 
Resources @@ -362,7 +362,7 @@ First, in the [issue resource](https://gitlab.com/gitlab-org/gitlab-ee/blob/d358 Add the following `attribute :id` and `attribute :labels` right above the [`attribute :title`](https://gitlab.com/gitlab-org/gitlab-ee/blob/d3584e80b4236acdf393d815d604801573af72cc/qa/qa/resource/issue.rb#L15). > This line is needed to allow for the issue fabrication, and for labels to be automatically added to the issue when fabricating it via API. - +> > We add the attributes above the existing attribute to keep them alphabetically organized. Then, let's initialize an instance variable for labels to allow an empty array as default value when such information is not passed during the resource fabrication, since this is optional. [Between the attributes and the `fabricate!` method](https://gitlab.com/gitlab-org/gitlab-ee/blob/1a1f1408728f19b2aa15887cd20bddab7e70c8bd/qa/qa/resource/issue.rb#L18), add the following: @@ -437,7 +437,7 @@ By defining the `resource_web_url(resource)` method, we override the one from th By defining the `api_get_path` method, we **would** allow for the [`ApiFabricator`](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/qa/qa/resource/api_fabricator.rb) module to know which path to use to get a single label, but since there's no path available for that in the public API, we raise a `NotImplementedError` instead. -By defining the `api_post_path` method, we allow for the [`ApiFabricator `](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/qa/qa/resource/api_fabricator.rb) module to know which path to use to create a new label in a specific project. +By defining the `api_post_path` method, we allow for the [`ApiFabricator`](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/qa/qa/resource/api_fabricator.rb) module to know which path to use to create a new label in a specific project. By defining the `api_post_body` method, we allow for the [`ApiFabricator.api_post`](https://gitlab.com/gitlab-org/gitlab-ee/blob/a9177ca1812bac57e2b2fa4560e1d5dd8ffac38b/qa/qa/resource/api_fabricator.rb#L68) method to know which data to send when making the `POST` request. @@ -542,9 +542,9 @@ end Notice that we have not only moved the `select_labels_and_refresh` method, but we have also changed its implementation to: 1. Click the `:edit_link_labels` element previously defined, instead of using `find('.block.labels .edit-link').click` -2. Use `within_element(:dropdown_menu_labels, text: label)`, and inside of it, we call `send_keys_to_element(:dropdown_input_field, [label, :enter])`, which is a method that we will implement in the `QA::Page::Base` class to replace `find('.dropdown-menu-labels .dropdown-input-field').send_keys [label, :enter]` -3. Use `click_body` after iterating on each label, instead of using `find('#content-body').click` -4. Iterate on every label again, and then we use `has_element?(:labels_block, text: label)` after clicking the page body (which applies the labels), and before refreshing the page, to avoid test flakiness due to refreshing too fast. +1. Use `within_element(:dropdown_menu_labels, text: label)`, and inside of it, we call `send_keys_to_element(:dropdown_input_field, [label, :enter])`, which is a method that we will implement in the `QA::Page::Base` class to replace `find('.dropdown-menu-labels .dropdown-input-field').send_keys [label, :enter]` +1. Use `click_body` after iterating on each label, instead of using `find('#content-body').click` +1. 
Iterate on every label again, and then we use `has_element?(:labels_block, text: label)` after clicking the page body (which applies the labels), and before refreshing the page, to avoid test flakiness due to refreshing too fast. ##### Details of `text_of_labels_block` @@ -552,37 +552,36 @@ The `text_of_labels_block` method is a simple method that returns the `:labels_b #### Updates in the view (*.html.haml) and `dropdowns_helper.rb` files -Now let's change the view and the `dropdowns_helper` files to add the selectors that relate to the Page Object. +Now let's change the view and the `dropdowns_helper` files to add the selectors that relate to the [Page Objects]. -In the [app/views/shared/issuable/_sidebar.html.haml](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/app/views/shared/issuable/_sidebar.html.haml) file, on [line 105 ](https://gitlab.com/gitlab-org/gitlab-ee/blob/84043fa72ca7f83ae9cde48ad670e6d5d16501a3/app/views/shared/issuable/_sidebar.html.haml#L105), add an extra class `qa-edit-link-labels`. +In [`app/views/shared/issuable/_sidebar.html.haml:105`](https://gitlab.com/gitlab-org/gitlab-ee/blob/7ca12defc7a965987b162a6ebef302f95dc8867f/app/views/shared/issuable/_sidebar.html.haml#L105), add a `data: { qa_selector: 'edit_link_labels' }` data attribute. The code should look like this: ```haml -= link_to _('Edit'), '#', class: 'js-sidebar-dropdown-toggle edit-link float-right qa-edit-link-labels' += link_to _('Edit'), '#', class: 'js-sidebar-dropdown-toggle edit-link float-right', data: { qa_selector: 'edit_link_labels' } ``` -In the same file, on [line 121](https://gitlab.com/gitlab-org/gitlab-ee/blob/84043fa72ca7f83ae9cde48ad670e6d5d16501a3/app/views/shared/issuable/_sidebar.html.haml#L121), add an extra class `.qa-dropdown-menu-labels`. +In the same file, on [line 121](https://gitlab.com/gitlab-org/gitlab-ee/blob/7ca12defc7a965987b162a6ebef302f95dc8867f/app/views/shared/issuable/_sidebar.html.haml#L121), add a `data: { qa_selector: 'dropdown_menu_labels' }` data attribute. The code should look like this: ```haml -.dropdown-menu.dropdown-select.dropdown-menu-paging.dropdown-menu-labels.dropdown-menu-selectable.qa-dropdown-menu-labels +.dropdown-menu.dropdown-select.dropdown-menu-paging.dropdown-menu-labels.dropdown-menu-selectable.dropdown-extended-height{ data: { qa_selector: 'dropdown_menu_labels' } } ``` -In the [`dropdowns_helper.rb`](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/app/helpers/dropdowns_helper.rb) file, on [line 94](https://gitlab.com/gitlab-org/gitlab-ee/blob/99e51a374f2c20bee0989cac802e4b5621f72714/app/helpers/dropdowns_helper.rb#L94), add an extra class `qa-dropdown-input-field`. +In [`app/helpers/dropdowns_helper.rb:94`](https://gitlab.com/gitlab-org/gitlab-ee/blob/7ca12defc7a965987b162a6ebef302f95dc8867f/app/helpers/dropdowns_helper.rb#L94), add a `data: { qa_selector: 'dropdown_input_field' }` data attribute. The code should look like this: ```ruby -filter_output = search_field_tag search_id, nil, class: "dropdown-input-field qa-dropdown-input-field", placeholder: placeholder, autocomplete: 'off' +filter_output = search_field_tag search_id, nil, class: "dropdown-input-field", placeholder: placeholder, autocomplete: 'off', data: { qa_selector: 'dropdown_input_field' } ``` -> Classes starting with `qa-` are used for testing purposes only, and by defining such classes in the elements we add **testability** in the application. - -> When defining a class like `qa-labels-block`, it is transformed into `:labels_block` for usage in the Page Objects. 
So, `qa-edit-link-labels` is transformed into `:edit_link_labels`, `qa-dropdown-menu-labels` is transformed into `:dropdown_menu_labels`, and `qa-dropdown-input-field` is transformed into `:dropdown_input_field`. Also, we use a [sanity test](https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/page#how-did-we-solve-fragile-tests-problem) to check that defined elements have their respective `qa-` selectors in the specified views. - -> We did not define the `qa-labels-block` class in the `app/views/shared/issuable/_sidebar.html.haml` file because it was already there to be used. +> `data-qa-*` data attributes and CSS classes starting with `qa-` are used solely for the purpose of QA and testing. +> By defining these, we add **testability** to the application. +> +> When defining a data attribute like: `qa_selector: 'labels_block'`, it should match the element definition: `element :labels_block`. We use a [sanity test](https://gitlab.com/gitlab-org/gitlab-ce/tree/master/qa/qa/page#how-did-we-solve-fragile-tests-problem) to check that defined elements have their respective selectors in the specified views. #### Updates in the `QA::Page::Base` class @@ -600,8 +599,6 @@ This method receives an element (`name`) and the `keys` that it will send to tha As you might remember, in the Issue Page Object we call this method like this: `send_keys_to_element(:dropdown_input_field, [label, :enter])`. -___ - With that, you should be able to start writing end-to-end tests yourself. *Congratulations!* [Page Objects]: page_objects.md diff --git a/doc/development/testing_guide/end_to_end/style_guide.md b/doc/development/testing_guide/end_to_end/style_guide.md index 0272e1810f2..97560e616a1 100644 --- a/doc/development/testing_guide/end_to_end/style_guide.md +++ b/doc/development/testing_guide/end_to_end/style_guide.md @@ -45,7 +45,7 @@ Notice that in the above example, before clicking the `:operations_environments_ > We can create these methods as helpers to abstract multi-step navigation. -### Element naming convention +## Element naming convention When adding new elements to a page, it's important that we have a uniform element naming convention. @@ -63,17 +63,17 @@ We follow a simple formula roughly based on hungarian notation. - `_checkbox` - `_radio` - `_content` - + *Note: This list is a work in progress. This list will eventually be the end-all enumeration of all available types. I.e., any element that does not end with something in this list is bad form.* - -#### Examples + +### Examples **Good** ```ruby view '...' do - element :edit_button + element :edit_button element :notes_tab element :squash_checkbox element :username_field @@ -84,16 +84,61 @@ end **Bad** ```ruby -view '...' do +view '...' do # `_confirmation` should be `_field`. what sort of confirmation? a checkbox confirmation? no real way to disambiguate. # an appropriate replacement would be `element :password_confirmation_field` element :password_confirmation - # `clone_options` is too vague. If it's a dropdown menu, it should be `clone_dropdown`. + # `clone_options` is too vague. If it's a dropdown menu, it should be `clone_dropdown`. # If it's a checkbox, it should be `clone_checkbox` element :clone_options - + # how is this url being displayed? is it a textbox? a simple span? 
+ # If it is content on the page, it should be `ssh_clone_url_content` element :ssh_clone_url end ``` + +## Block argument naming + +To have a standard on how we call pages when using the `.perform` method, we use the name of page object being called, all lowercased, and separated by underscore, if needed (see good and bad examples below.) This also applies to resources. We chose not to simply use `page` because that would shadow the Capybara DSL, potentially leading to confusion and bugs. + +### Examples + +**Good** + +```ruby +# qa/specs/features/browser_ui/1_manage/project/add_project_member_spec.rb + +Page::Project::Settings::Members.perform do |members| + members.do_something +end +``` + +```ruby +# qa/specs/features/ee/browser_ui/3_create/merge_request/add_batch_comments_in_merge_request_spec.rb + +Resource::MergeRequest.fabricate! do |merge_request| + merge_request.do_something_else +end +``` + +**Bad** + +```ruby +# qa/specs/features/browser_ui/1_manage/project/add_project_member_spec.rb + +Page::Project::Settings::Members.perform do |project_settings_members_page| + project_settings_members_page.do_something +end +``` + +```ruby +# qa/specs/features/ee/browser_ui/3_create/merge_request/add_batch_comments_in_merge_request_spec.rb + +Resource::MergeRequest.fabricate! do |merge_request_page| + merge_request_page.do_something_else +end +``` + +> Besides the advantage of having a standard in place, by following this standard we also write shorter lines of code.
\ No newline at end of file diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md index 931cbc51cae..eb0bf6fc563 100644 --- a/doc/development/testing_guide/flaky_tests.md +++ b/doc/development/testing_guide/flaky_tests.md @@ -35,8 +35,8 @@ Once a test is in quarantine, there are 3 choices: Quarantined tests are run on the CI in dedicated jobs that are allowed to fail: -- `rspec-pg-quarantine` and `rspec-mysql-quarantine` (CE & EE) -- `rspec-pg-quarantine-ee` and `rspec-mysql-quarantine-ee` (EE only) +- `rspec-pg-quarantine` (CE & EE) +- `rspec-pg-quarantine-ee` (EE only) ## Automatic retries and flaky tests detection diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md index 98df0b5ea7c..7dc89a3fcdb 100644 --- a/doc/development/testing_guide/frontend_testing.md +++ b/doc/development/testing_guide/frontend_testing.md @@ -79,6 +79,37 @@ describe('Component', () => { Remember that the performance of each test depends on the environment. +### Manual module mocks + +Jest supports [manual module mocks](https://jestjs.io/docs/en/manual-mocks) by placing a mock in a `__mocks__/` directory next to the source module. **Don't do this.** We want to keep all of our test-related code in one place (the `spec/` folder), and the logic that Jest uses to apply mocks from `__mocks__/` is rather inconsistent. + +Instead, our test runner detects manual mocks from `spec/frontend/mocks/`. Any mock placed here is automatically picked up and injected whenever you import its source module. + +- Files in `spec/frontend/mocks/ce` will mock the corresponding CE module from `app/assets/javascripts`, mirroring the source module's path. + - Example: `spec/frontend/mocks/ce/lib/utils/axios_utils` will mock the module `~/lib/utils/axios_utils`. +- Files in `spec/frontend/mocks/node` will mock NPM packages of the same name or path. +- We don't support mocking EE modules yet. + +If a mock is found for which a source module doesn't exist, the test suite will fail. 'Virtual' mocks, or mocks that don't have a 1-to-1 association with a source module, are not supported yet. + +#### Writing a mock + +Create a JS module in the appropriate place in `spec/frontend/mocks/`. That's it. It will automatically mock its source package in all tests. + +Make sure that your mock's export has the same format as the mocked module. So, if you're mocking a CommonJS module, you'll need to use `module.exports` instead of the ES6 `export`. + +It might be useful for a mock to expose a property that indicates if the mock was loaded. This way, tests can assert the presence of a mock without calling any logic and causing side-effects. The `~/lib/utils/axios_utils` module mock has such a property, `isMock`, that is `true` in the mock and undefined in the original class. Jest's mock functions also have a `mock` property that you can test. + +#### Bypassing mocks + +If you ever need to import the original module in your tests, use [`jest.requireActual()`](https://jestjs.io/docs/en/jest-object#jestrequireactualmodulename) (or `jest.requireActual().default` for the default export). The `jest.mock()` and `jest.unmock()` won't have an effect on modules that have a manual mock, because mocks are imported and cached before any tests are run. + +#### Keep mocks light + +Global mocks introduce magic and can affect how modules are imported in your tests. Try to keep them as light as possible and dependency-free. 
A global mock should be useful for any unit test. For example, the `axios_utils` and `jquery` module mocks throw an error when an HTTP request is attempted, since this is useful behaviour in >99% of tests. + +When in doubt, construct mocks in your test file using [`jest.mock()`](https://jestjs.io/docs/en/jest-object#jestmockmodulename-factory-options), [`jest.spyOn()`](https://jestjs.io/docs/en/jest-object#jestspyonobject-methodname), etc. + ## Karma test suite GitLab uses the [Karma][karma] test runner with [Jasmine] as its test @@ -201,7 +232,7 @@ module. GitLab has a custom `spyOnDependency` method which utilizes [babel-plugin-rewire](https://github.com/speedskater/babel-plugin-rewire) to achieve this. It can be used like so: -```js +```javascript // my_module.js import { visitUrl } from '~/lib/utils/url_utility'; @@ -210,7 +241,7 @@ export default function doSomething() { } ``` -```js +```javascript // my_module_spec.js import doSomething from '~/my_module'; @@ -434,7 +465,7 @@ See this [section][vue-test]. For running the frontend tests, you need the following commands: -- `rake karma:fixtures` (re-)generates [fixtures](#frontend-test-fixtures). +- `rake frontend:fixtures` (re-)generates [fixtures](#frontend-test-fixtures). - `yarn test` executes the tests. As long as the fixtures don't change, `yarn test` is sufficient (and saves you some time). @@ -487,8 +518,8 @@ Information on setting up and running RSpec integration tests with Code that is added to HAML templates (in `app/views/`) or makes Ajax requests to the backend has tests that require HTML or JSON from the backend. Fixtures for these tests are located at: -- `spec/javascripts/fixtures/`, for running tests in CE. -- `ee/spec/javascripts/fixtures/`, for running tests in EE. +- `spec/frontend/fixtures/`, for running tests in CE. +- `ee/spec/frontend/fixtures/`, for running tests in EE. Fixture files in: @@ -499,29 +530,29 @@ The following are examples of tests that work for both Karma and Jest: ```javascript it('makes a request', () => { - const responseBody = getJSONFixture('some/fixture.json'); // loads spec/javascripts/fixtures/some/fixture.json + const responseBody = getJSONFixture('some/fixture.json'); // loads spec/frontend/fixtures/some/fixture.json axiosMock.onGet(endpoint).reply(200, responseBody); - + myButton.click(); - + // ... }); it('uses some HTML element', () => { - loadFixtures('some/page.html'); // loads spec/javascripts/fixtures/some/page.html and adds it to the DOM - + loadFixtures('some/page.html'); // loads spec/frontend/fixtures/some/page.html and adds it to the DOM + const element = document.getElementById('#my-id'); - + // ... }); ``` -HTML and JSON fixtures are generated from backend views and controllers using RSpec (see `spec/javascripts/fixtures/*.rb`). +HTML and JSON fixtures are generated from backend views and controllers using RSpec (see `spec/frontend/fixtures/*.rb`). For each fixture, the content of the `response` variable is stored in the output file. This variable gets automagically set if the test is marked as `type: :request` or `type: :controller`. -Fixtures are regenerated using the `bin/rake karma:fixtures` command but you can also generate them individually, -for example `bin/rspec spec/javascripts/fixtures/merge_requests.rb`. +Fixtures are regenerated using the `bin/rake frontend:fixtures` command but you can also generate them individually, +for example `bin/rspec spec/frontend/fixtures/merge_requests.rb`. 
When creating a new fixture, it often makes sense to take a look at the corresponding tests for the endpoint in `(ee/)spec/controllers/` or `(ee/)spec/requests/`. ## Gotchas @@ -557,11 +588,522 @@ end [jasmine-focus]: https://jasmine.github.io/2.5/focused_specs.html [karma]: http://karma-runner.github.io/ -[vue-test]: https://docs.gitlab.com/ce/development/fe_guide/vue.html#testing-vue-components +[vue-test]: ../fe_guide/vue.md#testing-vue-components [rspec]: https://github.com/rspec/rspec-rails#feature-specs [capybara]: https://github.com/teamcapybara/capybara [jasmine]: https://jasmine.github.io/ +## Overview of Frontend Testing Levels + +Tests relevant for frontend development can be found at the following places: + +- `spec/javascripts/` which are run by Karma (command: `yarn karma`) and contain + - [frontend unit tests](#frontend-unit-tests) + - [frontend component tests](#frontend-component-tests) + - [frontend integration tests](#frontend-integration-tests) +- `spec/frontend/` which are run by Jest (command: `yarn jest`) and contain + - [frontend unit tests](#frontend-unit-tests) + - [frontend component tests](#frontend-component-tests) + - [frontend integration tests](#frontend-integration-tests) +- `spec/features/` which are run by RSpec and contain + - [feature tests](#feature-tests) + +All tests in `spec/javascripts/` will eventually be migrated to `spec/frontend/` (see also [#52483](https://gitlab.com/gitlab-org/gitlab-ce/issues/52483)). + +In addition, there used to be feature tests in `features/`, run by Spinach. +These were removed from the codebase in May 2018 ([#23036](https://gitlab.com/gitlab-org/gitlab-ce/issues/23036)). + +See also [Notes on testing Vue components](../fe_guide/vue.html#testing-vue-components). + +### Frontend unit tests + +Unit tests are on the lowest abstraction level and typically test functionality that is not directly perceivable by a user. + +```mermaid +graph RL + plain[Plain JavaScript]; + Vue[Vue Components]; + feature-flags[Feature Flags]; + license-checks[License Checks]; + + plain---Vuex; + plain---GraphQL; + Vue---plain; + Vue---Vuex; + Vue---GraphQL; + browser---plain; + browser---Vue; + plain---backend; + Vuex---backend; + GraphQL---backend; + Vue---backend; + backend---database; + backend---feature-flags; + backend---license-checks; + + class plain tested; + class Vuex tested; + + classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090 + classDef label stroke-width:0; + classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + subgraph " " + tested; + mocked; + class tested tested; + end +``` + +#### When to use unit tests + +<details> + <summary>exported functions and classes</summary> + Anything that is exported can be reused at various places in a way you have no control over. + Therefore it is necessary to document the expected behavior of the public interface with tests. +</details> + +<details> + <summary>Vuex actions</summary> + Any Vuex action needs to work in a consistent way independent of the component it is triggered from. +</details> + +<details> + <summary>Vuex mutations</summary> + For complex Vuex mutations it helps to identify the source of a problem by separating the tests from other parts of the Vuex store. +</details> + +#### When *not* to use unit tests + +<details> + <summary>non-exported functions or classes</summary> + Anything that is not exported from a module can be considered private or an implementation detail and doesn't need to be tested. 
+</details> + +<details> + <summary>constants</summary> + Testing the value of a constant would mean to copy it. + This results in extra effort without additional confidence that the value is correct. +</details> + +<details> + <summary>Vue components</summary> + Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components and don't need to be tested. + They are implicitly covered by component tests. + The <a href="https://vue-test-utils.vuejs.org/guides/#getting-started">official Vue guidelines</a> suggest the same. +</details> + +#### What to mock in unit tests + +<details> + <summary>state of the class under test</summary> + Modifying the state of the class under test directly rather than using methods of the class avoids side-effects in test setup. +</details> + +<details> + <summary>other exported classes</summary> + Every class needs to be tested in isolation to prevent test scenarios from growing exponentially. +</details> + +<details> + <summary>single DOM elements if passed as parameters</summary> + For tests that only operate on single DOM elements rather than a whole page, creating these elements is cheaper than loading a whole HTML fixture. +</details> + +<details> + <summary>all server requests</summary> + When running frontend unit tests, the backend may not be reachable. + Therefore all outgoing requests need to be mocked. +</details> + +<details> + <summary>asynchronous background operations</summary> + Background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. +</details> + +#### What *not* to mock in unit tests + +<details> + <summary>non-exported functions or classes</summary> + Everything that is not exported can be considered private to the module and will be implicitly tested via the exported classes / functions. +</details> + +<details> + <summary>methods of the class under test</summary> + By mocking methods of the class under test, the mocks will be tested and not the real methods. +</details> + +<details> + <summary>utility functions (pure functions, or those that only modify parameters)</summary> + If a function has no side effects because it has no state, it is safe to not mock it in tests. +</details> + +<details> + <summary>full HTML pages</summary> + Loading the HTML of a full page slows down tests, so it should be avoided in unit tests. +</details> + +### Frontend component tests + +Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state. 
+ +```mermaid +graph RL + plain[Plain JavaScript]; + Vue[Vue Components]; + feature-flags[Feature Flags]; + license-checks[License Checks]; + + plain---Vuex; + plain---GraphQL; + Vue---plain; + Vue---Vuex; + Vue---GraphQL; + browser---plain; + browser---Vue; + plain---backend; + Vuex---backend; + GraphQL---backend; + Vue---backend; + backend---database; + backend---feature-flags; + backend---license-checks; + + class Vue tested; + + classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090 + classDef label stroke-width:0; + classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + subgraph " " + tested; + mocked; + class tested tested; + end +``` + +#### When to use component tests + +- Vue components + +#### When *not* to use component tests + +<details> + <summary>Vue applications</summary> + Vue applications may contain many components. + Testing them on a component level requires too much effort. + Therefore they are tested on frontend integration level. +</details> + +<details> + <summary>HAML templates</summary> + HAML templates contain only Markup and no frontend-side logic. + Therefore they are not complete components. +</details> + +#### What to mock in component tests + +<details> + <summary>DOM</summary> + Operating on the real DOM is significantly slower than on the virtual DOM. +</details> + +<details> + <summary>properties and state of the component under test</summary> + Similarly to testing classes, modifying the properties directly (rather than relying on methods of the component) avoids side-effects. +</details> + +<details> + <summary>Vuex store</summary> + To avoid side effects and keep component tests simple, Vuex stores are replaced with mocks. +</details> + +<details> + <summary>all server requests</summary> + Similar to unit tests, when running component tests, the backend may not be reachable. + Therefore all outgoing requests need to be mocked. +</details> + +<details> + <summary>asynchronous background operations</summary> + Similar to unit tests, background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. +</details> + +<details> + <summary>child components</summary> + Every component is tested individually, so child components are mocked. + See also <a href="https://vue-test-utils.vuejs.org/api/#shallowmount">shallowMount()</a> +</details> + +#### What *not* to mock in component tests + +<details> + <summary>methods or computed properties of the component under test</summary> + By mocking part of the component under test, the mocks will be tested and not the real component. +</details> + +<details> + <summary>functions and classes independent from Vue</summary> + All plain JavaScript code is already covered by unit tests and needs not to be mocked in component tests. +</details> + +### Frontend integration tests + +Integration tests cover the interaction between all components on a single page. +Their abstraction level is comparable to how a user would interact with the UI. 
+ +```mermaid +graph RL + plain[Plain JavaScript]; + Vue[Vue Components]; + feature-flags[Feature Flags]; + license-checks[License Checks]; + + plain---Vuex; + plain---GraphQL; + Vue---plain; + Vue---Vuex; + Vue---GraphQL; + browser---plain; + browser---Vue; + plain---backend; + Vuex---backend; + GraphQL---backend; + Vue---backend; + backend---database; + backend---feature-flags; + backend---license-checks; + + class plain tested; + class Vue tested; + class Vuex tested; + class GraphQL tested; + class browser tested; + linkStyle 0,1,2,3,4,5,6 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090 + classDef label stroke-width:0; + classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + subgraph " " + tested; + mocked; + class tested tested; + end +``` + +#### When to use integration tests + +<details> + <summary>page bundles (<code>index.js</code> files in <code>app/assets/javascripts/pages/</code>)</summary> + Testing the page bundles ensures the corresponding frontend components integrate well. +</details> + +<details> + <summary>Vue applications outside of page bundles</summary> + Testing Vue applications as a whole ensures the corresponding frontend components integrate well. +</details> + +#### What to mock in integration tests + +<details> + <summary>HAML views (use fixtures instead)</summary> + Rendering HAML views requires a Rails environment including a running database which we cannot rely on in frontend tests. +</details> + +<details> + <summary>all server requests</summary> + Similar to unit and component tests, when running component tests, the backend may not be reachable. + Therefore all outgoing requests need to be mocked. +</details> + +<details> + <summary>asynchronous background operations that are not perceivable on the page</summary> + Background operations that affect the page need to be tested on this level. + All other background operations cannot be stopped or waited on, so they will continue running in the following tests and cause side effects. +</details> + +#### What *not* to mock in integration tests + +<details> + <summary>DOM</summary> + Testing on the real DOM ensures our components work in the environment they are meant for. + Part of this will be delegated to <a href="https://gitlab.com/gitlab-org/quality/team-tasks/issues/45">cross-browser testing</a>. +</details> + +<details> + <summary>properties or state of components</summary> + On this level, all tests can only perform actions a user would do. + For example to change the state of a component, a click event would be fired. +</details> + +<details> + <summary>Vuex stores</summary> + When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well. +</details> + +### Feature tests + +In contrast to [frontend integration tests](#frontend-integration-tests), feature tests make requests against the real backend instead of using fixtures. +This also implies that database queries are executed which makes this category significantly slower. + +See also the [RSpec testing guidelines](../testing_guide/best_practices.md#rspec). 
+ +```mermaid +graph RL + plain[Plain JavaScript]; + Vue[Vue Components]; + feature-flags[Feature Flags]; + license-checks[License Checks]; + + plain---Vuex; + plain---GraphQL; + Vue---plain; + Vue---Vuex; + Vue---GraphQL; + browser---plain; + browser---Vue; + plain---backend; + Vuex---backend; + GraphQL---backend; + Vue---backend; + backend---database; + backend---feature-flags; + backend---license-checks; + + class backend tested; + class plain tested; + class Vue tested; + class Vuex tested; + class GraphQL tested; + class browser tested; + linkStyle 0,1,2,3,4,5,6,7,8,9,10 stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + classDef node color:#909090,fill:#f0f0f0,stroke-width:2px,stroke:#909090 + classDef label stroke-width:0; + classDef tested color:#000000,fill:#a0c0ff,stroke:#6666ff,stroke-width:2px,stroke-dasharray: 5, 5; + + subgraph " " + tested; + mocked; + class tested tested; + end +``` + +#### When to use feature tests + +- Use cases that require a backend and cannot be tested using fixtures. +- Behavior that is not part of a page bundle but defined globally. + +#### Relevant notes + +A `:js` flag is added to the test to make sure the full environment is loaded. + +```ruby +scenario 'successfully', :js do + sign_in(create(:admin)) +end +``` + +The steps of each test are written using capybara methods ([documentation](https://www.rubydoc.info/gems/capybara)). + +Bear in mind <abbr title="XMLHttpRequest">XHR</abbr> calls might require you to use `wait_for_requests` in between steps, like so: + +```ruby +find('.form-control').native.send_keys(:enter) + +wait_for_requests + +expect(page).not_to have_selector('.card') +``` + +## Test helpers + +### Vuex Helper: `testAction` + +We have a helper available to make testing actions easier, as per [official documentation](https://vuex.vuejs.org/guide/testing.html): + +```javascript +testAction( + actions.actionName, // action + { }, // params to be passed to action + state, // state + [ + { type: types.MUTATION}, + { type: types.MUTATION_1, payload: {}}, + ], // mutations committed + [ + { type: 'actionName', payload: {}}, + { type: 'actionName1', payload: {}}, + ], // actions dispatched + done, +); +``` + +Check an example in [spec/javascripts/ide/stores/actions_spec.js](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/spec/javascripts/ide/stores/actions_spec.js). + +### Vue Helper: `mountComponent` + +To make mounting a Vue component easier and more readable, we have a few helpers available in `spec/helpers/vue_mount_component_helper`: + +- `createComponentWithStore` +- `mountComponentWithStore` + +Examples of usage: + +```javascript +beforeEach(() => { + vm = createComponentWithStore(Component, store); + + vm.$store.state.currentBranchId = 'master'; + + vm.$mount(); +}); +``` + +```javascript +beforeEach(() => { + vm = mountComponentWithStore(Component, { + el: '#dummy-element', + store, + props: { badge }, + }); +}); +``` + +Don't forget to clean up: + +```javascript +afterEach(() => { + vm.$destroy(); +}); +``` + +## Testing with older browsers + +Some regressions only affect a specific browser version. We can install and test in particular browsers with either Firefox or Browserstack using the following steps: + +### Browserstack + +[Browserstack](https://www.browserstack.com/) allows you to test more than 1200 mobile devices and browsers. 
+You can use it directly through the [live app](https://www.browserstack.com/live) or you can install the [chrome extension](https://chrome.google.com/webstore/detail/browserstack/nkihdmlheodkdfojglpcjjmioefjahjb) for easy access. +You can find the credentials on 1Password, under `frontendteam@gitlab.com`. + +### Firefox + +#### macOS + +You can download any older version of Firefox from the releases FTP server, <https://ftp.mozilla.org/pub/firefox/releases/>: + +1. From the website, select a version, in this case `50.0.1`. +1. Go to the mac folder. +1. Select your preferred language, you will find the dmg package inside, download it. +1. Drag and drop the application to any other folder but the `Applications` folder. +1. Rename the application to something like `Firefox_Old`. +1. Move the application to the `Applications` folder. +1. Open up a terminal and run `/Applications/Firefox_Old.app/Contents/MacOS/firefox-bin -profilemanager` to create a new profile specific to that Firefox version. +1. Once the profile has been created, quit the app, and run it again like normal. You now have a working older Firefox version. + --- [Return to Testing documentation](index.md) diff --git a/doc/development/testing_guide/img/qa_on_merge_requests_cicd_architecture.png b/doc/development/testing_guide/img/qa_on_merge_requests_cicd_architecture.png Binary files differdeleted file mode 100644 index 5b93a05db96..00000000000 --- a/doc/development/testing_guide/img/qa_on_merge_requests_cicd_architecture.png +++ /dev/null diff --git a/doc/development/testing_guide/img/review_apps_cicd_architecture.png b/doc/development/testing_guide/img/review_apps_cicd_architecture.png Binary files differdeleted file mode 100644 index 1ee28d3db91..00000000000 --- a/doc/development/testing_guide/img/review_apps_cicd_architecture.png +++ /dev/null diff --git a/doc/development/testing_guide/index.md b/doc/development/testing_guide/index.md index aadbea1a540..96e8c30a679 100644 --- a/doc/development/testing_guide/index.md +++ b/doc/development/testing_guide/index.md @@ -22,62 +22,44 @@ automated testing means, and what are its principles: - [Five Factor Testing](https://www.devmynd.com/blog/five-factor-testing): Why do we need tests? - [Principles of Automated Testing](http://www.lihaoyi.com/post/PrinciplesofAutomatedTesting.html): Levels of testing. Prioritize tests. Cost of tests. ---- - ## [Testing levels](testing_levels.md) Learn about the different testing levels, and how to decide at what level your changes should be tested. ---- - ## [Testing best practices](best_practices.md) Everything you should know about how to write good tests: Test Design, RSpec, FactoryBot, system tests, parameterized tests etc. ---- - ## [Frontend testing standards and style guidelines](frontend_testing.md) Everything you should know about how to write good Frontend tests: Karma, testing promises, stubbing etc. ---- - ## [Flaky tests](flaky_tests.md) What are flaky tests, the different kind of flaky tests we encountered, and what we do about them. ---- - ## [GitLab tests in the Continuous Integration (CI) context](ci.md) How GitLab test suite is run in the CI context: setup, caches, artifacts, parallelization, monitoring. ---- - ## [Review apps](review_apps.md) How review apps are set up for GitLab CE/EE and how to use them. ---- - ## [Testing Rake tasks](testing_rake_tasks.md) Everything you should know about how to test Rake tasks. 
---- - ## [End-to-end tests](end_to_end/index.md) Everything you should know about how to run end-to-end tests using [GitLab QA][gitlab-qa] testing framework. ---- - [Return to Development documentation](../README.md) [RSpec]: https://github.com/rspec/rspec-rails#feature-specs diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md index ae40d628717..11449712a04 100644 --- a/doc/development/testing_guide/review_apps.md +++ b/doc/development/testing_guide/review_apps.md @@ -8,38 +8,33 @@ Review Apps are automatically deployed by each pipeline, both in ### CI/CD architecture diagram - - -<details> -<summary>Show mermaid source</summary> -<pre> +```mermaid graph TD build-qa-image -.->|once the `prepare` stage is done| gitlab:assets:compile review-build-cng -->|triggers a CNG-mirror pipeline and wait for it to be done| CNG-mirror review-build-cng -.->|once the `test` stage is done| review-deploy review-deploy -.->|once the `review` stage is done| review-qa-smoke -subgraph 1. gitlab-ce/ee `prepare` stage +subgraph "1. gitlab-ce/ee `prepare` stage" build-qa-image end -subgraph 2. gitlab-ce/ee `test` stage +subgraph "2. gitlab-ce/ee `test` stage" gitlab:assets:compile -->|plays dependent job once done| review-build-cng end -subgraph 3. gitlab-ce/ee `review` stage - review-deploy["review-deploy<br /><br />Helm deploys the Review App using the Cloud<br/>Native images built by the CNG-mirror pipeline.<br /><br />Cloud Native images are deployed to the `review-apps-ce` or `review-apps-ee`<br />Kubernetes (GKE) cluster, in the GCP `gitlab-review-apps` project."] +subgraph "3. gitlab-ce/ee `review` stage" + review-deploy["review-deploy<br><br>Helm deploys the Review App using the Cloud<br/>Native images built by the CNG-mirror pipeline.<br><br>Cloud Native images are deployed to the `review-apps-ce` or `review-apps-ee`<br>Kubernetes (GKE) cluster, in the GCP `gitlab-review-apps` project."] end -subgraph 4. gitlab-ce/ee `qa` stage - review-qa-smoke[review-qa-smoke<br /><br />gitlab-qa runs the smoke suite against the Review App.] +subgraph "4. gitlab-ce/ee `qa` stage" + review-qa-smoke[review-qa-smoke<br><br>gitlab-qa runs the smoke suite against the Review App.] end -subgraph CNG-mirror pipeline +subgraph "CNG-mirror pipeline" CNG-mirror>Cloud Native images are built]; end -</pre> -</details> +``` ### Detailed explanation @@ -115,6 +110,28 @@ On every [pipeline][gitlab-pipeline] in the `qa` stage, the browser performance testing using a [Sitespeed.io Container](../../user/project/merge_requests/browser_performance_testing.md). +## Cluster configuration + +### Node pools + +Both `review-apps-ce` and `review-apps-ee` clusters are currently set up with +two node pools: + +- a node pool of non-preemptible `n1-standard-2` (2 vCPU, 7.5 GB memory) nodes + dedicated to the `tiller` deployment (see below) with a single node. +- a node pool of preemptible `n1-standard-2` (2 vCPU, 7.5 GB memory) nodes, + with a minimum of 1 node and a maximum of 250 nodes. + +### Helm/Tiller + +The `tiller` deployment (the Helm server) is deployed to a dedicated node pool +that has the `app=helm` label and a specific +[taint](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) +to prevent other pods from being scheduled on this node pool. + +This is to ensure Tiller isn't affected by "noisy" neighbors that could put +their node under pressure. 
+ ## How to: ### Log into my Review App @@ -137,8 +154,8 @@ secure note named **gitlab-{ce,ee} Review App's root password**. ### Run a Rails console -1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps) - , e.g. `review-qa-raise-e-12chm0`. +1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps), + e.g. `review-qa-raise-e-12chm0`. 1. Find and open the `task-runner` Deployment, e.g. `review-qa-raise-e-12chm0-task-runner`. 1. Click on the Pod in the "Managed pods" section, e.g. `review-qa-raise-e-12chm0-task-runner-d5455cc8-2lsvz`. 1. Click on the `KUBECTL` dropdown, then `Exec` -> `task-runner`. @@ -196,7 +213,7 @@ For the record, the debugging steps to find out this issue were: 1. `kubectl describe pod <pod name>` & confirm exact error message 1. Web search for exact error message, following rabbit hole to [a relevant kubernetes bug report](https://github.com/kubernetes/kubernetes/issues/57345) 1. Access the node over SSH via the GCP console (**Computer Engine > VM - instances** then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs) + instances** then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs) 1. In the node: `systemctl --version` => systemd 232 1. Gather some more information: - `mount | grep kube | wc -l` => e.g. 290 @@ -211,7 +228,7 @@ For the record, the debugging steps to find out this issue were: To resolve the problem, we needed to (forcibly) drain some nodes: 1. Try a normal drain on the node where the `dns-gitlab-review-app-external-dns` - pod runs so that Kubernetes automatically move it to another node: `kubectl drain NODE_NAME` + pod runs so that Kubernetes automatically move it to another node: `kubectl drain NODE_NAME` 1. If that doesn't work, you can also perform a forcible "drain" the node by removing all pods: `kubectl delete pods --field-selector=spec.nodeName=NODE_NAME` 1. In the node: - Perform `systemctl daemon-reload` to remove the dead/inactive units @@ -238,17 +255,8 @@ that a machine will hit the "too many mount points" problem in the future. thousands of unused Docker images.** > We have to start somewhere and improve later. Also, we're using the - CNG-mirror project to store these Docker images so that we can just wipe out - the registry at some point, and use a new fresh, empty one. - -**How big are the Kubernetes clusters (`review-apps-ce` and `review-apps-ee`)?** - - > The clusters are currently set up with a single pool of preemptible nodes, - with a minimum of 1 node and a maximum of 500 nodes. - -**What are the machine running on the cluster?** - - > We're currently using `n1-standard-1` (1 vCPU, 3.75 GB memory) machines. + > CNG-mirror project to store these Docker images so that we can just wipe out + > the registry at some point, and use a new fresh, empty one. **How do we secure this from abuse? Apps are open to the world so we need to find a way to limit it to only us.** diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md index e1ce4d3b7d1..0090c84cbf0 100644 --- a/doc/development/testing_guide/testing_levels.md +++ b/doc/development/testing_guide/testing_levels.md @@ -63,10 +63,9 @@ They're useful to test permissions, redirections, what view is rendered etc. 
| Code path | Tests path | Testing engine | Notes | | --------- | ---------- | -------------- | ----- | -| `app/controllers/` | `spec/controllers/` | RSpec | | +| `app/controllers/` | `spec/controllers/` | RSpec | For N+1 tests, use [request specs](../query_recorder.md#use-request-specs-instead-of-controller-specs) | | `app/mailers/` | `spec/mailers/` | RSpec | | | `lib/api/` | `spec/requests/api/` | RSpec | | -| `lib/ci/api/` | `spec/requests/ci/api/` | RSpec | | | `app/assets/javascripts/` | `spec/javascripts/`, `spec/frontend/` | Karma & Jest | More details in the [Frontend Testing guide](frontend_testing.md) section. | ### About controller tests diff --git a/doc/development/understanding_explain_plans.md b/doc/development/understanding_explain_plans.md index 11aafd7b639..7c926c83a36 100644 --- a/doc/development/understanding_explain_plans.md +++ b/doc/development/understanding_explain_plans.md @@ -199,7 +199,7 @@ more common ones here. A full list of all the available nodes and their descriptions can be found in the [PostgreSQL source file -"plannodes.h"](https://github.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h) +"plannodes.h"](https://gitlab.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h) ### Seq Scan @@ -224,7 +224,7 @@ used when we would read too much data from an index scan, but too little to perform a sequential scan. A bitmap scan uses what is known as a [bitmap index](https://en.wikipedia.org/wiki/Bitmap_index) to perform its work. -The [source code of PostgreSQL](https://github.com/postgres/postgres/blob/1c2cb2744bf3d8ad751cd5cf3b347f10f48492b3/src/include/nodes/plannodes.h#L446-L457) +The [source code of PostgreSQL](https://gitlab.com/postgres/postgres/blob/REL_11_STABLE/src/include/nodes/plannodes.h#L441) states the following on bitmap scans: > Bitmap Index Scan delivers a bitmap of potential tuple locations; it does not diff --git a/doc/development/uploads.md b/doc/development/uploads.md new file mode 100644 index 00000000000..681ce9d9fe8 --- /dev/null +++ b/doc/development/uploads.md @@ -0,0 +1,270 @@ +# Uploads development documentation + +[GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) has special rules for handling uploads. +To prevent occupying a ruby process on I/O operations, we process the upload in workhorse, where it is cheaper. +This process can also directly upload to object storage. + +## The problem description + +The following graph explains machine boundaries in a scalable GitLab installation. Without any workhorse optimization in place, we can expect incoming requests to follow the numbers on the arrows. + +```mermaid +graph TB + subgraph "load balancers" + LB(HA Proxy) + end + + subgraph "Shared storage" + nfs(NFS) + end + + subgraph "redis cluster" + r(persisted redis) + end + LB-- 1 -->workhorse + + subgraph "web or API fleet" + workhorse-- 2 -->rails + end + rails-- "3 (write files)" -->nfs + rails-- "4 (schedule a job)" -->r + + subgraph sidekiq + s(sidekiq) + end + s-- "5 (fetch a job)" -->r + s-- "6 (read files)" -->nfs +``` + +We have three challenges here: performance, availability, and scalability. + +### Performance + +Rails processes are expensive in terms of both CPU and memory. The Ruby [global interpreter lock](https://en.wikipedia.org/wiki/Global_interpreter_lock) adds to the cost too because the ruby process will spend time on I/O operations in step 3, causing incoming requests to pile up. + +In order to improve this, [workhorse disk acceleration](#workhorse-disk-acceleration) was implemented. 
With this, Rails no longer deals with writing uploaded files to disk. + +```mermaid +graph TB + subgraph "load balancers" + LB(HA Proxy) + end + + subgraph "Shared storage" + nfs(NFS) + end + + subgraph "redis cluster" + r(persisted redis) + end + LB-- 1 -->workhorse + + subgraph "web or API fleet" + workhorse-- "3 (without files)" -->rails + end + workhorse -- "2 (write files)" -->nfs + rails-- "4 (schedule a job)" -->r + + subgraph sidekiq + s(sidekiq) + end + s-- "5 (fetch a job)" -->r + s-- "6 (read files)" -->nfs +``` + +### Availability + +There's also an availability problem in this setup: NFS is a [single point of failure](https://en.wikipedia.org/wiki/Single_point_of_failure). + +To address this problem, an HA object storage can be used; this is supported by [workhorse object storage acceleration](#workhorse-object-storage-acceleration). + +### Scalability + +Scaling NFS is outside of our support scope, and NFS is not a part of cloud native installations. + +All features that require sidekiq and do not use object storage acceleration won't work without NFS. In Kubernetes, machine boundaries translate to PODs, and in this case the uploaded file will be written into the POD's private disk. Since the sidekiq POD cannot reach into other PODs, the operation will fail to read it. + +## How to select the proper level of acceleration? + +Selecting the proper acceleration is a tradeoff between speed of development and operational costs. + +We can identify three major use-cases for an upload: + +1. **storage:** if we are uploading for storing a file (e.g. artifacts, packages, discussion attachments). In this case [object storage acceleration](#workhorse-object-storage-acceleration) is the proper level as it's the least resource-intensive operation. Additional information can be found in [File Storage in GitLab](file_storage.md). +1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk acceleration](#workhorse-disk-acceleration) may speed up development. +1. **sidekiq/asynchronous processing:** Async processing must implement [object storage acceleration](#workhorse-object-storage-acceleration), the reason being that it's the only way to support Cloud Native deployments without a shared NFS. + +For more details about currently broken features, see [epic &1802](https://gitlab.com/groups/gitlab-org/-/epics/1802). + +### Handling repository uploads + +Some features involve git repository uploads without using a regular git client. +Some examples are uploading a repository file from the web interface and [design management](../user/project/issues/design_management.md). + +Those uploads require the rails controller to act as a git client in lieu of the user. +Those operations fall into the _in-controller/synchronous processing_ category, but we have no guarantees on the file size. + +In the case of an LFS upload, the file pointer is committed synchronously, but the file upload to object storage is performed asynchronously with sidekiq. + +## Upload encodings + +By upload encoding we mean how the file is included within the incoming request. + +We have three kinds of file encoding in our uploads: + +1. <i class="fa fa-check-circle"></i> **multipart**: `multipart/form-data` is the most common; a file is encoded as a part of a multipart encoded request. +1. <i class="fa fa-check-circle"></i> **body**: some APIs upload files as the whole request body. +1. <i class="fa fa-times-circle"></i> **JSON**: some JSON APIs upload files as base64 encoded strings. 
This requires [gitlab-workhorse#226](https://gitlab.com/gitlab-org/gitlab-workhorse/issues/226) to be implemented. + +## Uploading technologies + +By uploading technologies we mean how all the involved services interact with each other. + +GitLab supports 3 kinds of uploading technologies, here follows a brief description with a sequence diagram for each one. Diagrams are not meant to be exhaustive. + +### Regular rails upload + +This is the default kind of upload, and it's most expensive in terms of resources. + +In this case, workhorse is unaware of files being uploaded and acts as a regular proxy. + +When a multipart request reaches the rails application, `Rack::Multipart` leaves behind tempfiles in `/tmp` and uses valuable Ruby process time to copy files around. + +```mermaid +sequenceDiagram + participant c as Client + participant w as Workhorse + participant r as Rails + + activate c + c ->>+w: POST /some/url/upload + w->>+r: POST /some/url/upload + + r->>r: save the incoming file on /tmp + r->>r: read the file for processing + + r-->>-c: request result + deactivate c + deactivate w +``` + +### Workhorse disk acceleration + +This kind of upload avoids wasting resources caused by handling upload writes to `/tmp` in rails. + +This optimization is not active by default on REST API requests. + +When enabled, Workhorse looks for files in multipart MIME requests, uploading +any it finds to a temporary file on shared storage. The MIME data in the request +is replaced with the path to the corresponding file before it is forwarded to +Rails. + +To prevent abuse of this feature, Workhorse signs the modified request with a +special header, stating which entries it modified. Rails will ignore any +unsigned path entries. + +```mermaid +sequenceDiagram + participant c as Client + participant w as Workhorse + participant r as Rails + participant s as NFS + + activate c + c ->>+w: POST /some/url/upload + + w->>+s: save the incoming file on a temporary location + s-->>-w: + + w->>+r: POST /some/url/upload + Note over w,r: file was replaced with its location<br>and other metadata + + opt requires async processing + r->>+redis: schedule a job + redis-->>-r: + end + + r-->>-c: request result + deactivate c + w->>-w: cleanup + + opt requires async processing + activate sidekiq + sidekiq->>+redis: fetch a job + redis-->>-sidekiq: job + + sidekiq->>+s: read file + s-->>-sidekiq: file + + sidekiq->>sidekiq: process file + + deactivate sidekiq + end +``` + +### Workhorse object storage acceleration + +This is the more advanced acceleration technique we have in place. + +Workhorse asks rails for temporary pre-signed object storage URLs and directly uploads to object storage. + +In this setup an extra rails route needs to be implemented in order to handle authorization, +you can see an example of this in [`Projects::LfsStorageController`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/app/controllers/projects/lfs_storage_controller.rb) +and [its routes](https://gitlab.com/gitlab-org/gitlab-ce/blob/v12.2.0/config/routes/git_http.rb#L31-32). + +**note:** this will fallback to _Workhorse disk acceleration_ when object storage is not enabled in the gitlab instance. The answer to the `/authorize` call will only contain a file system path. 
+ +```mermaid +sequenceDiagram + participant c as Client + participant w as Workhorse + participant r as Rails + participant os as Object Storage + + activate c + c ->>+w: POST /some/url/upload + + w ->>+r: POST /some/url/upload/authorize + Note over w,r: this request has an empty body + r-->>-w: presigned OS URL + + w->>+os: PUT file + Note over w,os: file is stored in a temporary location. Rails selects the destination + os-->>-w: + + w->>+r: POST /some/url/upload + Note over w,r: file was replaced with its location<br>and other metadata + + r->>+os: move object to final destination + os-->>-r: + + opt requires async processing + r->>+redis: schedule a job + redis-->>-r: + end + + r-->>-c: request result + deactivate c + w->>-w: cleanup + + opt requires async processing + activate sidekiq + sidekiq->>+redis: fetch a job + redis-->>-sidekiq: job + + sidekiq->>+os: get object + os-->>-sidekiq: file + + sidekiq->>sidekiq: process file + + deactivate sidekiq + end +``` + +## What does the `direct_upload` setting mean? + +The [object storage setting](../administration/uploads.md#object-storage-settings) allows instance administrators to enable `direct_upload`, an option that only affects the behavior of [workhorse object storage acceleration](#workhorse-object-storage-acceleration). + +This option affects the response to the `/authorize` call. When not enabled, the API response will not contain presigned URLs and workhorse will write the file to the shared disk, on the path provided by rails, acting as if object storage was disabled. + +Once the request reaches rails, it will schedule an object storage upload as a sidekiq job. diff --git a/doc/development/utilities.md b/doc/development/utilities.md index 0e396baccff..4021756343c 100644 --- a/doc/development/utilities.md +++ b/doc/development/utilities.md @@ -6,44 +6,44 @@ We developed a number of utilities to ease development. - Deep merges an array of hashes: - ``` ruby - Gitlab::Utils::MergeHash.merge( - [{ hello: ["world"] }, - { hello: "Everyone" }, - { hello: { greetings: ['Bonjour', 'Hello', 'Hallo', 'Dzien dobry'] } }, - "Goodbye", "Hallo"] - ) - ``` - - Gives: - - ``` ruby - [ - { - hello: - [ - "world", - "Everyone", - { greetings: ['Bonjour', 'Hello', 'Hallo', 'Dzien dobry'] } - ] - }, - "Goodbye" - ] - ``` + ``` ruby + Gitlab::Utils::MergeHash.merge( + [{ hello: ["world"] }, + { hello: "Everyone" }, + { hello: { greetings: ['Bonjour', 'Hello', 'Hallo', 'Dzien dobry'] } }, + "Goodbye", "Hallo"] + ) + ``` + + Gives: + + ``` ruby + [ + { + hello: + [ + "world", + "Everyone", + { greetings: ['Bonjour', 'Hello', 'Hallo', 'Dzien dobry'] } ] + }, + "Goodbye" + ] + ``` - Extracts all keys and values from a hash into an array: - ``` ruby - Gitlab::Utils::MergeHash.crush( - { hello: "world", this: { crushes: ["an entire", "hash"] } } - ) - ``` + ``` ruby + Gitlab::Utils::MergeHash.crush( + { hello: "world", this: { crushes: ["an entire", "hash"] } } + ) + ``` - Gives: + Gives: - ``` ruby - [:hello, "world", :this, :crushes, "an entire", "hash"] - ``` + ``` ruby + [:hello, "world", :this, :crushes, "an entire", "hash"] + ``` ## [`Override`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/utils/override.rb) @@ -53,9 +53,9 @@ We developed a number of utilities to ease development. `ENV['STATIC_VERIFICATION']` is set to avoid production runtime overhead. This is useful to check: - - If we have typos in overriding methods. - - If we renamed the overridden methods, making original overriding methods - overrides nothing. 
+ - If we have typos in overriding methods. + - If we renamed the overridden methods, making original overriding methods + overrides nothing. Here's a simple example: @@ -94,47 +94,47 @@ We developed a number of utilities to ease development. - Memoize the value even if it is `nil` or `false`. - We often do `@value ||= compute`, however this doesn't work well if - `compute` might eventually give `nil` and we don't want to compute again. - Instead we could use `defined?` to check if the value is set or not. - However it's tedious to write such pattern, and `StrongMemoize` would - help us use such pattern. + We often do `@value ||= compute`, however this doesn't work well if + `compute` might eventually give `nil` and we don't want to compute again. + Instead we could use `defined?` to check if the value is set or not. + However it's tedious to write such pattern, and `StrongMemoize` would + help us use such pattern. - Instead of writing patterns like this: + Instead of writing patterns like this: - ``` ruby - class Find - def result - return @result if defined?(@result) + ``` ruby + class Find + def result + return @result if defined?(@result) - @result = search - end + @result = search end - ``` + end + ``` - We could write it like: + We could write it like: - ``` ruby - class Find - include Gitlab::Utils::StrongMemoize + ``` ruby + class Find + include Gitlab::Utils::StrongMemoize - def result - strong_memoize(:result) do - search - end + def result + strong_memoize(:result) do + search end end - ``` + end + ``` - Clear memoization - ``` ruby - class Find - include Gitlab::Utils::StrongMemoize - end + ``` ruby + class Find + include Gitlab::Utils::StrongMemoize + end - Find.new.clear_memoization(:result) - ``` + Find.new.clear_memoization(:result) + ``` ## [`RequestCache`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/cache/request_cache.rb) diff --git a/doc/development/ux_guide/animation.md b/doc/development/ux_guide/animation.md index 583ff19bc69..a998ab74a96 100644 --- a/doc/development/ux_guide/animation.md +++ b/doc/development/ux_guide/animation.md @@ -1,5 +1,5 @@ --- -redirect_to: 'https://design.gitlab.com/foundations/motion' +redirect_to: 'https://design.gitlab.com/product-foundations/motion' --- -The content of this document was moved into the [GitLab Design System](https://design.gitlab.com). +The content of this document was moved into the [GitLab Design System](https://design.gitlab.com/product-foundations/motion). diff --git a/doc/development/ux_guide/illustrations.md b/doc/development/ux_guide/illustrations.md index ed072b6515f..3592d25c95d 100644 --- a/doc/development/ux_guide/illustrations.md +++ b/doc/development/ux_guide/illustrations.md @@ -1,5 +1,5 @@ --- -redirect_to: 'https://design.gitlab.com/foundations/illustration/' +redirect_to: 'https://design.gitlab.com/product-foundations/illustration' --- -The content of this document was moved into the [GitLab Design System](https://design.gitlab.com/). +The content of this document was moved into the [GitLab Design System](https://design.gitlab.com/product-foundations/illustration). 
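As a side note to the `StrongMemoize` discussion above, here is a small, self-contained illustration of why plain `@value ||= compute` memoization re-runs the computation whenever it returns `nil` or `false`. The `Report` class and `expensive_lookup` method are invented for the example:

```ruby
class Report
  # Naive memoization: a nil result is never cached, so the expensive
  # lookup runs again on every call.
  def owner
    @owner ||= expensive_lookup
  end

  private

  def expensive_lookup
    puts 'looking up owner...'
    nil # imagine a slow query that legitimately returns no owner
  end
end

report = Report.new
report.owner # prints "looking up owner..."
report.owner # prints it again, because nil is falsy and defeats ||=
```

`strong_memoize` (like the `defined?` pattern shown in the diff above) avoids this by remembering that the value was computed at all, rather than relying on the value being truthy.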
diff --git a/doc/development/verifying_database_capabilities.md b/doc/development/verifying_database_capabilities.md index 661ab9cef1a..6b4995aebe2 100644 --- a/doc/development/verifying_database_capabilities.md +++ b/doc/development/verifying_database_capabilities.md @@ -1,15 +1,15 @@ # Verifying Database Capabilities -Sometimes certain bits of code may only work on a certain database and/or +Sometimes certain bits of code may only work on a certain database version. While we try to avoid such code as much as possible sometimes it is necessary to add database (version) specific behaviour. To facilitate this we have the following methods that you can use: -- `Gitlab::Database.postgresql?`: returns `true` if PostgreSQL is being used -- `Gitlab::Database.mysql?`: returns `true` if MySQL is being used +- `Gitlab::Database.postgresql?`: returns `true` if PostgreSQL is being used. + You can normally just assume this is the case. - `Gitlab::Database.version`: returns the PostgreSQL version number as a string - in the format `X.Y.Z`. This method does not work for MySQL + in the format `X.Y.Z`. This allows you to write code such as: @@ -25,7 +25,7 @@ else end ``` -# Read-only database +## Read-only database The database can be used in read-only mode. In this case we have to make sure all GET requests don't attempt any write operations to the diff --git a/doc/development/what_requires_downtime.md b/doc/development/what_requires_downtime.md index 24edd05da2f..944bf5900c5 100644 --- a/doc/development/what_requires_downtime.md +++ b/doc/development/what_requires_downtime.md @@ -7,9 +7,8 @@ downtime. ## Adding Columns -On PostgreSQL you can safely add a new column to an existing table as long as it -does **not** have a default value. For example, this query would not require -downtime: +You can safely add a new column to an existing table as long as it does **not** +have a default value. For example, this query would not require downtime: ```sql ALTER TABLE projects ADD COLUMN random_value int; @@ -27,11 +26,6 @@ This requires updating every single row in the `projects` table so that indexes in a table. This in turn acquires enough locks on the table for it to effectively block any other queries. -As of MySQL 5.6 adding a column to a table is still quite an expensive -operation, even when using `ALGORITHM=INPLACE` and `LOCK=NONE`. This means -downtime _may_ be required when modifying large tables as otherwise the -operation could potentially take hours to complete. - Adding a column with a default value _can_ be done without requiring downtime when using the migration helper method `Gitlab::Database::MigrationHelpers#add_column_with_default`. This method works @@ -140,7 +134,7 @@ done without requiring downtime. However, this does require that any application changes are deployed _first_. Thus, changing the constraints of a column should happen in a post-deployment migration. NOTE: Avoid using `change_column` as it produces inefficient query because it re-defines -the whole column type. For example, to add a NOT NULL constraint, prefer `change_column_null ` +the whole column type. For example, to add a NOT NULL constraint, prefer `change_column_null` ## Changing Column Types @@ -311,8 +305,7 @@ migrations](background_migrations.md#cleaning-up). ## Adding Indexes Adding indexes is an expensive process that blocks INSERT and UPDATE queries for -the duration. When using PostgreSQL one can work around this by using the -`CONCURRENTLY` option: +the duration. 
You can work around this by using the `CONCURRENTLY` option: ```sql CREATE INDEX CONCURRENTLY index_name ON projects (column_name); @@ -336,17 +329,9 @@ end Note that `add_concurrent_index` can not be reversed automatically, thus you need to manually define `up` and `down`. -When running this on PostgreSQL the `CONCURRENTLY` option mentioned above is -used. On MySQL this method produces a regular `CREATE INDEX` query. - -MySQL doesn't really have a workaround for this. Supposedly it _can_ create -indexes without the need for downtime but only for variable width columns. The -details on this are a bit sketchy. Since it's better to be safe than sorry one -should assume that adding indexes requires downtime on MySQL. - ## Dropping Indexes -Dropping an index does not require downtime on both PostgreSQL and MySQL. +Dropping an index does not require downtime. ## Adding Tables @@ -370,7 +355,7 @@ transaction this means this approach would require downtime. GitLab allows you to work around this by using `Gitlab::Database::MigrationHelpers#add_concurrent_foreign_key`. This method -ensures that when PostgreSQL is used no downtime is needed. +ensures that no downtime is needed. ## Removing Foreign Keys |
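To make the `add_concurrent_foreign_key` guidance above concrete, here is a sketch of a migration that adds a foreign key without downtime and removes it again on rollback. The table name, column, migration class name, and Rails migration version are invented for the example, and the exact helper options may vary between GitLab versions:

```ruby
# Hypothetical example: add a foreign key from widgets.project_id to projects.
# The helper creates the key without validation first and validates it in a
# separate step, so it must run outside of a transaction, just like
# add_concurrent_index.
class AddWidgetsProjectFk < ActiveRecord::Migration[5.2]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    add_concurrent_foreign_key :widgets, :projects, column: :project_id, on_delete: :cascade
  end

  def down
    # Removing a foreign key is cheap and needs no special helper.
    remove_foreign_key :widgets, column: :project_id
  end
end
```

The validation step still scans the whole table, but it does so without blocking concurrent reads and writes, which is what makes this approach safe on large tables.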