author: GitLab Bot <gitlab-bot@gitlab.com> 2020-04-14 21:09:52 +0000
committer: GitLab Bot <gitlab-bot@gitlab.com> 2020-04-14 21:09:52 +0000
commit: ae93b284016c07a8a4b47e2510789253d14870f3 (patch)
tree: c7dc8690b841dd7d3a4eeeca944969d14df582a6 /doc
parent: f697dc5e76dfc5894df006d53b2b7e751653cf05 (diff)
download: gitlab-ce-ae93b284016c07a8a4b47e2510789253d14870f3.tar.gz
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/development/code_review.md                                             64
-rw-r--r--  doc/development/testing_guide/end_to_end/best_practices.md                119
-rw-r--r--  doc/user/clusters/applications.md                                          48
-rw-r--r--  doc/user/clusters/img/fluentd_v12_10.png                                  bin 0 -> 26758 bytes
-rw-r--r--  doc/user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md  4
5 files changed, 170 insertions, 65 deletions
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index c480db54705..52a0672259f 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -13,9 +13,13 @@ You are strongly encouraged to get your code **reviewed** by a
[reviewer](https://about.gitlab.com/handbook/engineering/workflow/code-review/#reviewer) as soon as
there is any code to review, to get a second opinion on the chosen solution and
implementation, and an extra pair of eyes looking for bugs, logic problems, or
-uncovered edge cases. The reviewer can be from a different team, but it is
-recommended to pick someone who knows the domain well. You can read more about the
-importance of involving reviewer(s) in the section on the responsibility of the author below.
+uncovered edge cases.
+
+The default approach is to choose a reviewer from your group or team for the first review.
+This is only a recommendation and the reviewer may be from a different team.
+However, it is recommended to pick someone who is a [domain expert](#domain-experts).
+
+You can read more about the importance of involving reviewer(s) in the section on the responsibility of the author below.
If you need some guidance (for example, it's your first merge request), feel free to ask
one of the [Merge request coaches](https://about.gitlab.com/company/team/).
@@ -32,16 +36,32 @@ widget. Reviewers can add their approval by [approving additionally](../user/pro
Getting your merge request **merged** also requires a maintainer. If it requires
more than one approval, the last maintainer to review and approve it will also merge it.
+### Domain experts
+
+Domain experts are team members who have substantial experience with a specific technology, product feature, or area of the codebase. Team members are encouraged to self-identify as domain experts and add this to their [team profile](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/team.yml).
+
+When self-identifying as a domain expert, it is recommended to assign the MR changing the `team.yml` to be merged by an already established domain expert or a corresponding Engineering Manager.
+
+We make the following assumptions about who is automatically considered a domain expert:
+
+- Team members working in a specific stage/group (e.g. create: source code) are considered domain experts for the area of the app they work on.
+- Team members working on a specific feature (e.g. search) are considered domain experts for that feature.
+
+We default to assigning reviews to team members with domain expertise.
+When a suitable [domain expert](#domain-experts) isn't available, you can choose any team member to review the MR, or simply follow the [Reviewer roulette](#reviewer-roulette) recommendation.
+
+Team members' domain expertise can be viewed on the [engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page or on the [GitLab team page](https://about.gitlab.com/company/team/).
+
### Reviewer roulette
The [Danger bot](dangerbot.md) randomly picks a reviewer and a maintainer for
each area of the codebase that your merge request seems to touch. It only makes
-recommendations - feel free to override it if you think someone else is a better
+**recommendations** and you should override it if you think someone else is a better
fit!
It picks reviewers and maintainers from the list at the
[engineering projects](https://about.gitlab.com/handbook/engineering/projects/)
-page, with these behaviours:
+page, with these behaviors:
1. It will not pick people whose [GitLab status](../user/profile/index.md#current-status)
contains the string 'OOO'.
@@ -56,7 +76,7 @@ page, with these behaviours:
As described in the section on the responsibility of the maintainer below, you
are recommended to get your merge request approved and merged by maintainer(s)
-from teams other than your own.
+with [domain expertise](#domain-experts).
1. If your merge request includes backend changes [^1], it must be
**approved by a [backend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_maintainers_backend)**.
@@ -103,13 +123,13 @@ To reach the required level of confidence in their solution, an author is expect
to involve other people in the investigation and implementation processes as
appropriate.
-They are encouraged to reach out to domain experts to discuss different solutions
+They are encouraged to reach out to [domain experts](#domain-experts) to discuss different solutions
or get an implementation reviewed, to product managers and UX designers to clear
up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
-If an author is unsure if a merge request needs a domain expert's opinion, that's
+If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion, that's
usually a pretty good sign that it does, since without it the required level of
confidence in their solution will not have been reached.
@@ -142,9 +162,8 @@ that it meets all requirements, you should:
- Click the Approve button.
- Advise the author their merge request has been reviewed and approved.
-- Assign the merge request to a maintainer. [Reviewer roulette](#reviewer-roulette)
-should have made a suggestion, but feel free to override if someone else is a
-better choice.
+- Assign the merge request to a maintainer. Default to assigning it to a maintainer with [domain expertise](#domain-experts).
+However, if one isn't available, or if you think the merge request doesn't need a review by a [domain expert](#domain-experts), feel free to follow the [Reviewer roulette](#reviewer-roulette) suggestion.
### The responsibility of the maintainer
@@ -159,20 +178,17 @@ Since a maintainer's job only depends on their knowledge of the overall GitLab
codebase, and not that of any specific domain, they can review, approve, and merge
merge requests from any team and in any product area.
-In fact, authors are encouraged to get their merge requests merged by maintainers
-from teams other than their own, to ensure that all code across GitLab is consistent
-and can be easily understood by all contributors, from both inside and outside the
-company, without requiring team-specific expertise.
-
Maintainers will do their best to also review the specifics of the chosen solution
-before merging, but as they are not necessarily domain experts, they may be poorly
+before merging, but as they are not necessarily [domain experts](#domain-experts), they may be poorly
placed to do so without an unreasonable investment of time. In those cases, they
-will defer to the judgment of the author and earlier reviewers and involved domain
-experts, in favor of focusing on their primary responsibilities.
+will defer to the judgment of the author and earlier reviewers, in favor of focusing on their primary responsibilities.
+
+If a maintainer feels that an MR is substantial enough that it warrants a review from a [domain expert](#domain-experts),
+and it is unclear whether a domain expert has been involved in the reviews to date,
+they may request a [domain expert's](#domain-experts) review before merging the MR.
If a developer who happens to also be a maintainer was involved in a merge request
-as a domain expert and/or reviewer, it is recommended that they are not also picked
-as the maintainer to ultimately approve and merge it.
+as a reviewer, it is recommended that they are not also picked as the maintainer to ultimately approve and merge it.
Maintainers should check before merging if the merge request is approved by the
required approvers.
@@ -255,11 +271,13 @@ first time.
### Assigning a merge request for a review
-If you want to have your merge request reviewed, you can assign it to any reviewer. The list of reviewers can be found on [Engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page.
+When you are ready to have your merge request reviewed,
+you should default to assigning it to a reviewer from your group or team for the first review.
+However, you can also assign it to any reviewer. The list of reviewers can be found on the [Engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page.
You can also use the `workflow::ready for review` label. It indicates that your merge request is ready to be reviewed and any reviewer can pick it up. It is recommended to use that label only when there is no time pressure, and to make sure the merge request is assigned to a reviewer.
-When your merge request was reviewed and can be passed to a maintainer you can either pick a specific maintainer or use a label `ready for merge`.
+When your merge request has been reviewed and can be passed to a maintainer, you should default to choosing a maintainer with [domain expertise](#domain-experts), and otherwise follow the [Reviewer roulette](#reviewer-roulette) recommendation or use the label `ready for merge`.
It is the responsibility of the author of a merge request to ensure it is reviewed. If it stays in the `ready for review` state too long, it is recommended to assign it to a specific reviewer.
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index fbb2a17bef1..57cfcf34726 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -1,43 +1,59 @@
-# Best practices when writing end-to-end tests
+# End-to-end Testing Best Practices
-## Avoid using a GUI when it's not required
+NOTE: **Note:**
+This is a tailored extension of the Best Practices [found in the testing guide](../best_practices.md).
-The majority of the end-to-end tests require some state to be built in the application for the tests to happen.
+## Prefer API over UI
-A good example is a user being logged in as a pre-condition for testing the feature.
+The end-to-end testing framework can fabricate its resources on a case-by-case basis.
+Resources should be fabricated via the API wherever possible.
-But if the login feature is already covered with end-to-end tests through the GUI, there is no reason to perform such an expensive task to test the functionality of creating a project, or importing a repo, even if these features depend on a user being logged in. Let's see an example to make things clear.
+We can save both time and money by fabricating resources that our test will need via the API.
-Let's say that, on average, the process to perform a successful login through the GUI takes 2 seconds.
+[Learn more](resources.md) about resources.
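+
+For example, instead of clicking through forms, a test that needs an issue can create it in a single API call. This is a minimal sketch using the framework's resource classes; `visit!` is assumed to navigate to the fabricated resource:
+
+```ruby
+# Build the precondition via the API, not the UI.
+issue = Resource::Issue.fabricate_via_api! do |issue|
+  issue.name = 'api-fabricated-issue'
+end
+
+# Only the behavior under test exercises the browser.
+issue.visit!
+```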
-Now, realize that almost all tests need the user to be logged in, and that we need every test to run in isolation, meaning that tests cannot interfere with each other. This would mean that for every test the user needs to log in, and "waste 2 seconds".
+## Avoid superfluous expectations
-Now, multiply the number of tests per 2 seconds, and as your test suite grows, the time to run it grows with it, and this is not sustainable.
+To keep tests lean, it is important that we only test what we need to test.
-An alternative to perform a login in a cheaper way would be having an endpoint (available only for testing) where we could pass the user's credentials as encrypted values as query strings, and then we would be redirected to the logged in home page if the credentials are valid. Let's say that, on average, this process takes only 200 milliseconds.
+Ensure that you do not add any `expect()` statements that are unrelated to what needs to be tested.
-You see the point right?
+For example:
-Performing a login through the GUI for every test would cost a lot in terms of tests' execution.
-
-And there is another reason.
-
-Let's say that you don't follow the above suggestion, and depend on the GUI for the creation of every application state in order to test a specific feature. In this case we could be talking about the **Issues** feature, that depends on a project to exist, and the user to be logged in.
-
-What would happen if there was a bug in the project creation page, where the 'Create' button is disabled, not allowing for the creation of a project through the GUI, but the API logic is still working?
-
-In this case, instead of having only the project creation test failing, we would have many tests that depend on a project to be failing too.
+```ruby
+#=> Good
+Flow::Login.sign_in
+Page::Main::Menu.perform do |menu|
+ expect(menu).to be_signed_in
+end
-But, if we were following the best practices, only one test would be failing, and tests for other features that depend on a project to exist would continue to pass, since they could be creating the project behind the scenes interacting directly with the public APIs, ensuring a more reliable metric of test failure rate.
+#=> Bad
+Flow::Login.sign_in(as: user)
+Page::Main::Menu.perform do |menu|
+ expect(menu).to be_signed_in
+  expect(page).to have_content(user.name) #=> redundant; being signed in was already validated above.
+  expect(menu).to have_element(:nav_bar) #=> likely unnecessary; already validated at a lower level, and the test doesn't call for it.
+end
-Finally, interacting with the application only by its GUI generates a higher rate of test flakiness, and we want to avoid that at max.
+#=> Good
+issue = Resource::Issue.fabricate_via_api! do |issue|
+ issue.name = 'issue-name'
+end
-**The takeaways here are:**
+Project::Issues::Index.perform do |index|
+ expect(index).to have_issue(issue)
+end
-- Building state through the GUI is time consuming and it's not sustainable as the test suite grows.
-- When depending only on the GUI to create the application's state and tests fail due to front-end issues, we can't rely on the test failures rate, and we generate a higher rate of test flakiness.
+#=> Bad
+issue = Resource::Issue.fabricate_via_api! do |issue|
+ issue.name = 'issue-name'
+end
-Now that we are aware of all of it, [let's go create some tests](quick_start_guide.md).
+Project::Issues::Index.perform do |index|
+ expect(index).to have_issue(issue)
+ expect(page).to have_content(issue.name) #=> page content check is redundant as the issue was already validated in the line above.
+end
+```
## Prefer to split tests across multiple files
@@ -54,17 +70,18 @@ In summary:
- **Do**: Split tests across separate files, unless the tests share expensive setup.
- **Don't**: Put new tests in an existing file without considering the impact on parallelization.
-## Limit the use of `before(:all)` and `after` hooks
+## Limit the use of the UI in `before(:context)` and `after` hooks
-Limit the use of `before(:all)` hook to perform setup tasks with only API calls, non UI operations
-or basic UI operations such as login.
+Limit the use of `before(:context)` hooks to perform setup tasks with only API calls,
+non-UI operations, or basic UI operations such as login.
-We use [`capybara-screenshot`](https://github.com/mattheworiordan/capybara-screenshot) library to automatically save screenshots on failures.
-This library [saves the screenshots in the RSpec's `after` hook](https://github.com/mattheworiordan/capybara-screenshot/blob/master/lib/capybara-screenshot/rspec.rb#L97).
-[If there is a failure in `before(:all)`, the `after` hook is not called](https://github.com/rspec/rspec-core/pull/2652/files#diff-5e04af96d5156e787f28d519a8c99615R148) and so the screenshots are not saved.
+We use the [`capybara-screenshot`](https://github.com/mattheworiordan/capybara-screenshot) library to automatically save a screenshot on
+failure.
-Given this fact, we should limit the use of `before(:all)` to only those operations where a screenshot is not
-necessary in case of failure and QA logs would be enough for debugging.
+`capybara-screenshot` [saves the screenshot in RSpec's `after` hook](https://github.com/mattheworiordan/capybara-screenshot/blob/master/lib/capybara-screenshot/rspec.rb#L97).
+[If there is a failure in `before(:context)`, the `after` hook is not called](https://github.com/rspec/rspec-core/pull/2652/files#diff-5e04af96d5156e787f28d519a8c99615R148) and so the screenshot is not saved.
+
+Given this fact, we should limit the use of `before(:context)` to only those operations where a screenshot is not needed.
Similarly, the `after` hook should only be used for non-UI operations. Any UI operations in an `after` hook in a test file
would execute before the `after` hook that takes the screenshot. This would move the UI state away from the
@@ -72,16 +89,11 @@ point of failure and so the screenshot would not be captured at the right moment
## Ensure tests do not leave the browser logged in
-All QA tests expect to be able to log in at the start of the test.
-
-That's not possible if a test leaves the browser logged in when it finishes. Normally this isn't a
-problem because [Capybara resets the session after each test](https://github.com/teamcapybara/capybara/blob/9ebc5033282d40c73b0286e60217515fd1bb0b5d/lib/capybara/rspec.rb#L18).
-But Capybara does that in an `after` block, so when a test logs in within an `after(:context)` block,
-the browser returns to a logged in state *after* Capybara had logged it out. And so the next test will fail.
+All tests expect to be able to log in at the start of the test.
For an example see: <https://gitlab.com/gitlab-org/gitlab/issues/34736>
-Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-beforeall-and-after-hooks)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block.
+Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-the-ui-in-beforecontext-and-after-hooks)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block.
```ruby
after(:all) do
@@ -100,3 +112,30 @@ We don't run tests that require Administrator access against our Production envi
When you add a new test that requires Administrator access, apply the RSpec metadata `:requires_admin` so that the test will not be included in the test suites executed against Production and other environments on which we don't want to run those tests.
Note: When running tests locally or configuring a pipeline, the environment variable `QA_CAN_TEST_ADMIN_FEATURES` can be set to `false` to skip tests that have the `:requires_admin` tag.
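+
+For example, the tag is applied as RSpec metadata on the example group (a sketch only; the spec body is illustrative):
+
+```ruby
+# Excluded from pipelines that cannot test admin features.
+RSpec.describe 'Admin-only behavior', :requires_admin do
+  it 'changes an instance-level setting' do
+    # ...
+  end
+end
+```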
+
+## Prefer `Commit` resource over `ProjectPush`
+
+In line with [using the API](#prefer-api-over-ui), use a `Commit` resource whenever possible.
+
+`ProjectPush` uses raw shell commands via the Git Command Line Interface (CLI), whereas the `Commit` resource makes an HTTP request.
+
+```ruby
+# Using a commit resource
+Resource::Commit.fabricate_via_api! do |commit|
+ commit.commit_message = 'Initial commit'
+ commit.add_files([
+ {file_path: 'README.md', content: 'Hello, GitLab'}
+ ])
+end
+
+# Using a ProjectPush
+Resource::Repository::ProjectPush.fabricate! do |push|
+ push.commit_message = 'Initial commit'
+ push.file_name = 'README.md'
+ push.file_content = 'Hello, GitLab'
+end
+```
+
+NOTE: **Note:**
+An exception to this rule is when your test specifically calls for testing SSH integration or
+the Git CLI, in which case `ProjectPush` is appropriate.
diff --git a/doc/user/clusters/applications.md b/doc/user/clusters/applications.md
index f8fd07276c5..73ef9482e71 100644
--- a/doc/user/clusters/applications.md
+++ b/doc/user/clusters/applications.md
@@ -43,6 +43,7 @@ The following applications can be installed:
- [Knative](#knative)
- [Crossplane](#crossplane)
- [Elastic Stack](#elastic-stack)
+- [Fluentd](#fluentd)
With the exception of Knative, the applications will be installed in a dedicated
namespace called `gitlab-managed-apps`.
@@ -523,6 +524,28 @@ kubectl port-forward svc/kibana 5601:443
Then, you can visit Kibana at `http://localhost:5601`.
+### Fluentd
+
+> Introduced in GitLab 12.10 for project- and group-level clusters.
+
+[Fluentd](https://www.fluentd.org/) is an open source data collector that lets you
+unify data collection and consumption, so you can better use and understand
+your data. Fluentd sends logs in syslog format.
+
+To enable Fluentd:
+
+1. Navigate to **{cloud-gear}** **Operations > Kubernetes** and click
**Applications**. You will be prompted to enter the host, port, and protocol
to which the WAF logs will be sent via syslog.
+1. Provide the host domain name or URL in **SIEM URL or Host**.
+1. Provide the host port number in **SIEM Port**.
+1. Select a **SIEM Protocol**.
+1. Check **Send ModSecurity Logs**. If you do not select this checkbox, the **Install**
+ button is disabled.
+1. Click **Install**.
+
+![Fluentd input fields](img/fluentd_v12_10.png)
+
### Future apps
Interested in contributing a new GitLab managed app? Visit the
@@ -552,6 +575,7 @@ Supported applications:
- [JupyterHub](#install-jupyterhub-using-gitlab-cicd)
- [Elastic Stack](#install-elastic-stack-using-gitlab-cicd)
- [Crossplane](#install-crossplane-using-gitlab-cicd)
+- [Fluentd](#install-fluentd-using-gitlab-cicd)
### Usage
@@ -1036,6 +1060,30 @@ management project. Refer to the
[chart](https://github.com/crossplane/crossplane/tree/master/cluster/charts/crossplane#configuration) for the
available configuration options. Note that this link points to the docs for the current development release, which may differ from the version you have installed. You can check out a specific version in the branch/tag switcher.
+### Install Fluentd using GitLab CI/CD
+
+> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/76) in GitLab 12.10.
+
+To install Fluentd into the `gitlab-managed-apps` namespace of your cluster using GitLab CI/CD, define the following configuration in `.gitlab/managed-apps/config.yaml`:
+
+```yaml
+Fluentd:
+ installed: true
+```
+
+You can also review the default values set for this chart in the [values.yaml](https://github.com/helm/charts/blob/master/stable/fluentd/values.yaml) file.
+
+You can customize the installation of Fluentd by defining
+`.gitlab/managed-apps/fluentd/values.yaml` file in your cluster management
+project. Refer to the
+[configuration chart for the current development release of Fluentd](https://github.com/helm/charts/tree/master/stable/fluentd#configuration)
+for the available configuration options.
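+
+For example, a hypothetical `.gitlab/managed-apps/fluentd/values.yaml` override might look like this (the key names shown are illustrative and must be checked against the chart's own `values.yaml`):
+
+```yaml
+# Illustrative overrides only; consult the chart for the real keys.
+replicaCount: 2
+resources:
+  limits:
+    memory: 400Mi
+```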
+
+NOTE: **Note:**
+The configuration chart link points to the current development release, which
+may differ from the version you have installed. To ensure compatibility, switch
+to the specific branch or tag you are using.
+
## Upgrading applications
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24789) in GitLab 11.8.
diff --git a/doc/user/clusters/img/fluentd_v12_10.png b/doc/user/clusters/img/fluentd_v12_10.png
new file mode 100644
index 00000000000..7593f99ab51
--- /dev/null
+++ b/doc/user/clusters/img/fluentd_v12_10.png
Binary files differ
diff --git a/doc/user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md b/doc/user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md
index 42b1570d213..f80b741fb77 100644
--- a/doc/user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md
+++ b/doc/user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md
@@ -60,11 +60,11 @@ associated Pages domain. It also will be renewed automatically by GitLab.
## Troubleshooting
-### Error "Something went wrong while obtaining Let's Encrypt certificate"
+### Error "Something went wrong while obtaining the Let's Encrypt certificate"
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30146) in GitLab 13.0.
-If you get an error **Something went wrong while obtaining Let's Encrypt certificate**, you can try obtaining the certificate again by following these steps:
+If you get an error **Something went wrong while obtaining the Let's Encrypt certificate**, you can try obtaining the certificate again by following these steps:
1. Go to your project's **Settings > Pages**.
1. Click **Edit** on your domain.