| author | GitLab Bot <gitlab-bot@gitlab.com> | 2020-08-20 18:42:06 +0000 |
|---|---|---|
| committer | GitLab Bot <gitlab-bot@gitlab.com> | 2020-08-20 18:42:06 +0000 |
| commit | 6e4e1050d9dba2b7b2523fdd1768823ab85feef4 (patch) | |
| tree | 78be5963ec075d80116a932011d695dd33910b4e /doc/development/testing_guide | |
| parent | 1ce776de4ae122aba3f349c02c17cebeaa8ecf07 (diff) | |
| download | gitlab-ce-6e4e1050d9dba2b7b2523fdd1768823ab85feef4.tar.gz | |
Add latest changes from gitlab-org/gitlab@13-3-stable-ee
Diffstat (limited to 'doc/development/testing_guide')
8 files changed, 314 insertions, 202 deletions
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md index 4e46e691405..b60a26c29b5 100644 --- a/doc/development/testing_guide/best_practices.md +++ b/doc/development/testing_guide/best_practices.md @@ -57,7 +57,7 @@ bundle exec guard When using spring and guard together, use `SPRING=1 bundle exec guard` instead to make use of spring. -Use [Factory Doctor](https://test-prof.evilmartians.io/#/factory_doctor.md) to find cases on un-necessary database manipulation, which can cause slow tests. +Use [Factory Doctor](https://test-prof.evilmartians.io/#/profilers/factory_doctor) to find cases of unnecessary database manipulation, which can cause slow tests. ```shell # run test for path @@ -261,8 +261,8 @@ As much as possible, do not implement this using `before(:all)` or `before(:cont you would need to manually clean up the data as those hooks run outside a database transaction. Instead, this can be achieved by using -[`let_it_be`](https://test-prof.evilmartians.io/#/let_it_be) variables and the -[`before_all`](https://test-prof.evilmartians.io/#/before_all) hook +[`let_it_be`](https://test-prof.evilmartians.io/#/recipes/let_it_be) variables and the +[`before_all`](https://test-prof.evilmartians.io/#/recipes/before_all) hook from the [`test-prof` gem](https://rubygems.org/gems/test-prof). ```ruby @@ -315,109 +315,7 @@ end ### Feature flags in tests -All feature flags are stubbed to be enabled by default in our Ruby-based -tests. - -To disable a feature flag in a test, use the `stub_feature_flags` -helper. For example, to globally disable the `ci_live_trace` feature -flag in a test: - -```ruby -stub_feature_flags(ci_live_trace: false) - -Feature.enabled?(:ci_live_trace) # => false -``` - -If you wish to set up a test where a feature flag is enabled only -for some actors and not others, you can specify this in options -passed to the helper.
For example, to enable the `ci_live_trace` -feature flag for a specific project: - -```ruby -project1, project2 = build_list(:project, 2) - -# Feature will only be enabled for project1 -stub_feature_flags(ci_live_trace: project1) - -Feature.enabled?(:ci_live_trace) # => false -Feature.enabled?(:ci_live_trace, project1) # => true -Feature.enabled?(:ci_live_trace, project2) # => false -``` - -This represents an actual behavior of FlipperGate: - -1. You can enable an override for a specified actor to be enabled -1. You can disable (remove) an override for a specified actor, - falling back to default state -1. There's no way to model that you explicitly disable a specified actor - -```ruby -Feature.enable(:my_feature) -Feature.disable(:my_feature, project1) -Feature.enabled?(:my_feature) # => true -Feature.enabled?(:my_feature, project1) # => true -``` - -```ruby -Feature.disable(:my_feature2) -Feature.enable(:my_feature2, project1) -Feature.enabled?(:my_feature2) # => false -Feature.enabled?(:my_feature2, project1) # => true -``` - -#### `stub_feature_flags` vs `Feature.enable*` - -It is preferred to use `stub_feature_flags` for enabling feature flags -in testing environment. This method provides a simple and well described -interface for a simple use-cases. - -However, in some cases a more complex behaviors needs to be tested, -like a feature flag percentage rollouts. 
This can be achieved using -the `.enable_percentage_of_time` and `.enable_percentage_of_actors` - -```ruby -# Good: feature needs to be explicitly disabled, as it is enabled by default if not defined -stub_feature_flags(my_feature: false) -stub_feature_flags(my_feature: true) -stub_feature_flags(my_feature: project) -stub_feature_flags(my_feature: [project, project2]) - -# Bad -Feature.enable(:my_feature_2) - -# Good: enable my_feature for 50% of time -Feature.enable_percentage_of_time(:my_feature_3, 50) - -# Good: enable my_feature for 50% of actors/gates/things -Feature.enable_percentage_of_actors(:my_feature_4, 50) -``` - -Each feature flag that has a defined state will be persisted -for test execution time: - -```ruby -Feature.persisted_names.include?('my_feature') => true -Feature.persisted_names.include?('my_feature_2') => true -Feature.persisted_names.include?('my_feature_3') => true -Feature.persisted_names.include?('my_feature_4') => true -``` - -#### Stubbing gate - -It is required that a gate that is passed as an argument to `Feature.enabled?` -and `Feature.disabled?` is an object that includes `FeatureGate`. - -In specs you can use a `stub_feature_flag_gate` method that allows you to have -quickly your custom gate: - -```ruby -gate = stub_feature_flag_gate('CustomActor') - -stub_feature_flags(ci_live_trace: gate) - -Feature.enabled?(:ci_live_trace) # => false -Feature.enabled?(:ci_live_trace, gate) # => true -``` +This section was moved to [developing with feature flags](../feature_flags/development.md). 
### Pristine test environments diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md index 7df3cd614c7..3b193721143 100644 --- a/doc/development/testing_guide/end_to_end/best_practices.md +++ b/doc/development/testing_guide/end_to_end/best_practices.md @@ -55,6 +55,33 @@ Project::Issues::Index.perform do |index| end ``` +## Prefer `aggregate_failures` when there are back-to-back expectations + +In cases where there must be multiple (back-to-back) expectations within a test case, it is preferable to use `aggregate_failures`. + +This allows you to group a set of expectations and see all of the failures together, rather than having the test abort on the first failure. + +For example: + +```ruby +#=> Good +Page::Search::Results.perform do |search| + search.switch_to_code + + aggregate_failures 'testing search results' do + expect(search).to have_file_in_project(template[:file_name], project.name) + expect(search).to have_file_with_content(template[:file_name], content[0..33]) + end +end + +#=> Bad +Page::Search::Results.perform do |search| + search.switch_to_code + expect(search).to have_file_in_project(template[:file_name], project.name) + expect(search).to have_file_with_content(template[:file_name], content[0..33]) +end +``` + ## Prefer to split tests across multiple files Our framework includes a couple of parallelization mechanisms that work by executing spec files in parallel.
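The `aggregate_failures` guidance above can be sketched in plain Ruby. This is an illustrative model of the semantics only, not RSpec's implementation; the `aggregate_failures` method and `check` lambda below are hypothetical names:

```ruby
# Plain-Ruby sketch of what `aggregate_failures` buys you: every expectation
# in the block is evaluated and all failures are reported together, instead
# of the first failure aborting the test.
def aggregate_failures(label)
  failures = []
  # `check` records a failure instead of raising, so later checks still run.
  check = ->(description, passed) { failures << description unless passed }
  yield check
  failures.map { |description| "#{label}: #{description}" }
end

results = aggregate_failures('testing search results') do |check|
  check.call('has file in project', false)   # fails, but does not abort...
  check.call('has file with content', false) # ...so this check still runs
end
```

With RSpec itself, both `expect` calls inside an `aggregate_failures` block would likewise be evaluated and reported in one combined failure message.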
diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md index ac051b827d2..f61eab5c8f3 100644 --- a/doc/development/testing_guide/end_to_end/index.md +++ b/doc/development/testing_guide/end_to_end/index.md @@ -178,6 +178,13 @@ Once you decided where to put [test environment orchestration scenarios](https:/ the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md), and [the already existing instance-level scenarios](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa/qa/specs/features). +### Consider **not** writing an end-to-end test + +We should follow these best practices for end-to-end tests: + +- Do not write an end-to-end test if a lower-level feature test exists. End-to-end tests require more work and resources. +- Troubleshooting for end-to-end tests can be more complex as connections to the application under test are not known. + Continued reading: - [Beginner's Guide](beginners_guide.md) diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md index d43d88779c7..6ce44b2d359 100644 --- a/doc/development/testing_guide/end_to_end/page_objects.md +++ b/doc/development/testing_guide/end_to_end/page_objects.md @@ -4,22 +4,22 @@ In GitLab QA we are using a known pattern, called _Page Objects_. This means that we have built an abstraction for all pages in GitLab that we use to drive GitLab QA scenarios. Whenever we do something on a page, like filling -in a form, or clicking a button, we do that only through a page object +in a form or clicking a button, we do that only through a page object associated with this area of GitLab. For example, when GitLab QA test harness signs in into GitLab, it needs to fill -in a user login and user password. In order to do that, we have a class, called +in user login and user password. 
To do that, we have a class, called `Page::Main::Login` and `sign_in_using_credentials` methods, that is the only -piece of the code, that has knowledge about `user_login` and `user_password` +piece of the code, that reads the `user_login` and `user_password` fields. ## Why do we need that? -We need page objects, because we need to reduce duplication and avoid problems +We need page objects because we need to reduce duplication and avoid problems whenever someone changes some selectors in GitLab's source code. Imagine that we have a hundred specs in GitLab QA, and we need to sign into -GitLab each time, before we make assertions. Without a page object one would +GitLab each time, before we make assertions. Without a page object, one would need to rely on volatile helpers or invoke Capybara methods directly. Imagine invoking `fill_in :user_login` in every `*_spec.rb` file / test example. @@ -28,7 +28,7 @@ this page to `t.text_field :username` it will generate a different field identifier, what would effectively break all tests. Because we are using `Page::Main::Login.perform(&:sign_in_using_credentials)` -everywhere, when we want to sign into GitLab, the page object is the single +everywhere, when we want to sign in to GitLab, the page object is the single source of truth, and we will need to update `fill_in :user_login` to `fill_in :user_username` only in a one place. @@ -42,23 +42,23 @@ That is why when someone changes `t.text_field :login` to change until our GitLab QA nightly pipeline fails, or until someone triggers `package-and-qa` action in their merge request. -Obviously such a change would break all tests. We call this problem a _fragile +Such a change would break all tests. We call this problem a _fragile tests problem_. -In order to make GitLab QA more reliable and robust, we had to solve this +To make GitLab QA more reliable and robust, we had to solve this problem by introducing coupling between GitLab CE / EE views and GitLab QA. 
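The pattern described above can be sketched in a stripped-down, hypothetical page object. `Page::Main::Login` and `sign_in_using_credentials` mirror the names used in the text; `FakeSession` is an invented stand-in for Capybara, so this is a sketch of the idea rather than GitLab QA's actual implementation:

```ruby
# Sketch of the page-object idea: specs never touch `user_login` directly,
# so a renamed field is updated in exactly one place.
module Page
  module Main
    class Login
      # Single source of truth for the field names.
      USER_LOGIN_FIELD = :user_login
      USER_PASSWORD_FIELD = :user_password

      def self.perform(session)
        yield new(session)
      end

      def initialize(session)
        @session = session
      end

      def sign_in_using_credentials(login:, password:)
        @session.fill_in(USER_LOGIN_FIELD, with: login)
        @session.fill_in(USER_PASSWORD_FIELD, with: password)
      end
    end
  end
end

# Records what was filled in, instead of driving a real browser.
class FakeSession
  attr_reader :filled

  def initialize
    @filled = {}
  end

  def fill_in(field, with:)
    @filled[field] = with
  end
end

session = FakeSession.new
Page::Main::Login.perform(session) do |page|
  page.sign_in_using_credentials(login: 'root', password: 'secret')
end
```

If the field were later renamed, only `USER_LOGIN_FIELD` would change; every spec that goes through the page object keeps working.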
## How did we solve fragile tests problem? Currently, when you add a new `Page::Base` derived class, you will also need to -define all selectors that your page objects depends on. +define all selectors that your page objects depend on. Whenever you push your code to CE / EE repository, `qa:selectors` sanity test job is going to be run as a part of a CI pipeline. This test is going to validate all page objects that we have implemented in `qa/page` directory. When it fails, you will be notified about missing -or invalid views / selectors definition. +or invalid views/selectors definition. ## How to properly implement a page object? @@ -89,7 +89,7 @@ end ### Defining Elements -The `view` DSL method will correspond to the rails View, partial, or Vue component that renders the elements. +The `view` DSL method will correspond to the Rails view, partial, or Vue component that renders the elements. The `element` DSL method in turn declares an element for which a corresponding `data-qa-selector=element_name_snaked` data attribute will need to be added to the view file. @@ -134,7 +134,7 @@ view 'app/views/my/view.html.haml' do end ``` -To add these elements to the view, you must change the rails View, partial, or Vue component by adding a `data-qa-selector` attribute +To add these elements to the view, you must change the Rails view, partial, or Vue component by adding a `data-qa-selector` attribute for each element defined. In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field"` and `data-qa-selector="sign_in_button"` @@ -228,7 +228,7 @@ expect(the_page).to have_element(:model, index: 1) #=> select on the first model ### Exceptions -In some cases it might not be possible or worthwhile to add a selector. +In some cases, it might not be possible or worthwhile to add a selector. Some UI components use external libraries, including some maintained by third parties. 
Even if a library is maintained by GitLab, the selector sanity test only runs diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md index 77d820e1686..2cf2bb5b1d0 100644 --- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md +++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md @@ -48,3 +48,89 @@ only to prevent it from running in the pipelines for live environments such as S If Jenkins Docker container exits without providing any information in the logs, try increasing the memory used by the Docker Engine. + +## Gitaly Cluster tests + +The tests tagged `:gitaly_ha` are orchestrated tests that can only be run against a set of Docker containers as configured and started by [the `Test::Integration::GitalyCluster` GitLab QA scenario](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#testintegrationgitalycluster-ceeefull-image-address). + +As described in the documentation about the scenario noted above, the following command will run the tests: + +```shell +gitlab-qa Test::Integration::GitalyCluster EE +``` + +However, that will remove the containers after it finishes running the tests. If you would like to do further testing, for example, if you would like to run a single test via a debugger, you can use [the `--no-tests` option](https://gitlab.com/gitlab-org/gitlab-qa#command-line-options) to make `gitlab-qa` skip running the tests, and to leave the containers running so that you can continue to use them. + +```shell +gitlab-qa Test::Integration::GitalyCluster EE --no-tests +``` + +When all the containers are running you will see the output of the `docker ps` command, showing on which ports the GitLab container can be accessed. For example: + +```plaintext +CONTAINER ID ... PORTS NAMES +d15d3386a0a8 ... 
22/tcp, 443/tcp, 0.0.0.0:32772->80/tcp gitlab-gitaly-ha +``` + +That shows that the GitLab instance running in the `gitlab-gitaly-ha` container can be reached via `http://localhost:32772`. However, Git operations like cloning and pushing are performed against the URL revealed via the UI as the clone URL. It uses the hostname configured for the GitLab instance, which in this case matches the Docker container name and network, `gitlab-gitaly-ha.test`. Before you can run the tests you need to configure your computer to access the container via that address. One option is to [use caddyserver as described for running tests against GDK](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/run_qa_against_gdk.md#workarounds). + +Another option is to use NGINX. + +In both cases you will need to configure your machine to translate `gitlab-gitaly-ha.test` into an appropriate IP address: + +```shell +echo '127.0.0.1 gitlab-gitaly-ha.test' | sudo tee -a /etc/hosts +``` + +Then install NGINX: + +```shell +# on macOS +brew install nginx + +# on Debian/Ubuntu +apt install nginx + +# on Fedora +yum install nginx +``` + +Finally, configure NGINX to pass requests for `gitlab-gitaly-ha.test` to the GitLab instance: + +```plaintext +# On Debian/Ubuntu, in /etc/nginx/sites-enabled/gitlab-cluster +# On macOS, in /usr/local/etc/nginx/nginx.conf + +server { + server_name gitlab-gitaly-ha.test; + client_max_body_size 500m; + + location / { + proxy_pass http://127.0.0.1:32772; + proxy_set_header Host gitlab-gitaly-ha.test; + } +} +``` + +Restart NGINX for the configuration to take effect.
For example: + +```shell +# On Debian/Ubuntu +sudo systemctl restart nginx + +# on macOS +sudo nginx -s reload +``` + +You could then run the tests from the `/qa` directory: + +```shell +CHROME_HEADLESS=false bin/qa Test::Instance::All http://gitlab-gitaly-ha.test -- --tag gitaly_ha +``` + +Once you have finished testing you can stop and remove the Docker containers: + +```shell +docker stop gitlab-gitaly-ha praefect postgres gitaly3 gitaly2 gitaly1 +docker rm gitlab-gitaly-ha praefect postgres gitaly3 gitaly2 gitaly1 +``` diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md index ef9fd748dbb..42ca65a74f2 100644 --- a/doc/development/testing_guide/frontend_testing.md +++ b/doc/development/testing_guide/frontend_testing.md @@ -24,9 +24,8 @@ We have started to migrate frontend tests to the [Jest](https://jestjs.io) testi Jest tests can be found in `/spec/frontend` and `/ee/spec/frontend` in EE. -> **Note:** -> -> Most examples have a Jest and Karma example. See the Karma examples only as explanation to what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the one you should follow. +NOTE: **Note:** +Most examples have a Jest and Karma example. See the Karma examples only as explanation to what's going on in the code, should you stumble over some use cases during your discovery. The Jest examples are the one you should follow. ## Karma test suite @@ -170,22 +169,14 @@ Some more examples can be found in the [Frontend unit tests section](testing_lev Another common gotcha is that the specs end up verifying the mock is working. If you are using mocks, the mock should support the test, but not be the target of the test. 
-**Bad:** - ```javascript const spy = jest.spyOn(idGenerator, 'create') spy.mockImplementation(() => '1234') +// Bad expect(idGenerator.create()).toBe('1234') -``` - -**Good:** - -```javascript -const spy = jest.spyOn(idGenerator, 'create') -spy.mockImplementation = () = '1234' -// Actually focusing on the logic of your component and just leverage the controllable mocks output +// Good: actually focusing on the logic of your component and just leveraging the controllable mock's output expect(wrapper.find('div').html()).toBe('<div id="1234">...</div>') ``` @@ -204,29 +195,67 @@ Following you'll find some general common practices you will find as part of our ### How to query DOM elements -When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target the element. Sometimes this cannot be done feasibly. In these cases, adding test attributes to simplify the selectors might be the best option. +When it comes to querying DOM elements in your tests, it is best to uniquely and semantically target +the element. + +Preferentially, this is done by targeting text the user actually sees using [DOM Testing Library](https://testing-library.com/docs/dom-testing-library/intro). +When selecting by text it is best to use [`getByRole` or `findByRole`](https://testing-library.com/docs/dom-testing-library/api-queries#byrole) +as these enforce accessibility best practices as well. The examples below demonstrate the order of preference. -Preferentially, in component testing with `@vue/test-utils`, you should query for child components using the component itself. This helps enforce that specific behavior can be covered by that component's individual unit tests. Otherwise, try to use: +Sometimes this cannot be done feasibly. In these cases, adding test attributes to simplify the +selectors might be the best option.
- A semantic attribute like `name` (also verifies that `name` was set up properly) - A `data-testid` attribute ([recommended by maintainers of `@vue/test-utils`](https://github.com/vuejs/vue-test-utils/issues/1498#issuecomment-610133465)) - a Vue `ref` (if using `@vue/test-utils`) -Examples: - ```javascript +import { mount, shallowMount } from '@vue/test-utils' +import { getByRole, getByText } from '@testing-library/dom' + +let wrapper +let el + +const createComponent = (mountFn = shallowMount) => { + wrapper = mountFn(Component) + el = wrapper.vm.$el // reference to the container element +} + +beforeEach(() => { + createComponent() +}) + + +it('exists', () => { + // Best + + // NOTE: both mount and shallowMount work as long as a DOM element is available + // Finds a properly formatted link with an accessible name of "Click Me" + getByRole(el, 'link', { name: /Click Me/i }) + getByRole(el, 'link', { name: 'Click Me' }) + // Finds any element with the text "Click Me" + getByText(el, 'Click Me') + // Regex is also available + getByText(el, /Click Me/i) + + // Good + wrapper.find('input[name=foo]'); + wrapper.find('[data-testid="foo"]'); + wrapper.find({ ref: 'foo'}); + + // Bad + wrapper.find('.js-foo'); + wrapper.find('.btn-primary'); + wrapper.find('.qa-foo-component'); + wrapper.find('[data-qa-selector="foo"]'); +}); + +// Good it('exists', () => { - // Good wrapper.find(FooComponent); wrapper.find('input[name=foo]'); wrapper.find('[data-testid="foo"]'); wrapper.find({ ref: 'foo'}); - - // Bad - wrapper.find('.js-foo'); - wrapper.find('.btn-primary'); - wrapper.find('.qa-foo-component'); - wrapper.find('[data-qa-selector="foo"]'); }); ``` @@ -234,28 +263,47 @@ It is not recommended that you add `.js-*` classes just for testing purposes. On Do not use a `.qa-*` class or `data-qa-selector` attribute for any tests other than QA end-to-end testing.
+### Querying for child components + +When testing Vue components with `@vue/test-utils` another possible approach is querying for child +components instead of querying for DOM nodes. This assumes that implementation details of behavior +under test should be covered by that component's individual unit test. There is no strong preference +in writing DOM or component queries as long as your tests reliably cover expected behavior for the +component under test. + +Example: + +```javascript +it('exists', () => { + wrapper.find(FooComponent); +}); +``` + ### Naming unit tests When writing describe test blocks to test specific functions/methods, please use the method name as the describe block name. +**Bad**: + ```javascript -// Good -describe('methodName', () => { +describe('#methodName', () => { it('passes', () => { expect(true).toEqual(true); }); }); -// Bad -describe('#methodName', () => { +describe('.methodName', () => { it('passes', () => { expect(true).toEqual(true); }); }); +``` -// Bad -describe('.methodName', () => { +**Good**: + +```javascript +describe('methodName', () => { it('passes', () => { expect(true).toEqual(true); }); @@ -286,61 +334,67 @@ it('tests a promise rejection', async () => { You can also work with Promise chains. In this case, you can make use of the `done` callback and `done.fail` in case an error occurred. 
Following are some examples: +**Bad**: + ```javascript -// Good +// missing done callback +it('tests a promise', () => { + promise.then(data => { + expect(data).toBe(asExpected); + }); +}); + +// missing catch it('tests a promise', done => { promise .then(data => { expect(data).toBe(asExpected); }) - .then(done) - .catch(done.fail); + .then(done); }); -// Good -it('tests a promise rejection', done => { +// use done.fail in asynchronous tests +it('tests a promise', done => { promise - .then(done.fail) - .catch(error => { - expect(error).toBe(expectedError); + .then(data => { + expect(data).toBe(asExpected); }) .then(done) - .catch(done.fail); -}); - -// Bad (missing done callback) -it('tests a promise', () => { - promise.then(data => { - expect(data).toBe(asExpected); - }); + .catch(fail); }); -// Bad (missing catch) -it('tests a promise', done => { +// missing catch +it('tests a promise rejection', done => { promise - .then(data => { - expect(data).toBe(asExpected); + .catch(error => { + expect(error).toBe(expectedError); }) .then(done); }); +``` -// Bad (use done.fail in asynchronous tests) +**Good**: + +```javascript +// handling success it('tests a promise', done => { promise .then(data => { expect(data).toBe(asExpected); }) .then(done) - .catch(fail); + .catch(done.fail); }); -// Bad (missing catch) +// failure case it('tests a promise rejection', done => { promise + .then(done.fail) .catch(error => { expect(error).toBe(expectedError); }) - .then(done); + .then(done) + .catch(done.fail); }); ``` @@ -564,11 +618,11 @@ Examples: ```javascript const foo = 1; -// good -expect(foo).toBe(1); - -// bad +// Bad expect(foo).toEqual(1); + +// Good +expect(foo).toBe(1); ``` #### Prefer more befitting matchers @@ -621,12 +675,11 @@ Jest has the tricky `toBeDefined` matcher that can produce false positive test. the given value for `undefined` only. 
```javascript -// good -expect(wrapper.find('foo').exists()).toBe(true); - -// bad -// if finder returns null, the test will pass +// Bad: if finder returns null, the test will pass expect(wrapper.find('foo')).toBeDefined(); + +// Good +expect(wrapper.find('foo').exists()).toBe(true); ``` #### Avoid using `setImmediate` @@ -771,13 +824,37 @@ yarn karma -f 'spec/javascripts/ide/**/file_spec.js' ## Frontend test fixtures -Code that is added to HAML templates (in `app/views/`) or makes Ajax requests to the backend has tests that require HTML or JSON from the backend. -Fixtures for these tests are located at: +Frontend fixtures are files containing responses from backend controllers. These responses can be either HTML +generated from HAML templates or JSON payloads. Frontend tests that rely on these responses +often use fixtures to validate correct integration with the backend code. + +### Generate fixtures + +You can find code to generate test fixtures in: - `spec/frontend/fixtures/`, for running tests in CE. - `ee/spec/frontend/fixtures/`, for running tests in EE. -Fixture files in: +You can generate fixtures by running: + +- `bin/rake frontend:fixtures` to generate all fixtures +- `bin/rspec spec/frontend/fixtures/merge_requests.rb` to generate specific fixtures (in this case for `merge_request.rb`) + +You can find generated fixtures in `tmp/tests/frontend/fixtures-ee`. + +#### Creating new fixtures + +For each fixture, you can find the content of the `response` variable in the output file. +For example, a test named `"merge_requests/diff_discussion.json"` in `spec/frontend/fixtures/merge_requests.rb` +will produce the output file `tmp/tests/frontend/fixtures-ee/merge_requests/diff_discussion.json`. +The `response` variable gets automatically set if the test is marked as `type: :request` or `type: :controller`.
+ +When creating a new fixture, it often makes sense to take a look at the corresponding tests for the +endpoint in `(ee/)spec/controllers/` or `(ee/)spec/requests/`. + +### Use fixtures + +Jest and Karma test suites import fixtures in different ways: - The Karma test suite is served by [jasmine-jquery](https://github.com/velesin/jasmine-jquery). - Jest uses `spec/frontend/helpers/fixtures.js`. @@ -803,14 +880,6 @@ it('uses some HTML element', () => { }); -HTML and JSON fixtures are generated from backend views and controllers using RSpec (see `spec/frontend/fixtures/*.rb`). - -For each fixture, the content of the `response` variable is stored in the output file. -This variable gets automatically set if the test is marked as `type: :request` or `type: :controller`. -Fixtures are regenerated using the `bin/rake frontend:fixtures` command but you can also generate them individually, -for example `bin/rspec spec/frontend/fixtures/merge_requests.rb`. -When creating a new fixture, it often makes sense to take a look at the corresponding tests for the endpoint in `(ee/)spec/controllers/` or `(ee/)spec/requests/`. - ## Data-driven tests Similar to [RSpec's parameterized tests](best_practices.md#table-based--parameterized-tests), diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md index 54f8ca0d98b..68816ccfe45 100644 --- a/doc/development/testing_guide/review_apps.md +++ b/doc/development/testing_guide/review_apps.md @@ -142,6 +142,9 @@ the following node pools: - `e2-highcpu-16` (16 vCPU, 16 GB memory) pre-emptible nodes with autoscaling +Node pool image type must be `Container-Optimized OS (cos)`, not `Container-Optimized OS with Containerd (cos_containerd)`, +due to this [known issue on GitLab Runner Kubernetes executor](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4755). + ### Helm The Helm version used is defined in the @@ -153,13 +156,12 @@ used by the `review-deploy` and `review-stop` jobs.
### Get access to the GCP Review Apps cluster You need to [open an access request (internal link)](https://gitlab.com/gitlab-com/access-requests/-/issues/new) -for the `gcp-review-apps-sg` GCP group. In order to join a group, you must specify the desired GCP role in your access request. -The role is what will grant you specific permissions in order to engage with Review App containers. +for the `gcp-review-apps-dev` GCP group and role. -Here are some permissions you may want to have, and the roles that grant them: +This grants you the following permissions: -- `container.pods.getLogs` - Required to [retrieve pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles). -- `container.pods.exec` - Required to [run a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.developer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles). +- [Retrieving pod logs](#dig-into-a-pods-logs). Granted by [Viewer (`roles/viewer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles). +- [Running a Rails console](#run-a-rails-console). Granted by [Kubernetes Engine Developer (`roles/container.developer`)](https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles). ### Log into my Review App diff --git a/doc/development/testing_guide/testing_migrations_guide.md b/doc/development/testing_guide/testing_migrations_guide.md index 8ee758177c3..a5bcb651d71 100644 --- a/doc/development/testing_guide/testing_migrations_guide.md +++ b/doc/development/testing_guide/testing_migrations_guide.md @@ -37,15 +37,37 @@ ensures proper isolation. To test an `ActiveRecord::Migration` class (i.e., a regular migration `db/migrate` or a post-migration `db/post_migrate`), you -will need to manually `require` the migration file because it is not -autoloaded with Rails.
Example: +will need to load the migration file by using the `require_migration!` helper +method because it is not autoloaded by Rails. + +Example: ```ruby -require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb') +require 'spec_helper' + +require_migration! + +RSpec.describe ... ``` ### Test helpers +#### `require_migration!` + +Since the migration files are not autoloaded by Rails, you will need to manually +load the migration file. To do so, you can use the `require_migration!` helper method +which can automatically load the correct migration file based on the spec file name. + +For example, if your spec file is named as `populate_foo_column_spec.rb` then the +helper method will try to load `${schema_version}_populate_foo_column.rb` migration file. + +In case there is no pattern between your spec file and the actual migration file, +you can provide the migration file name without the schema version, like so: + +```ruby +require_migration!('populate_foo_column') +``` + #### `table` Use the `table` helper to create a temporary `ActiveRecord::Base`-derived model @@ -110,7 +132,8 @@ migration. You can find the complete spec in ```ruby require 'spec_helper' -require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb') + +require_migration! RSpec.describe MigratePipelineStages do # Create test data - pipeline and CI/CD jobs. |