Diffstat (limited to 'doc/development/testing_guide')
9 files changed, 279 insertions, 8 deletions
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md index 32e079f915c..fe3989474e6 100644 --- a/doc/development/testing_guide/best_practices.md +++ b/doc/development/testing_guide/best_practices.md @@ -44,6 +44,14 @@ bundle exec rspec bundle exec rspec spec/[path]/[to]/[spec].rb ``` +Use [guard](https://github.com/guard/guard) to continuously monitor for changes and only run matching tests: + +```sh +bundle exec guard +``` + +When using spring and guard together, use `SPRING=1 bundle exec guard` instead to make use of spring. + ### General guidelines - Use a single, top-level `describe ClassName` block. @@ -61,6 +69,7 @@ bundle exec rspec spec/[path]/[to]/[spec].rb - When using `evaluate_script("$('.js-foo').testSomething()")` (or `execute_script`) which acts on a given element, use a Capybara matcher beforehand (e.g. `find('.js-foo')`) to ensure the element actually exists. - Use `focus: true` to isolate parts of the specs you want to run. +- Use [`:aggregate_failures`](https://relishapp.com/rspec/rspec-core/docs/expectation-framework-integration/aggregating-failures) when there is more than one expectation in a test. ### System / Feature tests @@ -356,10 +365,22 @@ However, if a spec makes direct Redis calls, it should mark itself with the `:clean_gitlab_redis_cache`, `:clean_gitlab_redis_shared_state` or `:clean_gitlab_redis_queues` traits as appropriate. -Sidekiq jobs are typically not run in specs, but this behaviour can be altered -in each spec through the use of `perform_enqueued_jobs` blocks. Any spec that -causes Sidekiq jobs to be pushed to Redis should use the `:sidekiq` trait, to -ensure that they are removed once the spec completes. +#### Background jobs / Sidekiq + +By default, Sidekiq jobs are enqueued into a jobs array and aren't processed. +If a test enqueues Sidekiq jobs and needs them to be processed, the +`:sidekiq_inline` trait can be used.
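The difference between the default fake mode and `:sidekiq_inline` can be sketched with a standalone toy queue. This is an illustration only — `FakeQueue`, its `inline:` flag, and the `:send_email` job are invented for this example and are not GitLab or Sidekiq code:

```ruby
# Toy model of Sidekiq's testing modes (illustration only, not GitLab code).
# Default (fake) mode: perform_async only records the job in a jobs array.
# Inline mode (what the :sidekiq_inline trait enables): the job runs at once.
class FakeQueue
  attr_reader :jobs, :performed

  def initialize(inline: false)
    @inline = inline
    @jobs = []      # jobs enqueued but never processed (fake mode)
    @performed = [] # jobs that were actually processed
  end

  def perform_async(job)
    @inline ? @performed << job : @jobs << job
  end
end

fake = FakeQueue.new
fake.perform_async(:send_email)
fake.jobs      #=> [:send_email] - the job was only enqueued, never run

inline = FakeQueue.new(inline: true)
inline.perform_async(:send_email)
inline.performed #=> [:send_email] - the job was processed immediately
```

A test that asserts on the side effects of a job (rather than on the fact that it was enqueued) needs the inline behaviour, which is what `:sidekiq_inline` provides.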
+ +The `:sidekiq_might_not_need_inline` trait was added to all the tests that +needed Sidekiq to actually process jobs when [Sidekiq inline mode was +changed to fake mode](https://gitlab.com/gitlab-org/gitlab/merge_requests/15479). Tests with +this trait should either be fixed to not rely on Sidekiq processing jobs, or have their +`:sidekiq_might_not_need_inline` trait updated to `:sidekiq_inline` if +the processing of background jobs is needed/expected. + +NOTE: **Note:** +Using `perform_enqueued_jobs` currently has no effect, because our +workers don't inherit from `ApplicationJob` / `ActiveJob::Base`. #### Filesystem diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md index 042879b47aa..e2a0d267ba1 100644 --- a/doc/development/testing_guide/end_to_end/best_practices.md +++ b/doc/development/testing_guide/end_to_end/best_practices.md @@ -65,3 +65,31 @@ This library [saves the screenshots in the RSpec's `after` hook](https://github. Given this fact, we should limit the use of `before(:all)` to only those operations where a screenshot is not necessary in case of failure and QA logs would be enough for debugging. + +## Ensure tests do not leave the browser logged in + +All QA tests expect to be able to log in at the start of the test. + +That's not possible if a test leaves the browser logged in when it finishes. Normally this isn't a problem because [Capybara resets the session after each test](https://github.com/teamcapybara/capybara/blob/9ebc5033282d40c73b0286e60217515fd1bb0b5d/lib/capybara/rspec.rb#L18). But Capybara does that in an `after` block, so when a test logs in during an `after(:context)` block, the browser returns to a logged-in state *after* Capybara has logged it out, and so the next test will fail.
+ +For an example see: <https://gitlab.com/gitlab-org/gitlab/issues/34736> + +Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-beforeall-hook)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block. + +```ruby +after(:all) do + login unless Page::Main::Menu.perform(&:signed_in?) + + # Do something while logged in + + Page::Main::Menu.perform(&:sign_out) +end +``` + +## Tag tests that require Administrator access + +We don't run tests that require Administrator access against our Production environments. + +When you add a new test that requires Administrator access, apply the RSpec metadata `:requires_admin` so that the test will not be included in the test suites executed against Production and other environments on which we don't want to run those tests. + +Note: When running tests locally or configuring a pipeline, the environment variable `QA_CAN_TEST_ADMIN_FEATURES` can be set to `false` to skip tests that have the `:requires_admin` tag. diff --git a/doc/development/testing_guide/end_to_end/feature_flags.md b/doc/development/testing_guide/end_to_end/feature_flags.md new file mode 100644 index 00000000000..bf1e70be9cb --- /dev/null +++ b/doc/development/testing_guide/end_to_end/feature_flags.md @@ -0,0 +1,27 @@ +# Testing with feature flags + +To run a specific test with a feature flag enabled, you can use the `QA::Runtime::Feature` class to enable and disable feature flags ([via the API](../../../api/features.md)). + +Note that administrator authorization is required to change feature flags. `QA::Runtime::Feature` will automatically authenticate as an administrator as long as you provide an appropriate access token via `GITLAB_QA_ADMIN_ACCESS_TOKEN` (recommended), or provide `GITLAB_ADMIN_USERNAME` and `GITLAB_ADMIN_PASSWORD`.
+ +```ruby +context "with feature flag enabled" do + before do + Runtime::Feature.enable('feature_flag_name') + end + + it "feature flag test" do + # Execute a test with a feature flag enabled + end + + after do + Runtime::Feature.disable('feature_flag_name') + end +end +``` + +## Running a scenario with a feature flag enabled + +It's also possible to run an entire scenario with a feature flag enabled, without having to edit existing tests or write new ones. + +Please see the [QA readme](https://gitlab.com/gitlab-org/gitlab/tree/master/qa#running-tests-with-a-feature-flag-enabled) for details. diff --git a/doc/development/testing_guide/end_to_end/flows.md b/doc/development/testing_guide/end_to_end/flows.md new file mode 100644 index 00000000000..fb1d82914aa --- /dev/null +++ b/doc/development/testing_guide/end_to_end/flows.md @@ -0,0 +1,56 @@ +# Flows in GitLab QA + +Flows are frequently used sequences of actions. They are a higher level +of abstraction than page objects. Flows can include multiple page objects, +or any other relevant code. + +For example, the sign in flow encapsulates two steps that are included +in every browser UI test. + +```ruby +# QA::Flow::Login + +def sign_in(as: nil) + Runtime::Browser.visit(:gitlab, Page::Main::Login) + Page::Main::Login.perform { |login| login.sign_in_using_credentials(user: as) } +end + +# When used in a test + +it 'performs a test after signing in as the default user' do + Flow::Login.sign_in + + # Perform the test +end +``` + +`QA::Flow::Login` provides an even more useful flow, allowing a test to easily switch users. 
+ +```ruby +# QA::Flow::Login + +def while_signed_in(as: nil) + Page::Main::Menu.perform(&:sign_out_if_signed_in) + + sign_in(as: as) + + yield + + Page::Main::Menu.perform(&:sign_out) +end + +# When used in a test + +it 'performs a test as one user and verifies as another' do + user1 = Resource::User.fabricate_or_use(Runtime::Env.gitlab_qa_username_1, Runtime::Env.gitlab_qa_password_1) + user2 = Resource::User.fabricate_or_use(Runtime::Env.gitlab_qa_username_2, Runtime::Env.gitlab_qa_password_2) + + Flow::Login.while_signed_in(as: user1) do + # Perform some setup as user1 + end + + Flow::Login.sign_in(as: user2) + + # Perform the rest of the test as user2 +end +``` diff --git a/doc/development/testing_guide/end_to_end/index.md b/doc/development/testing_guide/end_to_end/index.md index a9fb4be284e..19885f5756f 100644 --- a/doc/development/testing_guide/end_to_end/index.md +++ b/doc/development/testing_guide/end_to_end/index.md @@ -130,6 +130,8 @@ Continued reading: - [Quick Start Guide](quick_start_guide.md) - [Style Guide](style_guide.md) - [Best Practices](best_practices.md) +- [Testing with feature flags](feature_flags.md) +- [Flows](flows.md) ## Where can I ask for help? diff --git a/doc/development/testing_guide/end_to_end/page_objects.md b/doc/development/testing_guide/end_to_end/page_objects.md index 28111c18378..554995fa2e2 100644 --- a/doc/development/testing_guide/end_to_end/page_objects.md +++ b/doc/development/testing_guide/end_to_end/page_objects.md @@ -167,6 +167,65 @@ There are two supported methods of defining elements within a view. Any existing `.qa-selector` class should be considered deprecated and we should prefer the `data-qa-selector` method of definition. +### Dynamic element selection + +> Introduced in GitLab 12.5 + +A common occurrence in automated testing is selecting a single "one-of-many" element. +In a list of several items, how do you differentiate what you are selecting on? +The most common workaround for this is via text matching. 
Instead, a better practice is +to match on a unique identifier for that specific element, rather than on its text. + +We address this with the `data-qa-*` extensible selection mechanism. + +#### Examples + +**Example 1** + +Given the following Rails view (using GitLab Issues as an example): + +```haml +%ul.issues-list + - @issues.each do |issue| + %li.issue{data: { qa_selector: 'issue', qa_issue_title: issue.title } }= link_to issue +``` + +We can select on that specific issue by matching on the Rails model. + +```ruby +class Page::Project::Issues::Index < Page::Base + def has_issue?(issue) + has_element? :issue, issue_title: issue + end +end +``` + +In our test, we can validate that this particular issue exists. + +```ruby +describe 'Issue' do + it 'has an issue titled "hello"' do + Page::Project::Issues::Index.perform do |index| + expect(index).to have_issue('hello') + end + end +end +``` + +**Example 2** + +*By an index...* + +```haml +%ol + - @some_model.each_with_index do |model, idx| + %li.model{ data: { qa_selector: 'model', qa_index: idx } } +``` + +```ruby +expect(the_page).to have_element(:model, index: 1) #=> select on the first model that appears in the list +``` + ### Exceptions In some cases it might not be possible or worthwhile to add a selector.
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md index 0823c2e02b8..3a96f8204fc 100644 --- a/doc/development/testing_guide/flaky_tests.md +++ b/doc/development/testing_guide/flaky_tests.md @@ -83,6 +83,7 @@ This was originally implemented in: <https://gitlab.com/gitlab-org/gitlab-foss/m - In JS tests, shifting elements can cause Capybara to misclick when the element moves at the exact time Capybara sends the click - [Dropdowns rendering upward or downward due to window size and scroll position](https://gitlab.com/gitlab-org/gitlab/merge_requests/17660) - [Lazy loaded images can cause Capybara to misclick](https://gitlab.com/gitlab-org/gitlab/merge_requests/18713) +- [Triggering JS events before the event handlers are set up](https://gitlab.com/gitlab-org/gitlab/merge_requests/18742) #### Capybara viewport size related issues diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md index 314995ca9b3..236f175cee5 100644 --- a/doc/development/testing_guide/frontend_testing.md +++ b/doc/development/testing_guide/frontend_testing.md @@ -119,6 +119,50 @@ Global mocks introduce magic and can affect how modules are imported in your tes When in doubt, construct mocks in your test file using [`jest.mock()`](https://jestjs.io/docs/en/jest-object#jestmockmodulename-factory-options), [`jest.spyOn()`](https://jestjs.io/docs/en/jest-object#jestspyonobject-methodname), etc. +### Data-driven tests + +Similar to [RSpec's parameterized tests](best_practices.md#table-based--parameterized-tests), +Jest supports data-driven tests for: + +- Individual tests using [`test.each`](https://jestjs.io/docs/en/api#testeachtable-name-fn-timeout) (aliased to `it.each`). +- Groups of tests using [`describe.each`](https://jestjs.io/docs/en/api#describeeachtable-name-fn-timeout). + +These can be useful for reducing repetition within tests. 
Each option can take an array of +data values or a tagged template literal. + +For example: + +```javascript +// functions under test +const getIcon = status => (status ? 'pipeline-passed' : 'pipeline-failed') +const getMessage = status => (status ? 'Pipeline succeeded - win!' : 'Pipeline failed - boo-urns') + +// test with array block +it.each([ + [false, 'pipeline-failed'], + [true, 'pipeline-passed'] +])('icon with status %s will return %s', + (status, icon) => { + expect(getIcon(status)).toEqual(icon) + } +); + +// test suite with tagged template literal block +describe.each` + status | icon | message + ${false} | ${'pipeline-failed'} | ${'Pipeline failed - boo-urns'} + ${true} | ${'pipeline-passed'} | ${'Pipeline succeeded - win!'} +`('pipeline component', ({ status, icon, message }) => { + it(`returns icon ${icon} with status ${status}`, () => { + expect(getIcon(status)).toEqual(icon) + }) + + it(`returns message ${message} with status ${status}`, () => { + expect(getMessage(status)).toEqual(message) + }) +}); +``` + ## Karma test suite GitLab uses the [Karma][karma] test runner with [Jasmine] as its test @@ -457,6 +501,39 @@ it('waits for an event', () => { }); ``` +#### Ensuring that tests are isolated + +Tests are normally architected in a pattern which requires a recurring setup and breakdown of the component under test. This is done by making use of the `beforeEach` and `afterEach` hooks. + +Example + +```javascript + let wrapper; + + beforeEach(() => { + wrapper = mount(Component); + }); + + afterEach(() => { + wrapper.destroy(); + }); +``` + +When looking at this initially you'd suspect that the component is set up before each test and then broken down afterwards, providing isolation between tests. + +However, this is not entirely true: the `destroy` method does not remove everything that has been mutated on the `wrapper` object. For functional components, `destroy` only removes the rendered DOM elements from the document.
+ +To ensure that a clean wrapper object and DOM are used in each test, the breakdown of the component should instead be performed as follows: + +```javascript + afterEach(() => { + wrapper.destroy(); + wrapper = null; + }); +``` + +See also the [Vue Test Utils documentation on `destroy`](https://vue-test-utils.vuejs.org/api/wrapper/#destroy). + +#### Migrating flaky Karma tests to Jest Some of our Karma tests are flaky because they access the properties of a shared scope. diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md index 3dd403f148e..ecfcbc731e1 100644 --- a/doc/development/testing_guide/review_apps.md +++ b/doc/development/testing_guide/review_apps.md @@ -189,10 +189,10 @@ that the `review-apps-ce/ee` cluster is unhealthy. Leading indicators may be hea The following items may help diagnose this: -- [Instance group CPU Utilization in GCP](https://console.cloud.google.com/compute/instanceGroups/details/us-central1-b/gke-review-apps-ee-preemp-n1-standard-8affc0f5-grp?project=gitlab-review-apps&tab=monitoring&graph=GCE_CPU&duration=P30D) - helpful to identify if nodes are problematic or the entire cluster is trending towards unhealthy -- [Instance Group size in GCP](https://console.cloud.google.com/compute/instanceGroups/details/us-central1-b/gke-review-apps-ee-preemp-n1-standard-8affc0f5-grp?project=gitlab-review-apps&tab=monitoring&graph=GCE_SIZE&duration=P30D) - aids in identifying load spikes on the cluster. Kubernetes will add nodes up to 220 based on total resource requests. -- `kubectl top nodes --sort-by=cpu` - can identify if node spikes are common or load on specific nodes which may get rebalanced by the Kubernetes scheduler.
-- `kubectl top pods --sort-by=cpu` - +- [Review Apps Health dashboard](https://app.google.stackdriver.com/dashboards/6798952013815386466?project=gitlab-review-apps&timeDomain=1d) + - Aids in identifying load spikes on the cluster, and if nodes are problematic or the entire cluster is trending towards unhealthy. +- `kubectl top nodes | sort --key 3 --numeric` - can identify if node spikes are common or load on specific nodes which may get rebalanced by the Kubernetes scheduler. +- `kubectl top pods | sort --key 2 --numeric` - - [K9s] - K9s is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/blob/master/scripts/review_apps/base-config.yaml). Kubernetes will schedule pods to nodes based on resource requests and allow for CPU usage up to the limits. - In K9s you can sort or add filters by typing the `/` character - `-lrelease=<review-app-slug>` - filters down to all pods for a release. This aids in determining what is having issues in a single deployment
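The `sort` pipeline used in the `kubectl top` commands above can be sanity-checked without cluster access by feeding it canned `kubectl top pods`-style output. The pod names and CPU figures below are made up for illustration:

```sh
# Fake `kubectl top pods` output piped through the same numeric sort on
# column 2 (CPU). Pod names and values here are invented for illustration;
# `sort --numeric` reads the leading digits of keys like "901m".
printf '%s\n' \
  'review-bar-unicorn 120m 1Gi' \
  'review-foo-gitaly 901m 2Gi' \
  'review-baz-sidekiq 450m 1Gi' \
  | sort --key 2 --numeric
# Ascending order, so the busiest pod sorts last.
```

The same approach works for the node variant (`--key 3` on `kubectl top nodes` output).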