Remove leftover code from when Issues were tagged using acts_as_taggable
Refactor overall code and fix failing specs
Fix Project#to_reference
Fix wrong spaces and update changelog
Refactor #to_reference for Project & Issue
Fix and improve Project#to_reference
Also added relevant specs and refactored to_references in a bunch of places to be more consistent.
Issue#visible_to_user moved to IssuesFinder
Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/24637.
See merge request !2039
Remove caching of events data
This MR removes the caching of events data as this was deemed unnecessary while increasing load on the database. See https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/6578#note_18864037 and 5371da341e9d7768ebab8e159b3e2cc8fad1d827 for more information.
See merge request !6578
Flushing the events cache worked by updating a recent number of rows in
the "events" table. On PostgreSQL this produces a lot of dead tuples on a
regular basis, which means PostgreSQL spends considerable amounts of time
vacuuming the table. This in turn can lead to an increase in database load.
For GitLab.com we measured the impact of not using events caching and
found no measurable increase in response timings. Meanwhile, not flushing
the events cache led to the "events" table having no more dead tuples, as
rows are now only inserted into this table.
As a result we are removing events caching, as it does not appear to help
and only increases database load.
For more information see the following comment:
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/6578#note_18864037
Closes #23938
disable markdown in comments when referencing disabled features
fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/23548
This MR prevents the following references when the corresponding tool is disabled:
- issues
- snippets
- commits - when the repository is disabled
- commit ranges - when the repository is disabled
- milestones
This MR does not prevent references to repository files, since they are just markdown links and don't leak information.
See merge request !2011
Signed-off-by: Rémy Coutable <remy@rymai.me>
fix issues pointed out in !6527
add task completion status feature to CHANGELOG
method
`MergeRequestsClosingIssues`
- Instead of overriding `create` and `update` in `MergeRequests::BaseService`
- Get all merge request service specs passing
- Don't use `TableReferences` - using `.arel_table` is shorter! (see the sketch after this list)
- Move some database-related code to `Gitlab::Database`
- Remove the `MergeRequest#issues_closed` and
`Issue#closed_by_merge_requests` associations. They were either
shadowing or were too similar to existing methods. They are not being
used anywhere, so it's better to remove them to reduce confusion.
- Use Rails 3-style validations
- Index for `MergeRequest::Metrics#first_deployed_to_production_at`
- Only include `CycleAnalyticsHelpers::TestGeneration` for specs that
need it.
- Other minor refactorings.
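As a rough illustration of the `.arel_table` style mentioned in the first bullet, here is a minimal sketch; the scope name and column are placeholders, not names from the actual codebase.

```ruby
# Hypothetical scope written directly against Arel instead of a helper class.
class MergeRequest < ActiveRecord::Base
  # Merge requests updated no later than the given time.
  scope :updated_no_later_than, ->(time) {
    where(MergeRequest.arel_table[:updated_at].lteq(time))
  }
end
```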
- Move things common to `Issue` and `MergeRequest` into `Issuable`
- Move more database-specific functions into `Gitlab::Database`
- Indentation changes and other minor refactorings.
1. Change multiple updates to a single `update_all` (see the sketch after this list)
2. Use cascading deletes
3. Extract an average function for the database median.
4. Move database median to `lib/gitlab/database`
5. Use `delete_all` instead of `destroy_all`
6. Minor refactoring
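A rough sketch of items 1 and 5: a single `update_all`/`delete_all` issues one SQL statement and skips per-record callbacks, where a loop of `update`/`destroy` calls issues one statement per row. The variables below (`merge_request_ids`, `deployment_time`) are placeholders.

```ruby
# One UPDATE statement for every matching row instead of N separate updates:
MergeRequest::Metrics.where(merge_request_id: merge_request_ids)
                     .update_all(first_deployed_to_production_at: deployment_time)

# One DELETE statement, without instantiating records or running callbacks:
MergeRequestsClosingIssues.where(merge_request_id: merge_request.id).delete_all
```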
1. Add indexes to `CreateMergeRequestsClosingIssues` columns.
2. Remove an extraneous `check_if_open` check that is redundant now.
It would've been better to rebase this in, but that's not possible
because more people are working on this branch.
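A hedged sketch of the kind of index change described in point 1, shown here as a standalone migration for illustration; the original change added the indexes inside the existing `CreateMergeRequestsClosingIssues` migration, and the table name below is an assumption.

```ruby
# Illustrative migration adding indexes on the join table's foreign keys.
class AddIndexesToMergeRequestsClosingIssues < ActiveRecord::Migration
  def change
    add_index :merge_requests_closing_issues, :merge_request_id
    add_index :merge_requests_closing_issues, :issue_id
  end
end
```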
1. These changes bring down page load time for 100 issues from more than
a minute to about 1.5 seconds.
2. This entire commit is composed of these types of performance
enhancements:
- Cache relevant data in `IssueMetrics` wherever possible.
- Cache relevant data in `MergeRequestMetrics` wherever possible.
- Preload metrics
3. Given these improvements, we now only need to make 4 SQL calls:
- Load all issues
- Load all merge requests
- Load all metrics for the issues
- Load all metrics for the merge requests
4. A list of all the data points that are now being pre-calculated:
a. The first time an issue is mentioned in a commit
- In `GitPushService`, find all issues mentioned by the given commit
using `ReferenceExtractor`. Set the `first_mentioned_in_commit_at`
flag for each of them.
- There seems to be a (pre-existing) bug here - files (and
therefore commits) created using the Web CI don't have
cross-references created, and issues are not closed even when
the commit title is "Fixes #xx".
b. The first time a merge request is deployed to production
When a `Deployment` is created, find all merge requests that
were merged in before the deployment, and set the
`first_deployed_to_production_at` flag for each of them.
c. The start / end time for a merge request pipeline
Hook into the `Pipeline` state machine. When the `status` moves to
`running`, find the merge requests whose tip commit matches the
pipeline, and record the `latest_build_started_at` time for each
of them. When the `status` moves to `success`, record the
`latest_build_finished_at` time.
d. The merge requests that close an issue
- This was a big cause of the performance problems we were having
with Cycle Analytics. We need to use `ReferenceExtractor` to make
this calculation, which is slow when we have to run it on a large
number of merge requests.
- When a merge request is created, updated, or refreshed, find the
issues it closes, and create an instance of
`MergeRequestsClosingIssues`, which acts as a join model between
merge requests and issues.
- If a `MergeRequestsClosingIssues` instance links a merge request
and an issue, that merge request is considered to close that issue
(see the sketch after this list).
5. The `Queries` module was changed into a class, so we can cache the
results of `issues` and `merge_requests_closing_issues` across
various cycle analytics stages.
6. The code added in this commit is untested. Tests will be added in the
next commit.
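A hedged sketch of the join-model idea in point 4d; only `MergeRequestsClosingIssues` itself comes from the text above, while the method names (`cache_merge_requests_closing_issues`, `closes_issues`) stand in for whatever the service layer actually calls.

```ruby
# Roughly: whenever a merge request is created, updated or refreshed,
# recompute which issues it closes and rebuild the join records.
def cache_merge_requests_closing_issues(merge_request, current_user)
  closed_issues = merge_request.closes_issues(current_user) # via ReferenceExtractor

  MergeRequestsClosingIssues.transaction do
    MergeRequestsClosingIssues.where(merge_request_id: merge_request.id).delete_all

    closed_issues.each do |issue|
      MergeRequestsClosingIssues.create!(merge_request_id: merge_request.id,
                                         issue_id: issue.id)
    end
  end
end
```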
- And store the `first_associated_with_milestone_at` and
`first_added_to_board_at` times when an issue is saved.
- Refactored SpamCheckService into SpamService
- Removed unnecessary column from `SpamLog`
- Moved creation of SpamLogs out of its own service and into SpamCheckService
- Simplified code in SpamCheckService.
- Moved spam-related code into the Spammable concern
- Merged `AkismetSubmittable` into `Spammable`
- Clean up `SpamCheckService`
- Added tests for `Spammable`
- Added submit (ham or spam) options to `AkismetHelper`
- New concern `AkismetSubmittable` to allow issues and other `Spammable` models to be submitted to Akismet.
- New model `UserAgentDetail` to store information needed for Akismet.
- Services needed for their creation and tests.
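A minimal sketch of what such a `UserAgentDetail` record might look like; the column names are assumptions based on the description above, not the actual schema.

```ruby
# Illustrative model: stores the request metadata Akismet needs, linked to the
# spammable record (e.g. an Issue) it was captured for.
class UserAgentDetail < ActiveRecord::Base
  belongs_to :subject, polymorphic: true

  validates :user_agent, :ip_address, presence: true
end
```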
This concern provides an optimized/simplified version of the "cache_key"
method. This method is about 9 times faster than the default "cache_key"
method.
The produced cache keys _are_ different from the previous ones but this
is worth the performance improvement. To showcase this I set up a
benchmark (using benchmark-ips) that compares FasterCacheKeys#cache_key
with the regular cache_key. The output of this benchmark was:
Calculating -------------------------------------
cache_key 4.825k i/100ms
cache_key_fast 21.723k i/100ms
-------------------------------------------------
cache_key 59.422k (± 7.2%) i/s - 299.150k
cache_key_fast 543.243k (± 9.2%) i/s - 2.694M
Comparison:
cache_key_fast: 543243.4 i/s
cache_key: 59422.0 i/s - 9.14x slower
To see the impact on real code I applied these changes and benchmarked
Issue#referenced_merge_requests. For an issue referencing 10 merge
requests these changes shaved off between 40 and 60 milliseconds.
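A hedged sketch of the faster `cache_key` idea: build the key from the raw `updated_at` value instead of casting it to a `Time` and formatting it. The exact key layout and module placement here are assumptions.

```ruby
# Illustrative version: interpolating the raw attribute skips the type cast
# and timestamp formatting done by the default cache_key.
module FasterCacheKeys
  def cache_key
    "#{self.class.name.underscore}/#{id}-#{read_attribute_before_type_cast(:updated_at)}"
  end
end

class Issue < ActiveRecord::Base
  include FasterCacheKeys
end
```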
The method Ability.issues_readable_by_user takes a list of issues and an
optional user and returns an Array of issues readable by said user. This
method in turn is used by
Banzai::ReferenceParser::IssueParser#nodes_visible_to_user so this
method no longer needs to get all the available abilities just to check
if a user has the "read_issue" ability.
To test this I benchmarked an issue with 222 comments on my development
environment. Using these changes the time spent in nodes_visible_to_user
was reduced from around 120 ms to around 40 ms.
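A hedged sketch of how such a batch check might be used from a reference parser; `issue_for_node` is a hypothetical helper and the surrounding structure is illustrative.

```ruby
# Illustrative: check a whole batch of issues at once instead of computing the
# full ability set for every node.
def nodes_visible_to_user(user, nodes)
  issues   = nodes.map { |node| [node, issue_for_node(node)] }.to_h
  readable = Ability.issues_readable_by_user(issues.values, user)

  nodes.select { |node| readable.include?(issues[node]) }
end
```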
legibility in `SpamCheckService`
user projects
Currently, even when searching for all authorized issues of *one* project, we run the
`Users#authorized_projects` query (which can be rather slow). This update checks if
we are handling issues of just one project and does the authorization check locally.
It does have the downside of basically repeating the logic of `Users#authorized_projects`
on `Project#authorized_for_user`.
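A hedged sketch of that single-project shortcut, e.g. inside something like `IssuesFinder`; apart from the two methods named in the text, the names and placement below are assumptions.

```ruby
# Illustrative: when the finder is scoped to a single project, check that one
# project directly; only fall back to the slower authorized-projects query
# when searching across all of a user's projects.
def projects_for(user, project_id)
  if project_id
    project = Project.find(project_id)
    project.authorized_for_user(user) ? [project] : []
  else
    user.authorized_projects
  end
end
```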
of max database values
When using #XYZ in Markdown text, if XYZ exceeds the maximum value of a signed 32-bit integer, we
get an exception when the Markdown renderer attempts to run `where(iids: XYZ)`. Introduce a method
that throws out out-of-bounds values.
Closes #18777
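A hedged sketch of throwing out out-of-bounds values before querying; the constant, method, and column names below are illustrative, not the actual code.

```ruby
MAX_INT32 = 2**31 - 1 # largest value a signed 32-bit integer column can hold

# Keep only referenced iids that actually fit in the database column.
def valid_iids(iids)
  iids.map(&:to_i).select { |iid| iid.between?(1, MAX_INT32) }
end

valid_iids(%w[18777 99999999999999]) # => [18777]
Issue.where(iid: valid_iids(%w[18777 99999999999999]))
```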
Signed-off-by: Rémy Coutable <remy@rymai.me>
There are several changes to this module:
1. The use of an explicit stack in Participable#participants
2. Proc behaviour has been changed
3. Batch permissions checking
== Explicit Stack
Participable#participants no longer uses recursion to process "self" and
all child objects, instead it uses an Array and processes objects in
breadth-first order. This allows us to, for example, create a single
Gitlab::ReferenceExtractor instance and pass this to any Procs. Re-using
a ReferenceExtractor removes the need for running potentially many SQL
queries every time a Proc is called on a new object.
== Proc Behaviour Changed
Previously a Proc in Participable was expected to return an Array of
User instances. This has been changed and instead it's now expected that
a Proc modifies the Gitlab::ReferenceExtractor passed to it. The return
value of the Proc is ignored.
== Permissions Checking
The method Participable#participants uses
Ability.users_that_can_read_project to check if the returned users have
access to the project of "self" _without_ running multiple SQL queries
for every user.
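A hedged sketch of the non-recursive traversal described above: objects are processed from an Array in breadth-first order, every Proc receives the same `Gitlab::ReferenceExtractor`, and the batch permission check runs once at the end. The `participant_attrs` registry, the extractor constructor arguments, and `extractor.users` are assumptions for illustration.

```ruby
# Illustrative, non-recursive participants collection.
def participants(current_user = nil)
  extractor = Gitlab::ReferenceExtractor.new(project, current_user)
  to_visit  = [self]
  users     = []

  until to_visit.empty?
    source = to_visit.shift

    source.class.participant_attrs.each do |attr| # hypothetical registry of attrs/Procs
      if attr.respond_to?(:call)
        # Procs no longer return users; they feed references into the shared
        # extractor and their return value is ignored.
        source.instance_exec(current_user, extractor, &attr)
      else
        Array(source.send(attr)).each do |value|
          value.is_a?(User) ? users << value : to_visit << value
        end
      end
    end
  end

  users.concat(extractor.users)
  Ability.users_that_can_read_project(users.uniq, project)
end
```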
In 8278b763d96ef10c6494409b18b7eb541463af29 the default behaviour of annotation
changed, which was causing a lot of noise in diffs. We decided in #17382
that it is better to get rid of the whole annotate gem, and instead let people
look at schema.rb for the columns in a table.
Fixes: #17382